id | source | version | text | added | created | metadata
240464353 | pes2o/s2orc | v3-fos-license | Synoptic Causes and Socio-Economic Consequences of a Severe Dust Storm in the Middle East
Dust storms represent one of the most severe, if underrated, natural hazards in drylands. This study uses ground observational data from meteorological stations and airports (SYNOP and METARs), satellite observations (MODIS level-3 gridded atmosphere daily products and CALIPSO) and reanalysis data (ERA5) to analyze the synoptic meteorology of a severe Middle Eastern dust storm in April 2015. Details of related socio-economic impacts, gathered largely from news media reports, are also documented. This dust storm affected at least 14 countries in an area of 10 million km². The considerable impacts were felt across eight countries in health, transport, education, construction, leisure and energy production. Hospitals in Saudi Arabia, Qatar and the UAE experienced a surge in cases of respiratory complaints and ophthalmic emergencies, as well as vehicular trauma due to an increase in motor vehicle accidents. Airports in seven countries had to delay, divert and cancel flights during the dust storm. This paper is the first attempt to catalogue such dust storm impacts on multiple socio-economic sectors in multiple countries in any part of the world. This type of transboundary study of individual dust storm events is necessary to improve our understanding of their multiple impacts and so inform policymakers working on this emerging disaster risk management issue.
Introduction
Dust storms are a characteristic feature of the climate of the Middle East, where major regions of wind erosion activity are located on the Tigris-Euphrates alluvial plains (eastern Syria, Iraq and the Iran-Iraq border), the deserts of the Arabian Peninsula, and southeastern Iran [1][2][3][4]. Dust storm events often transport large amounts of mineral dust over many hundreds of kilometers and have multiple Earth system impacts [5,6]. Desert dust transported by storms in the Middle East and Southwest Asia has effects on marine primary production [7], marine sediments of the Indian Ocean and adjacent seas [8] and the Indian summer monsoon [9].
Desert dust also presents a range of hazards to human society during its entrainment, transport and deposition [10], and the socio-economic impacts of dust storms can be classified as disasters according to the terminology adopted by the UN Office for Disaster Risk Reduction (UNDRR), which defines a disaster as "a serious disruption of the functioning of a community or a society at any scale due to hazardous events interacting with conditions of exposure, vulnerability and capacity, leading to one or more of the following: human, material, economic and environmental losses and impacts" [11]. Nonetheless, dust storms are generally underrated relative to other types of natural disaster [12]. Most of the research that has been conducted into the direct and indirect impacts of desert dust on society is focused on individual sectors, including human health [13,14], solar and wind power production [15,16], the transport industry [17], the oil and gas industry [18], agriculture [19,20], water quality [21] and changes to the albedo of ice, with consequences for runoff and water availability [22]. There are few studies of how individual dust storm events impact multiple socio-economic sectors, and those dust events that have been examined in this way are entirely confined to effects in a single country [23][24][25][26][27], despite the fact that long-range transport of dust frequently crosses international boundaries [28][29][30][31][32]. The aim of this paper is to start filling this gap in the literature by analyzing a severe dust storm that occurred in early April 2015, affecting a very large area of the Middle East and Southwest Asia. Using a range of observational and reanalysis datasets, the synoptic causes of this major dust storm are evaluated prior to systematic documentation of the event's impacts on numerous socio-economic sectors in several countries. As far as we are aware, this study represents the first attempt to catalogue such transboundary dust storm impacts on multiple socio-economic sectors in multiple countries anywhere in the world.
The Severe Dust Storm of Early April 2015
The dust storm studied here originated in the An Nafud Desert in northern Saudi Arabia on 1 April 2015 and transported material initially eastward, affecting southern Iraq, Kuwait and southwestern Iran on the same day, and then progressively south-eastward, affecting Bahrain, Qatar, the United Arab Emirates and the lower Persian Gulf on 2 April and Oman and Yemen on 3 April; the dust crossed the Red Sea to affect Eritrea, Djibouti and Somaliland on 4 April and crossed the Arabian Sea to reach the west coast of India by 6 April [33]. From the source area in the northern parts of the Arabian Peninsula to the coast of India, and from southern Iran to the Red Sea coast of Africa, this severe dust storm affected at least 14 countries in an area roughly 4000 km × 2500 km, or 10 million km².
A number of studies have focused on various aspects of this severe dust storm, including synoptic analyses and modelling simulations [34,35], impacts on aerosol optical properties and the radiation budget [36], radioactivity associated with the event [37] and the long-range transport of mineral dust to India [38,39]. The impacts of this dust storm on air quality and health in Qatar have been the subject of several studies [40][41][42], but to our knowledge no other socio-economic impacts have been the focus of specific investigation in any of the 14 countries affected.
Observed Meteorological Data
Observational data from synoptic meteorological stations (SYNOP reports) and airports (meteorological aerodrome reports, or METARs) were used to establish the occurrence of atmospheric dust, visibility and wind speed. Visibility reductions related to the presence of hydrometeors such as fog and mist are separated out by using relative humidity (RH) greater than 75% and the present weather code as filters. The present weather codes used at meteorological stations that reflect dust-related phenomena are shown in Appendix A.
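To make this filtering step concrete, the sketch below (our own illustration, not code from the paper; the record layout and station identifiers are assumptions) keeps an observation only if its humidity is low enough to rule out hydrometeors and its present weather code is one of the dust codes listed in Appendix A.

```python
# Hypothetical record layout: (station_id, visibility_m, rh_percent, ww_code)
DUST_WW_CODES = {6, 7, 9, 30, 31, 32, 33, 34, 35, 98}  # Appendix A, Table A1

def is_dust_observation(record):
    """True if reduced visibility is attributable to dust rather than hydrometeors."""
    _, visibility_m, rh_percent, ww_code = record
    if rh_percent > 75.0:           # likely fog or mist, not dust
        return False
    return ww_code in DUST_WW_CODES

synop_reports = [
    ("STN1", 800, 22.0, 33),    # severe dust storm in dry air: kept
    ("STN2", 3500, 88.0, 10),   # humid, mist code: rejected
]
dust_events = [r for r in synop_reports if is_dust_observation(r)]
print(dust_events)
```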
Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Data
Data on atmospheric aerosols were taken from the level-3 atmosphere daily global product (MOD08_D3), which contains daily grid average values (1 degree equal-angle latitude-longitude grid) of parameters related to atmospheric aerosol particle properties. These parameters are derived from the four level-2 MODIS atmosphere products MOD04_L2, MOD05_L2, MOD06_L2, and MOD07_L2. In this study, the Deep Blue (DB) retrieved Aerosol Optical Depth (AOD) data were used. The DB algorithm, when applied to MODIS inputs, retrieves AOD at 550 nm from land scenes that are free of clouds and snow/ice.
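As an illustration of how such a daily DB AOD grid might be read, a minimal pyhdf sketch follows; it is not from the paper, and both the file name and the SDS name (a common convention in Collection 6 MOD08_D3 files) are assumptions to be checked against the actual product.

```python
from pyhdf.SD import SD, SDC
import numpy as np

# File and SDS names are assumptions based on common MOD08_D3 conventions.
f = SD("MOD08_D3.A2015091.061.hdf", SDC.READ)  # 1 April 2015 is day-of-year 091
sds = f.select("Deep_Blue_Aerosol_Optical_Depth_550_Land_Mean")
raw = sds.get().astype(np.float64)

attrs = sds.attributes()
fill = attrs["_FillValue"]
scale, offset = attrs["scale_factor"], attrs["add_offset"]
aod = np.where(raw == fill, np.nan, (raw - offset) * scale)  # 1x1 degree grid

print(np.nanmax(aod))  # peak daily-mean DB AOD over the grid
```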
ERA5 Global Reanalysis Dataset
In order to analyze the weather conditions and other parameters related to the dust episode, the ERA5 reanalysis dataset was used. ERA5 is the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalyses of the global climate, available at http://apps.ecmwf.int/datasets (accessed on 10 December 2019). ERA5 provides hourly estimates of a large number of atmospheric, land and oceanic climate variables and is based on the Integrated Forecasting System (IFS) Cy41r2. It has a 31-km horizontal resolution and 137 vertical levels, from the surface of the Earth to 0.01 hPa, or about 80 km. The IFS is coupled to a soil model, the parameters of which are also designated as surface parameters [43].
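For readers wishing to retrieve the same fields, a minimal request of this kind can be issued through the CDS API; this sketch is our own example (not from the paper), and the dataset and keyword names, which follow the public CDS conventions, should be verified against the current catalogue.

```python
import cdsapi

# Requires a free Copernicus account and ~/.cdsapirc credentials.
c = cdsapi.Client()
c.retrieve(
    "reanalysis-era5-pressure-levels",
    {
        "product_type": "reanalysis",
        "variable": ["geopotential", "temperature",
                     "u_component_of_wind", "v_component_of_wind"],
        "pressure_level": ["200", "500", "850"],
        "date": "2015-03-31/2015-04-03",  # the dust storm period
        "time": "12:00",
        "format": "netcdf",
    },
    "era5_dust_storm_2015.nc",
)
```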
CALIPSO
The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite provides data about the vertical structure of the aerosol distribution. This is of great importance, especially in places where no ground station data are available [44]. The instrument transmits polarized light and measures the return signal. The presence and shape of aerosols determine the depolarization ratio and the attenuated backscatter signal. This dust storm event was observed and tracked using three CALIPSO products: aerosol subtype, depolarization ratio and attenuated backscatter.
Media Reports
Details of socio-economic impacts related to the dust storm were gathered from media reports. The news media consulted were: al Jazeera, Arab News, Doha News, Gulf
Analysis of SYNOP Stations and Satellite Remote Sensing of the Dust Event
Visibility data recorded at synoptic meteorological stations are used with MODIS DB AOD land images to track the dust storm and the transport of dust across the Arabian Peninsula and beyond between 31 March and 3 April, as shown in Figure 1. Large-scale dust-raising began on 1 April between 06:00 UTC (at which time reporting stations in northern Saudi Arabia recorded visibilities >3000 m) and 09:00 UTC (when Gassim reported a visibility of <800 m and three other stations reported visibilities of <1500 m). By 12:00 UTC, visibility had deteriorated to <800 m at Al-Dawadami and Al-Qaisumah in Saudi Arabia and Nasiriya in Iraq. At 12:00 UTC on 2 April, visibilities <800 m were reported at the southern end of the Persian Gulf (in the UAE and southeastern Iran). By 12:00 UTC on 3 April, visibilities <800 m were reported at stations in Yemen and southwestern Saudi Arabia and visibilities <1500 m were also reported on the Makran coast of Iran and Pakistan.
Synoptic Analysis of the Dust Storm
The synoptic conditions associated with the evolution of the dust storm were reviewed using ERA5 reanalysis data. The meteorological dust genesis mechanism was analyzed using the geopotential height, wind vectors and air temperature fields at 850 hPa, as shown in Figure 2. A cold front passed over the Arabian Peninsula with south-westerly winds leading the frontal passage in the eastern coastal area (Figure 2b). On 2 April (Figure 2c), a low-geopotential height center was visible in southeast Iran, with temperatures of more than 297 K, extending as a trough across the Persian Gulf to Saudi Arabia. The presence of a core of high geopotential height over Saudi Arabia created a strong geopotential gradient along the Persian Gulf and strong winds from southern Iraq and northern Saudi Arabia down the Persian Gulf. These winds, generated behind a low-pressure system as it moved across the dust source region in northern Saudi Arabia, represent a typical post-frontal dust storm, the like of which is common in northern areas of the Arabian Peninsula during winter and spring months [45,46].
The mid-tropospheric situation, shown in Figure 3, indicates the eastward movement of a trough from the eastern Mediterranean to the southern part of the Persian Gulf between 31 March and 3 April. The aggregation of geopotential height contours over the Persian Gulf, southern Iraq, northeastern Saudi Arabia and southeastern Iran indicates the formation of a pressure gradient in these areas leading to an increase in wind speed, especially on 1 and 2 April. Zonal winds in the upper troposphere (200 hPa) and temperature from 31 March to 3 April are shown in Figure 4. Ahead of the front, the subtropical jet stream, with core wind speeds of >70 m/s, is clearly seen on 31 March stretching from northern Egypt to western Iran. On 1 April, the polar jet stream, behind the front, becomes visible over Greece and Turkey with a core wind speed >40 m/s; these two jets converge into a single jet stream on 2 April and begin to dissipate the following day. This jet stream convergence transfers momentum towards the surface, with a powerful effect on near-surface winds, leading to widespread dust-raising on 1 April. Over the following days, the suspended dust moves in a generally southeastward direction following the surface wind pattern.
CALIPSO Data Analysis
The spatial and vertical distribution of dust during this event was also monitored using output from the CALIPSO satellite. Figure 5 shows lidar data recorded as CALIPSO passed over the study area between 31 March and 3 April 2015; the left panel shows the CALIPSO path and the right panel shows the CALIPSO aerosol subtype product. The strength of the attenuated backscatter signal (Figure 6) reveals weak dust between latitudes 25°-35° and longitudes 54°-60° on 1 April 2015, while on 3 April 2015 a significant amount of dust is observed in this region. The average backscatter signal was near 3.5 × 10⁻³ on 1 April 2015 and had increased to 8 × 10⁻³ in this area by 3 April.
The depolarization ratio (Figure 6) at 532 nm is the ratio of the perpendicular to the parallel polarization component of the 532 nm attenuated backscatter coefficient. The depolarization ratio of the backscattered signal is affected by aerosol shape: irregular particles increase the depolarization ratio. Nearly spherical particles show a depolarization ratio near zero, while large values are reported for non-spherical particles. It therefore serves as a good indicator for discriminating dust aerosols from water clouds, ice clouds and smoke [47]. The depolarization ratios were between 0.1 and 0.2 on 31 March (Figure 6). The values increased to between 0.3 and 0.4 on 1 April, and high values are also observed on 2 April, although at a lower altitude than on the previous day. The depolarization ratios on 3 April 2015 were between 0.2 and 0.4. In general, the average depolarization ratios during 31 March to 3 April were between 0.3 and 0.4, which is typical for dust [48][49][50]. The comparison shows that there is consistency between the dust optical properties derived from the lidar and other observations in the region.
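The discrimination just described reduces to a simple per-bin ratio of the two 532 nm polarization channels; the numpy sketch below (ours, with synthetic profile values rather than real CALIPSO data) shows the computation and a simple dust mask based on the 0.2-0.4 range quoted above.

```python
import numpy as np

def volume_depolarization_ratio(beta_perp, beta_parallel):
    """delta = perpendicular / parallel attenuated backscatter at 532 nm."""
    return np.divide(beta_perp, beta_parallel,
                     out=np.full_like(beta_perp, np.nan),
                     where=beta_parallel > 0)

# Synthetic stand-in profiles for the two polarization channels
beta_parallel = np.array([8.0e-3, 6.0e-3, 4.0e-3, 1.0e-3])
beta_perp     = np.array([2.8e-3, 2.0e-3, 0.4e-3, 0.02e-3])

delta = volume_depolarization_ratio(beta_perp, beta_parallel)
dust_mask = (delta >= 0.2) & (delta <= 0.4)   # dust-like bins
print(delta.round(3), dust_mask)
```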
Socio-Economic Impacts
This severe dust storm resulted in a wide range of socio-economic impacts across the region. The effects were felt in several sectors-including health, education, construction, transport, leisure and energy production-across at least eight countries, as summarized in Table 1. The "Others" category of Table 1 comprises:
- Saudi Arabia: national holiday declared on 2 April due to inclement weather, particularly for schools, government and private establishments; trains to and from Riyadh cancelled.
- UAE, Abu Dhabi: Adnoc Cycle Challenge postponed due to health and safety considerations; Desert Challenge Rally final stage cancelled due to health and safety considerations; Yas Drag Night (car and motorbike drag racing event) postponed.
- UAE, Dubai: Dubai Municipality prohibited swimming at Dubai beaches.
- Oman: power output from the photovoltaic (PV) system at Muscat reduced by 37% on 3 April, the day after passage of the dust storm front, due to soiling and increased atmospheric turbidity.
Source: media reports, [42,51].
Concerns over the health effects of the storm resulted in the closure of schools in Qatar and Saudi Arabia. At 22:35 local time on 1 April, the Supreme Education Council in Qatar cited "extreme weather conditions" in a tweet announcing that students at all schools would have a holiday the following day, and that all school examinations scheduled to be held that day would be postponed. Schools were also closed for the day on 2 April in central and eastern provinces of neighbouring Saudi Arabia.
An impromptu national holiday was announced in Saudi Arabia, and in Abu Dhabi several recreational events were cancelled due to health and safety considerations. The final stage of the Abu Dhabi Desert Challenge Rally was called off because it was impossible to provide air evacuation for the rally drivers in case of accidents. Helicopter support is essential to ensure competitor safety, but the choppers were unable to take off in visibility as low as 100 m.
In parts of the Persian Gulf visibility reached near-zero on 1-2 April along with high winds. King Abdul Aziz Port in Dammam, Saudi Arabia, suspended shipping arrivals and departures on 1 April. The following day, Shahid Rajaee port outside Bandar Abbas, Iran's largest container port, was closed. The Coast Guard in Qatar, in cooperation with the Emiri Air Force, responded to an SOS from a fishing boat on the evening of 1-2 April, and eventually rescued 11 fishermen who had experienced zero visibility and 45-knot winds.
The potential impact on human health is indicated by PM10 data collected from various sources and shown in Table 2. National 24-h mean safety standards were greatly exceeded in Qatar, the UAE, Iran and even in India. The immediate health impacts in Qatar, where the dust-front of the storm arrived at around 22:00 local time on 1 April, have been assessed by Irfan et al. [41,42]. Qatar's only tertiary care centre, Hamad General Hospital, activated its full emergency department incident response early in the morning (between 04:30 and 05:00) on 2 April due to the surge in the number of cases. Most of these were people suffering from respiratory complaints, vehicular trauma and ophthalmic emergencies. Within 12 h of the onset of the dust storm, the hospital's emergency department received 254 cases with respiratory illness, which compares with an average of 20-40 respiratory cases per day for the 7 days before and after the storm. A similar spike occurred in motor vehicle crash cases, which increased five-fold during the storm, even though it is almost certain that fewer drivers were on the roads. A notable increase in motor vehicle accidents also occurred in Saudi Arabia and the UAE. The final row of Table 2 reports a PM10 value of 102.5 for Mumbai on 7 April [38]. Table notes: * National standards from [52]. ** Exceeded the PM analyzer's upper calibration limit of 10 mg/m³. *** Modelled value.
Airports in seven countries across the region had to delay, divert and cancel flights during the dust storm due to a combination of severely reduced visibility and high wind speeds. An impression of how the dust-front progressively affected regional visibility is shown in Figure 7, created using METARs at five airports on 1 and 2 April. The arrival of the dust-front was also accompanied by high winds. At Riyadh international airport, visibility was reduced to 0 m at 17:35 local time, when the wind speed was 46.3 km/h. At 18:00, gusts of 68.5 km/h were recorded. At Bahrain international airport, visibility dropped from 7000 m at 20:30 local time to 100 m at 21:00, at which time the wind speed was 14.8 km/h. At 22:00, visibility was still 100 m, but the wind speed was 37 km/h, with gusts up to 59.3 km/h. Some airports ceased to produce METARs for several hours during the storm, including Hamad international airport in Doha, where the last report on 1 April was made at 14:00 and reports did not resume until 06:30 on 2 April. An indication of how flight schedules were affected by the dust storm is given in Table 3, which shows the daily overall on-time performance (OTP) at four international airports from 1 to 3 April. OTP is defined as the percentage of flights that arrive within 15 min of scheduled arrival time, and is typically around 90%, but dropped to 75% at Hamad international airport, Doha, Qatar on 1 April and even lower at Dubai and at two airports in Saudi Arabia, Riyadh and Dammam. On 2 April, the OTP at both Riyadh and Dammam was less than 7%, and less than 41% at Dubai and Doha, where better instrument landing technology allowed more air traffic to keep to schedules. The OTPs for all four airports improved on 3 April, although they were still low. The dust storm affected 76% of the 1526 scheduled flights on Saudi Arabia's national airline, Saudia, over the period 1-3 April. A total of 465 Saudia flights, or 33% of those scheduled-both domestic and international-were cancelled over those three days, with another 678 flights delayed and 19 others re-routed to other destinations as a result of the dust storm [53]. Riyadh's King Khaled international airport, which was closed for seven hours until its runway was cleared for landing at 04:00 on 2 April, reported that 164,500 of its passengers were affected by the storm over the three days 1-3 April.
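Given the definition above, OTP is straightforward to compute from arrival records; the toy pandas example below (ours, with invented flight rows, treating cancellations as not on time) illustrates the calculation.

```python
import pandas as pd

flights = pd.DataFrame({
    "airport":   ["DOH", "DOH", "RUH", "RUH", "RUH"],
    "delay_min": [5, 40, 120, None, 10],   # None marks a cancelled flight
})

def on_time_performance(df):
    """Percentage of flights arriving within 15 min of schedule."""
    on_time = df["delay_min"].le(15)       # NaN (cancelled) compares False
    return 100.0 * on_time.sum() / len(df)

print(flights.groupby("airport").apply(on_time_performance))
```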
In Yemen, the storm affected Operation Raahat, an Indian mission to airlift civilians from the war-torn country. An Air India pilot described visibility at Sanaa airport, which was below 50 m when he landed on 3 April, as like "flying almost blind-folded" [54].
Across the Persian Gulf, airports on the coast of Iran were also affected. Data from the Civil Aviation Organization of the Islamic Republic of Iran shown in Table 4 indicates that 15 domestic flights were cancelled or diverted over the three-day period due to disruption by the dust storm at airports at Booshehr and Kharg Island in the northern Persian Gulf and at Kish Island, Qeshm Island, Bandar Abbas and Chabahar further to the southeast. Most of these flights were in or out of Tehran's Mehrabad domestic airport, but a flight to Mashhad was also diverted. An insight into the impact of this dust storm event on the generation of electricity by the region's solar power plants is given by [51] who studied the effects on the performance of photovoltaic (PV) panels at two locations in Oman: Muscat and Nizwa. They note that the dust-front arrived over Oman during the nighttime of 2-3 April, so that the highest potential effect due to backscattering and absorption of solar radiation by dust in the atmosphere did not apply. On 3 April, PV performance at Muscat was 37% less than the previous day, largely due to soiling of the panels by dust deposited, but with some decrease due to atmospheric scattering. As atmospheric dust cleared over the following days, PV performance improved. It is interesting to note that there was no evidence of soiling observed at the Nizwa site, suggesting that strong winds had effectively cleaned the panels.
Discussion
The hazardous aspects of dust storms have become an issue of increasing concern in many parts of the world in recent years, a rise to prominence reflected in the fact that member states of the UN General Assembly have adopted resolutions on combating sand and dust storms every year since 2015. This emerging dust storm disaster risk is also likely to be intensified in many drylands because of climate change: projections indicate an expansion of the global dryland area [55] and the risk of drought is also expected to increase [56], both trends that would lead to higher levels of dust storm activity. The Middle East is an important region in this regard, being located centrally in the so-called Dust Belt, a wide stretch of dryland with numerous persistent dust sources that extends from the west coast of the Sahara to northeast Asia [57]. Middle Eastern dust storms on a geographical scale as large as the severe event of early April 2015 do not occur every year, but they are not infrequent. Events on a similarly large scale have been reported in early March 2009 [58,59], and mid-March 2012 [60,61], and each of these events affected multiple countries. The synoptic climatology of such large-scale Middle Eastern dust storms is relatively well understood, but studies of the socio-economic impacts of such events are rare. Indeed, to the best of our knowledge this paper presents the first attempt to document dust storm impacts on multiple socio-economic sectors in multiple countries in any part of the world.
Dust storms occur rather more frequently than most other types of natural hazard and, as this paper demonstrates, their impacts on society can be widespread, severe and complex. However, policymakers hoping to tackle this emerging disaster risk management issue face a lack of information and a poor understanding of the socio-economic impacts of the phenomenon. A significant attempt to support countries in collecting data on the impacts of dust storms has been made recently by a body of the UN Economic and Social Commission for Asia and the Pacific (ESCAP): The Asia and Pacific Centre for Development of Disaster Information Management (APDIM). UNESCAP-APDIM [62] has produced guidelines on monitoring and reporting the effects of sand and dust storms through the Sendai Framework Monitoring to help supply decision-makers with the sort of information they need on which to base policy. Many of the impacts of dust storms are transboundary, as in the example studied in this paper, so documentation, assessment and monitoring of the impacts of these events is an international issue, requiring international cooperation [63][64][65]. Multi-country transboundary studies of individual dust storm events are required to fully understand their multiple impacts, and this paper is intended to serve as an early example of what we hope will become a more common type of assessment.
Conclusions
The severe dust storm that originated in the An Nafud Desert of Saudi Arabia on 1 April 2015 transported dust across an area of 10 million km² over the following days, affecting at least 14 countries in the Middle East, Southwest Asia and the Horn of Africa. The synoptic meteorology of this event is analyzed using ground observational data from meteorological stations and airports, multiple satellite observations and reanalysis data (ERA5). This dust storm was created by the powerful winds associated with cyclogenesis involving the intrusion of a polar cold front into a subtropical warm front. High atmospheric concentrations of dust and associated very low visibility across the Arabian Peninsula resulted in socio-economic impacts in several sectors-including health, education, transport, construction, leisure and energy production-across eight countries.
A marked increase in motor vehicle accidents occurred in Saudi Arabia, Qatar and the UAE, where hospitals experienced a surge in cases of respiratory complaints, vehicular trauma and ophthalmic emergencies. Airports in seven countries across the region had to divert, delay and cancel flights during the dust storm due to the combination of severely reduced visibility and high wind speeds. The OTP at Riyadh and Dammam international airports was <7% on 2 April and more than 1000 flights on Saudi Arabia's national airline, Saudia, were impacted over the period 1-3 April.
This study clearly demonstrates the multiple socio-economic impacts associated with such a severe dust storm, impacts that are transboundary and in this case were felt across eight countries. This type of transboundary study of individual dust storm events is necessary to improve our understanding of their multiple impacts and so inform policymakers whose job it is to tackle this emerging disaster risk management issue.
Acknowledgments: This research was partly supported by the Islamic Republic of Iran Meteorological Organization (IRIMO). We want to thank the European Centre for Medium-Range Weather Forecasts (ECMWF) for providing reanalysis data, and the National Aeronautics and Space Administration (NASA) and the Centre National d'Etudes Spatiales (CNES) for providing and supporting level-3 MODIS gridded atmosphere daily global joint products and CALIPSO datasets. We also thank M.K. for PM10 readings, weatherspark.com (accessed on 1 October 2021) for METAR data and OAG for On Time Performance (OTP) flight data. Four anonymous reviewers also assisted with pertinent critical comments.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A Table A1. SYNOP present weather (WW) codes relevant to atmospheric dust.
5 Haze
6 Widespread dust in suspension in the air, not raised by wind at or near the station at the time of observation
7 Dust or sand raised by wind at or near the station at the time of observation, but no well-developed dust whirl(s) or sand whirl(s), and no dust storm or sandstorm seen
9 Dust storm or sandstorm within sight at the time of observation, or at the station during the preceding hour
30 Slight or moderate dust storm or sandstorm - has decreased during the preceding hour
31 Slight or moderate dust storm or sandstorm - no appreciable change during the preceding hour
32 Slight or moderate dust storm or sandstorm - has begun or has increased during the preceding hour
33 Severe dust storm or sandstorm - has decreased during the preceding hour
34 Severe dust storm or sandstorm - no appreciable change during the preceding hour
35 Severe dust storm or sandstorm - has begun or has increased during the preceding hour
98 Thunderstorm combined with dust storm or sandstorm at time of observation
Source: extracted from [66]. | 2021-11-03T15:09:43.410Z | 2021-10-30T00:00:00.000 | {
"year": 2021,
"sha1": "51a84dd4608d0a4a6e8e61f679554a8ebc312623",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/12/11/1435/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ae7df3ee1285d1368e6a5f300e98c24a1f3d1a2a",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
17413453 | pes2o/s2orc | v3-fos-license | Vacuum Boundary Effects
The effect of boundary conditions on the vacuum structure of quantum field theories is analysed from a quantum information viewpoint. In particular, we analyse the role of boundary conditions on boundary entropy and entanglement entropy. The analysis of boundary effects on massless free field theories points out the relevance of boundary conditions as a new rich source of information about the vacuum structure. In all cases the entropy does not increase along the flow from the ultraviolet to the infrared.
Introduction
In quantum field theory the vacuum state encodes all physical properties of the theory. Indeed, any other state can be generated by the action of field operators on the vacuum. In particular, the effects generated by non-trivial topological structures of space or change of boundary conditions can be directly analysed from the changes induced on the vacuum structure. Among the most famous vacuum effects are the phenomenon of spontaneous symmetry breaking and the Casimir effect [1].
In particle physics, the main interest usually focuses on the behaviour of Green's and other quantum field correlation functions at short distances which provides information about high energy particle scattering processes. These observables are very insensitive to space topology or field boundary conditions [2]. However, for strongly correlated or confining theories long distance properties become very important, for instance, to point out the existence or not of confinement or mass gap. The existence of deconfining transitions in those theories (e.g. non-abelian gauge theories) can be directly extracted from the analysis of the structure of the vacuum state. Another rich source of information about the theory is encoded in the behaviour of non-local observables like free energy or entropy that can be defined by exploiting analogies with thermodynamics.
The interest on observables of this type has been recently boosted by the development of quantum information theory. The entanglement entropy [3] provides a good measure of the vacuum entanglement structure. It can also be used to point out the existence of phase transitions since it is unbounded for critical systems and bounded for systems with a finite mass gap [4]. It has been also pointed out that the confinement mechanism might be related to vacuum entanglement [5]. Another thermodynamic observable, the boundary entropy [6] [7] is related to the number of boundary states. Both new types of entropy do not scale with the volume of the space, unlike the standard bulk entropy and other extensive quantities. The entanglement entropy scales in the critical case with the area of the boundary where the fluctuating modes of the vacuum are traced out [3] [8]. This behaviour is characteristic of black hole physics and is one of the key features of the AdS/CFT correspondence.
By their own nature it is quite possible that both new entropies shall depend on the global properties of the configuration space. In this note we analyse the dependence of those quantities on the space topology and field boundary conditions as well as its physical implications for quantum field theories.
Boundary conditions and conformal invariance
Let us consider a real scalar free field theory defined in a bounded domain Ω in ℝ^D with regular and smooth boundary ∂Ω. The quantum dynamics is governed by the Hamiltonian
$$H = \frac{1}{2}\int_{\Omega} \left[\pi^{2} + \varphi\,(-\Delta + m^{2})\,\varphi\right] d^{D}x .$$
Unitarity requires that H be selfadjoint. In particular, this implies that one must fix the boundary conditions of the fields φ in a way that the Laplace-Beltrami operator −∆ is selfadjoint and positive. The boundary conditions which define a selfadjoint operator −∆ are given by [9]
$$\varphi - i\dot{\varphi} = U\,(\varphi + i\dot{\varphi})$$
in terms of a unitary operator U ∈ U(L²(∂Ω, ℂ)) which acts on the boundary values ϕ of the quantum fields φ and their normal derivatives ∂ₙϕ = ϕ̇. Notice that not all unitary operators give rise to positive Laplace-Beltrami operators, but to have a consistent quantum field theory for all values of m one needs to consider only boundary conditions which satisfy both requirements. The set of boundary conditions which are compatible with unitarity is given by unitary matrices U with eigenvalues λ = e^{iα} in the upper unit semi-circumference 0 ≤ α ≤ π. For a single real scalar field defined on the two-dimensional space-time ℝ × [0, L], the set of compatible boundary conditions is a four-dimensional manifold which can be covered by two charts (3) (4), parametrised in terms of a real matrix
$$B = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
with ad + bc = −1, ac ≤ 0 and bd ≤ 0.
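As a quick consistency check (our worked example, using only the boundary equation above), the two simplest choices of U reproduce the familiar conditions quoted in the next paragraph:
$$U = -\mathbb{1}:\quad \varphi - i\dot{\varphi} = -(\varphi + i\dot{\varphi}) \;\Longrightarrow\; \varphi|_{\partial\Omega} = 0 \quad\text{(Dirichlet)},$$
$$U = +\mathbb{1}:\quad \varphi - i\dot{\varphi} = \varphi + i\dot{\varphi} \;\Longrightarrow\; \dot{\varphi}|_{\partial\Omega} = \partial_{n}\varphi|_{\partial\Omega} = 0 \quad\text{(Neumann)}.$$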
In the massless case m = 0 the theory is conformally invariant. However, most of the compatible boundary conditions (3) (4) break conformal invariance [6]. Only the boundary conditions corresponding to unitary matrices U with eigenvalues ±1 preserve conformal invariance [10,11]. In the two-dimensional case the set of conformally invariant boundary conditions is given by Neumann (U = 𝟙), Dirichlet (U = −𝟙) and quasiperiodic (U_α) boundary conditions [10]. All other compatible boundary conditions break conformal invariance and are not invariant under renormalization group transformations. They describe renormalised trajectories of the renormalization group flowing towards one of the conformally invariant boundary conditions [11].
Boundary effects in conformal field theories
The infrared properties of quantum field theory are very sensitive to quantum field boundary conditions [2]. In particular, the physical properties of the quantum vacuum, free energy and vacuum energy exhibit a very strong dependence on the type of boundary conditions.
The vacuum state of the free field theory is gaussian and the vacuum energy density $E_0 = \operatorname{tr}\sqrt{-\Delta + m^2}$ is ultraviolet divergent. However, for finite cylindric domains of the form $S^{D-1} \times [0, L]$ the finite size corrections $\epsilon_c$ of the asymptotic expansion of the vacuum energy density for large values of the cylinder base radius Λ and generatrix L, with Λ ≫ L ≫ 1, are not divergent [1]. In the massless limit m → 0 the coefficient $\epsilon_c$ of this term becomes universal (i.e. independent of L) but is highly dependent on the boundary conditions*. For instance, in two dimensions, for quasi-periodic boundary conditions this first finite size correction depends only on the parameter α, and its value and sign are very different for periodic (α = π/2, $\epsilon_c = -\pi/6$), antiperiodic (α = 3π/2, $\epsilon_c = \pi/12$) and Zaremba (α = π, $\epsilon_c = \pi/48$) boundary conditions [13]-[18]; the analogous expressions in higher dimensions involve Apéry's constant ζ(3) [19]. In three-dimensional cylindric domains S² × [0, L] we have for the same boundary conditions $-\pi^2/90$, $7\pi^2/720$ and $7\pi^2/11520$, respectively [19]. In a similar manner, the free energy of the system at finite temperature 1/T with the boundary conditions (2) has an asymptotic expansion for large volumes and low temperature 0 ≪ L ≪ T ≪ Λ [7,21]. This is in agreement with the asymptotic expansion of the vacuum energy density (7) and, for the same reason, does not present any logarithmic dependence on the smaller transverse size scale L.
In the asymptotic regime of low temperature and large volumes 0 ≪ T ≪ L ≪ Λ, the free energy has a similar expansion, and so does the entropy. The third term of this expansion, $s_b$, known as boundary entropy [6][7], is finite and depends on the boundary conditions of the fields. In two dimensional conformal theories this entropy $s_b = \log g$ can be formally associated with the number of boundary states g [6], but in many cases $g = e^{s_b}$ is not integer and does not correspond to a simple counting of boundary states [7]. It has been conjectured that the quantities g and s evolve with the renormalization group flow in a non-increasing way [7], $s_{UV} \geq s_{IR}$, $g_{UV} \geq g_{IR}$, as corresponds to any type of thermodynamic entropy [7][22]. This conjecture is known as the g-theorem and has been verified in many cases [23][22], although not yet proved for the boundary renormalization group flow. The conjecture can be verified in the case of a two-dimensional free real scalar field defined on ℝ × [0, L]. The partition function for anti-periodic boundary conditions, once properly renormalised, can be exactly calculated (11) in terms of $q = e^{-2\pi T/L}$ and $\tilde{q} = e^{-2\pi L/T}$. From (11) it follows that the Casimir coefficient is in this case $\epsilon_c = \frac{\pi}{12}$. For Zaremba boundary conditions [24] the corresponding partition function (12) leads to the Casimir coefficient $\epsilon_c = \frac{\pi}{48}$. For periodic boundary conditions there are zero modes which generate infrared divergences; the partition function (density) is given in [25]. But the infrared problem is so severe that it affects the consistency of the theory [26]. In any quantum field theory the Schwinger functions must satisfy the Osterwalder-Schrader reflection positivity property in order to preserve unitarity and causality. However, in a free theory of two-dimensional massless bosons the two point function is neither positive nor reflection positive [27]. One way of solving all these problems is to consider a compactification of the scalar field $\Phi = e^{i\phi/R}$ to a circle of unit radius. In that case the correlators of the compactified field Φ satisfy the reflection positivity requirement and the theory becomes consistent [27].
In that case the partition function acquires some additional contributions due to the compactification of zero-modes. In particular, these contributions give rise to a modified partition function for periodic boundary conditions. However, for the rest of the quasiperiodic boundary conditions (α ≠ π/2) there is no contribution from the compactification of zero modes and the partition function is given directly by an expression depending only on $\epsilon = \left|\frac{\alpha}{2\pi} - \frac{1}{4}\right|$. In particular, this means that for antiperiodic and Zaremba boundary conditions there is no modification of (11) and (12), respectively.
For Neumann boundary conditions the partition function is also modified by the presence of compact zero modes, in a similar way as for the theory with Dirichlet boundary conditions. The boundary entropy can easily be computed for all those cases. The singularity observed for quasiperiodic boundary conditions at ǫ = 0 is due to the existence of zero-modes which, once properly incorporated into the compact theory, give rise to the correct value for periodic boundary conditions (14) (15), with vanishing boundary entropy. Notice also that $g_Z = \sqrt{g_D g_N}$, as corresponds to the factorisation property of counting boundary states. The g-theorem holds along the renormalised flow of Robin boundary conditions, which interpolate from Dirichlet (U = −𝟙) to Neumann (U = 𝟙) boundary conditions through Zaremba (U = σ₃) boundary conditions [28]: $g_D > g_Z > g_N$, provided that $R < 1/\sqrt{2\pi}$. The boundary entropy exhibits a monotone behaviour similar to that of the central charge or the bulk entropy.
Entanglement Entropy
There is another type of entropy associated to the vacuum state of a field theory. If we ignore some field degrees of freedom of the theory, one can consider the effective physical (mixed) states obtained by tracing out those degrees of freedom. In this way, mixed states with finite entropies can effectively appear in quantum field theory at zero temperature, starting from pure states. The mechanism of tracing out degrees of freedom is a kind of quantum version of the renormalization group. In particular, the vacuum state generates by this mechanism a family of mixed states whose entropies provide measures of its degree of entanglement. These mixed states are generated by integration of the fluctuating modes of the vacuum state $\Psi_0$ in bounded domains $\Omega_1$ of the physical space ℝ^D [3], i.e.
$$\rho_{\Omega_1} = \operatorname{Tr}_{L^2(\mathbb{R}^D \setminus \Omega_1)} |\Psi_0\rangle\langle\Psi_0| .$$
The entropy of this state, $S_{\Omega_1} = -\operatorname{Tr}\,\rho_{\Omega_1}\log\rho_{\Omega_1}$ (vacuum entanglement entropy), is ultraviolet divergent, but once regularised it exhibits a very interesting asymptotic behaviour, similar to that of the boundary entropy analysed in the previous section [8][22][29][30][31]. For massless scalar theories the entropy presents an asymptotic behaviour (24) in terms of the diameter L₁ of Ω₁ and the ultraviolet short-distance cut-off a introduced to split apart the domain Ω₁ and its complement ℝ^D \ Ω₁. In the three-dimensional case, this asymptotic behaviour follows an area law similar to the black hole area law [3,8]. In general, for D > 1 the coefficients C_i are not universal because they are regularization dependent. However, for one-dimensional spaces, although the formula (24) suggests that C₀ could be universal, this is not the case. In fact, the asymptotic behaviour of the entanglement entropy is not given in that case by (24), because the entropy acquires a leading logarithmic correction, which obviously implies that the constant term is highly dependent on the regularization method. However, it turns out that the value of the coefficient of this logarithmic term, C, is universal and equal to 1/3 of the central charge c of the conformal invariant theory.
In the case of a massless scalar boson c = 1 and C = 1/3 [32]. The question is whether this value depends on the boundary conditions of the fields when the theory is defined on a large bounded domain Ω ⊃ Ω₁. It is remarkable that the coefficient C = 1/3 turns out to be independent of the choice of boundary condition in Ω = (0, L) when Ω₁ = (L/2 − l/2, L/2 + l/2) is chosen to have half of the size of the interval. This result can be easily understood as a consequence of the fact that the entanglement entropy is basically due to the behaviour of field correlations at the interface between Ω₁ and its complement Ω \ Ω₁, which does not involve the boundary values of the fields. On the other hand, the finite part C₀ is highly dependent on the ultraviolet regularization method. However, when Ω₁ reaches the boundary of the whole space Ω, the entropy has the same asymptotic behaviour [33,36] but with a different coefficient for the asymptotic logarithmic term and a different finite term, which is related to the boundary entropy [7] and thus also dependent on the boundary condition. This quantity then shows the same monotone behaviour along the boundary renormalization group flow as the boundary entropy. A similar phenomenon occurs in 2+1 dimensions with the constant term. In general, the entropy admits a similar expansion, and the logarithmic term is absent for domains Ω and Ω₁ with smooth boundaries ∂Ω and ∂Ω₁, whenever Ω\Ω₁ is a connected manifold [37]. In a regularized theory the smoothness condition requires that the radius of curvature of the boundaries must always be much larger than the ultraviolet cut-off a [38]. In that case, the remaining constant C₀ has a special behaviour: it is not only regularisation independent but also independent of the size of Ω₁. C₀ can be split in two terms, $C_0 = C_0' + C_0^*$: one, $C_0'$, which contains all possible dependences on the prescription used for the definition of the Ω₁ perimeter L₁, and another, $C_0^*$, which is absolutely prescription independent. In a massive theory, if L₁ is much larger than the inverse of the mass gap 1/m, there is a prescription which uniquely fixes the ambiguities involved in such a splitting [39][40].
If Ω₁ is decomposed as the disjoint union of three similar domains, Ω₁ = Ω_α ∪ Ω_β ∪ Ω_γ, one can define a combination of the entanglement entropies of these subdomains and their unions (a candidate form is sketched after this paragraph) whose result is independent of the Ω₁ decomposition and of the perimeter definition prescription. The constant $C_0^*$ is also shape independent and only really depends on the topology of the domain Ω \ Ω₁. It defines a topological invariant entropy $S_{top} = C_0^*$ associated to the quantum vacuum [39][40], which measures its degree of topological entanglement. It can be shown that $S_{top} = -\log D$, where D is the total quantum dimension of the underlying topological theory. In our case it is easy to show that D = 1, which means the vanishing of the topological entanglement entropy, and that result is independent of the boundary conditions. In more general theories, like the SU(2) WZWN theory with level k, the topological entanglement entropy is given by [39]
$$S_{top} = \log\!\left[\sqrt{\frac{2}{k+2}}\;\sin\!\left(\frac{\pi}{k+2}\right)\right].$$
The quantum dimension D is non-integer in that case but it is a real topological invariant.
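The explicit combination defining $C_0^*$ is not reproduced above; assuming the standard Kitaev-Preskill tripartite construction (an assumption on our part), it takes the form
$$C_0^* \;\propto\; S_{\alpha} + S_{\beta} + S_{\gamma} - S_{\alpha\beta} - S_{\beta\gamma} - S_{\alpha\gamma} + S_{\alpha\beta\gamma},$$
where $S_{\alpha\beta}$ denotes the entanglement entropy of $\Omega_\alpha \cup \Omega_\beta$, and so on; in such an alternating sum the perimeter-dependent contributions cancel pairwise, leaving only the prescription-independent constant.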
Conclusions
The novel thermodynamic quantities associated to field theories, like boundary entropy and vacuum entanglement entropy, reveal new interesting properties of the vacuum structure. The boundary entropy is associated to the existence of boundary states and is thus very sensitive to the boundary conditions of the fields. The vacuum entanglement entropy, in turn, measures the amount of entanglement of the quantum vacuum and is absolutely independent of the type of boundary condition whenever the domain where the quantum fluctuations of the fields are integrated out does not reach the boundary of the space. However, when this domain reaches the boundary, the entanglement entropy becomes dependent on the boundary conditions, displaying a monotone behaviour along the boundary renormalization group flow similar to that of the boundary entropy.
We have explicitly verified the behaviour of boundary and entanglement entropies under changes of boundary conditions for low dimensional massless free field theories. The boundary entropy varies for quasiperiodic boundary conditions and Robin boundary conditions, whereas the entanglement entropy only changes when the entanglement domain reaches the boundary or changes its topology. The same behaviour appears in three-dimensional field theories, where the finite term of the asymptotic behaviour of the entanglement entropy can be related to a new topological invariant (topological entanglement entropy). For free scalar field theories we have shown that this topological invariant is trivial for connected convex domains, but self-interacting field theories and non-connected domains might have non-trivial topological entanglement entropy, which provides a basis for robust codes in quantum computation [39]. In all analysed cases the boundary entropy does not increase along the boundary renormalization group flow from the ultraviolet to the infrared [7][28]. There are two interesting problems which remain open: the effect of interactions on both types of entropies associated to the quantum vacuum, and their behaviour for topological field theories. Both problems deserve further analysis. | 2008-03-18T00:43:27.000Z | 2008-03-18T00:00:00.000 | {
"year": 2008,
"sha1": "26adfe4e7dc9148d85c77664fdf8be8ffe169936",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0803.2553",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "26adfe4e7dc9148d85c77664fdf8be8ffe169936",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
230726695 | pes2o/s2orc | v3-fos-license | 2.5D LES SIMULATION OF AN AIRFOIL SHOCK WAVE REDUCTION BY USING POROUS MEDIA
Supersonic flight has become a practical reality since the 1950s. One of the first ways to study the high speed effects of shock waves is to evaluate the aerodynamic coefficients of an airfoil. The work described herein refers to a series of 2.5D LES numerical simulations carried out to investigate the behavior of the shock wave on the airfoil. To reduce the unwanted effects, a porous surface is placed on 80% of the suction and pressure sides of a NACA 0012 airfoil. The equations of motion were solved with Ansys Fluent. The qualitative comparison consists of pressure contour visualizations for different angles of attack, showing how shock waves form on the airfoil surfaces. After plotting the polar diagrams, CL=f(AoA) and CL=f(CD), a quantitative comparison was made between the baseline airfoil and the same airfoil with porous media on each surface side.
Introduction
Current literature regarding porous media focuses mainly on the flow through such media and their engineering applications, such as thermal transfer or acoustic lining [1]. One approach to defining and characterizing porous media is presented by W. Ehlers and J. Bluhm [2]. The state of the art of thin porous media is briefly reviewed in [3], with a main focus on newly developed approaches to the subject, such as GDL (gas diffusion layers), MPN (molecular pore network) and PTM (pore topology method). With this baseline, experiments and numerical applications were conducted in the medical field and biotechnology [4,5] and for innovative technologies in the energy field [6]. Recent studies show how applications of porous media can improve the flow in a wind tunnel. Khalid et al. use porous boundaries and manage to obtain better results when simulating the flow through a wind tunnel; porous media were used in their paper [7] to reduce wall interference effects.
A lesser known application of porous media is their interaction with shock waves, which may provide genuine optimization solutions. A study of steady shock waves in porous aluminum, concentrating mainly on the interaction between the metal and the shock wave, is presented in [8]. Its conclusions support the theory that porous metals help mitigate shock wave formation. Alongside research on porous metals, there are also papers analyzing the behavior of shock waves in porous plastic solids [9]. Cohen and Durban attempt to determine the influence of porosity on plastic solids using as a baseline the Gurson model from 1977. Their findings in [9] indicate that a porous medium can influence the presence of shock waves by delaying their appearance, and that such a medium also slows down their propagation speed.
G. Savu published his research regarding the porous airfoil in transonic flow in the early 1980s. He and his colleague carried out numerical and experimental work on the behavior of a porous airfoil with a plenum chamber placed on the suction side. They observed, in a supersonic wind tunnel, that the shock wave formed on the upper surface of the airfoil can be split into smaller waves by using porous media on that surface [10].
Another relevant point of view is offered by Gubin in his paper [11], in which he states that the technique has a greater impact when the porous area expands and becomes greater than that of the shock wave.
In order to study the flow over an airfoil with porous media on the suction side, it is necessary to review the patterns of shock wave formation over a standard airfoil model. This phenomenon is analyzed in [12], where the author uses a NACA 0012 airfoil and considers angles of attack varying from 0° to 5° in a transonic regime with Mach numbers between 0.2 and 0.8.
In recent years, owing to software development in the field of computational fluid dynamics, technological advances have made it possible to detect shock waves through specific methods more accurately than in the early case studies. These methods include density gradient maxima, normal Mach number and characteristics, which are briefly presented with their advantages and disadvantages in [13]. Based on these methods, researchers have determined other optimized ways to detect shock waves, taking into account the mathematical definition of a shock and employing eigenvectors and the Riemann invariants [14].
The 2.5D LES (Large Eddy Simulations) are known to give more accurate results and be more efficient in comparison with other CFD approaches, such as 2D URANS (Unsteady Reynolds-Averaged Navier-Stokes) or 2.5D URANS, in terms of aerodynamic performance evaluations [15][16][17].
This paper aims to investigate the behavior of the shock wave formed on an airfoil placed in a high speed flow. The 2.5D LES simulations were carried out to cover a set of geometrical and aerodynamic configurations representative of the shock wave regime.
CFD Setup
With a physics-based understanding of the flow instabilities that occur at high speeds over an airfoil, we can develop a viable control technology for flow instabilities that will allow the airfoil to operate at an increased efficiency in that regime. Due to the three-dimensional nature of the unsteady aerodynamic phenomena occurring during the appearance of a shock wave, the full extent of the implications and interactions can be captured only by fully viscous 2.5D/3D unsteady CFD simulations. However, lower order models are known to provide valuable guidelines regarding the properties of unstable flow patterns in flows around airfoils.
High-fidelity computational approaches like Large Eddy Simulation (LES) [18] or Detached Eddy Simulation (DES) [19] are suitable for understanding flow dynamics associated with aerodynamic flows. They are resolving (not modelling) a large range of flow scales.
The CFD solver used in this study integrates the LES-filtered, fully compressible Navier-Stokes equations describing the conservation of mass, momentum and total energy. Therefore, the Navier-Stokes equations have to be filtered with respect to the grid size in order to obtain the LES governing equations [20], the generic grid filtering operation applied to each flow variable f being
$$\bar{f}(\mathbf{x}) = \int_{\Omega} G(\mathbf{x} - \mathbf{x}'; \Delta)\, f(\mathbf{x}')\, d\mathbf{x}' . \qquad (1)$$
In this paper three cases were numerically investigated, starting from the NACA 0012 baseline airfoil. In the other two cases the structure of the porous media was changed: in the first case the airfoil was perforated fourteen times over 80% of its upper/lower surface, and in the second case the airfoil has twenty-six holes to simulate a porous medium. Fig. 1 presents the geometries of the studied cases. Accurate CFD methods require domains large enough to minimize the boundary effects on the resulting prediction of the shock wave evolution. In Fig. 2, the computational domain used in all three simulations is presented. In terms of boundary conditions, a pressure far-field condition with the Mach number value was imposed at the domain border, while for the airfoil a no-slip wall boundary condition was used.
Fig. 2. Computational domain
Mesh quality is a major factor in avoiding slow convergence, or even convergence failure. Because of the complex dimensional structure of the configuration, producing a good mesh is not trivial.
The mesh was generated with ICEM CFD using block-structured meshing, controlled in terms of skewness, growth and aspect ratio. For representation purposes only, a coarse mesh generated by ICEM CFD for this study can be found hereafter. The target final mesh size is about 2.5 million grid cells per case, with a size ratio between two neighbouring cells of around 1.05. The first wall cell size was chosen so as to reach a y+ of the order of unity at the first point away from the solid walls.
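A first wall cell height consistent with a y+ of order unity can be estimated from flat-plate correlations. The short sketch below illustrates one such estimate; the freestream conditions, the Schlichting-type skin-friction correlation and the reference length are illustrative assumptions, not values taken from this study.

# Estimate the first wall cell height for a target y+ (flat-plate correlation).
# All numerical inputs below are illustrative assumptions.
def first_cell_height(u_inf, rho, mu, x_ref, y_plus_target=1.0):
    re_x = rho * u_inf * x_ref / mu            # Reynolds number at x_ref
    cf = 0.026 / re_x ** (1.0 / 7.0)           # turbulent skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2        # wall shear stress
    u_tau = (tau_w / rho) ** 0.5               # friction velocity
    return y_plus_target * mu / (rho * u_tau)  # y+ = rho*u_tau*y/mu, solved for y

# Example: transonic air flow over an assumed 1 m chord.
dy = first_cell_height(u_inf=250.0, rho=1.0, mu=1.8e-5, x_ref=1.0)
print(f"first cell height ~ {dy:.2e} m")  # on the order of a few micrometers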
Results
In order to determine the best size and ratio of the porous wall placed on an airfoil for reducing the shock wave, several simulations were carried out using the LES method. It is a well-known issue that, for an airfoil in a high-speed flow, the shock wave degrades aerodynamic efficiency, generating entropy and leading to shock/boundary-layer interactions which are difficult to control in flight. The NACA 0012 was used for the baseline and was then modified for the other two cases, in which a porous medium (PM) was installed on 80% of the airfoil surface. Fig. 4 presents the static pressure flow field for the three studied cases at different angles of attack. Fig. 5 shows the variation of the lift coefficient with the angle of attack. The maximum value for the airfoils with PM is obtained at AOA = 4 degrees, compared to the baseline, where CL max occurs at AOA = 2 degrees. Note that the NACA 4-digit series is not suitable for high Mach numbers, hence the atypically low angles of attack.
Conclusion
The first major conclusion that can be drawn from this study is that porous walls have a definite effect on the high-speed aerodynamics of airfoils.
Lift coefficients improved in both cases, with the finer-orifice airfoil reaching its maximum at a lower AoA than the other cases. Due to the greater size of its orifices, the first porous wall case, PM1, behaved similarly to a flat plate, with a qualitative evolution similar to the baseline - although with better quantitative values. The finer porous wall, PM2, also showed a clear improvement in both lift and lift-to-drag ratio, with the added note that peak performance was registered at a lower AoA than for the baseline and PM1.
Counterintuitively, the drag bucket of PM2 was more similar to that of the baseline - to the extent that such a notion can be defined when using this family of airfoils under these conditions. The coarser-orifice wall of PM1 led to an overall extension of this region of interest, making it more useful to designers who must factor in stability and range of motion.
It is unclear whether or not the shape of the plenum is relevant to the overall performance, but this parameter is worth further investigation.
Future Work
As future work, an experimental model of a centrifugal stator with porous media on the blades will be manufactured using the technological research infrastructure of the National Research and Development Institute for Gas Turbines - COMOTI (Fig. 8a). A test campaign will then be carried out to validate the numerical results (Fig. 8b).
"year": 2020,
"sha1": "7e82a5c91f6061eaedc5674658416a133e86b4f9",
"oa_license": null,
"oa_url": "https://doi.org/10.3897/arb.v32.e11",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "55bb790ad621da150ff2b39536182c4b1cc15886",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
Transepithelial transport of a viral membrane glycoprotein implanted into the apical plasma membrane of Madin-Darby canine kidney cells. II. Immunological quantitation
The envelope of vesicular stomatitis virus was fused with the apical plasma membrane of Madin-Darby canine kidney cells by low pH treatment. The fate of the implanted G protein was then followed using a protein A-binding assay designed to quantitate the amount of G protein in the apical and the basolateral membranes. The implanted G protein was rapidly internalized at 31 degrees C, whereas at 10 degrees C no uptake was observed. Within 15 min at 31 degrees C, a fraction of the G protein could already be detected at the basolateral membrane. After 60 min, 25-48% of the G protein was basolateral as measured by the protein A-binding assay. At the same time, 25-33% of the implanted G protein was detected at the apical membrane. Internalization of G protein was not affected by 20 mM ammonium chloride or by 10 microM monensin. However, the endocytosed G protein accumulated in intracellular vacuoles and redistribution back to the plasma membrane was inhibited. We conclude that the implanted G protein was rapidly internalized from the apical surface of Madin-Darby canine kidney cells and a major fraction was routed to the basolateral domain.
In the preceding paper we developed another approach to study the traffic to and from the cell surface in Madin-Darby canine kidney (MDCK) cells (17). In this case the proteins were not introduced into the plasma membrane from within after synthesis, but inserted there from the outside by low pH-induced fusion of the viral envelope with the cellular plasma membrane (16, 31, 32). MDCK cells are polarized epithelial cells, the plasma membrane of which is differentiated into two structurally and functionally different domains separated by tight junctions, namely the apical surface facing the growth medium and the basolateral surface facing the neighboring cells and the substratum (4, 11, 19, 20, 24). Normally during vesicular stomatitis virus (VSV) infection of MDCK cells the G proteins are mainly transported to the basolateral plasma membrane (25). In the present study we implanted the G protein of VSV into the apical plasma membrane of these MDCK cells. Our previous morphological study (17) showed that the implanted G proteins are rapidly endocytosed and that some of them are distributed to the basolateral surface. In the present study we used a protein A-binding assay to characterize the internalization and redistribution of the implanted G proteins in more detail.
MATERIALS AND METHODS
The cells, virus preparations, the implantation procedure of the G protein into the apical plasma membrane of MDCK cells, and the immunofluorescence staining technique are described in our previous study (17).
Protein A-binding Assay:
The assay was adapted from that described for chicken embryo fibroblasts infected with Semliki Forest virus (7). To assay antigens on the apical cell surface, we fixed cultures directly before the 125I protein A-binding assay. To assay the whole cell surface, we made the basolateral antigens accessible to the reagents prior to fixation by washing briefly twice with 2 ml of PBS lacking Ca++ and Mg++ and incubating the cells for 5 min at 31°C with 5 mM EGTA (see also, in reference 17, Fig. 2). Both solutions were prewarmed to 31°C. Fixation was performed in the cold with 3% (wt/vol) formaldehyde, which was thereafter quenched with 50 mM NH4Cl at room temperature. All solutions used for EGTA-treated cells after the fixation step lacked Ca++ and Mg++. Fixed cells were overlaid with 250 µl of antibody diluted in PBS containing 0.2% (wt/vol) gelatin (PBS-gelatin) and incubated for 30 min at room temperature. After three washes with PBS-gelatin the cells were incubated with 250 µl of 125I protein A prepared in PBS-gelatin for 30 min at room temperature. The label was removed and the plates were washed four times with PBS-gelatin. The cells were solubilized in 0.5 ml of 2% (wt/vol) SDS prewarmed to 80°C, incubated for 1 h at 37°C and scraped off the plates for counting of cell-associated radioactivity. Nonspecific binding of 125I protein A was determined using antibodies against fowl plague virus glycoproteins (15). The background values, which were 3-10% of those obtained with the appropriate antibody, were subtracted from the experimental values. The concentrations of the antibodies used were titrated using untreated cells (anti-aminopeptidase) or cells fused with 1 µg of VSV (protein) and EDTA-treated to remove unfused viruses (anti-VSV antibody). Nearly saturating concentrations (0.28 µg/ml of anti-VSV antibody or 20 µg/ml of anti-aminopeptidase antibody) were used. The specificity of 125I protein A was confirmed in competition experiments using unlabeled protein A. A 50% inhibition of binding of 125I protein A was obtained at concentrations of 24 and 30 ng/ml of unlabeled protein A for the anti-VSV antibody and the anti-aminopeptidase antibody, respectively.
Degradation of Viral Proteins: Degradation of the viral proteins was monitored by following the total cell-associated radioactivity and the total and trichloroacetic acid-soluble radioactivity of the incubation medium as described in Marsh and Helenius (12).
Fluid-phase Uptake: Fluorescein-conjugated dextran (FITC-dextran) was used to determine the fluid-phase uptake (27) of MDCK cells. The cultures were overlaid with 1.0 ml of FITC-dextran (20 mg/ml) in minimal essential medium containing 0.2% (wt/vol) bovine serum albumin and antibiotics (pH 7.3) and incubated at 31°C. At the indicated time points, duplicate plates were transferred onto ice, and the monolayers were washed ten times with 2 ml of cold PBS. The cells were scraped off the plates, pelleted at 3,000 rpm, washed twice with 5 ml of PBS and lysed with 2 ml of PBS containing 0.1% SDS. The fluorescence intensity of the lysates was measured using an excitation wavelength of 490 nm and an emission wavelength of 520 nm. The endocytosed volume was determined by comparison with a standard curve.
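Converting lysate fluorescence to an endocytosed volume via a standard curve is a simple linear calibration. The sketch below illustrates the calculation; the calibration points and the sample reading are hypothetical placeholders, not values from this study.

import numpy as np

# Hypothetical standard curve: fluorescence readings for known equivalent volumes.
std_volume_nl = np.array([0.0, 2.0, 4.0, 8.0, 16.0])          # equivalent medium volume (nl)
std_fluorescence = np.array([3.0, 41.0, 80.0, 161.0, 318.0])  # arbitrary units

# Least-squares linear fit: fluorescence = slope * volume + intercept.
slope, intercept = np.polyfit(std_volume_nl, std_fluorescence, 1)

def endocytosed_volume_nl(sample_fluorescence):
    """Invert the standard curve for a sample lysate reading."""
    return (sample_fluorescence - intercept) / slope

# Example: one lysate reading from 1e6 cells after a 1 h incubation.
v = endocytosed_volume_nl(92.0)
print(f"endocytosed volume ~ {v:.1f} nl per 1e6 cells")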
125I Protein A-binding Assay
To implant the envelope glycoprotein G of VSV into the apical surface of MDCK cells, 0.5-1.25 µg of VSV (protein) was added together with 125,000 cpm of [3H]uridine-labeled VSV in binding medium (pH 6.3) to plates containing 2 × 10^6 MDCK cells, and incubated for 1 h in the cold. The unbound virions were washed away with binding medium. In the presence of calcium the tight junctions remain intact and only the apical surface is accessible to the virus (5, 14, 17). Of each amount of virions added, ~13% was bound to the cell surface (Fig. 1A). The plates were then placed on a 37°C water bath and overlaid for 20 s with fusion medium (pH 5.4) prewarmed to 37°C to allow the envelope of the virions to fuse with the apical membrane of the cells. The monolayers were then treated with 0.5 mg/ml of trypsin for 90 min in the cold to release bound but unfused virions from the cell surface. For each amount of VSV added to the cells, 3% fused with the cellular plasma membrane (Fig. 1A). To another set of cultures 0.5-1.25 µg of unlabeled VSV (protein) was added. After the binding and fusion steps the unfused virions were removed by a 5-min incubation with 20 mM EDTA in the cold (17). The cells were then fixed, treated with anti-VSV antibody followed by 125I protein A, solubilized, and counted for cell-associated radioactivity. Fig. 1B shows that the assay was linear for various amounts of G protein at the apical surface.
Quantitation of G Protein at the Apical Surface
VSV (1 µg of protein) was bound and fused to 2 × 10^6 cells as above, followed by EDTA treatment to release unfused virions. As previously shown, about 26,000 G proteins are implanted per cell under these conditions (3, 17). After implantation the cultures were incubated at 31°C and the G protein at the cell surface was quantitated by the 125I protein A-binding assay. The amount of G protein at the apical surface decreased rapidly (Fig. 2, solid circles). After 60 min the level of G protein had declined to ~25-33% (range of 10 experiments). The decrease of G protein at the surface was due to internalization rather than release into the medium. This was verified by following the appearance of acid-precipitable radioactivity in the medium during the incubation of cells to which [35S]methionine-labeled VSV was fused (see below). The amount of G protein implanted in the apical plasma membrane did not influence the extent of internalization within the range of 0.1 to 10 µg of VSV (protein) added per 2 × 10^6 cells (data not shown). No internalization of G protein could be detected at an incubation temperature of 10°C (Fig. 2, squares).
Quantitation of G Protein at the Basolateral Surface
Depletion of Ca++ ions by EGTA treatment at 37°C opens up the tight junctions of MDCK cells, thus allowing access of the reagents to the basolateral surface (5, 14). Cells with 26,000 G protein molecules inserted into the apical surface per cell (see above) were incubated at 31°C in the presence of 20 µg cycloheximide/ml. Seven minutes before the indicated time points in Fig. 2A the cells were washed twice with PBS lacking Ca++ and Mg++, and incubated with 2 mM EGTA at 31°C for 5 min. The amount of G protein detected at the cell surface with the 125I protein A-binding assay after EGTA treatment was larger than that detected on the apical surface only (Fig. 2A, open circles). The difference between the values obtained for the surface expression before and after EGTA treatment was taken to represent basolateral G protein (Fig. 2B). After 60 min at 31°C, the fraction of basolateral G protein varied between 25% and 48% in 10 experiments. If cycloheximide was omitted from the incubation medium the same results were obtained. The incubation time with EGTA appeared to be critical. A treatment of 2 to 5 min at 31°C was found to give maximal values for 125I protein A-binding. Incubation times >10 min gave lower values, since the cells begin to round up after prolonged treatment with EGTA and came off the plates in subsequent manipulations (Table I).

[Fig. 1 legend: (A) From these figures the amount of viral protein bound to the cells was calculated and plotted against the respective amount of virus added to the cells (●). The binding efficiency was a constant 13% of the added virus. In another set of cultures fusion was induced at pH 5.4 after the binding step and the unfused virions were removed by trypsin treatment in the cold. The radioactivity was determined to calculate the cell-associated amount of viral protein (○). For each amount of virus added, 3% fused with the apical membrane (see reference 17). (B) 0.5-1.25 µg of VSV (protein) was added per plate of 2 × 10^6 MDCK cells, and after the binding and fusion steps the unfused virions were removed by a 5-min incubation of the cells with EDTA in the cold. The cells were then fixed and treated with anti-VSV antibody followed by 52,000 cpm of 125I protein A per plate. The cell-associated radioactivity was determined and plotted against the respective amount of virus originally added to the monolayers for implantation.]
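The basolateral fraction in this assay is obtained by simple subtraction: background-corrected binding after EGTA treatment (whole surface) minus binding without EGTA (apical only). The sketch below illustrates the bookkeeping; all cpm values are hypothetical placeholders, not measurements from the study.

# Background-corrected 125I protein A binding (hypothetical cpm values).
def specific(total_cpm, background_cpm):
    return total_cpm - background_cpm

apical = specific(total_cpm=4200.0, background_cpm=300.0)         # fixed directly
whole_surface = specific(total_cpm=6900.0, background_cpm=350.0)  # EGTA-treated
t0 = specific(total_cpm=12100.0, background_cpm=310.0)            # apical at time zero

basolateral = whole_surface - apical  # the difference represents basolateral G protein

print(f"apical fraction:      {apical / t0:.0%}")
print(f"basolateral fraction: {basolateral / t0:.0%}")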
An attempt was made to quantitate the amount of G protein in the basolateral plasma membrane directly by treating the cells with saturating amounts of anti-VSV antibody (0.83 µg/ml) and unlabeled protein A (0.2 µg/ml) for 15 min at 0°C, before opening of the tight junctions with EGTA at 31°C, to render the apical cell surface unreactive in the subsequent 125I protein A-binding assay (Fig. 3). It was not possible, however, to quench >50% of the 125I protein A-binding activity of the apical surface. Nevertheless, Fig. 3 shows that the amounts of basolateral G protein detected under the above conditions were reasonably similar to the figures obtained without quenching of the apical surface with unlabeled protein A.

To follow degradation, [35S]methionine-labeled VSV was bound and fused to the cells, followed by EDTA treatment to release unfused virions. The cultures were incubated at 31°C in the presence or absence of 20 mM NH4Cl. At different time points the acid-soluble and acid-precipitable radioactivity of the medium and the total cell-associated radioactivity were determined. Fig. 4 shows that after a lag period of 30 min acid-soluble radioactivity started to appear in the medium, indicating degradation of the viral proteins. Thereafter, the rate of degradation was 15%/h of the total cell-associated viral radioactivity. Ammonium chloride (20 mM) in the incubation medium inhibited degradation. Within 15 min after the cultures had been shifted to 31°C there was an initial loss of ~15% of the cell-associated acid-precipitable radioactivity into the medium (Fig. 4). This most probably derived from unfused virions left after EDTA treatment, which were eluted from the cell surface by the neutral pH at 31°C. No further loss of acid-precipitable material occurred.
Implantation and Internalization of G Protein Do Not Induce Random Membrane Uptake
FLUID-PHASE UPTAKE: To study whether the low pH-induced fusion of VSV with the cell surface or the internalization of G protein affected general parameters of the cells, we first measured fluid-phase uptake of MDCK cells using fluorescein-conjugated dextran (FITC-dextran) as a marker. Untreated control cultures and cultures with implanted G protein (26,000 molecules per apical cell surface) were incubated at 31°C with 1 ml of minimal essential medium containing 20 mg/ml of FITC-dextran. At different time points cells were harvested by scraping and lysed, and the cell-associated fluorescence intensity was determined. Fig. 5 shows that both sets of plates displayed the same kinetics of uptake of FITC-dextran. Uptake was linear for at least 3 h. The rate of uptake was 4.5 nl/h/10^6 cells. Some of the cultures were assayed for apical G protein. The same kinetics of internalization of G protein were obtained in the absence or in the presence of FITC-dextran (Fig. 5).

[Fig. 3 legend: G protein was implanted (as in Fig. 2) and the cultures were incubated at 31°C in the presence of 20 µg/ml of cycloheximide. One set of cultures was fixed directly and treated with anti-VSV antibody followed by 125I protein A before (●) or after EDTA treatment (○). Two sets of cultures were treated in the cold with saturating amounts of anti-VSV antibody and unlabeled protein A (see text). One set was then fixed (▲) and another treated with EGTA at 31°C before fixation (△). The 125I protein A-binding assay was performed using 33,000 cpm of the label per plate.]
AMINOPEPTIDASE: Next we wanted to see whether internalization of G protein from the apical cell surface was paralleled by uptake of an apical membrane protein of the cell, aminopeptidase. The 125I protein A-binding assay was used to follow the level of aminopeptidase at the apical membrane during the time when G protein rapidly disappeared from the cell surface. After insertion of G protein into the apical surface as before, the cultures were incubated at 31°C in the presence of 20 µg/ml of cycloheximide. One set of plates was assayed for G protein and another for aminopeptidase using the respective antibodies. The amount of G protein decreased by 78% in 15 min (Fig. 6). However, the amount of aminopeptidase barely changed in 15 min and decreased only 10% in 30 min. The same decrease of aminopeptidase at the surface was obtained for cultures which had been incubated at 31°C in the presence of cycloheximide without G protein implantation. Thus, the slight decrease of aminopeptidase at the apical surface was due to the drug or to the shift from 37°C to 31°C, and not to the virus or the low pH treatment.
Inhibition of Redistribution of G Protein
The effect of ammonium chloride and monensin on G protein redistribution was tested, since these agents have been reported to interrupt recycling of cell surface receptors (2, 6, 9, 10, 28). When 20 mM ammonium chloride was added to the incubation medium, implanted G protein was internalized rapidly from the apical surface as in the control cultures. The level of G protein continued to decline and decreased in 60 min to 7% at the apical and to 5% at the basolateral surface (Fig. 7). In parallel controls, ~20% of the G protein could be detected at the apical and ~30% at the basolateral surface after a 60-min incubation. The effect of ammonium chloride on the level of G protein at the cell surface was concentration dependent. Similar effects were obtained with 20 and 10 mM concentrations, but a 2-mM concentration showed no effect within 60 min. Essentially the same results were obtained with 10 µM monensin instead of 20 mM ammonium chloride. The effect of ammonium chloride and monensin on the fate of the implanted G protein could also be seen by indirect immunofluorescent labeling. In the presence of the drugs, G protein disappeared from the apical surface and accumulated in large intracellular vacuoles (Fig. 8). There was no detectable redistribution of G protein to the basolateral surface domain. Thus, G protein accumulated inside the cells in the presence of ammonium chloride or monensin. These results imply that most of the G protein molecules detected at the apical surface under normal incubation conditions did not represent a static and immobile pool, but were also capable of being endocytosed. In contrast to G protein, 20 mM ammonium chloride or 10 µM monensin had no effect on the level of aminopeptidase at the apical surface. In the presence of the drugs the same results were obtained as shown for the control cultures in Fig. 6.

[Fig. 8 legend: G protein was implanted as in Fig. 2, and the monolayers were directly fixed (a-c) or incubated at 31°C for 30 min (d-i) in the absence (d-f) or presence (g-i) of 20 mM ammonium chloride prior to fixation. In a, d, and g the cells were fixed and treated directly with anti-VSV antibody followed by rhodamine-conjugated anti-IgG antibody to stain the apical cell surface. In b, e, and h the tight junctions were opened by EGTA treatment at 31°C before fixation to give the reagents access also to the basolateral cell surface. In c, f, and i the cells were permeabilized with 0.1% Triton X-100 after fixation to visualize internalized antigens. For more detail see reference 17. Bar, 8 µm.]
DISCUSSION
In this study we used a protein A-binding assay to follow the internalization and the reappearance at the cell surface of G protein after implantation into the apical plasma membrane of MDCK cells. Apical proteins could be monitored with the assay after fixation of the cell monolayer. Only the apical surface domain is accessible to the antibodies, because the cells in the monolayer are sealed together by tight junctions (4, 5, 14, 17). Proteins present in the apical and the basolateral surface domains could be monitored after opening the tight junctions by calcium depletion before fixation. Whether all basolateral proteins became accessible to the reagents after the EGTA treatment is difficult to judge. It is possible that the basolateral values obtained with this assay are underestimates because of steric hindrance through cell-cell and cell-substratum interactions.
The protein A-binding assay showed that the implanted G proteins were internalized rapidly from the apical surface. The half-life of the implanted G proteins at the apical surface was <10 min at 31°C. At 10°C internalization was not observed. The process was thus similar, with regard to kinetics and temperature dependence, to receptor-mediated endocytosis (see reference 26). The implantation procedure did not perturb the properties of the apical membrane, as judged by two parameters. First, we found no change in the rate of fluid-phase endocytosis after fusion of the virus with the plasma membrane. Second, we could not detect any appreciable loss of aminopeptidase from the apical membrane during the internalization of implanted G protein.
The morphological studies in the preceding paper (17) showed that the G protein was not only endocytosed after implantation, but that a portion was redistributed to the basolateral surface. Some also seemed to be recycled to the apical surface. Here the redistribution of the endocytosed G protein to the cell surface could be quantitated using the protein A-binding assay. The rate of appearance of G protein at the basolateral surface varied between experiments (cf. Figs. 2 and 7), but 15 min after implantation some basolateral G protein was usually detectable. After 60 min the fraction of G protein at the basolateral surface was estimated to be 25-48% of the implanted proteins. The routing of G protein to the basolateral cell surface could be almost completely inhibited both by the carboxylic ionophore monensin, which catalyzes the exchange of Na+ and H+ across biological membranes (22), and by ammonium chloride. The latter weak base accumulates in acidic compartments and increases their pH (18, 21). In the presence of these drugs the apical surface was practically cleared of G protein and the protein accumulated in intracellular vacuoles. Previous studies have shown that the recycling of cell surface receptors can be inhibited to varying degrees by these drugs (2, 6, 9, 10, 28). Exactly how they exert their action is not known, but evidence is accumulating that endosomes might be the site at which these drugs affect recycling. Recent studies have shown that endosomes as well as lysosomes have an acidic pH (13, 18, 23, 29). An increase in the endosomal pH may prevent dissociation of ligand-receptor complexes and recycling to the cell surface. The immunoperoxidase labeling studies revealed most of the intracellular G proteins in endosomes, some in multivesicular bodies, and very rarely in secondary lysosomes in the first 10 min after implantation (17). Moreover, virtually no degradation of the viral polypeptides was observed within 30 min after implantation at 31°C. Thus, lysosomes may not be an obligatory intermediate on the transepithelial transport route. It seems more likely that the endosome is the organelle from which the internalized G protein is routed to the basolateral surface (cf. reference 1), and that it is here that monensin and NH4Cl block the transepithelial transport of the G protein. Further studies are underway to characterize the organelles involved in the transepithelial route.
Our studies suggest that the apical and the basolateral surface domains are connected by an intracellular route in MDCK cells. An implication of these findings is that continuous sorting of membrane components would have to take place to maintain the unique composition of the apical and the basolateral surface domains. We are now studying the fate of the influenza virus hemagglutinin implanted by low pH fusion into the apical plasma membrane. Hemagglutinin appears at the apical surface of MDCK cells after de novo synthesis (25). A future goal of this work will be to find out where in the cell apical and basolateral proteins are sorted from each other during endocytosis.
"year": 1983,
"sha1": "0d1e59b3b906b4250e222a5b2347d55b2c8eb2cb",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/97/3/638/1401226/638.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0d1e59b3b906b4250e222a5b2347d55b2c8eb2cb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
The Aftermath of Efficacy
As a member of the US Preventive Services Task Force for 12 years and its chair for 5, I have given countless presentations on Task Force recommendations. During questions and comments, clinicians often voice frustration and even hostility to the recommendations because the Task Force does not advise on exactly what should happen next. My response is that, indeed, the science is the easy part and that effective implementation is hard, particularly in a health care environment that discourages system innovation. Although the process of sifting through the medical literature to determine the quality of evidence that supports (or fails to support) a preventive intervention is time consuming, costly, and, at times, tedious, it is not particularly complicated. If the Task Force has set up the analytic framework correctly, identified the key questions, and carefully followed protocol in completing the evidence review, the conclusions usually follow easily. Screening for colorectal cancer is a good example of where the science showing efficacy of screening is fairly straightforward, but where the paths to effective implementation are not. In its updated recommendation released in October 2008, 1 the Task Force has judged that screening works, but then what?
Four articles in the current issue of the Annals of Family Medicine address "then what" in different ways. Potter and colleagues show that offering fecal occult blood kits to patients during flu shot clinics increased screening from 57% to 84%. 2 Using data from the Behavioral Risk Factors Surveillance System, Cardarelli and Thomas show that having a personal health care provider is associated with a 3-times higher likelihood of screening. 3 Jimbo and colleagues examined reasons that positive fecal occult blood tests were not followed up, finding that such decisions were at variance from established guidelines or could not be determined in nearly one-half. 4 Finally, Wilkins' group conducted a quantitative meta-analysis of the literature on the outcomes of screening colonoscopies performed by primary care physicians, showing that quality, safety, and efficacy are similar to indicators proposed by specialty professional groups. 5 These new articles are high-quality work, showing imagination and skill on the part of the investigators. My comments briefly address the audiences of physicians, patients, and policy makers.
Physicians eager to implement prevention in practice have long recognized the importance of an established relationship and of using every opportunity to sneak in indicated preventive interventions, the messages from Cardarelli and Thomas and Potter et al, respectively (although the Potter et al strategy falters as the list of potential add-on interventions grows). Screening and follow-up according to protocol are essential to realize the full benefits, the message of Jimbo et al. Health care teams seeking to implement prevention in practice, perhaps as part of their work on the medical home, should be encouraged by these new studies to establish relationships, order indicated prevention during unrelated office visits, and adhere to proven protocols. Such resources as the AHRQ's Put Prevention into Practice program for clinicians already promote these messages. 6 Although the issue of colonoscopy is clearly of interest to individual clinicians, in my view the obstacles to having primary care physicians perform colonoscopies are principally political, economic, and medicolegal, not the issue of clinical competence, as examined by Wilkins et al. Nonetheless, their new meta-analysis is published at a critical time as the debate evolves with payers, specialty professional groups, and malpractice carriers.
And the patient? First, recall that although the benefits of colorectal screening are huge when the small risk reduction from screening is multiplied across the population, even in ideal circumstances most patients do not directly benefit: that is, they do not now and never will have colorectal cancer. They may experience some reassurance from a negative test result, but they otherwise experience only the inconvenience, discomfort, and costs of screening, the follow-up of false-positive results, and, in the case of screening colonoscopy, rare morbidity and mortality from the procedure itself or from removing polyps that would never become cancers. Further, if the large clinical trials are true, even among those who are screened, colorectal cancer mortality is reduced by no more than one-third, probably less, and all-cause mortality perhaps not at all. The number needed to screen to prevent 1 colorectal cancer-associated death is more than 1,000. 7 Thus it is difficult enough for a patient to benefit under the best circumstances, so that the obstacles addressed in these 4 studies further illustrate the challenge.
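The relationship between a small absolute risk reduction and a large number needed to screen can be made concrete with one line of arithmetic: NNS = 1 / absolute risk reduction. The numbers below are illustrative assumptions chosen only to show the scale, not figures from the trials cited here.

# Illustrative arithmetic: number needed to screen (NNS) = 1 / ARR.
baseline_crc_mortality = 0.003   # assumed CRC death risk without screening
relative_risk_reduction = 0.25   # assumed mortality reduction from screening

arr = baseline_crc_mortality * relative_risk_reduction  # absolute risk reduction
nns = 1.0 / arr
print(f"ARR = {arr:.5f}; NNS = {nns:,.0f} screened per death prevented")
# -> NNS on the order of 1,300, consistent with "more than 1,000."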
Against this sobering backdrop, I marvel at how clinicians and patients maintain enthusiasm for screening. Family physicians who meticulously adhere to screening protocols may prevent only a handful of colorectal cancer deaths during an entire career, and they may not recognize it when it occurs (the problem of proving that something did not happen), so they receive little positive reinforcement for their effort. Most patients benefit only in the reassurance from a negative test result and along the way may have unneeded testing in response to positive screening test results that prove to be false-positives. These issues are fully aired in Gilbert Welch's highly recommended 2004 book on cancer testing, Should I Be Tested for Cancer: Maybe Not and Here's Why. 8 I believe the most important audience for this new research should be policy makers, underscoring issues long neglected in our broken health care system and extending far beyond the particular question of colorectal cancer screening. Here is more evidence that having a personal health care provider matters. Here is more evidence that we need systems to deliver indicated services regardless of reason for visit.
Here is more evidence that we need systems to ensure adherence to proven clinical protocols. Here is more evidence that we need to find ways around irrational limitations on clinicians who could competently provide indicated services. These findings are potentially useful in the provision of many clinical interventions, not just those related to cancer screening. I hope that the current national environment for health care, with its many voices clamoring for change, will at last make it possible to address these issues as public policy is reshaped.
"year": 2009,
"sha1": "8c15d1075995d79359306360395b8f24012dd374",
"oa_license": null,
"oa_url": "http://www.annfammed.org/content/7/1/3.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "a9b5de3bfd244e6fe08be45b860b424a85d855ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of left ventricular systolic function by Myocardial Deformation Imaging in asymptomatic HIV patients
Background and Aims: Despite improvements in clinical care, evidence from both industrialized and developing countries indicates that the prevalence of subclinical cardiac dysfunction in individuals with well-controlled HIV infection may approach 50% and represent a newly recognized comorbid condition. The aim of our study was to reveal abnormalities in cardiac function using conventional transthoracic echocardiography and left ventricular strain imaging in HIV-infected patients without cardiovascular disease. Methods: This was a hospital-based, single-center, descriptive cross-sectional comparative study conducted at the National Academy of Medical Sciences (NAMS), Bir Hospital, which included HIV patients with a baseline examination comprising a medical history, clinical examination, baseline CD4 count, viral load, and a standardized transthoracic echocardiography and strain imaging examination; the findings were compared with those of an age- and sex-frequency-matched healthy adult population. Results: Our study enrolled 142 participants, of whom 95 were HIV-positive patients (mean age 36.7±9.2 years, 58% female) and 47 were healthy controls (mean age 33.7±8 years, 57.4% female). The median duration of HIV diagnosis was 7 years (IQR 2, 10) and the median CD4 count was 464 cells/mm3 (IQR 259, 750). There was no significant difference in conventional echocardiographic parameters between the two groups except for transmitral E velocity, which was lower in the HIV group (p value of 0.001). The HIV population had a lower mean global longitudinal strain (GLS) value of -19.92% ± 2.54 SD compared with the healthy control population, whose mean was -21.39% ± 1.54 SD (p value of 0.001), and patients with a CD4 count less than 300 cells/mm3 had GLS values significantly lower than -18% (p value of 0.05). Conclusion: The HIV-infected population without established cardiovascular disease has subclinical left ventricular dysfunction revealed by the GLS imaging technique.
Introduction
Despite improvements in clinical care, evidence from both industrialized and developing countries indicates that the prevalence of subclinical cardiac dysfunction in individuals with well-controlled HIV infection may approach 50% and represent a newly recognized comorbid condition 1,2. Many studies have reported a strong association between human immunodeficiency virus (HIV) infection and cardiac abnormalities, which are closely associated with high morbidity and mortality 3,4. Cardiovascular manifestations follow two clinical patterns: 6-7% of HIV-infected patients have significant cardiac disease, while the remainder are asymptomatic.
Methods
The study was a hospital-based, single-center, descriptive cross-sectional comparative study conducted over 6 months (December 2018 to May 2019) at the National Academy of Medical Sciences (NAMS), Bir Hospital. It included HIV patients visiting the anti-retroviral therapy (ART) clinic of Bir Hospital, with a baseline examination comprising patient history, medication treatment, baseline CD4 count and viral load. Furthermore, noninvasive cardiac tests such as heart rate, blood pressure, and a standardized transthoracic echocardiography and strain imaging examination were part of the study protocol.
To compare the findings, we also included an age- and sex-frequency-matched healthy adult population. We recruited 95 HIV patients, with the sample size calculated from the estimated prevalence of systolic cardiac dysfunction of 34.3% reported by a large, prospective multicenter HIV-HEART study 1. The inclusion criteria were HIV infection and age >18 years; the exclusion criteria were pre-existing cardiac disease, pregnancy, diabetes, hypertension, dyslipidemia, renal disease, liver disease, Centers for Disease Control and Prevention (CDC) Class C status, and refusal to give consent. We also included a control group at a 1:2 control-to-case ratio, with a total of 47 age- and sex-frequency-matched healthy adults, to compare our findings between HIV and non-HIV participants. Formal permission for the study was obtained from the institutional review board of NAMS, and informed consent was taken from the study population.
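A sample size for estimating a prevalence is conventionally obtained from n = Z² p (1 − p) / d². The sketch below applies this formula to the 34.3% prevalence cited; the precision d is not stated in the text, so the values used here are assumptions chosen to show that a figure near the 95 enrolled patients results.

import math

def prevalence_sample_size(p, d, z=1.96):
    """n = Z^2 * p * (1 - p) / d^2 for estimating a proportion p to precision d."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

p = 0.343  # prevalence of systolic dysfunction from the HIV-HEART study
# Assumed absolute precision (not stated in the paper):
print(prevalence_sample_size(p, d=0.095))  # -> 96, close to the 95 enrolled
print(prevalence_sample_size(p, d=0.10))   # -> 87, for comparison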
Transthoracic echocardiography was performed with a commercially available imaging system (Philips Affiniti 50C echocardiography machine) using a 2.5 MHz phased-array transducer. Cardiac dimensions and cardiac function were measured according to the recommendations of the American Society of Echocardiography 9. Global longitudinal strain (GLS) was calculated as the mean longitudinal strain of the six walls (basal, mid and apical segments) in the apical view. The software automatically displayed an epicardial tracing to include the entire myocardial width, which was later adjusted manually for optimal tracking. A GLS value lower in magnitude than -18% was considered abnormal for detecting LV systolic dysfunction, as a value above −20%, with a standard deviation of ~±2% (the value cited by the American Society of Echocardiography), is likely to be normal 10. Intraobserver reproducibility was determined by an echocardiographer's own repeated analysis, and interobserver reproducibility was determined from the analyses of two echocardiographers, for strain imaging repeated after 2 weeks on stored offline images of 15 randomly selected study participants.
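Operationally, the GLS figure is just the mean of the segmental longitudinal strains with a threshold applied. The sketch below illustrates that bookkeeping; the segment values are hypothetical, and the -18% cutoff is the one used in this study.

# Hypothetical segmental longitudinal strains (%): six walls x (basal, mid, apical).
segments = [
    -19.8, -20.4, -21.1,   # wall 1: basal, mid, apical
    -18.9, -19.5, -20.7,   # wall 2
    -17.6, -18.8, -19.9,   # wall 3
    -20.2, -21.0, -22.3,   # wall 4
    -18.4, -19.1, -20.0,   # wall 5
    -19.3, -20.6, -21.4,   # wall 6
]

gls = sum(segments) / len(segments)
abnormal = gls > -18.0  # magnitude below 18% flags possible LV dysfunction
print(f"GLS = {gls:.2f}% -> {'abnormal' if abnormal else 'within normal range'}")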
Results
Our study enrolled 142 participants, of whom 95 were HIV-positive patients (mean age 36.7±9.2 years, 58% female) and 47 were healthy controls (mean age 33.7±8 years, 57.4% female). The clinical characteristics of the HIV-infected population are presented in Table 1 and the comparison with the control group in Table 2. The median duration of HIV diagnosis was 7 years (IQR 2, 10) and the median CD4 count was 464 cells/mm3 (IQR 259, 750). Among the HIV patients, 71.6% had viral loads <50 copies/ml and 87.4% were taking antiretroviral therapy (ARV). The median ARV duration was 5 years (IQR 1, 10). With respect to age, sex and blood pressure there were no significant differences between the two groups; however, the HIV group had a mean BMI of 22.5 ± 2.87, significantly lower than that of the control group (BMI 23.79 ± 3.13) (p value of 0.02).
The conventional and strain echocardiographic parameters of the HIV-infected population and the healthy control population are shown in Tables 3 and 4, respectively. There were no significant differences in conventional echocardiographic parameters between the HIV population and the healthy population except for transmitral E velocity, which was significantly lower in the HIV group (p value of 0.001).
Among asymptomatic patients, 8-10% develop symptomatic heart disease over a two- to five-year period, which constitutes an independent predictor of mortality 5,6.
HIV infection itself is accompanied by subclinical left ventricular systolic dysfunction that is not apparent on standard echocardiography but can be unmasked using sensitive echocardiographic techniques. Global longitudinal strain (GLS) echocardiographic imaging allows a more direct assessment of myocardial muscle shortening and lengthening throughout the cardiac cycle by assessing myocardial strain, and can unmask left ventricular dysfunction in asymptomatic patients 7,8. As heart failure is often recognized late in HIV-infected patients, early detection of left ventricular dysfunction is crucial.
Very few studies around the globe have examined GLS imaging, showing that it can unmask LV systolic dysfunction in asymptomatic HIV patients that is not apparent on standard transthoracic echocardiography, and no such studies have been done in our part of the world. Hence, the present study aimed to examine a population of asymptomatic HIV-infected patients with GLS imaging for early detection of left ventricular systolic dysfunction.
Statistical analysis
All data were entered into an electronic spreadsheet (Microsoft Excel, Redmond) and the statistical analysis was done using SPSS version 20 software (SPSS Inc., Chicago, Ill). Categorical variables were analyzed as percentages; continuous variables with a normal distribution are presented as mean ± SD, and continuous variables with a skewed distribution are presented as median and interquartile range (IQR). After processing of all available information, statistical analysis of significance was performed. Dichotomous variables were compared using the chi-square test or Fisher's exact test, as appropriate, and the independent t test was used for means of continuous variables. A p value of less than 0.05 was considered significant. Coefficient-of-variation analyses were performed for intra- and inter-observer reproducibility of strain imaging echocardiography.
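The comparisons described here map directly onto standard library calls. The sketch below shows the pattern with scipy; all of the data arrays and counts are hypothetical placeholders, not the study's measurements.

import numpy as np
from scipy import stats

# Hypothetical GLS measurements (%) for the two groups.
gls_hiv = np.array([-19.5, -18.2, -21.0, -17.9, -20.3, -19.1])
gls_ctrl = np.array([-21.4, -22.0, -20.9, -21.7, -21.2, -21.5])

# Independent-samples t test for means of a continuous variable.
t, p = stats.ttest_ind(gls_hiv, gls_ctrl)
print(f"t = {t:.2f}, p = {p:.4f}")

# Chi-square test for a dichotomous variable (hypothetical 2x2 counts).
table = np.array([[12, 83],   # HIV: abnormal GLS, normal GLS
                  [2, 45]])   # control: abnormal GLS, normal GLS
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Observer reproducibility as a coefficient of variation (one common recipe):
repeat_1 = np.array([-19.5, -18.2, -21.0])
repeat_2 = np.array([-19.1, -18.6, -20.7])
pair_mean = (repeat_1 + repeat_2) / 2
cv = np.mean(np.abs(repeat_1 - repeat_2) / np.abs(pair_mean))
print(f"CV ~ {cv:.1%}")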
Discussion
HIV infection itself is accompanied by subclinical systolic dysfunction, not apparent on standard echocardiography, that can be unmasked using sensitive echocardiographic techniques. When present, early management of cardiovascular abnormalities in these patients may improve their well-being and survival. Currently, left ventricular ejection fraction, as assessed by conventional echocardiography, is one of the most commonly used markers to evaluate LV systolic function 1. However, this method has several limitations, such as geometric assumptions, foreshortening, load dependency, interobserver variability, and the influence of heart rate. Strain imaging (SI) has been able to detect subclinical myocardial dysfunction at an earlier stage than conventional imaging in a number of diseases 11. This method has also been shown to evaluate LV systolic function more comprehensively and reliably than conventional echocardiographic methods 12,13. It is an important marker for detecting subclinical LVSD with high sensitivity and specificity 14. Very few studies have evaluated GLS for detecting subclinical dysfunction in asymptomatic HIV patients.
The main analysis of the present study showed that asymptomatic HIV-infected patients without cardiovascular disease had significantly lower GLS values, despite having normal LV systolic function, compared with healthy individuals, a finding similar to the previous study by Mendes et al 15. Our study evaluated the relationship of LV function assessed by GLS in HIV patients to baseline CD4 count, HIV viral load, duration of HIV diagnosis, HAART therapy and HAART therapy duration, and showed that patients with a CD4 count less than 300 cells/mm3 had GLS values significantly lower than -18% compared with those with higher CD4 counts. Previous studies by Onur et al 18 and Karavidas et al 19 found no relationship between CD4 T-cells and systolic strain, but a later study by Cetin et al 16 showed a positive correlation between reduced CD4 count and GLS value, though without a significant difference between CD4 groups below and above 300. There were no significant differences in conventional echocardiographic parameters between the HIV population and the healthy population except for transmitral E velocity (p value of 0.001), which was significantly lower in the HIV group; this finding is similar to the studies by Karavidas et al 19 and Mendes et al 15. Diastolic function abnormalities such as a reduced transmitral E velocity, an increased peak A velocity, and increased isovolumetric relaxation time and early filling duration have been described in HIV patients, with wide variability in incidence 20,21. In our study only the reduced transmitral E velocity reached significance, probably because impairment of LV contraction, as detected with strain imaging, could precede the development of diastolic function abnormalities, or because the small study sample including only asymptomatic young HIV patients may have prevented us from reaching significant differences in other diastolic parameters.
Conclusion
The GLS imaging technique is a valuable tool for detecting subclinical LV dysfunction in asymptomatic HIV patients, which is often overlooked by conventional echocardiography. It is likely that this subclinical LV dysfunction may later progress, with a higher incidence of cardiomyopathy and heart failure in HIV-infected patients. Thus, a low threshold for using this technique should be applied in asymptomatic HIV patients, especially in those with lower CD4 levels, for early detection of subclinical LV dysfunction. Although our study revealed that early subclinical dysfunction can be unmasked by GLS imaging, further large-scale, long-term follow-up studies are needed to address the mechanisms involved and to determine whether the reported low GLS values would translate into a worse prognosis.
Limitation of the study
The study was a descriptive cross-sectional study; CD4 T-cell counts were measured only in HIV-infected patients, not in healthy individuals; and indicators of myocardial fibrosis, such as MRI imaging and biochemical variables, could not be investigated.
The strength of the present study was that our study population included 57% females, with a larger sample size and a younger population than the previous study by Karavidas et al.
"year": 2019,
"sha1": "697763234cb24a38fad5b331c0b6ab4236adf887",
"oa_license": "CCBY",
"oa_url": "https://www.nepjol.info/index.php/NHJ/article/download/26310/21914",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f659309a22529d5035c87d9184afb65ccac95f3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Designation and Validation of a Posterior Anatomical Plate for the Anterior Column of the Acetabulum
Background: Surgical treatment of acetabular fractures is one of the greatest challenges for orthopedic surgeons. Fixation of most displaced fractures requires extensive exposure, which may lead to complications, including blood loss, neural or vascular injury, postoperative infection, wound healing problems, and heterotopic bone formation. Material/Methods: This study was conducted to validate an anatomic plate with an anterior column lag screw guiding device to repair the posterior acetabulum. Complete pelvic spiral computed tomography (CT) scan data were collected from 56 patients. Lag screw placement through the posterior column of the acetabulum was simulated. The guiding device for the plate was designed by measuring the position of the screw entry point and the direction and maximum diameter of the screw. Results: The distance from the screw point to the apex of the greater sciatic notch was greater in women than in men, as was the distance from the screw point to the ischial spine. The θ angle (anteversion angle) of the screw was lower in women than in men, whereas the ϕ angle (camber angle) was greater in women than in men. The success rate when using the guiding device was significantly higher than with traditional pedicle screw placement. Conclusions: The guiding device was very useful for improving the placement success and accuracy rates of antegrade anterior column lag screws through the acetabular posterior anatomical plate, and for reducing surgical risk and injury.
Background
Surgical treatment of acetabular fractures is one of the most challenging procedures for orthopedic surgeons [1,2]. Open reduction and internal fixation are the gold standard for displaced fractures involving the weight-bearing dome and fractures with intra-articular fragments [3]. Traditional treatment methods involve inter-fixation and restoration of articular anatomy with stable internal fixation to allow early mobilization of the patient. Fixation of most displaced fractures requires extensive exposure, which may lead to complications, including blood loss, neural or vascular injury, postoperative infection, wound healing problems, and heterotopic bone formation [4][5][6][7]. In this study, the anatomic parameters of the screw entry point and entry orientation were obtained by computer-aided design and computer-aided manufacturing (CAD/CAM) methodology and computed tomography (CT) scans. We designed an anatomic plate with a lag screw guiding device to repair the posterior and anterior columns of the acetabulum.
Material and Methods
Between 2012 and 2014, we treated 56 patients (27 men and 29 women) who had acetabular fractures. All patients underwent an emergency multi-directional radiographic examination and CT and three-dimensional (3D) CT scans (GE CTT 8800, General Electric, Milwaukee, WI, USA). A 3D model of the reconstructed pelvis was obtained using Mimics 15.0 software (Materialise NV, Leuven, Belgium) (Figure 1).
Simulated insertion of the acetabular posterior column lag screw
We designed the plate to run from the sciatic notch along the surface of the posterior column of the acetabulum, stopping at the ischial tuberosity. The following four requirements, based on the plate design and the need to expose the operative field, had to be satisfied to fix the anterior column of the acetabulum: 1) the screw entry point cannot lie above the apex of the greater sciatic notch, so that the operative field can be exposed during the operation; 2) the tip of the screw must be located in the front of the pubic bone; 3) the lag screw cannot enter the joint cavity or pierce the cortical bone; 4) the diameter of the lag screw must be as large as possible (≥4 mm) to provide sufficient fixation strength. The reconstructed 3D model was rotated 90° ipsilaterally (equivalent to a lateral pelvic X-ray view) using translucent processing software. A cylinder was built with Med CAD (Dallas, TX, USA). The 3D cylinder (i.e., the lag screw) was placed virtually in the anterior acetabular column according to the position requirements and was inspected in the horizontal, coronal, and sagittal planes of the 3D model to ensure that it did not enter the joint cavity or pierce the bone cortex. The maximum diameter of the lag screw was determined by gradually increasing the diameter of the 3D cylinder in increments of 0.1 mm until immediately before it pierced the cortical bone (Figure 2).
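The maximum-diameter determination described here is a simple monotone search over a containment test. The sketch below illustrates the idea on a voxelized bone mask; the mask, the axis geometry and the containment predicate are hypothetical stand-ins for what Mimics/Med CAD provides interactively, not the software's actual API.

import numpy as np

def cylinder_fits(bone_mask, p0, p1, radius_mm, voxel_mm):
    # Hypothetical containment test: every voxel closer than radius_mm to the
    # screw axis must lie inside the bone mask (otherwise the cortex is breached).
    # Distance is taken to the infinite axis line; end clipping is omitted, and
    # the brute-force sweep over all voxels is acceptable only for a sketch.
    coords = np.argwhere(np.ones(bone_mask.shape, dtype=bool)).astype(float) * voxel_mm
    axis = (p1 - p0) / np.linalg.norm(p1 - p0)
    d = np.linalg.norm(np.cross(coords - p0, axis), axis=1)
    near_axis = d <= radius_mm
    return bool(bone_mask.ravel()[near_axis].all())

def max_screw_diameter(bone_mask, p0, p1, voxel_mm, step=0.1, start=4.0):
    # Grow the virtual screw in 0.1 mm increments, as in the simulation,
    # stopping just before the next increment would pierce the cortex.
    diameter = start
    while cylinder_fits(bone_mask, p0, p1, (diameter + step) / 2.0, voxel_mm):
        diameter += step
    return diameter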
Determining and measuring the screw point and angles
Point O, where the cylinder exits through the rear of the acetabular posterior column, is the entry point for the antegrade lag screw, with the greater sciatic notch apex (A) and the ischial spine (B) as reference points. "Measure 3D Distance" in "Tools" was selected to measure the distance OA between the screw entry point and the greater sciatic notch apex, as well as the distance OB between the screw entry point and the ischial spine. Finally, the screw entry point was incorporated into the posterior acetabular anatomical plate data, and the most proximal screw hole was used as the guide for the anterior column antegrade lag screw, to prepare the locking hole and calibrate the entry angle of the screw (Figure 3A).
The entry angle of the antegrade lag screw, that is, the angle formed between the cylinder and the posterior surface of the acetabular posterior column, was measured using the parallel line OT and the perpendicular line OP of the medial margin of the acetabular posterior column as reference lines. "Measure 3D Angle" in "Tools" was selected to measure the angle formed between the cylinder and line OP (extraversion angle, ∠ϕ), representing the tilt of the screw in the medial-lateral direction, and the angle formed between the cylinder and line OT (anteversion angle, ∠θ), representing the tilt of the screw in the anteroposterior direction (Figure 3B).
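The distance and angle measurements performed interactively in Mimics reduce to elementary vector operations. The sketch below reproduces them with numpy; the coordinates are hypothetical landmark positions, not patient data.

import numpy as np

# Hypothetical landmark coordinates (mm) from a segmented pelvis model.
O = np.array([12.0, -34.5, 88.0])   # screw entry point
A = np.array([25.3, -20.1, 95.4])   # greater sciatic notch apex
B = np.array([40.2, -70.8, 60.1])   # ischial spine
screw_dir = np.array([0.35, 0.82, 0.45])  # screw axis direction
ref_line = np.array([0.0, 1.0, 0.0])      # reference line (e.g., OT)

def dist(p, q):
    return float(np.linalg.norm(p - q))

def angle_deg(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(f"OA = {dist(O, A):.1f} mm, OB = {dist(O, B):.1f} mm")
print(f"theta = {angle_deg(screw_dir, ref_line):.1f} deg")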
Fabrication and preliminary validation of the guide
The measured parameters were saved as .stl files. The solid lag screw guiding device was created using a digital milling machine. The success rate of guide placement was verified on 19 dried pelvic specimens (11 male and 8 female) using the auxiliary lag screw guide on the left side; the specimens received traditional screw placement in the right hemipelvis.
The criterion for successful screw fixation via the posterior approach was that the tip of the guiding needle lay on the superior ramus of the pubis and in front of the iliopectineal tuberosity (Figure 4); screw fixation was considered unsuccessful if the needle entered the acetabulum or broke through the medial or lateral bone cortex of the pubic ramus.
Statistical analysis
Continuous variables are expressed as means ± standard deviations. Differences between groups were detected by one-way analysis of variance. All analyses were performed using SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). A p value <0.05 was considered statistically significant.
Screw point and angles
The mean OA distance was 21.11±4.19 mm, the mean OB distance was 56.18±2.01 mm, the mean camber angle ϕ was 68.51±4.52°, and the mean dip angle θ was 73.67±3.17°. The mean maximum diameter of the screws placed was 6.51±2.14 mm. Significant differences were detected in the OA and OB distances, as well as in the inclination angle ϕ and the anteversion angle θ, between males and females (Table 1).
Success rate of the guide-assisted screw set
The success rate of guide-assisted screw placement was 84.21%, versus 31.58% for conventional pedicle screw placement (p<0.05). These results suggest that the lag screw guide significantly improved the success and accuracy rates of screw placement.
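With 19 specimens per side, the reported rates correspond to 16/19 and 6/19 successes; whether that difference clears p<0.05 can be checked with an exact test. The sketch below assumes those counts (they follow from 84.21% and 31.58% of 19, but the paper does not state them explicitly).

from scipy import stats

# 84.21% of 19 = 16 successes (guide); 31.58% of 19 = 6 successes (conventional).
table = [[16, 3],   # guide-assisted: success, failure
         [6, 13]]   # conventional:   success, failure
odds_ratio, p = stats.fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.1f}, p = {p:.4f}")  # p well below 0.05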
Discussion
Acetabular fractures are among the most severe and complex injuries commonly seen in the clinic. An epidemiological survey showed that the mean annual incidence of acetabular fractures is 3/100,000 [8,9]. Acetabular fractures caused by traffic injuries are increasing due to population aging and the increased use of automobiles for transportation [10][11][12][13][14][15]. High-quality anatomical reduction and reliable internal fixation are key to improving the treatment of intra-articular acetabular fractures [10,16,17]. Reconstruction with plate and screw fixation is the most widely used method to treat acetabular fractures [15].
In this study, differences in pelvic anatomy between males and females were considered when selecting the entry point. Because the female pelvis is smaller and the pubic bone more slender, the safe corridor for the screw in the anterior column channel may be limited. The screw should therefore be placed in a more upward and outward position in women than in men, to make better use of the medullary cavity of the pubic ramus, and the screw angle is important for choosing the ideal entry point. A hook was designed to rest against the greater sciatic notch, based on the distance between the screw entry point and the greater sciatic notch vertex, so that the location of the plate and the position of the screw entry point were fixed.
The angle of the screw was key in the plate design. The angle between the plane of the lag screw and the screw entry point was measured; the entry point was lower in males than in females, so the anteversion angle of the lag screw was larger. The screw neither entered the acetabulum nor breached the medial cortical bone. In the horizontal section, the lag screw path was kept as far as possible within, and in line with, the distal half of the pubic ramus.
"year": 2017,
"sha1": "794ca5f36de6e0f7c43edbfac3d41dd7efa93550",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5331886?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "794ca5f36de6e0f7c43edbfac3d41dd7efa93550",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Models of Classroom Assessment for Course-Based Research Experiences
Course-based research pedagogy involves positioning students as contributors to authentic research projects as part of an engaging educational experience that promotes their learning and persistence in science. To develop a model for assessing and grading students engaged in this type of learning experience, the assessment aims and practices of a community of experienced course-based research instructors were collected and analyzed. This approach defines four aims of course-based research assessment – 1) Assessing Laboratory Work and Scientific Thinking; 2) Evaluating Mastery of Concepts, Quantitative Thinking and Skills; 3) Appraising Forms of Scientific Communication; and 4) Metacognition of Learning – along with a set of practices for each aim. These aims and practices of assessment were then integrated with previously developed models of course-based research instruction to reveal an assessment program in which instructors provide extensive feedback to support productive student engagement in research while grading those aspects of research that are necessary for the student to succeed. Assessment conducted in this way delicately balances the need to facilitate students' ongoing research with the requirement of a final grade without undercutting the important aims of a CRE education.
INTRODUCTION
Recent educational initiatives in STEM are facilitating widespread implementation of course-based research experiences (CRE) because they increase persistence for students across many demographics (Russell et al., 2007; Jordan et al., 2014; Hanauer et al., 2017; Hernandez et al., 2018). This educational approach is characterized by having students involved in conducting and contributing to authentic scientific research projects (Hanauer et al., 2006, 2012, 2016, 2017; Hanauer and Dolan, 2014; PCAST, 2012; Graham et al., 2013; Auchincloss et al., 2014; Hernandez et al., 2018). Recent research on the pedagogical approach to teaching a CRE describes how this educational design transitions the ways in which instructors teach and the way in which the relationship between the instructor and the student is conceptualized and manifested (Hanauer et al., 2022). In particular, the hierarchy which is so prevalent in most educational settings is flattened slightly, with the instructor and student working together on a shared research project (Hanauer et al., 2022). The expertise of the instructor is utilized in supporting a research process, the outcomes of which are not necessarily known (Auchincloss et al., 2014). For both instructor and student, the research is ongoing and to a degree unpredictable. Timing for various outcomes may vary across students and projects, the type of interaction and expertise that the instructor has to provide may change, and broadly the instructor and student need to be flexible in the ways in which they interact around the emerging scientific work. Hanauer et al. (2022) describe in detail the nature of this pedagogy and the ways in which instructors work with students in teaching a CRE.
While the pedagogical implementation of a CRE transitions the relations between instructor and student, the institutional requirement for a grade has not changed. Classroom grading is a significant and ubiquitous practice in STEM education in general and is a requirement whether the class is a CRE or not. The specific nature of a CRE raises several problems in relation to classroom grading. How does a teacher maintain the process of "shared" scientific research that is important beyond the classroom, if the instructor is "grading" the student on in-class tasks? When the nature of a class is not dictated by delimited content knowledge or a prescribed set of skills, what are the aims of assessment within a CRE? How does an instructor support and encourage a student during the challenges and potential failures of authentic science, if both student and instructor know that they need to assign a grade for the work being conducted? Broadly the problem of assessing and grading students in a CRE is that the CRE aims to provide a professional, authentic research experience in which the student feels that they are scientists. Grading seems quite artificial in this particular educational design.
Prior approaches to assessing a student's scientific inquiry divide into two camps: analytic schemes and authentic task modelling. Early work used an analytic scheme to define the components of scientific inquiry and suggested methods for assessing each of the parts in isolation. For example, Zachos (2004) delineates the core capabilities of scientific inquiry to include coordinating theories, searching for underlying principles, being concerned with precision, identifying sources of error in measurement and proportional reasoning, and suggests these should be used in the design of a series of performance tasks. Wenning (2007) designed a multiple-choice test of the components of a scientific inquiry such as identifying a problem, formulating a hypothesis, generating a prediction, designing an experiment, collecting and organizing data, using statistical methods and explaining results. Shavelson et al. (1998) proposed using a range of performance tasks to evaluate the scientific inquiry abilities of students. What these approaches have in common is the idea that the grading of scientific inquiry can be externalized from the actual research that the student is doing. A set of skills and abilities that are relevant for scientific research are evaluated in a context that is beyond the actual project a student is doing.
The second camp proposed modelling authentic activity. In principle, if a CRE involves authentic research which produces scientific findings useful for a scientific community and the student is seen as a researcher, it would be logical that the evaluation of the student's work would be situated in the ways professional scientists are evaluated. However, practically, waiting for a paper to be published or a poster presented at a professional conference would be problematic both in relation to timing and the threshold level for successful student outcomes. Instead, Hanauer, Hatfull & Jacobs-Sera (2009) proposed an approach termed Active Assessment which analyzes the professional research practices of a specific research project and then uses these as a way of generating a rubric for evaluating student work. Assessment is done on the student as they work through the scientific inquiry they are involved in. A similar approach has been proposed by Dolan and Weaver (2021). What characterizes this approach are the ideas that assessment and grading should be situated in the performance of a student while conducting research in the CRE and that this assessment should be based on professional performance.
However, while this second approach offers a conceptual basis for how assessment in a CRE could be conducted, it is not based on data from actual instructors teaching a CRE. The aim of this study is to look at how experienced instructors in a large-scale CRE program, the Science Education Alliance (SEA) program of the Howard Hughes Medical Institute (HHMI), describe their processes of assessing their students engaged in course-based research. Working with this large community of experienced CRE instructors over a two-year period, models of CRE assessment were developed. In addition, this current paper builds upon prior research on models of CRE instruction, which were similarly developed with this community of SEA instructors (Hanauer et al., 2022). The outcome of this study thus provides insight into how CREs can be assessed and graded while maintaining the pedagogical approach designed to provide an authentic research experience for students.
Issues with Assessment and Grading
In a classic text, Walvoord and Anderson (1998) specify a series of basic roles that grading is expected to perform: 1) It should be a reliable measure of a student's performance of required work; 2) It should be a means of communicating the quality of the student's performance with parents, other faculty, the university, future institutions and places of work; 3) It should be a source of motivation; 4) It should provide meaningful information for feedback to students and instructors to enhance learning; and 5) It can be a way of organizing class work. However, as seen in the scholarship, the implementation of grading is not unproblematic.
As documented over decades, there are questions as to whether grading always fulfills the stated aims above (Jaschik, 2009). Prior research has suggested that STEM faculty have the knowledge to create assessment tasks but often lack an understanding of how to validate these tasks (Hanauer & Bauerle, 2015). Some faculty problematically assume that the way they were graded is a basis for the grading of their own students, leading to a persistence of outdated assessment practices (Boothroyd & McMorris, 1992). When considering what to assess and grade, there can be confusion between learning components tied to stated learning objectives of the course and other aspects of being a student such as punctuality, attendance, and participation (Hu, 2005). Additionally, there is little agreement between instructors as to which components should go into a grade, with different instructors varying greatly in relation to how assessment is conducted (Cizek, Fitzgerald & Rachor, 1996). Research has also shown that grades can vary in relation to variables such as instructors, departments, disciplines and institutions (Lipnevich et al., 2020) and in relation to specific student characteristics such as physical attractiveness (Baron & Byrne, 2004) and ethnicity (Fajardo, 1985).
It is important to understand the central role grading plays in the lives of students. Grading can increase anxiety, fear and lack of interest, and hinder the ability to perform on subsequent tasks (Butler, 1988; Crooks, 1988; Pulfrey et al., 2011). There are alarming rates of attrition from STEM documented for students who identify as African American or Black, Latino or Hispanic, and American Indian and Alaska Native (Asai, 2020; Whitcomb & Chandralekha, 2021; National Science Board, 2018), and low grades are one of the factors that lead to this outcome (Whitcomb & Chandralekha, 2021). The relationship between grading and persistence is situated in the effect of negative feedback on performance (such as a lower-than-expected grade) and the individual's sense of self-efficacy in that field (Bandura, 1991, 2005). Students who identify as African American or Black, Latino or Hispanic, and American Indian and Alaska Native may enter the STEM fields with pre-existing fears and anxieties about their work resulting from stereotype threat (Hilts et al., 2018). Negative experiences with grading further exacerbate these feelings, leading to a disbelief in their ability to continue in STEM and hence attrition from that course of study (Hilts et al., 2018; Whitcomb & Chandralekha, 2021). Recent research has shown that grading works in two parallel ways: lower grades limit the opportunities that are available to students and increase the negative psychological impact on students' intent to persist in STEM (Hatfield, Brown & Topaz, 2022). As such, grading, if not conducted appropriately, could directly undermine the main aim of a CRE: increased persistence in STEM for all students.
METHODOLOGY
Overview: A multi-method, large-scale and multi-year research methodology was employed in this study. Data collection and analysis were conducted over a two-year period in a series of designed stages with full participation from a large group of CRE instructors and a dedicated science education research team. The project developed in the following stages: 1) Survey: The initial stage of the study involved a qualitative and quantitative survey. The qualitative section asked about grading and assessment procedures used by instructors in their CRE courses and asked for a detailed explanation of the way these were used in their courses. The quantitative section used the psychometrically validated scales of the Faculty Self-Reported Assessment survey (Hanauer and Bauerle, 2015) to evaluate the knowledge level of the surveyed faculty. The aim of this first stage of the project was to collect descriptive data on the participants' understanding of assessment and specific information on the way they conduct assessment and grading in their courses. 2) Analysis and Large-Scale Community Checking of Assessment Aims and Practices: Data from the qualitative study was analyzed using a systematic content analysis process and the quantitative data was analyzed using standard statistical procedures. The qualitative data was analyzed in terms of high-level assessment aims and specific grading and assessment practices. All analyses were summarized and then presented in a workshop setting to a cohort of 106 CRE instructors. In a small-focus-group format, the aims and practices were presented and instructors provided written feedback on the validity of the analysis, the specification of the high-level aims, the specification of practices and the assignment of the practices to assessment aims. Instructors responded within the workshop and were subsequently given an additional week to provide online responses to the questions posed. All data was collected using an online survey tool.
3) Analysis and Community Checking of Models of Assessment and Grading: Data from the first stage of community checking was analyzed for modifications to the assessment aims and the assigned assessment and grading practices. The percentage of agreement with the aims and practices was calculated and modifications to the models were made. During this analysis there were no changes to the high-level aims, but several specific practices were added. Once the table of aims and practices had been finalized, the original survey commentary dealing with how assessment and grading were conducted was consulted. Using this commentary and the pedagogical models of CRE instruction (Hanauer et al., 2022), the aims and practices of assessment were integrated with the discussion of CRE instruction. Three integrated models were developed and presented to a dedicated group of 23 instructors for validation.
Instructors were asked to provide feedback on the quality and descriptive validity of the models, the specification of aims of assessment and the specific practices. Instructors provided feedback during the workshop and for a week after the workshop. All data were collected using an online survey tool. 4) Finalization of the Models: Feedback from the workshop was analyzed for verification of the models and any required modifications that might be needed. Agreement with the models and their components was checked. Following this process, the models were finalized.
Participants: Participants for this study were recruited from the full set of instructors who teach in the SEA program. For the first stage of data collection, a survey request was sent to 330 SEA instructors. 105 faculty responded, with 72 instructors providing full answers on the survey. Table 1 presents the instructor demographics. The SEA faculty respondents are predominantly White (≥58.1%) and women (≥49.5%). A range of academic ranks from instructor to full professor were represented in the sample. As seen in Table 1, the majority of respondents had at least three years of teaching in the program and 6+ years of teaching postsecondary science. Respondents for the community checking of the model were drawn from the SEA faculty. For each stage, 100+ instructors participated. Demographic data was not collected on the participants at the two community checking sessions.
Instruments: As described in the overview of the research process, data collection consisted of a qualitative and quantitative initial survey, followed by a large community checking survey and a final assessment model checking survey. A specific tool was developed for each of these stages. The original survey consisted of three sections: 1. Familiarity with Assessment Terms: The first set of items was from the psychometrically validated Faculty Self-Reported Assessment survey (Hanauer & Bauerle, 2015). The survey consists of 24 established terms relating to assessment, organized into two components: assessment program and instrument knowledge, and knowledge of assessment validation procedures. On a 5-point scale of familiarity (1 = I have never heard this term before; 5 = I am completely familiar with this term and know what it means), faculty rated each of the terms in relation to their familiarity with the term. The FRAS is used to evaluate levels of experience and exposure of faculty to assessment instruments and procedures. See Table 2 for a full list of the assessment terms used.
2. Qualitative Reporting of Student Assessment:
The second set of items was qualitative and required the instructor to describe the way in which they assess students in the SEA program, to specify the types of assessment used (such as quiz, rubric, etc.), and to explain what each assessment is used for. Following the first question, faculty were asked to describe how they grade students and what goes into the final grade. Answers consisted of written responses. 3. Self-Efficacy Assessment Scales: The third set of items consisted of self-reported measures of confidence in completing different aspects of assessment. The 12 items were taken from the FRAS (Hanauer & Bauerle, 2015) and consisted of a set of statements about the ability to perform different aspects of the assessment process (see Table 3 for a full list of the statements). All statements were rated on an agreement scale (1 = Strongly Disagree, 5 = Strongly Agree).
In order to collect verbal responses during the community checking stage of this project, participants completed an online survey that was presented following a shared online session in which the analyses of the main aims of assessment and the associated practices were presented (see Table 3). The survey asked for a written response to the following questions relating to each of the specified aims and associated practices: 1. Does this assessment aim make sense to you? Please specify if you agree or disagree that this is an aim of your CRE assessment. 2. For this aim, do the practices listed above make sense to you? Please comment on any that do not. 3. For this aim, are there practices of assessment that are not listed? If so, please list these additional practices and describe what these practices are used to evaluate. 4. Are there aims of assessment beyond the 4 that are listed above? If so, please describe any additional aims of assessment below.
The final community checking procedure involved the presentation of the full models of assessment to the collected participants in a shared online session (see Figures 1, 2 and 3). Following the presentation of the models, the participants were divided into groups and each group was assigned a model to discuss and respond to. Each model was reviewed by two groups, and all responses were collected using an online written survey with the following questions: 1. For each of the instructional models, have the appropriate assessment aims been specified? 2. For each of the instruction models, have the appropriate assessment practices been specified? 3. Overall, do the models present an accurate and useful description of grading practices in the SEA? 4. Please suggest any modifications and comments you have on the model.
Procedures: Data was collected in three stages. The initial stage consisted of an online survey that was distributed to all faculty of the SEA using the web-based platform Qualtrics. Following the informed consent process, responses to the qualitative and quantitative items were recorded. The second stage involved the collection of community checking data from SEA instructors. A dedicated online Zoom session was arranged for this during one of the monthly virtual faculty meetings organized through the SEA program. During a one-hour session, the analysis of the aims of assessment and the associated practices was presented to the faculty. In small groups (breakout rooms), each of the aims and its associated practices were discussed. Following the session, an online survey was sent to faculty to collect their level of agreement with the aims and practices that were presented. They were also asked to modify or add any aims or practices that had been missed in the presented analysis of the original survey. The third stage of community checking data analysis consisted of a second online session during the regular end-of-week faculty meeting. During a one-hour session, each of the assessment models was presented to the faculty, who then discussed them in small groups (breakout rooms). A survey was sent to the faculty during the session to respond to the models and write their responses to the models. All data was collected in accordance with the guidelines of Indiana University of Pennsylvania IRB #21-214.
Analysis: The analysis of the data in this study was conducted in four related stages. The initial survey had both quantitative and qualitative data. The quantitative data was analyzed using established statistical descriptive methods. The qualitative verbal data consisted of a series of written statements relating to the practices used for assessment by the different instructors and the aims of using these practices. Using an emergent content analysis approach, each of the instructor statements was analyzed and coded. Two different initial code books were developed. One dealt with the list of practices used by the faculty; the second involved the explanation of why these practices were used and what the instructor was trying to assess. The data was coded by two trained applied linguistics researchers and, following several iterations, a high level of agreement was reached on the practices and aims specified by the instructors. The second stage of this analysis of the verbal survey data consisted of combining the aims and practices codes. The specified practices across all of the instructors for each of the aims were tabulated. A frequency count of the number of faculty who specified each of the practices was conducted. The outcome of the first stage of analysis was a statistical description of the levels of knowledge and confidence of faculty on assessment issues and the specification of four main aims of assessment with associated assessment practices.
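As a rough illustration of the tabulation step described above, not the study's actual coding pipeline, the sketch below counts how many instructors mentioned each coded practice under each aim. The aim labels, practice codes and responses are all hypothetical.

```python
# Minimal sketch of tabulating coded qualitative data: a frequency count of the
# practices instructors mentioned under each assessment aim. Data is hypothetical.
from collections import Counter
from itertools import chain

# Each instructor's coded response: {aim: [practices mentioned]}
coded_responses = [
    {"lab_work_thinking": ["lab notebook", "participation"], "mastery": ["quiz", "exam"]},
    {"lab_work_thinking": ["lab notebook", "informal discussion"], "communication": ["oral presentation"]},
    {"mastery": ["quiz"], "metacognition": ["reflection"]},
]

for aim in ["lab_work_thinking", "mastery", "communication", "metacognition"]:
    counts = Counter(chain.from_iterable(r.get(aim, []) for r in coded_responses))
    print(aim, dict(counts))
```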
The second stage of analysis followed the presentation of the tabulated coded data from the original survey to participants. In this stage of community checking, faculty specified agreement (or disagreement) with the assessment aims and the set of associated practices. The verbal responses were analyzed by two applied linguistics researchers and modifications were made to the tabulated data. The degree of agreement with each of the aims and associated practices was counted. Any additional practices specified by faculty were added to the model. No new aims were specified and as such no changes were made. The table of assessment aims and practices was finalized.
Having established the aims of assessment and related practices, a third stage of analysis involved integrating the emergent assessment aims and practices with models of CRE instruction which had been previously defined for the SEA instructors (see Hanauer et al., 2022 for full details). A team of two researchers worked together to specify the points of interaction between the instructional and assessment components of CRE teaching. Using the qualitative data of the original models and the verbal statements of aims for the assessment data, integrated models of assessment were developed. Following several iterations, three assessment models corresponding to the instructional models were specified.
The final stage of analysis followed the presentation of the models of assessment to the community of SEA faculty. A team of two researchers went over the changes presented by faculty in relation to each of the models. Changes that were specified, such as the addition of specific practices into different models, were made. The outcome of this process was a series of three models that capture the aims and practices of assessment.
Instructor Familiarity and Self-Efficacy with Assessment
To build models of CRE assessment based on qualitative reports from instructors in the SEA program, we first evaluated instructors' knowledge of assessment terms and their confidence in implementing assessment tasks. For instructor knowledge of assessment, we utilized the Faculty Self-Reported Assessment Survey (FRAS) (Hanauer and Bauerle, 2015), a tool which measures two components of assessment knowledge: 1) knowledge of assessment programs and instruments and 2) knowledge of assessment validation.
For the Program and Instrument component, instructors reported high levels of familiarity (Scale = 1-5, Grand Mean = 4.26, Std. = 0.55). All items were above 4 (high level of familiarity), except for the terms related to performance assessment. These latter terms, which include Alternative Assessment and Authentic Assessment, were nevertheless familiar to instructors (above 3). The Validation component of the survey, which addresses terms relating to the evaluation and quality control of assessment development, was also familiar to instructors (Grand Mean = 3.34, Std. = 0.35). This result is in line with prior studies of faculty knowledge of assessment terms (Hanauer and Bauerle, 2015). The results overall for the two dimensions suggest that instructors in this study have the required degree of assessment understanding to be reliable reporters of their assessment procedures and activities.
To augment the FRAS data, self-efficacy data was collected on instructors' confidence in completing assessment-related tasks. As shown in Table 3, instructors reported high levels of confidence in their assessment abilities (Scale = 1-5, Grand Mean = 4.04, Std. = 0.65). The highest confidence was in relation to defining important components of their course and student learning outcomes, while the lowest levels of confidence were in relation to the ability to evaluate, analyze and report on their assessments. The confidence levels for the latter were still relatively high (just below 4) and reflect, to a certain extent, the same trend as seen using the FRAS instrument. Taking into consideration the results of the FRAS and self-efficacy tasks, instructors report moderate to high levels of assessment expertise and confidence, which suggests that these instructors have the required expertise to report and evaluate the aims, practices and models of CRE assessment.
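As an illustration of how the grand means and standard deviations reported above can be computed, the sketch below summarizes a hypothetical matrix of 5-point Likert responses; it is not the study's data, and the exact aggregation the authors used (e.g., over items vs. over respondents) is an assumption here.

```python
# Minimal sketch of descriptive statistics for Likert-scale survey data.
# Rows = instructors, columns = survey items; all ratings are hypothetical (1-5).
import numpy as np

responses = np.array([
    [5, 4, 4, 5],
    [4, 4, 3, 5],
    [5, 5, 4, 4],
    [3, 4, 4, 5],
])

item_means = responses.mean(axis=0)                 # mean familiarity per item
print("per-item means:", np.round(item_means, 2))
print(f"grand mean = {item_means.mean():.2f}, "
      f"std of item means = {item_means.std(ddof=1):.2f}")
```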
Aims and Practices of CRE Assessment
A fundamental goal of this study was to describe the aims and practices of experienced CRE instructors for assessing students in a CRE. As described in the methodology section, a list of aims and practices for assessment was elicited from the written survey data completed by instructors in the HHMI SEA program, which was then community-checked and modified. Overall, four central aims of CRE assessment were defined. For each aim, there was a cluster of assessment practices that were employed to assess student learning, with different instructors utilizing different subsets of these practices. The aims of CRE assessment, the practices related to each of the aims, and the degree of agreement amongst faculty for each aim and set of practices are presented in Table 4 and described below:
1. Assess Laboratory Work and Scientific Thinking: The objective of this assessment aim was to assess a student's readiness, in terms of their practices, thought patterns and ethics, to function as a researcher in the laboratory setting. As seen in Table 4, several different practices were related to this aim, which include 1) assessing student behaviors such as participation, attendance, citizenship, collaboration, safety and independence, and 2) assessing students' scientific thinking based on their lab notebooks, data cards, independent research, conference participation and informal discussion. During the community checking stage, 85.95% of the faculty specified that this category was an aim of their assessment program and that the assigned practices were appropriate.
2. Evaluate Mastery of Concepts, Quantitative Thinking, and Skills: The objective of this assessment aim was to assess the underpinning knowledge and skills that students need in order to function successfully, as a researcher, in the CRE laboratory setting. The practices related to this assessment aim include 1) the checking of laboratory techniques and skills using practical exams and lab notebooks, 2) the evaluation of required scientific knowledge through exams, tests, quizzes, written reports and articles, and 3) the assessment of quantitative knowledge. During the community checking stage, 80.99% of faculty specified that this category was an aim of their assessment program and that the assigned practices were appropriate.
3. Appraise Forms of Scientific Communication: The objective of this assessment aim was to evaluate the ability of students to convey their research and attain scientific knowledge through the different forms of science communication. The practices related to this assessment include 1) oral abilities such as oral presentation, peer review, lab notebook meetings, scientific poster and elevator speech, and 2) literacy abilities such as reading and writing a research paper, report writing, notebook writing, scientific paper reading, literature review, and poster creation. 63.64% of faculty specified that this category was part of their assessment program.

4. Metacognition of Learning: The objective of this assessment aim was to assess the ability of students to regulate and oversee their own learning process. This aim is based on the assumption that being in control of your learning process improves the ability to learn. The practices related to this aim include reflection, discussion and an exit ticket. 76.85% of faculty specified that this category was part of their assessment program.
These four aims and associated practices define a program of assessment for CRE teaching. As depicted in Figure 1, the central aspect of an assessment program for a CRE is to evaluate the ability of a student to work and think in a scientific way. This central aspect is supported by two underpinning forms of knowledge: 1) mastery of concepts, quantitative thinking and skills, and 2) the ability to communicate science. Overseeing the whole process is metacognition, which allows the student to regulate and direct their learning process. Accordingly, information on the students' functioning across all these areas is collected as part of the assessment program.
Models of Assessment in a CRE
The assessment program presented in this study is implemented by instructors in conjunction with a program of CRE instruction that has been previously described (Hanauer et al., 2022). The assessment aims and practices described here can therefore be integrated with the aims and practices (or models) of CRE instruction. The stated aims of CRE instruction are 1) Facilitating the experience of being a scientist and generating data; 2) Developing procedural knowledge, that is, the skills and knowledge required to function as a researcher; and 3) Fostering project ownership, which includes the feelings of personal ownership and responsibility over one's scientific research and education (Hanauer et al., 2022). These aims are directly in line with the broad aim of a CRE in providing a student with an authentic research experience (Dolan & Weaver, 2021). In the sections that follow, and using a constructive alignment approach (Ambrose et al., 2010; Biggs, 1996), the assessment aims and practices uncovered in this study are presented with the associated models of CRE instruction previously described.
Model 1: Assessing Being a Scientist and Generating Data
Being a scientist and generating novel data is a core aspect of a CRE. As shown in Figure 2 and described below, the instructional approach to achieving this aim involves three stages of instruction: a) Stage 1 involves preparing the student with the required knowledge and procedures in order to function as a researcher who can produce usable data for the scientific community. The pedagogy employed here includes the use of explicit instruction to provide students with the foundational knowledge to understand the science they are involved with and protocol training to make sure a student can perform the required scientific task.
Accordingly, assessment in this first stage of the model is aimed at Evaluating Mastery of Concepts and Quantitative Thinking. The assessment practices used here include both exams and in-class quizzes, which are well suited for this purpose. Additionally, given that this foundational scientific knowledge must often be retrieved from various forms of scientific communication, including a lecture, a research paper, a poster and an informal discussion with an expert, the ability to use scientific communication for knowledge acquisition is also evaluated. Practices such as the evaluation of a literature search report or presentation at a journal club can provide information on how the student understands and uses different modes of scientific communication. Combined, the use of exams, quizzes, literature search reports and journal club participation can provide a rich picture of the foundational knowledge of a student as they enter the process of doing authentic research.
To assess a student's ability to use a range of specific protocols properly, instructors rely on practical exams and a student's lab notebook, which are well-established ways of checking whether a student understands and knows how to perform a specific procedure. Beyond these approaches, instructors reported that they used informal discussion, reflective writing, article writing and the lab notebook meeting to evaluate formally and informally whether the students understand how to perform the different scientific tasks that are required of them. This combination of explicit teaching of scientific knowledge and procedures, with formal and informal assessment of these abilities, serves to create a basis for the second stage of this pedagogical model, described below.
b. Stage 2 involves supporting students to manage the process of implementing procedures in order to generate authentic data. A central aspect of this stage is that the student moves from a consumer to a producer of knowledge, and this involves a change in the students' mindset concerning thinking processes, independence, perseverance and the ability to collaborate with others. Importantly, as is the case with science, positive results are not guaranteed and students face the ambiguity of failed outcomes and unclear paths forward. It is for this reason that the pedagogy at this stage involves a range of different supportive measures on the part of the instructor. These include modeling scientific thinking, providing encouragement and enthusiasm, mentoring the student at different points and, most importantly, making sure that the students understand that the scientific process is one that is fraught with challenges that need to be overcome. A lot of instruction is provided at the time that a task or event occurs.
Assessment at this stage is covered by the aims of Assessing Laboratory Work and Scientific Thinking and the Metacognition of Learning. The scientific thinking of the student is primarily assessed through the discussion of the lab notebook, data and annotation cards, often during lab meetings. Importantly, as reported by faculty, a lot of this assessment is directed by informal discussion with the aim of providing direct feedback to the student so that they can perform the tasks that are required. This is very much a formative assessment approach, with direct discussion with the student while they are working and in relation to the research they are doing. There are behaviors that faculty specify are important to track, such as participation, attendance, collaboration, lab citizenship and lab safety. These behaviors are a prerequisite for the research to move forward for the student and the research group as a whole. The use of assessment practices such as reflection and discussion allows the assessment of the degree of independence of the student, in addition to actually positioning the student as independent; the requirement of a reflection task, whether written in one's lab notebook or delivered verbally, situates the student as the researcher thinking through what they are doing. Overall, this stage involves extensive informal formative assessment of where the student is in the process, covering the practical, scientific and emotional aspects of doing science, combined with a more formal evaluation of the behaviors which underpin a productive and safe research environment.
c. The third and final stage of this pedagogical model involves the actual scientific output produced by the student researcher. A CRE is defined by the requirement that data is produced that is actually useful for a broader community of scientists. Whereas the second stage of this pedagogical model is characterized by informal, formative assessment approaches, this final stage is characterized primarily by formal summative assessment. At this stage the student has produced scientific knowledge and is in the process of reporting this knowledge using established modes of scientific communication. The student is assessed in relation to the knowledge they have produced and the way they communicate it. As such, both the aims of Assessing Laboratory Work and Scientific Thinking and the Appraisal of Forms of Scientific Communication are utilized. The lab notebook, data card, annotation, conference presentation, oral presentation and poster all involve a double summative assessment approach: an evaluation of the quality of the scientific work that has been produced and an evaluation of the ability of the student to communicate this knowledge using established written and verbal modes of scientific communication.
This final stage provides the opportunity for evaluating the whole of the research experience that the student has been involved in.
To summarize, the instruction and assessment model of Being a Scientist and Generating Data has three distinct stages. The initial stage is designed to make sure that the student can perform the required tasks and understand the underlying science. Assessment at this stage is important as the learning involved in this stage is a prerequisite for the second stage of the model. During the second stage, while the student is functioning as a researcher, the primary focus of the assessment model is to provide feedback to the student and the required level of expert advice and emotional support to allow the research to move forward. This stage is characterized by informal discussion and is primarily a formative assessment approach. The final stage is directed at evaluating the scientific outcomes and the student's ability to communicate them. Assessment at this stage offers a direct understanding of the quality of the work that has been conducted, the degree to which the student understands the work, and the ability of the student to communicate it.
Model 2: Assessing Procedural Knowledge
Being able to perform a range of scientific procedures is a central and underpinning aspect of being a scientist and a core feature of a CRE. Figure 3 presents a pedagogical and assessment model for teaching procedural knowledge. As seen in the previous model, protocols are an important precursor enabling an undergraduate student to conduct scientific research. Model 2 explicates in greater detail than Model 1 how students learn scientific procedures. As can be seen in Figure 3, there are three stages to the development of procedural knowledge.
a. The first stage involves enhancing the students' content knowledge concerning the science behind the protocol they are using and the scientific context of the research they will be involved with. For a student to become an independent researcher, they need to be able not just to follow a set of procedures but also to understand the science that it relates to. The pedagogical practice involved here includes explicit instruction, discussion and reading of primary literature. From an assessment perspective, the evaluation of this underpinning content knowledge is conducted using established practices such as exams, tests and quizzes. In addition, as reported by faculty, this material was informally discussed with students to gauge understanding of the context and role of the procedure.
b. In the second stage, students are taught how to implement the procedure and to think like a scientist. This involves using a protocol, scientifically thinking through the process of using a protocol, and appropriate documentation of the process of using a protocol. Scientific thinking at this stage includes interpretation of outcomes, problem solving, and deciding about next steps. In this way, learning a protocol is not only about being able to perform, analyze and document a procedure appropriately, but also involves the development of independence for the researcher. These two components are related in that if a student really has a full understanding of the procedure, they can also make decisions and function more autonomously. Such mastery is particularly critical in a CRE because the research being conducted is intended to support an ongoing authentic research program. As reported by faculty, there are both formal and informal assessments that facilitate this evaluation. Practical exams allow faculty to check a student's performance of a particular procedure and their understanding of it. Lab notebook evaluation, lab meeting interactions and informal discussion about the work of a student as they perform certain tasks provide further evidence of the student's mastery of the concepts and skills that are involved. These interactions are primarily formative and have the aim of providing feedback for the improvement of the student's understanding of scientific procedures.
An additional level of assessment at this stage relates to the ability of students to document their research in the lab notebook, explain their research in a lab meeting and converse with peers and instructors about what they are doing. These are all aspects of scientific communication, and assessment at this second stage of learning procedural knowledge includes the aims of Evaluating Mastery of Concepts and Skills and of Appraising Forms of Scientific Communication. Since these are new forms of communication for many undergraduate students, instructors report using rubrics to evaluate and provide feedback on the quality of the communication.
c. The final stage of this model relates to the scientific outcomes of the students' work. At this stage, assessment aims to evaluate the quality of the outcomes of these procedures and the level to which the student really understands what they have done. Evaluation here therefore combines the use of data cards, annotation outputs, lab notebooks, oral presentations, conference participation, and the student's reflections on their own work. As reported by faculty, not all procedures are successful, and students are not graded negatively for a failed experiment as long as the procedures, including the thinking involved, follow the scientific process. Thus, as reported by faculty, both the instructor and the student often work collaboratively to evaluate how well the student understands the different procedures they are learning to use.
Model 3: Assessing the Facilitation of Project Ownership
The educational practice of a CRE involves a desired transition of the student from being a more passive learner of knowledge to being an active producer of knowledge who is integrated into a larger community of researchers. This transition, in which the student has a sense of ownership over their work and responsibility over their research and learning, is an aim of CRE pedagogy and has important ramifications for being a student researcher (Hanauer et al., 2022). Furthermore, prior research has shown that the development of a sense of project ownership differentiates between an authentic research experience and a more traditional laboratory course. Figure 4 presents the pedagogical and assessment model of fostering project ownership. The model has three stages of development.
a. The first stage of fostering project ownership is developing in students a broad understanding of and ability to perform a range of scientific protocols. This is because project ownership requires the belief in and the ability to actually do science. It is an issue of self-efficacy and mastery of concepts and skills. As such, the first stage of assessment involves evaluating the degree of mastery a student has over a specific protocol. As opposed to prior models, this is enacted here through formative, informal discussions, which also serve to enhance that mastery.
b. The second stage of the model aims to develop the student's sense of personal responsibility. Primary to this process is the promotion and encouragement of the student's independence. This can involve emotional support, the provision of resources, and the allotment of time for the student to ponder the work that they are doing. As reported by faculty, not every question has to be, or can be, answered immediately. Allowing a student to think about their work and what they think should be done is an important aspect of a CRE education. Accordingly, a central component of the assessment model here is having the student reflect on their work. The task of assessment here thus expands beyond the instructor to the student as well.
A different aspect of both fostering and assessing responsibility and ownership over one's research involves a series of behaviors related to scientific work. Faculty report assessing lab citizenship, collaboration and lab safety protocols. Being responsible includes behaving in appropriate ways in the laboratory and as such these aspects of the students' work are evaluated. Some faculty also reported that having the student propose projects that extend the ongoing classroom research project allowed them to assess the degree of independence of the student.
c. The final stage of the model involves situating the student-researcher within a broader scientific context. Talking with the student about future careers and educational opportunities, and providing encouragement and enthusiasm for the work the student is doing positions the student at the center of their own development. Project ownership involves pride in the research one is doing and seeing ways in which this work can be developed beyond the specific course. Once again, reflection plays a central role in assessing and facilitating this, and occurs as an informal and ongoing process.
In parallel, the outcomes of the research the student does are reported using established modes of scientific communication. A student is responsible for reporting their work using oral presentations, scientific posters, research papers and reports. At this point, they will receive feedback on their work in both formal and informal ways. One important aspect of this reporting is the real-world evaluation of their output. Other peer student researchers may respond, in addition to faculty and scientists beyond the classroom. Having ownership over one's research also includes an understanding that the work will be evaluated beyond the classroom grade and that the work itself is part of a far larger community of scientists. In this sense, the evaluation of the scientific output facilitates ownership of the research itself.
DISCUSSION
The main aim of this paper is to explore how assessment of students engaged in course-based research is implemented and aligned with the educational goals of this form of pedagogy. In terms of constructive alignment, the aims of any assessment program should reflect and support defined instructional objectives. Early approaches to the assessment of scientific inquiry, as is typically implemented in traditional labs, focus on mastery of the components of research (see Wenning 2007 for an example). The aim of instruction and assessment within a traditional lab is to make sure that a defined procedure has been mastered by the student so that in some future course or scientific project, the student knows how to perform it. In the traditional lab, grading is evidence of qualification for the student's ability to function in a future scientific activity. Failure, if it happens, is indeed failure and a reason for not progressing further.
In contrast, a CRE aims to provide the student with an authentic research experience in which they are contributors of research data that is useful for advancing science. As such, mastery is a necessary but not sufficient aim of assessment. As specified by instructors in this study, mastery of concepts, quantitative thinking and skills is important in order to conduct and understand a scientific process; but this is situated in relation to the actual performance of scientific research (also an aim of assessment), which involves an understanding of how to communicate science and ownership over one's learning and research activity. Thus, from the perspective of what to assess, it is clear that assessment in a CRE needs a broader approach than the assessment program of traditional labs. In this study, four aims of assessment were defined by experienced CRE instructors: 1) Assessing Laboratory Work and Scientific Thinking; 2) Evaluating Mastery of Concepts, Quantitative Thinking and Skills; 3) Appraising Forms of Scientific Communication; and 4) Metacognition of Learning.
The alignment between these assessment aims and the aims of CRE instruction is further explicated here.
Across the instructional aims of Facilitating Being a Scientist and Generating Data, Developing Procedural Knowledge, and Fostering Project Ownership, the four aims of assessment were seen to provide ways of collecting useful data that supports the progress of students towards these stated aims of CRE instruction. With regard to how assessment data is collected in a CRE, there are particular relationships between formal and informal assessment and the formative and summative approaches. Summative assessment with formalized tools tended to come at the beginning and end of a research process, first in relation to the development of required mastery of concepts and skills and last in relation to the evaluation of scientific outputs, which are the products of the research. Mastery can be evaluated using tests and exams, while products can be evaluated using rubrics. In contrast, during the process of conducting the research project, the emphasis is on providing feedback to students to help support the ongoing work. This includes the use of a range of laboratory practices, such as lab notebook documentation and lab meetings. And while assessment data is collected, the response is often informal and formative, with the aim of supporting the student to further their research.
Beyond collecting assessment data, there is also a particular way in which assessment, evaluation and grading manifest in a CRE setting. The terms of assessment, evaluation and grading are often used interchangeably. But these terms relate to different concepts. Assessment is primarily a data collection and interpretation task; evaluation is a judgement in relation to the data collected; and grading is a definitive decision expressed as a number or letter as to the final quality of the work of a student. The majority of institutions require grades for a CRE. But not all things that are assessed in a CRE need to be graded. In particular, informal discussion with students of the different aspects of the scientific tasks students are performing allows the instructor to provide supportive feedback that facilitates the scientific inquiry. This informal, formative assessment does not require a grade directly. At the same time, there is a role for assessing and grading the underpinning knowledge, behaviors (such as lab citizenship, attendance, participation, collaboration and lab safety), and scientific outputs of the students. Thus, there is a two-tiered assessment and grading process in which, during the process of scientific inquiry, which is the majority of the course time, assessment data is collected but not graded; however, the knowledge, skills, behaviors and outcomes are graded. Since the aim of the whole course is to give the student the experience of being a researcher and to produce scientific data, providing facilitative feedback based on assessment during the research process helps the student to complete the tasks in a meaningful way. The grading of the underpinning knowledge, skills and behaviors also facilitates the work that is conducted in laboratory. Without appropriate mastery and behavior, the lab research will not be possible. Thus, once again, the form of assessment supports the progress of authentic research. As presented in this study, the way to grade a CRE is to differentiate the framing of the research that is conducted from the process of doing the research; provide extensive formative assessment in an informal manner throughout the research process; grade the underpinning components of knowledge, skill and behavior; and provide a final grade which weights the quality of the work and the output that is produced. The aim should be for every student to be successful in the research process and assessment should facilitate this work.
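As a concrete, purely illustrative reading of this two-tiered scheme, the sketch below computes a final grade from weighted graded components while formative feedback during the research process remains ungraded. The component names and weights are hypothetical, not prescribed by the study.

```python
# Minimal sketch of the two-tiered grading described above: formative feedback is
# recorded but ungraded; underpinning knowledge, behaviors, and outputs are graded.
# Component names and weights below are hypothetical, for illustration only.
weights = {
    "concepts_and_skills": 0.30,   # exams, quizzes, practical exams
    "lab_behaviors": 0.20,         # citizenship, attendance, safety, collaboration
    "scientific_outputs": 0.50,    # notebook, data cards, poster, presentation
}

scores = {"concepts_and_skills": 88, "lab_behaviors": 95, "scientific_outputs": 90}

final_grade = sum(weights[c] * scores[c] for c in weights)
print(f"final grade = {final_grade:.1f}")  # 88*0.3 + 95*0.2 + 90*0.5 = 90.4
```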
The assessment and grading practices presented here are clearly facilitative of student learning. First, knowledge, skills and behaviors are measured because they are foundational for students to productively engage in their research. Second, a large part of the assessment work is directly aimed at providing feedback without penalizing a student through grade assignment. There is extensive informal formative assessment that can be seen as a departure from assessment in more traditional labs and which approximates the type of facilitation that characterize mentor-mentee relationships in authentic research settings (e.g. in individual undergraduate research experiences, postbaccalaureate research opportunities, or during postgraduate research). This mentor-mentee relationship can build trust and counter stereotype threat to enhance persistence and learning. Additionally, an assessment program with extensive informal formative assessments leaves fewer instances when a student might be penalized by grading and suffer the negative psychological effects associated with lower grading. Third, the components of CRE assessment address a broad range of skills, beyond just mastery of procedures, that a student needs as a scientist and a learner. In particular, included within the aims of CRE assessment are scientific communication and metacognition. Scientific communication is an important component of being a researcher, while metacognition not only provides information that can be used to evaluate where a student is and how they are thinking about their work, but also positions the student as an evaluator of their own work. In this case, the task of assessment itself directs the students towards better learning and might explain why CREs improve student learning despite the CRE content not always being directly aligned with lecture content (in comparison to traditional lab). We hypothesize that these various aspects of CRE assessment contribute to the positive outcomes observed for students across many demographics and when compared to the traditional lab.
As presented in the introduction, a CRE poses quite specific challenges in terms of assessment and grading. A primary concern relates to the need to maintain a professional shared research project, with contributions from instructor and student, while still assessing and grading the student. As presented here, this delicate balancing act is facilitated by using assessment and grading thoughtfully and in a coordinated manner. If the instructor provides extensive feedback that supports the work of the student and grades the aspects of science that are necessary for the student to succeed, the relationship with the student is different from one in which the teacher is merely grading the student. The assessment models presented here provide a framework that facilitates the aims of a CRE without undercutting the broader aims of promoting student learning and persistence in science, and they can serve to inform assessment and grading practices in STEM more generally.

Figure 1. The Core Components of a CRE Assessment Model: Based on the qualitative analysis of faculty descriptions of their assessment and grading practices in a CRE, four central aims of assessment were defined: 1. Assess Laboratory Work and Scientific Thinking; 2. Evaluate Mastery of Concepts, Quantitative Thinking, and Skills; 3. Appraise Forms of Scientific Communication; and 4. Metacognition of Learning. Assessing laboratory work is the central aspect of an assessment program, supporting the ability of a student to work and think in a scientific way. Laboratory work and scientific thinking are supported by two underpinning forms of knowledge, both of which are assessed: 1) mastery of concepts, quantitative thinking and skills, and 2) the ability to communicate science. Metacognition allows the student to regulate and direct their learning process and positions students to see themselves as owners of their own education and research. Together, these four aims and the associated assessment and grading practices define the assessment program of a CRE.

Figure 2. Assessing Being a Scientist and Generating Data: Based on the qualitative analysis of faculty descriptions of the central aims of assessment in a CRE and all associated practices, a model of assessment and grading was aligned with the instruction model of being a scientist and generating data (Hanauer et al., 2022). The model was validated through large-scale community feedback from CRE faculty. This model has three distinct stages. The first stage assesses and grades whether a student can perform the required tasks and understands the underlying science. This knowledge base precedes and supports the actual authentic research of the central stage of the model. In the second stage, while the student is functioning as a researcher, the instructor provides formative feedback through assessment, allowing the research to move forward. This stage is characterized by informal discussion and is primarily a formative assessment approach. The final stage is directed at evaluating the scientific outcomes and the student's ability to communicate them. Assessment at this stage offers a direct understanding of the quality of the work that has been conducted, the degree to which the student understands the work, and the ability of the student to communicate it.
Figure 3. Assessing Procedural Knowledge: Based on the qualitative analysis of faculty descriptions of the central aims of assessment in a CRE and all associated practices, a model of assessment and grading was aligned with the instruction model of developing procedural knowledge (Hanauer et al., 2022). The model was validated through large-scale community feedback from CRE faculty. This model has three distinct stages. The first stage involves assessing content knowledge concerning the science behind the protocol students are using and the scientific context of the research they will be involved with. This knowledge underpins the student's ability to understand the protocol and science they are involved with. The second stage involves assessing whether students know how to implement the procedure, think like a scientist and appropriately use scientific documentation. Assessment during this stage is primarily informal and formative. The final stage of this model relates to the scientific outcomes of the student's work. At this stage, assessment aims to evaluate the quality of the outcomes of these procedures and the level to which the student really understands what they have done.

Figure 4. Assessing the Facilitation of Project Ownership: Based on the qualitative analysis of faculty descriptions of the central aims of assessment in a CRE and all associated practices, a model of assessment and grading was aligned with the instruction model of the facilitation of project ownership (Hanauer et al., 2022). The model was validated through large-scale community feedback from CRE faculty. This model has three distinct stages. In the first stage, a broad understanding and the ability to perform a range of scientific protocols are assessed; the ability to take ownership over one's work requires knowledge of how to adequately perform the scientific laboratory work itself. The second stage of the model aims to develop the student's sense of personal responsibility: assessment practices related to reflection (metacognition) and lab behaviors are applied, in addition to the provision of informal formative responses from instructors. The final stage of the model involves situating the student-researcher within a broader scientific context and assessing the student's ability to report and understand the scientific knowledge they have produced.
Copy number rather than epigenetic alterations are the major dictator of imprinted methylation in tumors
It has been postulated that imprinting aberrations are common in tumors. To understand the role of imprinting in cancer, we have characterized copy number and methylation in over 280 cancer cell lines and confirm our observations in primary tumors. Imprinted differentially methylated regions (DMRs) regulate parent-of-origin monoallelic expression of neighboring transcripts in cis. Unlike single-copy CpG islands that may be prone to hypermethylation, imprinted DMRs can either lose or gain methylation during tumorigenesis. Here, we show that methylation profiles at imprinted DMRs often do not represent genuine epigenetic changes but simply the accumulation of underlying copy-number aberrations (CNAs), which is independent of the genome methylation state inferred from cancer susceptible loci. Our results reveal that CNAs also influence allelic expression, as loci with copy-number neutral loss-of-heterozygosity or amplifications may be expressed from the appropriate parental chromosomes, which is indicative of maintained imprinting, although not observed as a single expression focus by RNA FISH.
Reviewer #1 (Remarks to the Author): The authors have sought to address an important question: the interaction and interplay between methylation, copy number and gene expression. This is a very important question, particularly in the cancer setting, and one which, as the authors highlight, is generally not considered. The authors show quite nicely the influence of copy number alterations and aberrant methylation on gene expression at imprinted regions. Furthermore, gaining an understanding of how imprinted regions are affected will ultimately allow inferences to be made about the rest of the cancer genome.
Overall I have a few comments:

• The authors have focused exclusively on cancer cell lines and primary tissues, primarily from the TCGA. Although this has provided excellent data, it would be interesting to see the interrogation of the combined bisulphite sequencing, expression and copy number data from ENCODE (or those generated by IHEC), which may also be used to further investigate the effect of copy number on methylation.
• The authors perform selected pyrosequencing and targeted bisulphite sequencing. Although these data appear to support their results, it would be nice to have a comparison across all regions. Are there any overlapping ENCODE samples with RRBS or WGBS which would allow all imprinted loci to be assessed, even if just across a small cohort? Alternatively, they could compare publicly available RRBS/WGBS and copy number data from primary cancers to the data from the TCGA.
• Is it possible to work out the cellular timing of alterations?
• The interaction of methylation and copy number on gene expression is a key question in understanding the functional effects of these perturbations on cancer development. Have the authors attempted to look outside imprinted genes, particularly where non-imprinted genes sit within the same genomic alteration, and as such can they start to tease out the relationship between epigenetic and genetic alterations in such regions?
Reviewer #2 (Remarks to the Author): This paper brings together large datasets from COSMIC and the TCGA and reaches the conclusion that the majority of methylation abnormalities at imprinted gene loci occurring in cancer cell lines and primary tumors result from copy number abnormalities and not epimutations. The conclusions drawn by the authors are of interest and the availability of the data at www.humanimprints.net will benefit the scientific community. However, I would suggest that the abstract and title should specify that the analysis was for four human cancer types (lung, colorectal, breast and hepatic cancers).
It would be helpful if the authors could clarify at which imprinted loci there was no apparent relationship between CNAs and the parent-of-origin of the amplified/deleted chromosome, and at which loci (and in which tumor types) there appeared to be a relationship.
There are some spelling mistakes and in some places the grammar requires correcting.

Reviewer #3 (Remarks to the Author): In this manuscript the authors analyze imprinted differentially methylated regions (DMRs) across cancer cell lines and TCGA data. They claim that in the majority of cases methylation at DMRs reflects the underlying copy number of the region. While this result is not surprising, as far as I know it has never been published before, and the authors' data may be a useful resource to scientists studying methylation in tumours.
Specific comments. Imprinted loci that are amplified to high levels may be those that are close to focally amplified oncogenes. For example, PPIEL is ~300 kb from the oncogene MYCL1. The authors should comment on the frequency of these occurrences.

Figure 1A: It would be helpful if the loci names were appended with chromosomal arm locations so that this data could be compared to the SKY karyotype in Figure 1B. The labels of copy number to the left of MDA-MB453 are too small to be seen. Is the SNURF locus in Figure 1A the same as the SNRPN locus in Figure 1D? If so, the same name should be used throughout the manuscript.
The authors claim that for chromosomes harboring more than one imprinted domain, each locus is associated with private focal CNAs. However, in Figure 1D it appears that in ~25% of cell lines the SNRPN and IGF1R loci are co-amplified due to a chromosomal arm level gain of 15q. These amplifications are neither private nor focal CNAs.
Copy number is not interpretable without understanding the baseline ploidy of tumours. For instance, a segment with a copy number of 3 is an amplification in a diploid tumour but a deletion in a tetraploid tumour that has undergone whole genome doubling (MDA-MB-453 is an example of this). The authors' use of copy ratios shows a good understanding of this issue. But if the tumours shown in Figure 1D are of different baseline ploidy, then copy number should be shown relative to the baseline ploidy (total alleles/baseline ploidy). Copy number shown in tables should also include baseline ploidy or relative copy number.
Copy number changes shown in tables should also give some indication of whether these changes are due to focal or arm-level alterations. One way of doing this is to indicate what percentage of the total chromosomal arm these alterations represent.

Figure 2C is difficult to understand and should have a legend. It is unclear what the black and white circles underneath the methylation maps of loci are supposed to represent.
In Figure 3B, it is unclear what exactly the x-axes of the graphs are.
In Figure 4B the RNA FISH signals for cell line MCF7 are barely visible. Can you show a better representative cell for this figure?
For Figure 5B, the same comment as for Figure 1D. If the data in 5B are not relative copy number, then how did the authors take into account differences in tumour purity when calculating copy number?
REVIEWER 1.
We thank the reviewer for their positive comments about our work.

Question 1. The authors have focused exclusively on cancer cell lines and primary tissues, primarily from the TCGA. Although this has provided excellent data, it would be interesting to see the interrogation of the combined bisulphite sequencing, expression and copy number data from ENCODE (or those generated by IHEC), which may also be used to further investigate the effect of copy number on methylation.
Answer 1. In order to compare the imprinted methylation profiles for cancer cell lines utilizing the COSMIC datasets with those available from ENCODE, we interrogated the reduced representation bisulphite sequencing (RRBS) datasets for five cell lines which were common to both collections (MCF7, HepG2, T47D, HCT116 and A549). Unfortunately, copy-number data are not available from ENCODE for these samples. The ENCODE methylation data allowed for the examination of 23-29 of the 37 imprinted DMRs analysed by HM450k array (with CpG read depth >10 for 2 replicates). Despite the difference in technology, a comparison of the methylation profiles revealed high correlation (Spearman r, A549 r = 0.85; HCT116 r = 0.79; T47D r = 0.95; HepG2 r = 0.6; MCF7 r = 0.74; p < 0.0001), suggesting that the HM450k methylation data described in this manuscript can be used with high confidence.
Graphs showing the correlations for methylation levels at imprinted loci as determined by ENCODE RRBS and the COSMIC HM450k methylation array.
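For illustration, the cross-platform check described above amounts to a rank correlation between matched per-DMR methylation values. The following minimal Python sketch shows the computation; the beta values are made-up placeholders, not data from the manuscript.

```python
# Minimal sketch of the RRBS-vs-HM450k concordance check; the input values
# below are hypothetical placeholders, not data from the manuscript.
import numpy as np
from scipy.stats import spearmanr

def platform_concordance(rrbs_beta, hm450k_beta):
    """Spearman correlation between matched per-DMR methylation values."""
    rho, pval = spearmanr(np.asarray(rrbs_beta), np.asarray(hm450k_beta))
    return rho, pval

rho, p = platform_concordance([0.48, 0.91, 0.12, 0.55, 0.33],
                              [0.52, 0.88, 0.10, 0.60, 0.30])
print(f"Spearman r = {rho:.2f}, p = {p:.3g}")
```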
Question 2. The authors perform selected pyrosequencing and targeted bisulphite sequencing. Although these data appear to support their results, it would be nice to have a comparison across all regions. Are there any overlapping ENCODE samples with RRBS or WGBS which would allow all imprinted loci to be assessed, even if just across a small cohort? Alternatively, they could compare publicly available RRBS/WGBS and copy number data from primary cancers to the data from the TCGA.
Answer 2. We tried to utilize ENCODE WGBS datasets but the data were not of sufficient quality to allow comparisons with the HM450k methylation array data. To allow for comparisons across additional imprinted regions, we performed an extended pyrosequencing analysis for more than 20 imprinted DMRs in four cell lines (MCF7, HepG2, T47D and HCT116). This comparison also revealed high correlation (Spearman r, HCT116 r = 0.58; T47D r = 0.79; HepG2 r = 0.74; MCF7 r = 0.5; p < 0.01).
Graphs showing the correlations for methylation levels at imprinted loci as determined by pyrosequencing and the COSMIC HM450k methylation array.
A sentence has been added to the manuscript describing the comparisons between the different technologies used to quantify methylation and the correlations obtained. It reads: "To ensure that the profiles obtained using the COSMIC HM450k methylation dataset accurately reflected the methylation pattern at imprinted DMRs, we compared the profiles for five cell lines with those obtained using reduced representation bisulphite sequencing (RRBS) generated by ENCODE. Despite the difference in technology, a comparison of the methylation profiles revealed high correlation (Spearman r, A549 r = 0.85; HCT116 r = 0.78; T47D r = 0.95; HepG2 r = 0.6; MCF7 r = 0.77; p < 0.0001), suggesting that the HM450k methylation data can be used with high confidence".
In addition, we have included a brief description of the analysis in the methods section.

Question 3. Is it possible to work out the cellular timing of alterations?

Answer 3. In an attempt to determine the cellular timing of the alterations, we examined the HM450k methylation profiles in five patient-derived tumour xenograft (PDTX) models of breast cancer. In each case we examined the profiles of the 37 imprinted DMRs in early and late passage PDTXs, but we failed to identify any methylation or CNA changes. This suggests that the cancer-associated aberrations had already occurred in the primary tumour and do not continually evolve in this model system.
Heatmap of the HM450k probes located within known imprinted DMRs in control breast tissues (n=19) and the five PDTX models.
Question 4. The interaction of methylation and copy number on gene expression is a key question in understanding the functional effects of these perturbations on cancer development. Have the authors attempted to look outside imprinted genes, particularly where non-imprinted genes sit within the same genomic alteration, and as such can they start to tease out the relationship between epigenetic and genetic alterations in such regions?
Answer 4. The effect of non-imprinted genes located within the same genomic CNA as the imprinted loci could influence tumour development. In an attempt to understand the impact of additional genes, we first looked for known tumour-suppressor genes and oncogenes mapping within the vicinity of the imprinted loci. In nine cases, proven oncogenes mapped near to imprinted regions so that both loci could be co-amplified. Similarly, two imprinted regions mapped near tumour-suppressor genes that could be affected by the same deletions. As a general trend, the larger the distance between the cancer-associated gene and the imprinted locus, the lower the frequency that both regions are involved in the same cytogenetic aberration. However, in a high proportion of cases both loci are affected (e.g., PPIEL and MYCL1 co-amplify in 98% of lung, 91% of liver, 97% of breast and 100% of colon cancer cell lines).
In addition to the classic acquired loss-of-heterozygosity associated with deletions, we show that cnnLOH is also a common chromosomal defect affecting imprinted loci. Since cnnLOH may lead to homozygosity of pre-existing pathogenic mutations (e.g., silencing tumour-suppressor genes or activating oncogenes), we screened for genes with homozygous mutations mapping within cnnLOH regions harbouring imprinted loci. Of the 280 cell lines analysed, 258 had at least one region of cnnLOH affecting an imprinted locus (48 breast cancer cell lines, 44 colon cancer cell lines, 14 liver cancer cell lines, 152 lung cancer cell lines), with 26% also containing homozygously mutated genes described in the COSMIC database. The majority of mutated genes have not been associated with tumour initiation or progression; however, we did identify recurring RB1 mutations associated with cnnLOH of chr13q14.2 and PTEN mutations with cnnLOH of chr10q23-26 in lung and breast cancer cell lines.
Furthermore, the analysis of the parent-of-origin of the cnnLOH, as inferred by the methylation profile of the affected imprinted DMRs, suggests that the effect on imprinted gene dosage is complex. For example, cnnLOH for the chr11p15.5 interval, including IGF2-H19, involves the maternally- and paternally-derived chromosomes equally in breast and lung cancer cell lines, but is exclusively paternal in colon cancer cell lines, consistent with previous observations that IGF2 over-expression is oncogenic. For cases also harboring mutated genes, it seems that the presence of the genetic variant is more influential than the parent-of-origin of the cnnLOH, with the exception of RB1 mutations in lung cancer cell lines, in which all cases (7/7) were associated with hypermethylation at the RB1 imprinted DMR.
A new results section has been added to the manuscript describing the associations between known cancer-associated genes and nearby imprinted loci. It reads: "
The influence of nearby non-imprinted genes on CNAs
The effect of non-imprinted genes located within the same CNAs as the imprinted loci could also influence tumor development. In an attempt to understand the impact of additional genes, we identified nine oncogenes and two tumor-suppressor genes that map within the vicinity of the imprinted loci (Supplementary Table 4). In general, the larger the distance between the cancer-associated gene and the imprinted locus, the lower the frequency that both regions are involved in the same cytogenetic aberration. However, in many cases both loci are affected. For example, the imprinted PPIEL locus is located ~373 kb from the MYCL1 oncogene, and co-amplification of both regions was observed in 98% of lung, 91% of liver, 97% of breast and 100% of colon cancer cell lines.
In addition to the classic acquired LOH caused by deletions, we show that cnnLOH is also a common chromosomal defect affecting imprinted loci. Since cnnLOH may lead to homozygosity of pathogenic mutations (e.g., silencing tumor-suppressor genes or activating oncogenes), we screened for genes with homozygous mutations mapping within cnnLOH regions harbouring imprinted loci. Of the 280 cell lines analysed, 258 had at least one region of cnnLOH affecting an imprinted locus, with 26% also containing homozygously mutated genes described in the COSMIC database. The majority of mutated genes have not been associated with tumor initiation or progression. However, we did identify recurring mutations of RB1 associated with cnnLOH of chr13q14 and PTEN mutations with cnnLOH of chr10q23-26 in lung and breast cancer cell lines (Supplementary Table 5).
Further analysis of the parent-of-origin of the cnnLOH, as inferred by the methylation profile of the affected imprinted DMRs, suggests that the influence on imprinted gene dosage is complex. For example, cnnLOH for the chr11p15.5 interval involves both the maternally- and paternally-derived chromosomes equally in breast and lung cancer-derived cell lines, but is exclusively paternal in colon cancer cell lines, consistent with previous observations that IGF2 over-expression is oncogenic (The Cancer Genome Atlas Network, 2012). For cases also harboring homozygously mutated genes, it seems that the presence of the genetic variant is more influential than the parent-of-origin of the cnnLOH, with the exception of RB1, for which all lung cancer cell lines (7/7) were associated with hypermethylation at the RB1 imprinted DMR".
Furthermore, two additional tables have been added to the supplementary information, describing the frequency of co-amplification or co-deletion incorporating cancer genes and imprinted loci (Supplementary Table 4) and a list of cnnLOH regions incorporating imprinted loci associated with homozygously mutated genes (Supplementary Table 5).
REVIEWER 2.
Answer to general comments. We thank this reviewer for their suggestions. Unfortunately, due to manuscript reformatting (i.e., shortening the title to only 15 words and cutting the abstract from 250 to 150 words), we are unable to specify that we analysed four human cancer types.
Question 1. It would be helpful if the authors could clarify at which imprinted loci there was no apparent relationship between CNAs and the parent-of-origin of the amplified/deleted chromosome, and at which loci (and in which tumor types) there appeared to be a relationship.
Answer 1. We have now included an additional supplementary figure presenting, for each imprinted region and cancer type, a summary metric that signifies the estimated proportion of methylation variability explained by copy number. A sentence has been added to the manuscript that reads, "An estimate of the proportion of methylation variability explained by copy-number alone is shown in the supplementary information (Supplementary Fig. 2)." In addition, we have included a brief description of the analysis in the methods section.
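For readers wishing to reproduce a metric of this kind, one simple formulation (our assumption; the exact metric behind the supplementary figure is not specified in this correspondence) is the R² of a regression of methylation on relative copy number, as sketched below.

```python
# A possible formulation of "proportion of methylation variability explained
# by copy number": the R^2 of an ordinary least-squares fit of per-sample
# DMR methylation on relative copy number. Input arrays are hypothetical.
import numpy as np

def variance_explained_by_cn(methylation, copy_number):
    x = np.asarray(copy_number, dtype=float)
    y = np.asarray(methylation, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)                 # linear fit
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)    # residual variance
    ss_tot = np.sum((y - y.mean()) ** 2)                   # total variance
    return 1.0 - ss_res / ss_tot

r2 = variance_explained_by_cn([0.35, 0.48, 0.52, 0.70, 0.66], [1, 2, 2, 3, 3])
print(f"R^2 = {r2:.2f}")
```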
REVIEWER 3.
We thank the reviewer for their constructive comments.

Question 1. Imprinted loci that are amplified to high levels may be those that are close to focally amplified oncogenes. For example, PPIEL is ~300 kb from the oncogene MYCL1. The authors should comment on the frequency of these occurrences.
Answer 1.
Please see our answer to Reviewer 1, Question 4.

Question 2. Figure 1A: It would be helpful if the loci names were appended with chromosomal arm locations so that this data could be compared to the SKY karyotype in Figure 1B. The labels of copy number to the left of MDA-MB453 are too small to be seen. Is the SNURF locus in Figure 1A the same as the SNRPN locus in Figure 1D? If so, the same name should be used throughout the manuscript.
Answer 2. We have amended Fig. 1 as suggested. Unfortunately, due to space restrictions, we could not label the imprinted loci on the SKY karyotype. However, we have now included fully labelled chromosome ideograms on our lab webpage that accompanies the supplementary information.
The SNRPN gene has many transcript isoforms, with the DMR regulating imprinting throughout the 15q region located within the promoter of the specific isoform named SNURF (SNRPN Upstream Reading Frame). Therefore, the nomenclature for the locus and the specific DMR is correct.
Question 3. The authors claim that for chromosomes harboring more than one imprinted domain, each locus is associated with private focal CNAs. However, in Figure 1D it appears that in ~25% of cell lines the SNRPN and IGF1R loci are co-amplified due to a chromosomal arm level gain of 15q. These amplifications are neither private nor focal CNAs.
Answer 3. We thank the reviewer for spotting this mistake. We have now amended the sentence to read: "For chromosomes harboring more than one imprinted domain, the CNAs may be focal or alterations involving the entire chromosome arm."

Question 4. Copy number is not interpretable without understanding the baseline ploidy of tumours. For instance, a segment with a copy number of 3 is an amplification in a diploid tumour but a deletion in a tetraploid tumour that has undergone whole genome doubling (MDA-MB-453 is an example of this). The authors' use of copy ratios shows a good understanding of this issue. But if the tumours shown in Figure 1D are of different baseline ploidy, then copy number should be shown relative to the baseline ploidy (total alleles/baseline ploidy). Copy number shown in tables should also include baseline ploidy or relative copy number.
Answer 4. We fully agree with the reviewer that understanding the baseline ploidy of the tumours is important. We have therefore generated new tables (Supplementary Table 1) and figures (Supplementary Fig. 1) showing the relative baseline ploidy (total alleles/relative copy number). In addition, we have included a sentence in the results section addressing CNAs at imprinted domains that reads, "In all cases an estimated ploidy baseline (total alleles/baseline ploidy) was also calculated (Supplementary Table 1; Supplementary Fig. 1), since a total copy number >2 could represent an amplification in a diploid tumor but a deletion in a hyperploid tumor."

Question 5. Copy number changes shown in tables should also give some indication of whether these changes are due to focal or arm-level alterations. One way of doing this is to indicate what percentage of the total chromosomal arm these alterations represent.
Answer 5. To describe the precise size of the deletions and amplifications for each affected imprinted region in all cancer cell lines would result in an enormous supplementary table that would be extremely difficult to use. However, to address this comment, we have generated maps of all cytogenetic aberrations for each imprinted region for the cell lines from the four cancer types (the same as Fig. 1D), which are available on our laboratory's webpage.

Question 6. Figure 2C is difficult to understand and should have a legend. It is unclear what the black and white circles underneath the methylation maps of loci are supposed to represent.
Answer 6. For clarity we have amended the legend for Fig. 2C; it now reads: "Two different bisulphite PCRs were performed per region to confirm the strand-specific methylation profile as determined by cloning and direct sequencing. Each circle represents a single CpG dinucleotide on a DNA strand (results for multiple DNA strands are depicted as rows); filled circles indicate a methylated cytosine, and open circles an unmethylated cytosine."

Question 7. In Figure 3B, it is unclear what exactly the x-axes of the graphs are.
Answer 7. The x-axis of the graphs in Figures 3B and 5C represents the number of cell lines with cancer-associated methylation changes. We have amended the figure and legend, which now reads: "The left column graphs reveal that cell lines with the highest hypermethylation burden for CIMP regions are similarly hypermethylated at bivalent domains. The middle column is a comparison between the methylation profiles of imprinted DMRs irrespective of CNA status and CIMP. The right column is the same comparison but with only imprinted domains with a normal copy number. For each type of locus the number of genes analysed is indicated on the x-axis."

Question 8. In Figure 4B the RNA FISH signals for cell line MCF7 are barely visible. Can you show a better representative cell for this figure?
Answer 8. The intensity of the RNA FISH signals for the ncRNA KCNQ1OT1 is quantitative; therefore, we are reluctant to manipulate the images. To make the RNA FISH signals more visible, we have now included zoomed-in insert panels in Fig. 4B and Supplementary Fig. 5. The figure legends now read: "Representative RNA-FISH analysis of KCNQ1OT1 lncRNA-coated territory (green signal, white arrows) of individual nuclei, with inserts representing zoomed-in images of FISH signals."

Question 9. For Figure 5B, the same comment as for Figure 1D. If the data in 5B are not relative copy number, then how did the authors take into account differences in tumour purity when calculating copy number?

Answer 9. All CNAs depicted in the figure are relative copy number, and tumour purity was not taken into account. The SNP array data used for copy-number calling were the processed data available from TCGA; therefore, any corrections would be impossible to perform. The same is true for the methylation analysis, and it is for this reason that we included Supplementary Table 4 describing the tumour characteristics.
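To make the ploidy point raised in Questions 4 and 9 concrete, the following minimal sketch classifies a segment's copy state relative to baseline ploidy; the function and thresholds are illustrative, not from the manuscript.

```python
# Sketch of ploidy-relative copy-number interpretation: a total copy number
# of 3 is a gain in a diploid tumour but a loss in a tetraploid one
# (e.g., MDA-MB-453 after whole genome doubling). Illustrative only.
def relative_copy_state(total_copies, baseline_ploidy):
    ratio = total_copies / baseline_ploidy
    if ratio > 1.0:
        return "gain"
    if ratio < 1.0:
        return "loss"
    return "neutral"

print(relative_copy_state(3, 2))  # 'gain' in a diploid tumour
print(relative_copy_state(3, 4))  # 'loss' in a tetraploid tumour
```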
Evidence of chaotic modes in the analysis of four delta Scuti stars
Since CoRoT observations unveiled the very low amplitude modes that form a flat plateau in the power spectrum structure of delta Scuti stars, the nature of this phenomenon, including the possibility of spurious signals due to the light curve analysis, has been a matter of long-standing scientific debate. We contribute to this debate by finding the structural parameters of a sample of four delta Scuti stars, CID 546, CID 3619, CID 8669, and KIC 5892969, and looking for a possible relation between these stars' structural parameters and their power spectrum structure. For the purposes of characterization, we developed a method of studying and analysing the power spectrum with high precision and have applied it to both CoRoT and Kepler light curves. We obtain the best estimates to date of these stars' structural parameters. Moreover, we observe that the power spectrum structure depends on the inclination, oblateness, and convective efficiency of each star. Our results suggest that the power spectrum structure is real and is possibly formed by 2-period island modes and chaotic modes.
Introduction
The launch of space telescopes such as the MOST, CoRoT, and Kepler satellites (Walker et al. 2003; Baglin et al. 2006; Borucki et al. 2010) marked the beginning of the precise study of stellar oscillations in stars other than the Sun. Since then, the high quality of the light curves has allowed the precise characterization of the mode parameters of different kinds of stars, and the study of their variation with time and their connection with the stellar structure.
Although the power-spectral structure of stars with solar-type oscillations is well known, this is not the case for δ Scuti stars. The power spectrum of these stars shows a complex structure with dominant peaks of moderate amplitudes and many hundreds of lower amplitude peaks that form a flat plateau (e.g., Poretti et al. 2009), the so-called "grass". After the observation of the "grass", a long-standing debate about its origin started, including the possibility of it arising from spurious signals produced during the analysis of the data (Balona 2014b).
A huge theoretical effort has been made to find a possible physical phenomenon behind this power-spectral structure. Some of the arguments put forward are:

1. Less effective disc averaging of the flux owing to the geometry of the δ Scuti star. Therefore, it is possible to find modes with higher degrees than in the spherically symmetric case (l > 4; Balona & Dziembowski 1999). Although Balona & Dziembowski (2011) find that most δ Scuti stars do not seem to have a high enough density of peaks to support this possibility, several stars seem to show modes with high degrees, up to l = 20 (Kennelly et al. 1998; Poretti et al. 2009).

2. A granulation background signal due to the effect of a thin outer convective layer (Kallinger & Matthews 2010). This effect is found to be more important in cool δ Scuti stars (Balona 2011).

3. Variations with time that produce sidelobes of the main peak of the spectra. Balona & Dziembowski (2011) find that around ∼45% of the spectra of Kepler δ Scuti stars have a one-sided sidelobe. They discard effects such as binarity because these yield amplitude-symmetric, equally spaced multiplets (Shibahashi & Kurtz 2012). However, there are other causes that produce multiplets with non-symmetric amplitudes, such as resonant mode coupling (RMC; see Barceló Forteza et al. 2015, and references therein).

4. A magnetic field in a rotating star splits each peak of the rotational multiplet into (2l + 1) components, meaning that one mode is split into (2l + 1)² peaks (Goode & Thompson 1992). Magnetic fields have been detected at the surface of ∼7% of main sequence and pre-main sequence intermediate-mass and massive stars (Mathis & Neiner 2015). However, δ Scuti stars with measurable magnetic fields are not common: only one δ Scuti star shows a magnetic field (Neiner & Lampens 2015) and another is suggested to be magnetic from its chemical abundance (Escorza et al. 2016).

5. The oblateness of the star produced by high rotation rates is the cause of the appearance of a significant number of chaotic modes (Lignières & Georgeot 2009).
The determination of fundamental structural parameters of these stars, such as mass, inclination, rotation rate, and convective efficiency, can help us to unveil which of these mechanisms are responsible for this kind of power-spectral structure. Four interesting δ Scuti stars observed by the CoRoT and Kepler satellites have been characterized in this paper: CID 546, CID 3619, CID 8669, and KIC 5892969. The differences in their power spectra help us in our aim.

Table 1. Typical values of the stellar characteristics of δ Scuti stars from Breger (2000) and Aerts et al. (2010).

In Sect. 2 we describe the main characteristics of this kind of star. The way in which their oscillations are analysed to obtain the parameters of the modes is presented in Sect. 3. Results for each target star are commented on in Sect. 4. In Sect. 5 we estimate their structural parameters. The power-spectral structure is studied in depth and discussed in Sect. 6. In the last section we present our conclusions.
δ Scuti type stars
δ Scuti stars are classical pulsators with oscillation frequencies between ∼60 and ∼900 µHz (e.g., Zwintz et al. 2013). These stars are located on or slightly off the main sequence, with spectral types between A2 and F5 (Breger 2000). They are intermediate-mass stars that show fast rotation rates, as is common in stars within their mass domain or of higher mass (Royer et al. 2007). In fact, one of the reasons that δ Scuti stars can be separated from RR Lyrae stars is their higher rotational velocity, v sin i > 10 km/s (see Peterson et al. 1996). Other typical characteristics of δ Scuti stars are detailed in Table 1.
Hybrid stars
Several subgroups can be distinguished from the main class of δ Scuti stars pulsating with nonradial p-modes, such as high-amplitude δ Scuti stars (HADS), SX Phe variables, or δ Sct/γ Dor hybrid stars (Breger 2000). This last group comes from the observation of g-modes in δ Scuti type stars with frequencies typical of γ Doradus stars, that is, ν ∼ [6−60] µHz. Uytterhoeven et al. (2011) point out that a star can be classified as hybrid when all three of the following conditions are fulfilled: 1) Typical frequencies of both kinds of stars are detected.
2) The amplitudes of both domains are comparable, within a factor 5.
3) There are two independent frequencies in both domains with amplitudes higher than 100 parts per million (ppm).

If the star is hybrid, it will be a δ Sct/γ Dor or a γ Dor/δ Sct star depending on which part is the dominant one (Grigahcène et al. 2010). In this way, it is found that a large fraction of δ Scuti and γ Doradus stars, ∼36%, are hybrids. Other studies suggest that all δ Scuti stars are hybrids (Balona 2014a).
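As a worked illustration of these three conditions, the following sketch classifies a hypothetical peak list; the interpretation of "comparable amplitudes" as a factor-five ratio of mean amplitudes is our assumption, not necessarily the original authors' criterion.

```python
# Sketch of the Uytterhoeven et al. (2011) hybrid criteria applied to a
# list of (frequency in uHz, amplitude in ppm) peaks. The mean-amplitude
# comparison used for condition (2) is an assumed interpretation.
def is_hybrid(peaks):
    gdor = [(f, a) for f, a in peaks if 6 <= f < 60]     # gamma Dor domain
    dsct = [(f, a) for f, a in peaks if 60 <= f <= 900]  # delta Sct domain
    if not gdor or not dsct:                     # (1) both domains detected
        return False
    mean_g = sum(a for _, a in gdor) / len(gdor)
    mean_d = sum(a for _, a in dsct) / len(dsct)
    if max(mean_g, mean_d) > 5 * min(mean_g, mean_d):    # (2) within factor 5
        return False
    strong_g = sum(1 for _, a in gdor if a > 100)
    strong_d = sum(1 for _, a in dsct if a > 100)
    return strong_g >= 2 and strong_d >= 2       # (3) two peaks > 100 ppm each

print(is_hybrid([(12, 300), (25, 180), (150, 900), (220, 400)]))  # True
```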
However, other magnitudes that help us to differentiate δ Scuti from γ Doradus stars are the convective efficiency (Γ), which depends on the surface gravity g, and the kinetic energy of the waves (E_kin), which depends on ν₀ and A₀, the frequency and amplitude of the mode with maximum power, respectively. These magnitudes have dominant values of log Γ < −8.1 and log E_kin > 10.1 for δ Scuti stars when the amplitude is measured in ppm and the frequency in µHz (see Uytterhoeven et al. 2011). Both quantities are related to the convective zone of the star, which is more efficient in γ Doradus stars. We use all of these tools to find out whether some of our selected stars are hybrid stars and whether they present some other differences in their power-spectral structure.
Rotational effect
Taking into account the effect of rotation in the perturbation analysis of a spherically symmetric star, the modes split into multiplets. For low rotation rates (Ω), these multiplets present (2l + 1) symmetric peaks, split by approximately a multiple of the rotational splitting (s). For higher rotation rates, second-order effects have to be taken into account and the symmetry is broken (Saio 1981; Dziembowski & Goode 1992). Besides the appearance of the asymmetry, the multiplet is also globally shifted. Going even further, a third-order correction has already been studied by Soufi et al. (1998).
The different contributions can be estimated thanks to two dimensionless magnitudes (Goupil et al. 2000): ε², which scales the effect of the centrifugal force relative to gravity, and µ, which scales the rotation rate relative to the oscillation frequencies.
All of these effects are higher for lower frequency g-modes than for higher frequency p-modes. However, they can produce observable shifts up to 1 µHz.
Rotation also produces a deformation of the star (Cassinelli 1987). Under the assumption that the rotation is uniform and the surface of the star is approximately a Roche surface (Pérez Hernández et al. 1999), an averaged effective gravity (g_eff) can be defined in terms of the radius R of the star with spherical symmetry. With these assumptions it is also possible to obtain the polar radius, R_p. Assuming that the volume is conserved compared with a spherically symmetric star, the oblateness O of the star can also be defined. The oblateness of a star increases with higher ε². However, rotation has a maximum limit beyond which the centrifugal force would destroy the star. This limit is the so-called break-up frequency (Ω_K). Therefore, for a stable δ Scuti star, Ω/Ω_K ≤ 1 must hold.
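To illustrate these quantities, the sketch below evaluates the polar radius, oblateness, and the ratio Ω/Ω_K for a uniformly rotating Roche model. Since the paper's exact Eqs. 4-7 are not reproduced in this extract, the formulae used are common textbook conventions (Roche potential equality between pole and equator; Ω_K = √(8GM/27R_p³)) and may differ in detail from the authors' definitions.

```python
# Roche-model sketch (textbook conventions, assumed rather than the paper's
# exact Eqs. 4-7): polar radius from equating the Roche potential at pole
# and equator, oblateness O = 1 - R_p/R_e, and break-up frequency
# Omega_K = sqrt(8 G M / (27 R_p^3)).
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # kg, m

def roche_geometry(mass_kg, r_eq_m, omega):
    r_pole = r_eq_m / (1.0 + omega**2 * r_eq_m**3 / (2.0 * G * mass_kg))
    oblateness = 1.0 - r_pole / r_eq_m
    omega_k = math.sqrt(8.0 * G * mass_kg / (27.0 * r_pole**3))
    return r_pole, oblateness, omega / omega_k

# Illustrative values (not from Table 3): 1.8 M_sun, R_e = 2 R_sun, 1e-4 rad/s
rp, obl, frac = roche_geometry(1.8 * M_SUN, 2.0 * R_SUN, 1.0e-4)
print(f"R_p = {rp / R_SUN:.2f} R_sun, O = {obl:.3f}, Omega/Omega_K = {frac:.2f}")
```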
Another known effect of rotation is gravity darkening (von Zeipel 1924), that is, an increase of the temperature from the equator to the poles. It follows a power law, T_eff ∝ g_eff^β, where the value of β depends on the evolutionary stage of the star (Claret 1998). Taking this effect into account, the measurements of T_eff and log g will vary depending on the inclination angle of the star (i). Therefore, these variations have to be carefully treated to obtain the proper main characteristics of the star.
Analysis of CoRoT & Kepler δ Scuti light curves
The data we use were obtained by the CoRoT and Kepler satellites. The CoRoT satellite (Convection, Rotation, and planetary Transits) was developed and operated by the French space agency CNES with international contributions from ESA, Austria, Belgium, Brazil, Germany and Spain. The objective of the mission was to search for exoplanets and to perform asteroseismic studies. Two different channels were designed: the exo-channel data are sampled every 512 s with a photometric precision between 40 and 90 ppm (Auvergne et al. 2009), whereas the seismo-channel has a much shorter cadence of 32 s and a substantially higher photometric precision of between 0.6 and 4 ppm (see Auvergne et al. 2009). As described in Boisnard & Auvergne (2006), two kinds of campaigns of different duration were planned: Long Runs (LR) of approximately 150 d, and Short Runs (SR) with durations around 30 d. These runs were carried out in two different fields: one close to the galactic anticentre direction (a) and the other close to the galactic centre direction (c). The light curves of each star can then be classified with the nomenclature <duration><direction><number>. For example, LRa01 is the first Long Run pointing close to the anticentre of the galaxy.
The Kepler mission was designed by NASA to survey a single region of our own galaxy to detect and characterize Earth-sized planets close to their habitable zones using the transit method (Thompson et al. 2012). The maximum duration of its light curves is up to four years. Because the original field is no longer observable, the mission was renamed K2 (Howell et al. 2014). Two different cadences are available depending on the star: Long Cadence (LC) of ∼29.5 min or Short Cadence (SC) of ∼1 min. Moreover, the data are downloaded in three-month blocks called quarters (Q<number>; Haas et al. 2010).
The asteroseismic analysis was performed by a set of programs called δ Scuti Basics Finder (δSBF) that we built in the IDL programming language. We analysed the light curves of δ Scuti stars with the three-stage method described in detail in Barceló Forteza et al. (2015). This iterative method allows us to interpolate the light curve using the information of the subtracted peaks, minimizing the effect of gaps, considerably improving the background noise, and avoiding spurious effects (García et al. 2014). Thus, we obtain very accurate results in terms of frequencies, amplitudes, and phases. In addition, we take into account the energy of the signal for each peak i, defined in terms of RMS_i, the root mean square of the residual signal after the subtraction of the i highest peaks (i = 0 corresponding to the original signal).
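Since the defining equation for the peak energy is not reproduced in this extract, the sketch below assumes a natural form: the energy of peak i is the drop in squared residual RMS when that peak is subtracted, normalised by the power of the original signal.

```python
# Assumed peak-energy bookkeeping for iterative prewhitening: the fraction
# of the original signal's power removed by subtracting each peak.
import numpy as np

def peak_energies(residual_rms):
    """residual_rms[i] = RMS after subtracting the i highest peaks (i = 0: raw)."""
    rms2 = np.asarray(residual_rms, dtype=float) ** 2
    return (rms2[:-1] - rms2[1:]) / rms2[0]

# Hypothetical RMS sequence for a raw light curve and three prewhitening steps:
print(peak_energies([1000.0, 400.0, 200.0, 190.0]))  # -> [0.84, 0.12, 0.0039]
```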
The next step in our strategy was to characterize each δ Scuti star, finding its structural parameters such as mass (M) or oblateness (O). This is possible thanks to the regularities present in the power-spectral structure of this kind of star, such as the large separation (∆ν) and the rotational splitting (s).
The last step consists in studying each star's power-spectral structure in order to find any relation between its characteristic parameters, such as the density of peaks (n_mean), and its structural parameters. These two steps are further discussed in the sections that follow.
Characterization of four δ Scuti stars
We tested our method with an already known δ Scuti star, CID 546. We also analysed the light curves of the stars CID 3619, CID 8669, and KIC 5892969. Figures 1 to 4 show the four power spectra and their frequency contents. The highest amplitude peaks can be found in Appendix A.
CID 546
The δ Scuti star CID 546 (HD 50870) is an F0IV star with M_V ∼ 1.67 at an approximate distance of 277 pc (V ∼ 8.88; McCuskey 1956), observed by CoRoT close to the anticentre direction of the Galaxy. It was observed for 114.4 d between 2008 November 13 and 2009 March 8 (LRa02). Its known parameters are listed in the first rows of Table 3. Recently, Mantegazza et al. (2012) discovered that CID 546 is a long-period spectroscopic binary star with a cooler companion. Nevertheless, additional observations that cover more of the orbital period are necessary to confirm their results, which they consider to be preliminary. No evidence of binarity is found in the photometric analysis.
When our analysis is applied to the power spectrum of this star, we obtain 1513 peaks higher than 10 ppm with a signal-to-noise ratio (SNR) greater than or equal to four. These peaks carry 99.77% of the full signal. Seventeen peaks with amplitudes higher than 400 ppm carry 97.37% of the signal and appear in three different frequency ranges (see Table A.1): the first from approximately 160 to 200 µHz; the second, at half the value of the previous one, from approximately 80 to 100 µHz; and the third regime close to 400 µHz. The analysis also reveals 1446 peaks with amplitudes lower than 130 ppm that only carry 0.95% of the energy of the signal. The flat plateau is clearly visible and could be differentiated from noise after extracting hundreds of peaks (see Fig. 1).
Comparing the results with those obtained by Mantegazza et al. (2012), our method finds all peaks with amplitudes higher than 50 ppm and frequencies higher than 1 µHz, with mean relative differences between the two sets of results of 5 × 10⁻³% in frequency and 10% in amplitude. The differences between the frequencies obtained by the two methods are ∼0.01 µHz, one order of magnitude lower than the frequency resolution. Therefore, it can be concluded that the method produces accurate results and allows us to study real light curves of δ Scuti stars from both the CoRoT and Kepler satellites.
CID 3619
The F0V star CID 3619 (HD 48784) has M_V ∼ 1.87 at a distance of approximately 91 pc (V ∼ 6.65; Charpinet et al. 2006) and was observed by CoRoT close to the anticentre direction of the Galaxy. The satellite followed it for 25.3 d during 2008 March 5-31 (SRa01) and also for 40.7 d during 2011 November 29 - 2012 January 9 (SRa05).
The analysis of the power spectrum of the SRa01 light curve detects 163 peaks down to 5 ppm with an SNR greater than or equal to four. These peaks carry 96.45% of the full signal. The same analysis was performed for the SRa05 light curve, obtaining 508 peaks down to 2.5 ppm with the same lower limit for the SNR and carrying 96.8% of the full signal. We found 37 and 42 peaks with energies higher than 0.1%, respectively. Of all these peaks, only twenty are detected in both runs (see Table A.2); these show slight differences in frequency, up to 0.2 µHz, but larger changes in amplitude, up to 73%, and phase, up to 1.5π.
The power-spectral structure of this star shows a frequency range that includes the typical regimes of both γ Doradus and δ Scuti stars. Moreover, the ratio between the mean amplitudes of both regimes is around A_δSct/A_γDor ∼ 4, which means that CID 3619 is a hybrid δ Sct/γ Dor star candidate. The power spectrum of CID 3619 does not show the flat plateau present for CID 546 (see Figures 1 and 2).
CID 8669
The A5 δ Scuti star CID 8669 (HD 181555) has an absolute magnitude M_V ∼ 2.19 at a distance of approximately 116 pc (V ∼ 7.52; Charpinet et al. 2006) and was observed by CoRoT close to the direction of the centre of the Galaxy. It was observed for 156.6 d between 2007 May 11 and 2007 October 15 (LRc01).
The power spectrum of this star shows 3175 peaks higher than 3 ppm with an SNR greater than or equal to four. These peaks carry 99.83% of the full signal. Thirty-one peaks with amplitudes higher than 200 ppm carry 95.71% of the energy of the signal (see Table A.3). The analysis also finds 3054 peaks with amplitudes lower than 70 ppm that only carry 1.75% of the energy of the signal. The flat plateau is also visible, and it has a higher density of peaks than that of CID 546 (see Figures 1 and 3).
KIC 5892969
The stellar characteristics of the faint δ Scuti star KIC 5892969, K_p ∼ 12.445, have been studied spectroscopically by Huber et al. (2014). This star was observed by the Kepler satellite during 1470 d, from Q0 to Q17, in LC, and its oscillation modes have been studied in Barceló Forteza et al. (2015). Since the Nyquist frequency of KIC 5892969's power spectrum is lower than the typical frequency range of δ Scuti stars, we cannot ascertain a priori whether there is a flat plateau (see Fig. 4).
Searching for possible spectral regularities
As Suárez et al. (2014) stress, the mode organization for δ Scuti stars includes regularities such as the large separation, with a negligible variation from the non-rotating case. Using a dense sample of representative models, they obtain a scaling relation between the large separation and the mean density ρ of the star. This relation is somewhat similar to that found for solar-type oscillators, ∆ν/∆ν_⊙ ≈ (ρ/ρ_⊙)^(1/2) (Kjeldsen & Bedding 1995). In fact, they point out that the minimum error of these analyses is around 11 to 21% and is due to the stellar deformation. This is in agreement with several previous theoretical studies claiming that regularities in the p-modes of δ Scuti stars are related to the spherical large separation (e.g., Pasek et al. 2012).
On the observational side, many δ Scuti stars show frequency spacings (e.g., García Hernández et al. 2009; Zwintz et al. 2011b). Some of these spacings have been interpreted as a combination of frequencies (Breger et al. 2011) or the signal of the rotational splitting (Zwintz et al. 2011a). Moreover, several δ Scuti stars known to be eclipsing binaries have been analysed (e.g., da Silva et al. 2014). For binary stars, it is possible to calculate stellar characteristics such as mass or radius. With all these data, García Hernández et al. (2015) find a relation similar to the previous one, proving that the above scaling relation is independent of the rotation rate.
Therefore, using previously known parameters and the ∆ν−ρ relation, it is possible to delimit the value of these two regularities. Once we have the mean density of the star and its rotation, the mass and the radius can be estimated using the Stefan-Boltzmann law and the surface gravity acceleration (Eq. 4). Moreover, we can also obtain a mass estimate with the mass-luminosity relation (Ibanoǧlu et al. 2006).
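A minimal sketch of this characterisation pipeline is given below. It uses the simple solar-scaled form ∆ν ∝ ρ^(1/2) (∆ν_⊙ ≈ 134.8 µHz, ρ_⊙ ≈ 1408 kg m⁻³) as a stand-in for the fitted relations of Suárez et al. (2014) and García Hernández et al. (2015), whose coefficients differ slightly; the input values are illustrative, not taken from this paper.

```python
# Sketch: mean density from the large separation (solar-scaled approximation),
# radius from the Stefan-Boltzmann law, mass from density and radius.
import math

SIGMA = 5.670e-8                  # W m^-2 K^-4
L_SUN, M_SUN = 3.828e26, 1.989e30

def star_from_seismology(dnu_uhz, teff_k, lum_lsun):
    rho = 1408.0 * (dnu_uhz / 134.8) ** 2                      # kg m^-3
    radius = math.sqrt(lum_lsun * L_SUN /
                       (4.0 * math.pi * SIGMA * teff_k ** 4))  # m
    mass = rho * 4.0 / 3.0 * math.pi * radius ** 3             # kg
    return rho, radius, mass / M_SUN

# Illustrative delta Scuti-like inputs (not values from this paper):
rho, r, m = star_from_seismology(55.4, 7500.0, 15.0)
print(f"rho = {rho:.0f} kg/m3, R = {r:.2e} m, M = {m:.2f} M_sun")
```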
We used the following four methods to look for regularities.
Histogram of differences
Breger et al. (2009) use a histogram of differences between all the detected modes and find that the radial modes (l = 0) are not the only kind of mode that allows us to find the large separation. García Hernández et al. (2009) stress that the high amplitude modes carry this signature, and that including the lower amplitude modes powers other periodicities. These other regularities make it very difficult to determine the large separation. Therefore, we used the histogram of differences between pairs of frequencies within the typical δ Scuti star range of oscillation frequencies, taking into account only the ∼50 highest amplitude peaks (see top left panel in Fig. 5).
Rotational multiplets can be present in the set of frequencies chosen for the analysis, allowing us to also see the rotational splitting. As we mention in Sect. 2.2, the higher the rotation rate, the greater the deviation from a symmetric splitting, and the more difficult it is to observe in the histogram (Goupil et al. 2000). For low rotation rates, s ∼ 1 µHz, the splitting is easily detectable and contributions due to twice and thrice the splitting are also detected. Moderate rotation rates, s ∼ 5 µHz, perturb this structure but the splitting is still dominant. For higher rotation rates, s ∼ 10 µHz, it is not possible to observe a dominant peak, owing to the lack of symmetry between the peaks of the rotational multiplet.
To obtain an accurate value of these parameters with this method, it is important that the binning of the histogram reaches a compromise between the possible variation of the periodicities with frequency and the accuracy we want to reach. We used a bin of 0.5 µHz to look for rotational signatures and bins up to 1.5 µHz to find the large separation. The results of this method (see Table 2 and top left panel of Fig. 5) take into account possible multiples of the rotation, multiples or submultiples of the large separation, and their split peaks (∆ν ± s).

Notes (Table 2). Each column gives the results of each method: (1) the histogram of differences (see Section 5.1), (2) the autocorrelation function (see Section 5.2), (3) the spectrum of the subspectrum analysis (see Section 5.3), and (4) the echelle diagram (see Section 5.4).
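A minimal sketch of the histogram-of-differences search follows; the synthetic comb of frequencies stands in for the ∼50 highest-amplitude observed peaks.

```python
# Histogram of pairwise frequency differences, binned at 0.5-1.5 uHz,
# used to reveal a dominant spacing. Frequencies below are synthetic.
import itertools
import numpy as np

def difference_histogram(freqs_uhz, bin_width=0.5, max_diff=100.0):
    diffs = [abs(a - b) for a, b in itertools.combinations(freqs_uhz, 2)]
    diffs = [d for d in diffs if 0 < d <= max_diff]
    bins = np.arange(0.0, max_diff + bin_width, bin_width)
    return np.histogram(diffs, bins=bins)

freqs = [100 + 55 * n for n in range(8)]     # exact 55 uHz comb
counts, edges = difference_histogram(freqs, bin_width=1.5)
k = counts.argmax()
print(f"dominant spacing in {edges[k]:.1f}-{edges[k + 1]:.1f} uHz")
```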
Autocorrelation function
The autocorrelation function compares a power spectrum with itself as a function of lag. This function has a higher value when the variations of the original spectrum increase and decrease similarly to those of the shifted spectrum. Thus, when the lag coincides with one of the possible regularities of the spectrum, the value of the autocorrelation function increases. Reese et al. (2013) test this method with artificial spectra, taking into account different numbers of modes and calculating their visibilities for different inclinations and rotation rates. They conclude that it is possible to obtain regularities corresponding to the large separation and half its value, and also to the rotation rate and twice its value. These peaks are reinforced when the range of observed frequencies spans a large enough interval and does not include too many modes in the artificial light curve. This last condition is important to avoid powering other regularities that can be present in the spectrum, as also happens with the histogram of differences. Therefore, we calculated the autocorrelation function of an artificial spectrum built by considering only the ∼50 highest amplitude modes (see top right panel in Fig. 5).
Once the autocorrelation function was calculated, we looked for its highest values in the 1 to 100 µHz lag interval. The large separation is found by looking for the peak with the highest number of consecutive submultiples within its error. Then, we looked at its closest peaks to find possible rotation signatures. The last step was to try to find the rotation signature by looking for one peak within the 1 to 18 µHz range that has consecutive multiples within its error. This method usually finds one of the split peaks of the large separation as the dominant peak (∆ν ± s). We correct it by adding or subtracting the rotational splitting (see Table 2 and top right panel of Fig. 5).
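The following sketch illustrates the procedure on a synthetic set of five peaks sharing a 55 µHz spacing; the grid resolution and amplitudes are arbitrary choices.

```python
# Autocorrelation of an artificial delta-peak spectrum; the strongest lag
# in the 1-100 uHz window recovers the injected 55 uHz spacing.
import numpy as np

def spectrum_autocorrelation(freqs_uhz, amps, df=0.1, fmax=400.0):
    grid = np.zeros(int(fmax / df))
    for f, a in zip(freqs_uhz, amps):
        grid[int(round(f / df))] += a            # artificial spectrum
    ac = np.correlate(grid, grid, mode="full")[grid.size - 1:]
    return np.arange(ac.size) * df, ac

lags, ac = spectrum_autocorrelation([100, 155, 210, 265, 320],
                                    [1.0, 0.8, 0.9, 0.7, 0.6])
win = (lags >= 1.0) & (lags <= 100.0)
print(f"strongest lag ~ {lags[win][ac[win].argmax()]:.1f} uHz")
```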
TRUFAS: the spectrum of a subspectrum
This method uses part of the TRUFAS algorithm, originally built to detect p-mode oscillations in solar-like stars as described by Régulo & Roca Cortés (2002) and later used to find planetary photometric transits (Régulo et al. 2007). It takes advantage of the properties of the spectrum of the subspectrum, FFT{S(ν) · H(ν)}, for which the spectral signature of the n ≫ l p-modes can be considered as an equally spaced set of frequency peaks, ν_k ≈ k∆ν with k_i ≤ k ≤ k_f, where k_i and k_f are integers with k_i < k_f; the window function H(ν) is equal to unity for ν_i ≤ ν ≤ ν_f and zero elsewhere, where ν_i and ν_f are the frequency limits.
It is possible to find the large separation by looking for the values with a higher number of peaks in quefrency space showing a significant power excess at q = k/∆ν. This process is repeated for values close to the frequency limits of the subspectrum, varying them by only a few µHz. The number of coincidences for each possible periodicity is then counted (see bottom left panel in Fig. 5).
Not only can the large separation be found using this method, but also other periodicities such as the rotational splitting (Roca Cortés & Régulo 2001). The major problem arises when the sought-after periodicity is not exactly uniform (Eq. 13), because our main assumption is then broken. Nevertheless, the better the SNR of the observations, the higher the departure from a uniform frequency spacing that the method is able to accept (Régulo & Roca Cortés 2002).
We considered a frequency range down to approximately three times the highest studied periodicity to clearly detect possible regularities. To achieve a high SNR, we built an artificial light curve. The number of highest-amplitude peaks (I) taken into account has to reach a compromise: it must include the spectral regularities while avoiding powering other periodicities or noise. Looking for this compromise, we tested several values of I. For most cases, taking into account I = 50 peaks allowed us to find the rotational splitting, two times its value, the large separation, and half its value (see Table 2 and bottom left panel of Fig. 5).
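The following sketch shows the spectrum-of-the-subspectrum step on a hypothetical comb; the window limits ν_i and ν_f and the comb spacing are illustrative choices, not the values used in the paper.

```python
import numpy as np

def trufas_quefrencies(nu, power, nu_i, nu_f):
    """FFT of the windowed subspectrum S(nu)*H(nu).

    An equally spaced comb with separation dnu inside the window
    produces power excesses at quefrencies q = k/dnu."""
    window = (nu >= nu_i) & (nu <= nu_f)           # H(nu)
    sub = np.where(window, power, 0.0)
    spec2 = np.abs(np.fft.rfft(sub)) ** 2          # spectrum of the subspectrum
    dq = 1.0 / (len(nu) * (nu[1] - nu[0]))         # quefrency resolution (1/µHz)
    return np.arange(len(spec2)) * dq, spec2

# Hypothetical subspectrum: a comb with dnu = 55.4 µHz.
step = 0.1
nu = np.arange(0.0, 1000.0, step)
power = np.zeros_like(nu)
for f in np.arange(60.0, 700.0, 55.4):
    power[int(round(f / step))] = 1.0

q, spec2 = trufas_quefrencies(nu, power, nu_i=50.0, nu_f=720.0)
k = np.argmax(spec2[1:]) + 1                       # skip the zero-quefrency term
print(f"dominant quefrency {q[k]:.4f} 1/µHz  ->  dnu ~ {1.0 / q[k]:.1f} µHz")
```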
Echelle diagram
The echelle diagram takes advantage of the regularity of the p-modes (Eq. 13) and represents the power spectrum in constant slices (Grec et al. 1983). If the value of the slice is the large separation, the modes with the same degree (l) and azimuthal order (m) will appear to be aligned (see bottom right panel in Fig. 5). A possible deviation from this regular pattern is produced by the departure from the asymptotic regime and/or a high rotation rate.
We used this property to delimit the large separation found by previous methods (see Table 2) within a given frequency range. For values close to the preliminary one, the number of consecutive modes aligned within its error is counted. The limit is found when the number of consecutive modes aligned is lower than a threshold. As happens with the other methods, a high number of peaks can power other periodicities. Therefore, only the highest-amplitude modes were taken into account.
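As a sketch, the echelle construction reduces to folding the frequencies modulo a trial large separation; the mode set below is hypothetical, built only from the ∆ν and splitting values quoted for CID 546.

```python
import numpy as np
import matplotlib.pyplot as plt

def echelle_coordinates(freqs, dnu):
    """Fold mode frequencies modulo a trial large separation dnu (µHz).

    Modes of equal degree l and azimuthal order m line up vertically
    when dnu is close to the true large separation."""
    return np.mod(freqs, dnu), freqs

# Hypothetical mode set built from the values reported for CID 546.
dnu, s = 55.4, 7.1
base = np.arange(100.0, 600.0, dnu)
freqs = np.concatenate([base - s, base, base + s])

x, y = echelle_coordinates(freqs, dnu)
plt.scatter(x, y)
plt.xlabel("frequency mod Δν (µHz)")
plt.ylabel("frequency (µHz)")
plt.title("Echelle diagram (sketch)")
plt.show()
```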
Results
Analysing the power spectra of CID 546 with all the methods already described (see Fig. 5), we find a value for the large separation of ∆ν = 55.4 ± 0.8 µHz and a splitting of s = 7.1 ± 0.2 µHz. All methods detected the signature of the large separation and/or its split peaks at ∼47.8 and 61.8 µHz. In addition, the differences between split peaks, ∼7.1 µHz, are compatible with those produced by the rotational signature or its multiples. Specifically, using the TRUFAS procedure (see bottom left panel in Fig. 5), strong rotational signatures are detected at 6.0 and 7.8 µHz, and also at twice (12.2 and 15.6 µHz) and at thrice this value (23.3 µHz). This departure from symmetric split peaks, around 0.9 µHz, is in agreement with a moderate rotation rate. Mantegazza et al. (2012) also searched for regularities in the power spectrum of CID 546. They found a value for the large separation of ∆ν = 46 ± 6 µHz and a splitting of s ∼ 6.7 µHz. Their value of the large separation is based on half of the highest peak of the FFT of the power spectrum (90.3 µHz, see Fig. 17 in their publication). Although this value is compatible with the one we found, it is centred on one of the split peaks. Nevertheless, looking at their figure, it is possible to observe three peaks that are consistent with the scenario described above.
The stellar mass was calculated by Mantegazza et al. (2012) using a grid of models. The value they find, 2.10 to 2.18 M⊙, is higher than ours, 1.5 ± 0.3 M⊙ (see Table 3 and the beginning of Sect. 5 for more details), but their models do not reproduce the expected limits of the modes at the same time as the observed large separation. Nevertheless, both studies find that this star has a moderate rotation rate and low inclination. This is in agreement with its low projected velocity.

Notes. (a) The parameters of KIC 5892969 are taken from Huber et al. (2014) and those of CoRoT stars are taken from the CorotSky Database (Charpinet et al. 2006); (b) mean density (ρ), mass (M), and radius of a star with spherical symmetry (R; see Section 5); (c) centrifugal-to-gravity force ratio, oblateness (O), polar and equatorial radii of the star (R p and R e), and gravity-darkening effect (δT eff; see Section 2.2); (d) convective efficiency and kinetic energy of the wave, both related to the convective layer of the star (see Section 2).
Considering the other three stars, we find that KIC 5892969 has a low rotation rate, Ω/Ω k ≈ 0.14. This is confirmed by the signature of the surface rotation: two high amplitude peaks in the low frequency regime of the power spectra found at 1.235 and 2.465 µHz with amplitudes around 100 and 500 ppm, respectively. Therefore, the values of the polar and equatorial radii are similar to the radius of a star with spherical symmetry (see Table 3). In addition, the values of the mass and radius are equal to those found by Huber et al. (2014), within errors.
In contrast, CID 8669 shows a high projected velocity, v sin i ∼200 km/s, suggesting that this star could be a fast rotator with a very high rotation rate, the same as we find with our methodology (see Table 3). The high oblateness of this star produces a difference of temperature between the poles and the equator of around 320 K, ∼ 4.5 %.
The case of CID 3619 has to be differentiated from the others. We confirm that this star might be a hybrid star because its convective efficiency (log Γ) is higher, and the kinetic energy of the waves (log E kin) is lower, than the typical values for δ Scuti stars (see Table 3). Claret (1998) estimates that stars with this temperature can present a more efficient convective zone, and that the gravity-darkening effect is less effective, β ∼ 0.32 (see Eq. 8). The variation of its temperature with latitude is then lower than that of CID 546, although its rotation rate and oblateness are higher.
In-depth study of the "grass"
Using an acoustic ray model in a uniformly rotating star, Lignières & Georgeot (2009) study the relation between the rotation rate and the power-spectral structure. Depending on the rotation rate regime, the spectrum shows several kinds of modes: 2- and 6-period island modes, which are restricted to a torus region of the star; whispering gallery modes, whose ray trajectories follow the outer boundary because the rotation rate has not destroyed their torus; and chaotic modes, which are produced by rays that are not constrained to a torus.
Lignières & Georgeot's (2009) results show that chaotic modes are as visible as 2-period island modes and have higher amplitudes than 6-period island modes and whispering gallery modes when the rotation rate is moderate and the star is equator-on. This is caused by a lower disc-averaging cancellation of the chaotic behaviour than of the structured behaviour. The 2-period island modes have higher amplitudes than chaotic modes when the star is pole-on. On the one hand, for lower rotation rates, only 2-period island and whispering gallery modes are present because the chaotic regions are not developed enough. On the other hand, for higher rotation rates, all modes are present except for the 6-period island modes, whose torus has been destroyed.
Because the four stars we are studying show different rotation rate and oblateness, our sample helps us to analyse how the power-spectral and structural parameters of δ Scuti stars are modified by rotation. In that way, it allows us to compare our results with those predicted by Lignières & Georgeot (2009).
The power spectrum of a δ Scuti star is formed by moderate-amplitude peaks grouped in bunches forming a power excess, the so-called envelope, and a high number of low-amplitude peaks making a flat plateau or grass (e.g. Fig. 1). To define this power excess, we used the amount of energy of the observed signal carried by the wave (Eq. 9), and we assumed that all peaks that fulfil Eq. 15 are part of the envelope. We then estimated different characteristic parameters such as the energy of the power excess or the number of peaks enclosed (N env).
The flat plateau is a regime of nearly constant amplitude and mode density, with a significant decrease at a specific frequency (e.g. Poretti et al. 2009; Mantegazza et al. 2012). We called this the cut-off frequency (ν c) because the higher-frequency modes are possibly not reflected, as a result of losing their energy through the atmosphere, as predicted for p-modes in the standard theory. The amplitude decrease of the flat plateau ends when it reaches the noise level (A N) at the frequency we called the "noise frequency" (ν N). We supposed that all remaining peaks above the noise level are part of the grass. Therefore, we can estimate its energy, and the number of peaks that constitute the grass, N grass. To find its characteristic parameters, we can look for the variation of the density of peaks and also the variation in amplitude with frequency.
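A minimal sketch of this envelope/grass partition follows; since Eq. 15 is not reproduced here, a plain amplitude threshold (env_threshold) and a noise level stand in for the actual criteria, and the peak list is hypothetical.

```python
import numpy as np

def split_envelope_and_grass(amps, env_threshold, noise_level):
    """Partition extracted peak amplitudes into 'envelope' and 'grass'.

    A plain amplitude cut stands in for the envelope criterion (Eq. 15)."""
    envelope = amps >= env_threshold
    grass = (amps < env_threshold) & (amps > noise_level)
    return {
        "N_env": int(envelope.sum()),
        "N_grass": int(grass.sum()),
        "E_env": float((amps[envelope] ** 2).sum()),   # crude energy proxy
        "E_grass": float((amps[grass] ** 2).sum()),
    }

# Hypothetical peak list: a few dominant modes plus many low-amplitude ones.
rng = np.random.default_rng(1)
amps = rng.exponential(20.0, size=800)        # grass-like peaks, in ppm
amps[:10] += 2000.0                           # dominant envelope modes

print(split_envelope_and_grass(amps, env_threshold=500.0, noise_level=5.0))
```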
We tested these two methods by comparing our results for CID 546 with those obtained by Mantegazza et al. (2012) (see Sects. 6.1 and 6.2). We then discuss the results for all the stars in Sect. 6.3.
Density of peaks
The density of peaks can be determined with a histogram of analysed peaks per 10 µHz frequency bin (see Fig. 6). The cut-off frequency can then be determined as the frequency at which the density decays more than 1.5 standard deviations below the mean density of peaks (n mean). We also calculated the maximum density of peaks and the frequency at maximum density. We note that the separation between higher-density peaks is useful in estimating the large separation.
We find that the density of peaks in the power spectrum of CID 546 decays at a cut-off frequency of 405 ± 5 µHz. This value agrees with the one Mantegazza et al. (2012) found in their analysis.
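One way to make the density criterion concrete is sketched below; the peak list is hypothetical (dense up to ~405 µHz, sparse beyond), and taking the plateau statistics from the well-populated bins is an assumption of this sketch rather than a detail given in the text.

```python
import numpy as np

def cutoff_from_density(freqs, bin_width=10.0, k_sigma=1.5):
    """Estimate the cut-off frequency from the density of extracted peaks.

    Peaks are counted per `bin_width` µHz bin; the cut-off is the first
    bin whose count drops more than k_sigma standard deviations below
    the mean density of the well-populated (plateau) bins."""
    edges = np.arange(freqs.min(), freqs.max() + bin_width, bin_width)
    counts, edges = np.histogram(freqs, bins=edges)
    plateau = counts >= 0.5 * counts.max()        # well-populated bins
    n_mean, n_std = counts[plateau].mean(), counts[plateau].std()
    below = np.where(counts < n_mean - k_sigma * n_std)[0]
    return edges[below[0]] if below.size else np.nan

# Hypothetical peak list: a dense comb up to ~405 µHz, sparse beyond.
freqs = np.concatenate([np.arange(60.0, 405.0, 0.5),
                        np.arange(410.0, 600.0, 25.0)])
print("estimated cut-off frequency ~", cutoff_from_density(freqs), "µHz")
```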
Grass level
Following the extraction of those peaks that are considered as the envelope (see Eq. 15) from the power spectrum, we calculated the mean amplitude of the grass, or grass level (A grass; see Fig. 7). We also find the cut-off frequency as the frequency at which the amplitude decays more than one standard deviation below the grass level. The noise level is measured as the mean amplitude of the residual power spectrum down to the noise frequency.
Our analysis reveals that the grass level is an order of magnitude higher than the noise level and that the cut-off frequency is equal to 400 ± 10 µHz. The flat plateau is clearly visible in the last two panels of Fig. 1 after the extraction of hundreds of peaks. Our results are consistent with those found by Mantegazza et al. (2012), because they also observe the flat plateau after the extraction of hundreds of frequencies with amplitudes down to 12 ppm.

Notes. (a) Expected modes in the characteristic frequency range for a computed model of a star with moderate rotation rate (Lignières & Georgeot 2009); (b) since the cut-off frequency is not visible, the high-frequency limit used to calculate the initial number of modes N grass {0} is the Nyquist frequency (see text); (c) this mean value of N env does not take into account the measurement at ∆t = 1470 d; this duration of the light curve is long enough to observe sidelobes caused by RMC (see Barceló Forteza et al. 2015).

Fig. 9. Cut-off frequency (top), maximum amplitude (middle), and mean amplitude of the grass (bottom panel) versus duration of the studied light curve for CID 546 (blue squares) and CID 8669 (purple asterisks). The cut-off frequency of KIC 5892969's power spectrum is not visible; therefore, only the maximum amplitude of the grass can be properly calculated (red triangles). Each line is the linear fit to the observed data points. The mean amplitude of the grass for CID 546 has been increased by 20 ppm to properly observe its behaviour.
Results
The observed power-spectral structure of these δ Scuti stars consists of a few dominant-amplitude modes and a large number of low-amplitude peaks, with the exception of CID 3619, which is a hybrid star (see Fig. 2). Therefore, considering only the three non-hybrid δ Scuti stars, CID 546, CID 8669, and KIC 5892969, together, we find that the mean density of peaks present in their power spectra increases linearly with the duration of the observing campaign (∆t; see Fig. 8). The density of peaks and its increase with time (ṅ mean) are higher as the rotation rate is higher (see Table 4). Therefore, the mechanism that produces this high number of peaks is related to the rotation rate, and the increase in frequency content explains the light-curve behaviour with time. Taking into account this relation, and also that the subtracted energy of the signal remains constant or slightly decreases with duration, it is not possible that all these peaks are spurious owing to an imperfect subtraction of the signals, as suggested by Balona (2014b).
The number of modes not caused by time variations, N grass {0}, can be estimated with the y-intercept of the n mean-∆t relation, taking into account the observed frequency limits ν ∈ [60, ν c ] µHz (see Fig. 8 and Table 4). These values are of the same order of magnitude as the number of chaotic modes estimated by Lignières & Georgeot (2009), which considers only axisymmetric modes in a characteristic frequency range for a computed model of a star with a rotation rate around Ω/Ω K ∼ 0.59. In addition, the observed numbers of modes in the envelope, N env (those that fulfil Eq. 15), are also similar to those expected for 2-period island modes. As we can see, a star with higher rotation shows a higher number of chaotic modes because the torus of less-visible modes has been destroyed. The chaotic modes seem to be more visible than the 6-period island modes or the whispering gallery modes due to their irregularity, which makes the cancellation effect less effective.
Moreover, the maximum amplitude and cut-off frequency in CID 546 and CID 8669 are constant with time (see Fig. 9). As the number of lower peaks increases, the mean value of the amplitude of the grass decreases. This is also in agreement with a scenario in which initial 2-period island modes and chaotic modes with time variations are present. In agreement with the predicted visibility (Lignières & Georgeot 2009), CID 546 presents a similar number of 2-period island modes as the other stars in the sample, but they are of higher amplitudes due to its low inclination.
A low-rotation-rate δ Scuti star is not expected to have chaotic modes. This is in agreement with the initial number of modes that we estimate for KIC 5892969. This star shows a slight decrease of the maximum amplitude of its grass. Therefore, the high number of peaks present in the power spectrum of its whole light curve can be produced by time variations of the 2-period island modes and some whispering gallery modes.
Finally, although CID 3619 has a higher rotation rate than CID 546, its spectral density is lower and there is no clear flat plateau. The cause could be CID 3619's more efficient convective zone. Although Balona (2014a) claims that all δ Scuti stars are hybrids, identifying the star as a hybrid or a non-hybrid star with the criteria specified in Sect. 2 could be important in explaining the presence, or absence, of the flat plateau.
Conclusions
Using our own methodology (δSBF), we analysed the light curves of four δ Scuti stars, observed by CoRoT and Kepler, from raw data to end products such as the parameters of the modes, the properties of the flat plateau, and possible regularities of the power spectra. We thus determine their observational characteristics, producing the best estimates to date of their stellar parameters, such as mass, inclination, rotation rate, and convective efficiency. In spite of the high uncertainties in previously known data, the oblateness and the gravity-darkening effect were obtained for all the stars studied. Furthermore, CID 3619 was found to be a hybrid δ Sct/γ Dor star.
Because these four stars show different rotation rates and oblateness values, our sample allows us to study how the power-spectral and structural parameters of δ Scuti stars are modified by rotation. We prove that structural parameters such as oblateness, inclination, and convective efficiency can explain the development of the flat plateau. Therefore the power-spectral structure is formed by an envelope constituted of 2-period island modes, and a grass composed of chaotic modes and peaks due to their variation. In this sense, the spurious signal hypothesis is discarded. Our next step is to perform a study of a much larger sample of δ Scuti stars to provide an in-depth determination of the behaviour of their power-spectral structure.
Highest-amplitude modes of δ Scuti stars
We present the parameters of the highest-amplitude oscillation modes of each star that we obtain with our method. The results for KIC 5892969 were already published in Barceló Forteza et al. (2015). Because all of these modes fulfil the condition given in Eq. 15, they form part of the so-called envelope. Hundreds or thousands of peaks are identified with an SNR higher than four for each light curve (see Figs. 1 to 4).
We note that each oscillation mode of CID 3619's light curve has two different frequencies, one per studied run. Because these runs are separated by approximately four years, the differences in all the parameters might be caused by a modulation mechanism such as RMC. Although the observed frequency variation is of the same order of magnitude as the predicted one (δν/ν ∼ 0.1 %, see Moskalik 1985), it is not enough to ascertain which mechanism produces the variation of the envelope modes. Nevertheless, the cause of these variations is beyond the scope of this work.

Notes. (a) The terms of the modes are those used for the SRa01 light curve. The numbering is different for SRa05 since the modes have amplitude variations between each run and our method analyses the highest-amplitude mode in each iteration; (b) the phases are all with respect to the initial time of SRa01: t J2000 = 2986.4802 d.

Notes. (a) The phases are all with respect to the initial time of the run: t J2000 = 2687.0916 d. | 2017-03-14T21:18:41.000Z | 2017-03-14T00:00:00.000 | {
"year": 2017,
"sha1": "c88dbc3aca18075ab4bc4efe0ebfae2cf8bfe2ac",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2017/05/aa28675-16.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "c88dbc3aca18075ab4bc4efe0ebfae2cf8bfe2ac",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
31026343 | pes2o/s2orc | v3-fos-license | Avoiding Unnecessary Fine-Needle Aspiration Cytology by Accurately Predicting the Benign Nature of Thyroid Nodules Using Ultrasound
Objective: The objective of this study was to describe a reliable ultrasound-based index scoring system, built on ultrasound characteristics, to identify benign thyroid nodules and avoid unnecessary fine-needle aspiration cytology. Materials and Methods: Patients undergoing ultrasound-guided fine-needle aspiration cytology (FNAC) for thyroid nodules were evaluated prospectively. A total of 284 patients were evaluated from November 2005 to November 2011. There were 284 nodules. Any solid or partly solid focal nodule in the thyroid gland was included in the study. Cysts with no solid component were excluded. We used a LOGIQ 9 (GE Healthcare) scanner equipped with a 10–14 MHz linear matrix transducer with color and power Doppler capability. Four US characteristics were evaluated, i.e., nodule margins, echo texture, vascularity, and calcification. Fine-needle aspiration (FNA) was performed on all nodules. The nodules were labeled benign or suspicious using an ultrasound index score, and the results were compared with FNAC. Follicular neoplasms on fine-needle aspiration cytology were further assessed by excision biopsy and histology. Cytology/histology was used as the final diagnosis. Results: In total, 284 nodules were analyzed. All 234 nodules in the US-labeled benign category were proven to be benign on cytology/histology. Therefore, the specificity of ultrasound in labeling a nodule benign was 100%. Twenty of the 50 nodules that were suspicious on US were malignant. The most significant US differentiating characteristics were nodule margins, vascularity, and microcalcification. Conclusion: Our results show that US can accurately characterize benign thyroid nodules using an index scoring system and therefore preclude FNAC in these patients.
INTRODUCTION
Thyroid nodules in the adult population are common. [1] Although the majority of thyroid nodules are benign, a large number undergo cytology/histology to rule out malignancy. [2] Initially, thyroid nodule characteristics were studied individually and their association with malignancy was reported. [3,4] Subsequently, characteristics were grouped to determine whether having more than one characteristic changed this association. [3,5] Currently, most authors divide thyroid nodules into benign, follicular, and malignant nodules based on ultrasound (US) appearance. [6] Recently, Horvath et al. created a Thyroid Imaging Reporting and Data System (TIRADS) after evaluating the different characteristics that allow for a better selection of nodules submitted for fine-needle aspiration cytology (FNAC), while Park et al. proposed an equation to predict the probability of malignancy in thyroid nodules based on 12 characteristics. [4,6] FNAC remains the gold standard in the characterization of thyroid nodules. [7,8] However, different US criteria are now being used to predict the nature of thyroid nodules. [7,8,[9][10][11][12][13][14][15][16][17][18][19][20][21][22] This is because performing FNAC of all thyroid nodules is a costly venture with a low yield in identifying the small proportion of nodules that actually represent malignant disease. [15] There is no universal agreement about the best way to use US in the management of thyroid nodules. [4,6,23,24] This paper evaluated the use of an easily adaptable index score based on the US characteristics that correlate most strongly with the benign nature of a thyroid nodule.
OBJECTIVE
The objective of this study was to describe a reliable ultrasound-based index scoring system, built on ultrasound characteristics, to identify benign thyroid nodules and avoid unnecessary fine-needle aspiration cytology.
MATERIALS AND METHODS
The study received approval from the Research and Ethics Committee of the Aga Khan University Hospital, Nairobi. From November 2005, we started to recruit patients referred for US-guided FNAC of thyroid nodules at the Radiology Department of Aga Khan University Hospital, Nairobi. Written informed consent was acquired from all subjects. All patients underwent ultrasound examination of the thyroid, which was performed by one of two radiologists, each with more than 5 years' experience in thyroid ultrasound. We used a LOGIQ 9 (GE Healthcare) scanner equipped with a 10–14 MHz linear matrix transducer with color and power Doppler capability.
All solid or partly solid focal nodules in the thyroid gland were included in the study. When a patient had multiple nodules, the most dominant nodule/s were included. This decision to choose the dominant nodule was left to the discretion of the person performing the ultrasound examination. Cysts with no solid component were excluded because they are almost exclusively always benign. [1] We evaluated four characteristics of each nodule, i.e., border characteristics (margins), echo texture, presence or absence of microcalcification, and vascularity.
Margins
The margin of a thyroid nodule was classified as either regular or irregular. Distinct nodule borders exhibiting a complete halo were called regular whereas indistinct, poorly defined borders with less than 30% circumferential demarcation were defined as irregular [ Figures 1 and 2].
Echogenicity
This was described as either homogeneous or heterogeneous based on comparison with the surrounding thyroid tissue.
Vascularity
This was assessed by color Doppler, and vascularity was compared with that of the surrounding thyroid gland. We then graded the nodules by giving them an index score of zero to four using the four major US characteristics in Table 1.
Each nodule had a minimum score of 0 and a maximum of 4. Those with scores of 0 and 1 were labeled as benign [ Figure 1]. Scores of 2, 3, or 4 were labeled "suspicious" [ Figure 9a and b]. In the interpretation of the US features both radiologists arrived at a consensus diagnosis, i.e., benign or suspicious.
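A minimal sketch of the scoring logic follows; the exact point assignment of Table 1 is not reproduced in the text, so one point per suspicious feature is an assumption of this sketch, while the 0-1 benign versus 2-4 suspicious cut comes from the text.

```python
def index_score(margins_irregular, echotexture_heterogeneous,
                central_vascularity, microcalcification):
    """Score a nodule 0-4 from the four US characteristics and label it.

    Assumed mapping: one point per suspicious feature; scores of 0-1
    are labeled benign, scores of 2-4 suspicious."""
    score = sum([bool(margins_irregular),
                 bool(echotexture_heterogeneous),
                 bool(central_vascularity),
                 bool(microcalcification)])
    return score, ("benign" if score <= 1 else "suspicious")

# Example: regular margins, heterogeneous echotexture, no central flow,
# no microcalcification -> score 1 -> benign, so FNAC could be deferred.
print(index_score(False, True, False, False))   # (1, 'benign')
```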
US-guided FNAC procedure
a. Superficial nodules: Multiple passes were made using a 25-gauge needle (without any suction).
b. Deeper nodules: Multiple passes were made through the nodule with a 21/23-gauge needle and suction with a syringe. Some authors have used a vacuum gun to aid suction of small nodules that are located deep within the thyroid gland. [17]

The nature of a follicular nodule cannot be determined by FNAC; hence, histology is imperative for a final diagnosis. [26,27] In our study, all follicular nodules underwent an incisional or excisional biopsy for histological diagnosis.
The specificity of US in the characterization of benign thyroid nodules using the major characteristics was then calculated. All the included nodules were analyzed, and the gold standard was cytology or histology. All characteristics were tabulated to calculate their level of association with malignancy by determining the percentage of each in the benign and suspicious categories, respectively. A P-value was assigned to each to determine the level of significance. A value less than 0.05 was considered significant.
RESULTS
A total of 284 nodules were analyzed. The patient selection was from a pool of consecutive referrals to Aga Khan University Hospital's radiology department in Nairobi. The nodule size ranged from 3 mm to 28 mm (anteroposterior), 10 mm to 20 mm (transverse), and 3 mm to 12 mm (length).
A total of 234 nodules were characterized as benign on US, all of which were benign on FNAC. Therefore the specificity of US labeling a nodule benign was 100%. There were 50 suspicious nodules on US of which 20 turned out to be malignant [ Table 2].
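Using only the counts reported above, the paper's headline numbers can be reproduced as follows; Fisher's exact test is a choice made for this sketch (because one cell is zero), not the analysis stated in the paper.

```python
from scipy.stats import fisher_exact

# Counts reported in the Results: 234 US-benign nodules, all benign on
# cytology/histology; 50 US-suspicious nodules, 20 of them malignant.
us_benign_benign, us_benign_malignant = 234, 0
us_susp_benign, us_susp_malignant = 30, 20

# The paper's "specificity": fraction of US-benign calls confirmed benign.
spec = us_benign_benign / (us_benign_benign + us_benign_malignant)
print(f"specificity of a benign US label: {spec:.0%}")      # 100%

# Association between the US label and malignancy (2x2 table):
table = [[us_benign_malignant, us_benign_benign],
         [us_susp_malignant, us_susp_benign]]
_, p = fisher_exact(table)
print(f"Fisher exact p = {p:.2e}")
```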
Of the ultrasound characteristics analyzed margins, vascularity, and calcification were found to have significant P-values [ Table 3]. Echogenicity was not significant in identifying benign or malignant nature of the nodule.
DISCUSSION
The widespread use of US in the evaluation of thyroid nodules has created an overwhelming need to establish scientifically sound, straightforward, and easily adaptable protocols that minimize costs related to nodule management and maximize the benefits of US. Our aim was to suggest a US index score that has the characteristics of such a protocol. We assessed the reliability of our US criteria in labeling a thyroid nodule as benign. Ninety-three percent of the thyroid nodules referred for US-guided FNAC were benign and 7% were malignant. A similar incidence has been published by Lannuccilli et al. [15] All of the 234 nodules that were characterized as benign on US (0-1 category) were confirmed to be benign on cytology/histology. Based on these findings, if US classifies a nodule as benign, FNAC can be deferred. This is reiterated by studies such as those of Stacul and Kwak et al., which have shown that most malignant nodules have more than two malignant US characteristics. [28,29] Several US characteristics have been studied previously. These include border characteristics (margins), echogenicity, calcifications, vascularity, size, shape, orientation, and acoustic transmission. We only evaluated four of these characteristics in each nodule, i.e., border characteristics (margins), echogenicity, presence or absence of microcalcification, and vascularity. These features were selected because they are the most widely looked for, as they have the highest correlation with malignancy when studied in combination. [3,4,5,6,23] They are also similar to those used in a study by Kovacevic et al. [18] In two studies on the same patient population, Koike et al. demonstrated how combining highly sensitive US characteristics of malignancy with more specific FNA cytology can yield an accuracy similar to that obtained using a set of more specific US characteristics to diagnose malignancy. [1,25] We did not assess size and shape in order to determine to what extent they may be linked to the benign nature of a thyroid nodule. Size and shape of the nodule have been shown to be less sensitive and specific indicators of thyroid malignancy. [5] We did not assess nodule orientation or acoustic transmission because they have not been used as frequently as the characteristics we chose to analyze. Other sonographic variables not measured in the study that could have incremental predictive value include ultrasound-directed qualitative intranodular vascular distribution and quantitative analysis of tumor vascularity (tumor vascular resistive index). [8] These are complex and not easily adaptable in routine US imaging of the thyroid gland.
We have described a method in which one can use US to predict benignity. Table 4 summarizes already published operating characteristics of described methods for using US features of thyroid nodules as predictors of malignancy.
Our index scoring system uses the US features that have been shown to have the highest association with malignancy. Based on this scoring system, we had a very high specificity for diagnosing a benign nodule [ Table 4]. Our high pick-up rate of benign nodules may be partly due to the specific characteristics we used, but we must acknowledge the high prevalence of benign nodules regardless of the effectiveness of our index score. Our specificity is high compared with Kwak, Stacul, Koike et al., and more recently Horvath et al. [ Table 4]. This may be because we were targeting benign nodules. Kim et al. had the lowest specificity, which may be because they adopted the older method of using just one US feature to assess malignancy, rather than a combination of US features [ Table 4].
We used an index score consisting of four out of several characteristics that have been shown to be significantly associated with malignancy. [33] While the most recent publications have moved on to stratify, categorize, and create reporting and data systems, [1,3] we chose to take a simpler and more adaptable yet scientifically sound approach, similar to Koike et al. [25] Of the four major US characteristics that were used, margins, calcification, and vascularity were most significant. A small percentage of benign nodules had an irregular margin, but they did not have increased central blood flow and no calcification was seen. These findings are reiterated by Moon and Seya et al. [21,31] If a nodule was classified as malignant, it most likely had an irregular margin and increased central vascularity. Echo texture and ancillary characteristics such as size and shape did not have significant P-values. This may be because of the sample size; a larger study for further evaluation of these characteristics would be of value because these features may serve as additional features to evaluate malignancy. Most patients with malignancies have more than two US features characteristic of malignancy. [1,14,15,31] However, certain characteristics are more reliable, as shown in Table 2. Presence of microcalcification is most significant in predicting malignancy. [1] The use of US adds the additional advantage of FNA guidance, which is particularly beneficial in patients with nonpalpable, multiple, or heterogeneous nodules for preferentially aspirating a specific segment of the nodule (large or partially cystic nodule), or when nodule palpation is difficult (patients with diffuse glandular disease or obesity). [30]

Table 4. Published operating characteristics of US-based methods for predicting thyroid malignancy.
Study | n | Comparison | Criteria | Specificity (%)
Kim et al. [4] | 155 | Impalpable nodules versus FNA or surgical pathology | Presence of 1 malignant feature | 66
Kwak et al. [28] | 815 | US versus FNA/post-op pathology findings | — | 80.6
Koike et al. [25] | 329 | US plus FNA versus surgical pathology | Presence of 1 significant malignant feature; 2 or more malignant features | 91
Koike et al. [25] | 329 | US versus surgical pathology | 5 US features | 92
Park et al. [6] | 1694 | US versus US-guided FNA | Equation of 12 US features | —
Horvath et al. [3] | — | — | Stratification of results into categories | —

In the diagnostic management of thyroid nodules, FNAC is still the gold standard despite the growing experience in the use of high-resolution US. [8,12,31] FNAC has its limitations. It cannot differentiate a follicular adenoma from a follicular carcinoma. [2,18] Therefore, for these lesions a corresponding histological analysis was taken to be the final diagnosis. FNAC can also be limited by inadequate sampling. [18] In our paper, an attempt to reduce inadequate sampling was made by using US guidance and by having the samples analyzed by a pathologist at the same sitting, before the patient left the examination room. [30] The actual effectiveness of different US criteria is still in question and is currently being reconsidered and modified. [33] Techniques that combine US features and FNAC are more effective and more accurate for predicting malignancy than US alone. [22] Many studies agree that, with regard to thyroid nodule management, a multidisciplinary approach is best, including clinical examination, laboratory work-up, US, FNAC, and surgical excision with biopsy where necessary. A consensus conference statement by the Society of Radiologists in US highlighted six US characteristics that are associated with malignancy, [33] the most specific of which were analyzed in this paper.

The authors recognize various limitations of the study. US is a highly operator-dependent examination; as such, operator bias could have played a role in the US results.
In the interpretation of thyroid nodules, the presence or absence of abnormal neck lymph nodes was not considered. Elastography is a technique that can also help identify thyroid nodules that are likely to be malignant. [32] We did not analyze elastography but focused on an index score that can reliably identify benign nodules. A combination of US features with other investigations, such as serum thyroid-stimulating hormone concentration, galectin-3 expression analysis, and FDG/PET scanning, would be useful in avoiding the higher costs of thyroid surgical procedures. [28] One of the most significant weaknesses of this paper is the small sample size, with few malignant nodules compared with recent publications that have almost similar objectives but more complex sonographic pattern-recognition methods. However, we believe that this is outweighed by the fact that the simple and straightforward index score used to classify nodules did not need a larger sample size, unlike recent papers such as Horvath et al. [3] Papers by Kim et al. (2002) and Kovacevic et al. (2007) had even smaller sample sizes and simple classification systems. [4,26] Finally, the purpose of this paper is unique relative to what has already been done; the structure is sound, the methodology is straightforward (justifying the small sample size), and its findings can be applied during routine imaging without significant cost implications. | 2018-04-03T06:18:32.046Z | 2012-04-28T00:00:00.000 | {
"year": 2012,
"sha1": "afa90c671f17847ce8489e78d8ca4ffe2ad21b69",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2156-7514.95446",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "643e109520b8752f893a4a7bcf0e4fcae600241b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
150162353 | pes2o/s2orc | v3-fos-license | (Im)politeness strategies and use of discourse markers
Abstract: This study aimed to investigate L2 learners', EFL teachers', and American native speakers' use of discourse markers as hedging devices to mitigate face-threatening acts, considering gender, proficiency level, and control-experimental variables. It used open discourse role-play tasks, a self-assessment report of English competence, and a seven-scenario questionnaire, administered both with a five-point Likert scale and, for L2 learners to translate into Persian, without it. To this end, three groups of participants took part in the current study: (a) 8 groups of 20 L2 learners; (b) 90 participants (i.e. 30 L2 learners, 30 EFL teachers, and 30 native speakers); and (c) 150 Iranian advanced L2 learners. The results revealed that native speakers significantly surpassed EFL teachers and L2 learners in employing DMs and that instruction and proficiency level played a significant role in L2 learners' use of DMs. The findings also substantiated that female L2 learners significantly outperformed their male counterparts in using approximators, modals, and passives. Furthermore, based on MAXQDA software, two areas of discrepancy, namely "precision" and "direct reasoning" in Persian versus "approximators" and "indefinites" in English, were found, allowing us to delve into the subtleties between the two cultures.

*Corresponding author: Hamid Allami, Department of Foreign Languages, Yazd University, Yazd, Iran E-mail: hallami@yazd.ac.ir
PUBLIC INTEREST STATEMENT
Since the world is significantly growing smaller, learners of English encounter more pragmatic breakdowns which hinder intercultural communication. As an aspect of pragmatic competence, discourse markers can be used to mark politeness to soften the force of commands. To investigate the pragmatic competence of Iranian learners of English, their teachers, American native speakers, and Iranian native speakers, this study used open discourse role-play tasks and a questionnaire. This perspective article found that women present a higher pragmatic competence in terms of discourse markers to mitigate commands than men do. Also, it indicated that the American native speakers tend to use more discourse markers followed by L2 teachers and their students. In addition, two areas of discrepancies, namely "precision" and "direct reasoning" in Persian versus "approximators" and "indefinites" in English, are found to delve into the subtleties between the two cultures. Such findings offer implications for EFL practitioners and material designers.
Introduction
Not only is politeness an elusive phenomenon to define, but interpreting impoliteness is also problematic, since (im)politeness phenomena are not considered as isolated phrases or sentences and are not inherent in the words used (Bargiela-Chiappini, 2003). Since the world is significantly growing smaller, second language (L2) learners encounter more pragmatic breakdowns which hinder intercultural communication (Bardovi-Harlig & Dörnyei, 1998; Byram, 1997; Kádár & Bargiela-Chiappini, 2010; Meier, 1995; Mohammadi & Tamimi Sa'd, 2014; Mugford, 2008; White, 1993). L2 learners should therefore be instructed to sound appropriate in different speech events with varying contextual features based on sociocultural assumptions. In this regard, they should be empowered to decide how to react in different (im)polite everyday realities and to understand the consequences of their being (im)polite (Mugford, 2008). Politeness and impoliteness, as suggested by Mills (2009), are linguistically and culturally relative, and L2 learners need this awareness to sound appropriate in different language encounters.
Politeness approaches which emphasize the role of context are called "discursive" or "postmodern" (Culpeper, 2010;Locher, 2006;Mills, 2003). Such approaches are against universalizing generalizations focusing on "the participants' situated and dynamic evaluations of politeness, not shared conventionalised politeness forms or strategies" (Culpeper, 2010, p. 3235). As Haugh (2007) aptly puts it, "the discursive approach abandons the pursuit of not only an a priori predictive theory of politeness, but also any attempts to develop a universal, cross-culturally valid theory of politeness altogether" (2007, p. 297). Therefore, there is an urgent need to analyze the unspoken rules of discourse which imply that certain utterances are manifested as appropriate to show reality and social norms (Mills, 2011). In order to adopt a discursive approach toward analyzing politeness, researchers should delve into longer stretches of interaction which focus on the judgment issues of (im)politeness realized as a resource by the participants.
As an aspect of pragmatic competence, vague language may serve as a politeness strategy to soften the force of commands with the purpose of saving face (Boncea, 2014). Discourse markers (DMs) can be used as hedging devices in a certain context to add to the vagueness of an utterance which can be used to mark politeness and mitigate face-threatening acts (FTAs) (Boncea, 2014;Yates, 2010). In this regard, the appropriate use of DMs as hedging devices is regarded as a considerable challenge for L2 learners at different proficiency levels since they may not be familiar with how to make their language fuzzier to achieve communicative goals (Fraser, 2010).
Research aims
In an attempt to allow deeper insight into DMs as hedging devices, the present study aims to contribute to the research on hedge use by Iranian L2 learners and to fill several gaps in previous studies. It investigates the DMs used as hedging mechanisms by intermediate and advanced Iranian L2 learners in experimental and control groups to determine the frequency of DMs in their speech. It also includes comparison data from American and Iranian Native Speakers (NSs). This study attempts to build on previous research in finding the nuances between Iranian L2 learners', EFL teachers', and American NSs' discourse in terms of DMs used to soften the force of commands. Moreover, it examines whether Iranian NSs use different types of hedges compared to Iranian L2 learners. Last but not least, the current research tries to scrutinize gender differences among Iranian L2 learners in using DMs as hedging devices.
Review of related literature
Earlier studies on politeness have mostly focused on the traditional Brown and Levinson's (1987) framework to measure politeness using three factors of social distance, relative power, and absolute ranking of impositions as perceived by the interlocutors. Nevertheless, (im)politeness conventions vary from one culture to another leaving one-theory-fits-all inapplicable to all situations in which language is implicated (Mills, 2009). Thus, as Hsieh (2009) shrewdly observes, "[c]onfined politeness theory cannot adequately explain the various kinds of human interaction" (2009, p. 56). He, also, argues that a "more contextualized investigation is needed in order to gain a more comprehensive understanding of what (im)politeness is" (Hsieh, 2009, p. 56).
(Im)politeness often varies across people with different cultural backgrounds. Yule (1999) believes that people can be polite through "being tactful, modest and nice to other people" (1999, p. 134). Also, Lakoff (1989) defines politeness as "a means of minimizing confrontation in discourse-both the possibility of confrontation occurring at all, and the possibility that a confrontation will be perceived as threatening" (1989, p. 102). Another definition of politeness was put forward by Leech (1983) as "maintain[ing] the social equilibrium and the friendly relations which enable us to assume that our interlocutors are being cooperative in the first place" (1983, p. 82). Watts (2003) holds that politeness should be defined through a discursive approach. He believes that such a struggle defines "the ways in which (im)polite behaviour is evaluated and commented on by lay members and not with ways in which social scientists lift the term '(im)politeness' out of the realm of everyday discourse and evaluate it to the status of a theoretical concept" (Watts, 2003, p. 9). Although discursive practice seems messier than Brown and Levinson's (1987) universal framework, the analysis has proven to probe into the subtleties of culturally situated communicative behavior (Watts, 2003).
To Fraser (2010), hedging is an aspect of pragmatic competence. He defines it as "a rhetorical strategy, by which a speaker, using a linguistic device, can signal a lack of commitment to either the full semantic membership of an expression, … or the full commitment to the force of the speech act being conveyed" (Fraser, 2010, p. 22). Also, he holds that since hedging is drawn from every syntactic category, there exists no grammatical class of hedges. Fraser (2010) provides the following list of hedging devices: Adverbs, adjectives, impersonal pronouns, concessive conjunctions, indirect speech acts, introductory phrases, modal adverbs, modal adjectives, hedged performatives, modal nouns, modal verbs, epistemic verbs, negation, tag questions, agentless passives, parenthetic constructions, if clauses, progressive forms, tentative inference, hypothetical past, metalinguistic comments, etc. (Fraser, 2010, p. 22)

Nikola (1997) discusses hedging as "a strategy which renders speakers' messages more tentative and vague, and thus reduces the force of what they are saying" (1997, p. 190). She holds that hedging achieves consequential interpersonal functions for L2 learners who might be considered impolite or rude due to insufficient pragmatic skills. She claims that such lack of sufficient pragmatic knowledge may culminate in pragmatic failure. In this respect, hedges may help the speaker minimize the imposition or directness of his or her utterance (Wilamová, 2005).
A large number of studies have been carried out on the interlanguage development of DMs among L2 learners (Aşik & Cephe, 2013;Jalilifar et al., 2011;Neary-Sundquist, 2013;Siu, 2014;Yu, 2009). The types and functions of hedging devices were examined in these research studies to demonstrate whether the L2 learners used DMs appropriately. Aijmer (2002) suggests that a description of DMs be provided at various interlanguage stages of L2 learners of English. She holds that DMs carry interpersonal functions in everyday conversation; hence, face-saving, politeness, as well as indirectness are relevant in the usage of DMs. Aijmer (2002) believes that DMs such as tag questions and approximators cause the reduction of social distance between the speaker and the hearer adding to the politeness load of a given utterance. The following studies underscore the language learners' use of DMs in EFL classes and among NSs of English.
Aşik and Cephe (2013) based their study on the production of DMs by Non-Native (NN) speakers of English as compared with those used in NSs' discourse on two separate corpora: a corpus of 20 NN learners in an English language teaching program in Turkey in comparison with the research corpus of MICASE for the NSs' presentations. By running frequency analysis, their study delved into the occurrences of DMs in both corpora. The findings of their research demonstrated that NN English speakers made use of a finite "number and less variety of DMs in their spoken English" (Aşik & Cephe, 2013, p. 144). Fung and Carter (2007) drew their data from a secondary Hong Kong classroom discourse corpus, a corpus of spoken British English, as well as a pedagogic sub-corpus from CANCODE to compare and contrast the production of DMs by both British NSs and L2 learners both qualitatively and quantitatively. Their findings demonstrated that both groups of NSs and L2 learners used DMs for interactional maneuvers on referential, interpersonal, cognitive, and structural levels. L2 learners showed a more frequent use of referentially functional DMs such as "but", "OK", "because", "so", as well as "and" than the NSs. However, the L2 learners illustrated a limited use of DMs such as "yeah", "really", "say", "sort of", "I see", "you see", "well", "right", "actually", "cos", and "you know". In addition, NSs regarded DMs for a broader range of pragmatic functions in comparison to their counterparts.
Neary-Sundquist (2013) investigated the data from 37 NN examinees at different proficiency levels and 10 NSs based on four tasks (news, personal, passing information, as well as telephone tasks) on the test of oral proficiency. She analyzed the range and the rate of hedges used to lessen the force of an utterance or the certainty of its content among L2 learners and NSs. The coded hedges in her study were "like", "I think", "just", "sort/kind of", "a bit", "or/and whatever", "or something", "everything/that/stuff/things", and "not really". She concluded that the L2 learners at elementary and intermediate levels significantly underused hedges in comparison with NSs and advanced L2 learners. The study revealed that the L2 learners at the advanced level surpassed NSs in using hedges while performing various monologic testing tasks. The most frequently used hedges among both NSs and L2 learners were "I think" and "just" in her study. In addition, she delineated that the two groups of L2 learners' and NSs' range of hedges on different monologic tasks was significantly higher than the rate of hedges.
Carrying out her research on 211 Chinese L2 learners from junior high school-, high school-, and university-level English courses in China on the pragmatic development of hedging, Yu (2009) studied simulated debates, written questionnaires, and oral interviews of the L2 learners. She found that intermediate L2 learners only showed an awareness of mitigators, the performative "I think", and intensifiers; while, the advanced university L2 learners significantly manifested an awareness of all categories of hedges at a rate higher than their intermediate counterparts. The findings of her study, also, indicated that there were significant differences between the oral interviews and the debate task with the L2 teacher according to the frequency and range of hedging clusters. Jalilifar et al. (2011) developed and distributed a reading comprehension test among 100 undergraduate L2 learners examining the influence of explicit instruction of DMs as hedging devices through a pre-test-post-test research design. Attending 10 sessions of awareness-raising treatment with regard to the appropriate use of hedges in English, the L2 learners in the experimental group and the control group took the posttest of the same reading comprehension test as it was employed for the pre-test. It is worth mentioning that the treatment included three sessions of awarenessraising for the types and functions of English hedging clusters. The remaining seven sessions instructed the learners on the practical use of DMs in academic texts. Their findings substantiated the fact that the L2 learners in the experimental group significantly surpassed their control group counterparts in employing DMs as hedging devices in the post-test phase. Also, the results revealed that explicit instruction in perceiving hedging exerts a facilitative influence on improving university learners' reading comprehension and language proficiency.
In another study, Siu (2014) investigated L2 learners' use of hedging in academic writing among 136 native Cantonese-speaking L2 learners who were enrolled in an English for Academic Purposes (EAP) course in China. The participants of her study were divided into control (i.e. at the beginning of a one-year EAP course) and experimental (i.e. at the mid-course of a one-year EAP course) groups to delineate whether the learners used DMs as hedges to mitigate the force of an utterance appropriately. The DMs examined in her study consisted of nouns (e.g. probability and possibility), adverbs/adverbial phrases (e.g. always and possibly), adjectives (e.g. probable and likely), epistemic modals (e.g. will and shall), non-verbal hedging which included quantifiers (e.g. some of and a few of), verbal hedging involving main verbs (e.g. assert and argue), as well as passive voice (e.g. be regarded as and be viewed as). The results of her study illustrated that the L2 learners who were explicitly instructed on the appropriate use of DMs as hedges significantly outperformed the language learners in the control group in academic writing. Furthermore, the study revealed that the L2 learners in the experimental group made use of the modal verb "may" more frequently than their counterparts in the control group, since they were more conscious of the importance of minimizing the force of an utterance while writing an academic essay.
By and large, DMs as hedging devices can add to the politeness load of an utterance and soften an FTA. Brown and Levinson (1987) argued that although a plethora of studies have been carried out on gender differences in using language in context, such discrepancies are "epiphenomenal-neither the social underpinnings nor the linguistic manifestations are specific to gender" (1987, p. 31). Nonetheless, previous research has indicated that gender significantly influences notions of politeness (Gilligan, 1982;Holmes, 1995;Hsieh, 2009;Mills, 2003;Tannen, 1990). In this regard, Mills (2003) contends that there is an urgent need for politeness research to emphasize gender through a contextualized analysis. She opposes the perspective that "politeness or gender consists of a range of stable predictable attributes" (Mills, 2003, p. 1). Moreover, Holmes (1995) discusses that female interaction is ordinarily less aggressive, argumentative, and competitive than that of males. Women, also, try to prevent disagreement and to provide supportive feedback to reinforce relationships (Gilligan, 1982).
Research questions
Given the above, the current study aimed to provide answers to the following questions:

Q1: What types of DMs do Iranian L2 learners most frequently use?

Q2: Are there any significant differences among control, experimental, male, and female L2 learners regarding different types of DMs and proficiency level?

Q3: Are there any significant differences among the perceptions of native speakers, advanced Iranian L2 learners, and Iranian EFL teachers regarding the use of DMs presenting (im)politeness?

Q4: How do L1 (im)politeness strategies influence the utilization of L2 (im)politeness DMs?
Participants
Three groups of participants took part in the current study. None of the participants had any experience living in an English-speaking country except the American NSs who lived in New York. It is worth mentioning that the researchers were in touch with the American NSs through email and Telegram since they did not have an opportunity to visit the NSs through face-to-face interaction. However, the researchers of the present study could easily contact the other participants through different direct meetings and frequent visits.
The first group consisting of 160 L2 learners (i.e. 80 males and 80 females) was divided into 8 groups of 20 L2 learners according to their proficiency levels (i.e. intermediate and advanced), gender as well as experimental-control variables. The female EFL learners comprised 42 learners of 15-20 years of age, 21 learners of 20-25 years old, 13 learners of 25-30 years of age, and 4 learners of 31 years and above. The male group included 47 learners of 15-20 years of age, 18 learners of 20-25 years old, 12 learners of 25-30 years of age, and 3 learners of 31 years above (See Table 1).
The second group included 90 participants among whom 30 were advanced Iranian L2 learners, 30 were American NSs, and the other 30 were Iranian EFL teachers. The L2 learners, EFL teachers, and NSs included 12, 9, and 7 participants of 15-20 years of age; 11, 14, 16 participants of 20-25 years old; as well as 7, 6, 9 participants of 25-30 years of age, respectively (See Table 2). The third group consisted of 164 Iranian advanced L2 learners. Nevertheless, 14 L2 learners did not completely fill out the background information part and the scenarios; hence, they were omitted from the present study. The remaining 150 participants consisted of 59 learners of 15-20 years of age, 62 learners of 20-25 years old, 20 learners of 25-30 years of age, and 9 learners of 31 years above (See Table 3).
The L2 learners were asked to assess their proficiency level (Dewi, 2011) based on the International English Language Testing System (IELTS) score bands (IELTS, 2011; see Appendix A). According to the Common European Framework of Reference (CEFR), the L2 learners who scored themselves 5-6 were reported to be at the intermediate level, and those who marked themselves 7-9 were recognized to be at the advanced level of proficiency.
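The band-to-level mapping described above can be written out as a small helper; the handling of bands below 5 is an assumption of this sketch.

```python
def proficiency_level(self_assessed_band):
    """Map a self-assessed IELTS band to the study's two levels."""
    if 7 <= self_assessed_band <= 9:
        return "advanced"
    if 5 <= self_assessed_band <= 6:
        return "intermediate"
    return "below the levels sampled in this study"  # assumed fallback

print(proficiency_level(6))   # intermediate
print(proficiency_level(8))   # advanced
```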
Instruments
In order to assess the frequency of, and differences among, the Iranian male and female L2 learners' awareness of DMs as indicators of (im)politeness with reference to proficiency level and control-experimental variables, open Discourse Role-Play Tasks (DRPTs) were used (see Appendix B). An open DRPT is widely used for assessing language learners' pragmatic competence "without determining the course and outcome of the situations" (Eslami & Mirzaei, 2012, p. 203). To this end, the L2 learners looked into 21 role-play cards, developed by the researchers, describing various situations without interfering with their course and outcome.
For evaluating the advanced L2 learners', EFL teachers', and NSs' awareness of DMs as indicators of (im)politeness, a questionnaire with seven scenarios was developed (see Appendix C). Each scenario consisted of two sentences: one including a hedging device (i.e. a DM) and one without it. A five-point Likert scale (i.e. 1 as very polite, 2 as polite, 3 as neutral, 4 as impolite, and 5 as very impolite) was provided for each sentence for the participants to check. The first sentences of scenarios 1, 2, and 3 included the approximator "or so", the modal verb "may", and the indefinite "anyone" as hedging devices, respectively, while their second sentences did not incorporate any DMs. Conversely, the first sentences of scenarios 4, 5, 6, and 7 did not include any DMs, whereas their second sentences included the passive structure "I have been told", the tag question "won't you", the conditional "if" clause along with its modal "will", and the impersonal "It". The questionnaire was distributed among 10 advanced L2 learners, 10 EFL teachers, and 10 NSs to pilot the test. The reliability index, assessed by applying the Cronbach alpha (α) formula, was found to be .79 for the questionnaire.
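A minimal sketch of the reliability computation follows, assuming the pilot responses are arranged as a respondents-by-items matrix; the data below are hypothetical, not the study's pilot ratings.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical pilot data: 30 respondents x 14 ratings (two sentences
# per scenario, seven scenarios), on the 1-5 (im)politeness scale.
rng = np.random.default_rng(3)
base = rng.integers(1, 6, size=(30, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(30, 14)), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```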
To determine whether L1 (im)politeness strategies influence the utilization of L2 (im)politeness DMs, the questionnaire with seven scenarios was used again (See Appendix C). However, the researchers omitted the Likert scale and instead asked the advanced L2 learners to write Persian translations of the two responses given to each scenario. After doing so, the learners answered the question "Why do you think one or both of the responses are (im)polite in both English and the Persian translation?"
Procedure
The data for this study were gathered in three phases, which are discussed below:
Phase 1:
In order to answer the first and second questions of the current study, the background information form, along with the self-assessment of English competence, was administered over four days in May 2017 to 160 adult L2 learners (i.e., 80 males and 80 females) in four male and four female EFL classes at the Iran Language Institute (ILI). Each EFL class consisted of 20 L2 learners. Four of the classes were categorized as intermediate and the other four as advanced based on the CEFR. Two of the male EFL classes, one at the intermediate and one at the advanced level, were then assigned to the experimental condition, as were two of the female EFL classes at the same levels. The remaining four classes of male and female L2 learners served as control groups.
The four EFL teachers of the experimental groups then received one hour of training on the importance of DMs as hedging devices that minimize the force of commands, illustrated with different examples. These teachers subsequently gave the L2 learners in the four experimental classes one hour of instruction on DMs that may minimize imposition and compensate for the directness of an utterance. The instruction covered a limited set of DMs (i.e., approximators, modals, indefinites, passives, tag questions, conditionals, and the impersonal "It"), with different examples of their use as hedges to explore the (im)politeness phenomenon. The purpose was to make the learners aware of DMs as hedging devices, not to restrict them to these seven DMs. The control groups at the intermediate and advanced levels of language proficiency received no such instruction.
Afterward, the researchers and the eight EFL teachers involved in the current research provided the learners with 21 role-play cards, each presenting a situation with two characters that the L2 learners were expected to role-play. Using these cards, the EFL teachers and the researchers recorded the conversations between pairs of L2 learners interacting across the 21 situations. Each recording of two learners role-playing the 21 situations lasted approximately 60 min. This procedure was carried out with the 160 male and female participants at the intermediate and advanced levels in both the experimental and control groups during 11 days at the ILI. Subsequently, the data were transcribed by the researchers, and the number of DMs used as hedging devices was counted to scrutinize the (im)politeness phenomenon. The DM counts were analyzed in SPSS 22 using frequency distributions, measures of central tendency, and standard deviations for the eight groups of L2 learners. In addition, three independent-samples t-tests were conducted to compare the use of DMs between the male and female L2 learners, the intermediate and advanced L2 learners, and the L2 learners in the control and experimental groups.
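As a hedged illustration of the group comparisons just described, the sketch below runs one independent-samples t-test in Python with SciPy rather than SPSS 22; the group sizes mirror the design, but the Poisson-generated DM counts are placeholders rather than the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder per-learner DM counts; the study ran three such tests:
# male vs. female, intermediate vs. advanced, and control vs. experimental.
rng = np.random.default_rng(1)
experimental = rng.poisson(lam=6.0, size=80)  # hypothetical experimental group
control = rng.poisson(lam=3.5, size=80)       # hypothetical control group

result = stats.ttest_ind(experimental, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```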
Phase 2: A questionnaire with seven scenarios was developed to assess the awareness of the advanced L2 learners, the EFL teachers, and the NSs concerning the use of DMs as hedging devices and indicators of (im)politeness. Each scenario included two sentences: one containing a DM and one without it. To pilot the test, 10 advanced L2 learners, 10 EFL teachers, and 10 NSs filled out the five-point Likert questionnaire. Applying Cronbach's alpha (α) to the questionnaire yielded a reliability index of .79. Using the background information form along with the self-assessment of English competence, 30 L2 learners identified themselves as being at the advanced level of language proficiency according to the CEFR. After pilot testing, the questionnaire was distributed among the 30 advanced L2 learners and 30 EFL teachers in two separate 45-minute sessions. To gather data from the American NSs, who lived in New York, the researchers sent the questionnaires through Telegram and email, and the NSs returned their completed questionnaires the same way. The DM ratings were analyzed in SPSS 22 using frequency distributions, measures of central tendency, and standard deviations for the L2 learners, the EFL teachers, and the NSs. Moreover, a one-way between-groups analysis of variance (ANOVA) and post hoc comparisons using the Tukey HSD test were conducted to compare the use of DMs among the L2 learners, the EFL teachers, and the NSs.
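The omnibus test and post hoc procedure reported here can likewise be reproduced outside SPSS. The sketch below, a minimal illustration assuming normally distributed placeholder scores (the means and spreads are invented), pairs SciPy's one-way ANOVA with the Tukey HSD comparison from statsmodels.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Hypothetical DM scores for the three groups of 30 participants each.
learners = rng.normal(loc=20, scale=5, size=30)
teachers = rng.normal(loc=24, scale=5, size=30)
natives = rng.normal(loc=28, scale=5, size=30)

# Omnibus one-way between-groups ANOVA.
anova = stats.f_oneway(learners, teachers, natives)
print(f"F = {anova.statistic:.2f}, p = {anova.pvalue:.4f}")

# Post hoc pairwise comparisons with the Tukey HSD correction.
scores = np.concatenate([learners, teachers, natives])
groups = ["L2 learner"] * 30 + ["EFL teacher"] * 30 + ["NS"] * 30
print(pairwise_tukeyhsd(scores, groups))
```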
Phase 3:
In order to investigate whether Persian (im)politeness strategies influence the utilization of L2 (im)politeness DMs, the seven-scenario questionnaire was used (See Appendix C). However, the researchers omitted the five-point Likert scale so that the L2 learners could translate the two responses given to each scenario into Persian. Afterward, the learners were asked to write, in both Persian and English, why they thought the responses were or were not polite. To this end, the questionnaire was distributed among 164 advanced L2 learners; however, 14 of them were discarded from the present study since they did not fill out the questionnaire or filled it out incompletely. For classifying the DMs in both languages, the data were entered into MAXQDA Software, Version 11 (Verbi, 1989). They were then coded and described qualitatively.
Inferential statistics
Frequency and mean scores of the seven DMs, namely approximators, modals, indefinites, passives, tag questions, conditionals, and the impersonal "It", broken down by control-experimental group, intermediate and advanced levels of language proficiency, and gender, are presented in Table 4.
For the male and female L2 learners at the intermediate level in the experimental group, tag questions made up the highest frequency percentages, 15.9% and 23.6%, with means of 2.90 and 4.30, respectively. However, for the language learners at the advanced level in the same group, the DMs of indefinites and the impersonal "It" had the highest frequency percentages, 29.5% and 37.1%, with means of 5.80 and 4.55, respectively.
Moreover, the highest frequency percentages for the male and female L2 learners at the intermediate level in the control group belonged to conditionals and modals, at 10.4% and 7.2% with means of 2.25 and 4.45, respectively. For the L2 learners at the advanced level in the same group, modals accounted for the highest frequencies, 11.3% and 12.2%, with means of 7.00 and 7.60.
An independent-samples t-test was conducted to compare the DM scores of advanced and intermediate L2 learners for approximators, modals, indefinites, passives, tag questions, conditionals, and the impersonal "It". The results are presented in Table 5. To determine the difference between the males' and females' use of DMs expressing (im)politeness, an independent-samples t-test was calculated; the findings are shown in Table 6. To assess whether there were significant differences among the language learners with reference to DMs in the control and experimental groups, a further independent-samples t-test was computed; the results are illustrated in Table 7. For all the DMs, significant differences were found between the scores of the control and experimental L2 learners, revealing that language learners in the experimental groups used DMs more frequently than their counterparts in the control groups.
In order to determine the difference among advanced L2 learners, EFL teachers, and American NSs in the use of DMs expressing (im)politeness, a one-way ANOVA was run in SPSS. The results are presented in Tables 8-10. To scrutinize how L1 (im)politeness strategies influenced the utilization of L2 (im)politeness DMs, 150 advanced L2 learners filled out the questionnaire from Appendix C. The analysis was conducted with MAXQDA, Version 11 (Verbi, 1989). Regarded as a professional package for qualitative data analysis, MAXQDA allows the coding of large amounts of research material, such as interviews, written questionnaires, and so forth. The data were sorted and retrieved from MAXQDA according to the Persian and English categories that were established. The findings are shown in Table 11.
As can be seen in Table 11, Persian and English shared the classifications of "modals", "impersonal 'It'", "tag questions", "conditionals", and "passives"; nevertheless, the categories differed in "precision" and "direct reasoning" in Persian versus "approximators" and "indefinites" in English.
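MAXQDA itself is interactive, but the kind of language-by-category frequency table summarized in Table 11 can be reconstructed from an exported coding sheet. The sketch below is a minimal illustration assuming a hypothetical two-column export (one coded segment per row); the rows and the use of pandas are assumptions for illustration, not output from the study.

```python
import pandas as pd

# Hypothetical coded segments: each row is one participant response
# tagged with the language it was written in and the category assigned.
coded = pd.DataFrame({
    "language": ["English", "English", "Persian", "Persian", "English", "Persian"],
    "category": ["approximators", "modals", "precision", "modals",
                 "indefinites", "direct reasoning"],
})

# Cross-tabulate category frequencies by language.
print(pd.crosstab(coded["category"], coded["language"]))
```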
Discussion
This study confirmed that the L2 learners at the intermediate level in the experimental group employed tag questions more frequently than their advanced counterparts, who made significantly greater use of indefinites and the impersonal "It". The findings also showed that the language learners at the intermediate level in the control group had the highest frequencies of conditionals and modals in comparison to the advanced L2 learners in the same group, who frequently used modals. In addition, the results of the current study demonstrated that the advanced L2 learners employed DMs more frequently than intermediate ones and that the L2 learners in the experimental groups outperformed their counterparts in the control groups in using DMs.

Table 9. One-way between-groups analysis and post hoc comparisons using the Tukey HSD test for the comparison of DMs among advanced L2 learners, EFL teachers, and American NSs. * The mean difference is significant at the .05 level.
Considering gender differences, the male L2 learners underused the DMs of approximators, modals, and passives in comparison to the female ones, demonstrating that the female L2 learners softened the force of commands more than the male language learners. Nonetheless, regarding the DMs of indefinites, tag questions, conditionals, and the impersonal "It", the male and female L2 learners did not differ significantly. Furthermore, a one-way between-groups analysis of variance indicated that the EFL teachers surpassed the L2 learners in using DMs and that the American NSs employed DMs more frequently than both the EFL teachers and the L2 learners.
The classification of the Persian and English DMs based on the MAXQDA analysis also revealed that the shared areas between the two languages were "modals", "impersonal 'It'", "tag questions", "conditionals", and "passives". However, the Persian-specific categories were "precision" and "direct reasoning", as against the English classifications of "approximators" and "indefinites". These areas of difference between the two languages are illustrated below.
In English, most of the L2 learners agree that using approximators such as "or so", "almost", and "kind of" makes a sentence fuzzier and more polite. Participant 21 writes, "The mechanic does not have to lie. He uses the approximator 'or so' to add to the politeness of the sentence". Another participant also believes, "Since 'or so' is used in the first sentence, the utterance is more polite than the second sentence". Also, participant 78 holds, "Using approximators such as 'almost', 'or so', 'kind of' and so forth makes the English sentences more polite". In Persian, by contrast, the participants mostly mention that giving an exact time demonstrates politeness. They assert that using approximators makes them unsure about the exact time of doing some tasks, which is impolite. Participant 16 believes that the sentence "It takes twenty minutes or so" is impolite. She claims, "It's impolite because the mechanic does not take account of my precious time". Another participant also finds this sentence rude. He asserts, "The mechanic does not specify the exact time of repair; hence, the customers' time is not important for him". Moreover, participant 47 believes, "It's always great to be exact and tell the exact time. It shows how punctual the mechanic is. I think being punctual shows politeness". Participant 88 writes, "When approximators are used in Persian, I think they make the sentences blunt and impolite". In addition, another participant holds, "Because of the word 'or so', the sentence becomes less polite than the sentence 'It takes twenty minutes'". Participant 113 claims, "Using an approximator is an indication of unfaithfulness and disloyalty, which is impolite. It shows imprecision and inaccuracy. On the other hand, telling the exact time is a sign of punctuality and faithfulness, adding to the politeness of the sentence". Furthermore, participant 134 holds, "It's rather impolite because he wants to reassure me that if it takes longer he won't accept any complaints. It's polite because he reassures me that there won't be any delay".
A large number of the advanced L2 learners agreed that using indefinites such as "anyone", "anybody", "anywhere", "people say that", "somewhere", and so on makes an utterance vaguer and more polite. Participant 67 holds, "The police officer does not mention a specific person in the first sentence. Thus, the sentence is more polite because the word 'anyone' is employed". Another participant writes, "The sentence which employed 'anyone' is more polite than the second one because it's a clear statement of law". In addition, participant 135 holds, "In the first sentence, the police officer is neutral in giving a ticket to the faulty person since he used the word 'anyone'. So, it is more polite than the second sentence, which is much more direct". In Persian, however, participants mainly write that giving reasons shows politeness, and they explain that employing indefinites such as "anyone" and "people say that" makes a sentence impolite. Participant 93 holds, "The first sentence is impolite because any driver knows this. The second sentence is more polite because he restates what he has done wrong". Also, participant 89 believes, "I think the translation of the first sentence is impolite in Persian because the officer did not give any reasons to the driver. However, in the second sentence, the officer gives a reason, and that's what makes the sentence more polite". Besides, participant 10 writes, "The first sentence is less polite than the second one since the police officer used the word 'anyone'. On the other hand, the second sentence mentions the specific law for giving a ticket. So, sentences become more polite by giving reasons". Last but not least, participant 147 mentions, "Giving reasons makes the utterance more polite".
In line with the results of the current study concerning gender differences in the use of approximators, modals, and passives as hedging devices, Holmes (1995) argues that female discourse is typically less aggressive than male discourse. Moreover, Gilligan (1982) argues that females typically avoid disagreement and provide supportive feedback more frequently than males in order to establish and reinforce relationships. Pettersson Granqvist (2013) maintains that "women tend to use more hedges, boosters and facilitative tag questions: they might merely be evidence of a female conversational style" (2013, p. 28). However, the findings of this research are not in line with Brown and Levinson's (1987) claim that the discrepancies found in males' and females' discourse are "epiphenomenal-neither the social underpinnings nor the linguistic manifestations are specific to gender" (1987, p. 31). This study did not find any significant difference between males and females in using the DMs of indefinites, tag questions, conditionals, and the impersonal "It".
In line with this study, Neary-Sundquist (2013) indicated that advanced L2 learners and NSs surpass L2 learners at both elementary and intermediate levels in using DMs to mitigate the tone of an utterance or the certainty of its content in English. Nevertheless, with regard to various monologic testing tasks, she found that the advanced learners were significantly better than both the other L2 learners and the NSs in using "I think" and "just", the two most frequently used hedges in her study. The current research has found empirical evidence that the American NSs surpass not only the L2 learners at the intermediate and advanced levels but also the EFL teachers in using DMs to moderate the force of an utterance. Drawing their data from three different corpora, Fung and Carter (2007) also confirmed that British NSs use DMs for a broader range of pragmatic functions than L2 learners. In addition, Aşik and Cephe (2013) examined the occurrence of DMs among both L2 learners and NSs, demonstrating that L2 learners underuse DMs in their spoken English compared with NSs.
Likewise, Yu (2009) confirmed that advanced L2 learners show a higher awareness of hedging clusters than intermediate ones. He found that the advanced L2 learners make significant use of modal shields, performative shields, quantificational approximators, pragmatic-marker hedges, and other strategies for discoursal and syntactic hedges, while the intermediate L2 learners only employ mitigators, intensifiers, and the performative "I think". Although the results of the present study reveal that the advanced L2 learners outperform their intermediate counterparts in using DMs as hedging devices, the intermediate L2 learners used tag questions, conditionals, and modals more frequently than the advanced ones, who made significant use of indefinites, the impersonal "It", and modals.
Corresponding with the present research results, Jalilifar et al. (2011) found empirical evidence for the value of explicit instruction in hedging for specific academic purposes. They maintain that the undergraduate L2 learners' reading comprehension performance in using hedging devices to mitigate the force of commands in appropriate contexts significantly heightens their language and pragmatic proficiency. Similarly, Siu (2014) examined L2 learners' use of hedging in academic writing and illustrated that learners who receive hedging instruction outperform their counterparts in the control group in employing hedging devices appropriately in academic writing. She holds that the L2 learners in the experimental group are more familiar with using hedges in appropriate contexts to soften the force of an utterance when they write an academic essay; therefore, their writing becomes more polite with reference to the academic context. In the present study as well, two groups of L2 learners at the intermediate and advanced language proficiency levels received explicit instruction regarding the use of hedges in different contexts, which culminated in their significantly greater use of DMs as hedging devices compared with their counterparts in the control groups.
As Alijanian and Vahid Dastjerdi (2012) aptly put it, "indirectness is considered a universal discoursal strategy but the extent to which it is applied varies from culture to culture" (2012, p. 60). Scollon (1997) argues that indirectness is bound up with collectivist values and cultural notions that may differ between Eastern and Western societies, influencing the way individuals interact with each other. Regarding Persian language and culture, Eslami-Rasekh (2004) delineates, "Iranian society, being a more group-oriented society, … puts more emphasis on the importance of society, family, solidarity, and common ground as opposed to individual privacy … and autonomy of individuals" (2004, p. 189). As the present classification of the Persian and English DMs based on the MAXQDA analysis suggests, the two areas of contrast between Persian and English cultures are "precision" and "direct reasoning" versus "approximators" and "indefinites", respectively. Regarding the (im)politeness phenomenon in Iranian culture, interaction with other interlocutors of a community is a way to build relationships and harmony, which makes speakers cautious and precise in communicating their own statements and goals in relation to other individuals. In this respect, Iranian L2 learners aim to arrive at reciprocal harmony and consensus by employing DMs as hedging devices that underscore the priority given to partnership and other people's face through "precision" and "direct reasoning" rather than "approximators" and "indefinites". It can also be mentioned that dispute and controversy are viewed as FTAs that may cause the L2 learners to lose face.
Conclusions
The current research investigated the interlanguage development of Iranian L2 learners and EFL teachers as well as the pragmatic knowledge of American NSs. It examined DMs as hedging devices and indicators of politeness with regard to gender, proficiency level, and control-experimental variables through open DRPTs and a seven-scenario questionnaire, administered both with a five-point Likert scale and, without the scale, as a translation task into Persian. The results demonstrated that the female L2 learners outperformed their male counterparts in using the DMs of approximators, modals, and passives, revealing that the female language learners minimize face-threatening acts and soften the tone of an utterance significantly more. Hence, these results corroborate Gilligan's (1982), Holmes's (1995), and Pettersson Granqvist's (2013) studies, in that females tend to use more hedges than males as mitigating devices. However, the findings of this study indicated that the two groups of L2 learners did not show any significant differences in using indefinites, tag questions, conditionals, and the impersonal "It" in their speech.
The results also indicated that the American NSs surpassed both the EFL teachers and the L2 learners in using DMs as hedging devices, corroborating Fung and Carter's (2007) as well as Aşik and Cephe's (2013) studies. Besides, the current research found that intermediate language learners used DMs to soften the force of commands and minimize FTAs significantly less frequently than advanced L2 learners, which substantiates earlier research by Neary-Sundquist (2013) and Yu (2009). This study also confirmed that EFL teachers outperformed both intermediate and advanced learners in using DMs and that the L2 learners in the experimental groups surpassed their counterparts in the control groups in employing DMs in their speech.
In addition, the present research revealed two areas of discrepancy, namely "precision" and "direct reasoning" in Persian versus "approximators" and "indefinites" in English, identified through the MAXQDA analysis of the nuances of (im)politeness theory, contextual features, and cultural subtleties between the two languages. Such results add to Eslami-Rasekh's (2004) finding that Iranian culture underscores common ground as opposed to individual privacy, which makes Iranians cautious and precise in order to be considered polite in everyday language encounters with other individuals. Hence, being more precise and providing more direct reasoning are regarded as two polite strategies in Iranian culture, rather than making the utterance fuzzier and more indirect through the DMs of "approximators" and "indefinites".

Appendix B. Open Discourse Role-Play Tasks (excerpt)

6. You are in a park to buy an ice-cream and there are many people in line. It's your turn to buy an ice-cream. You search your pocket and remember that you left the money in your bag at home. The clerk in charge of the ice-cream cones wants you to try to be faster.
9. Your friend has financial problems and asks whether you can help him with some cash.
You: …………………………………………………………………………………………….
10. You are a judge in a court of law. The criminal in the court wants you to do him a favor illegally and shorten his imprisonment sentence. He wants to tell you that when he gets out of jail, he'll make it up to you.
You: …………………………………………………………………………………………….
15. You want to send a letter to your best friend so you go to a post office. You forget to close the door behind you. The clerk in the post office asks you whether you could close the door.
You: …………………………………………………………………………………………….
17. You bought a T-shirt at a popular department store; the seller told you that it was original, and you paid a lot of money for it. However, one of your friends, an expert in distinguishing original from fake T-shirts, tells you that the T-shirt is fake. You go back to the department store to tell them that it is not original.
18. You are sitting on a park bench to get some fresh air. A stranger comes and sits close to you, smoking a cigarette. You feel embarrassed. The stranger notices this and wants to know what bothers you.
You: …………………………………………………………………………………………….
19. You are a prominent lawyer in your city and want to defend your client, whom you believe is not a criminal. The attorney general talks too much and does not allow you to introduce and defend your client. After some time, the attorney general asks you what evidence you have to defend your client.
"year": 2018,
"sha1": "3e4be9a91acacc42226ce177b1cb859bab4f202f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311983.2018.1461048",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3e4be9a91acacc42226ce177b1cb859bab4f202f",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Exploring the Therapeutic Potential of Targeting GH and IGF-1 in the Management of Obesity: Insights from the Interplay between These Hormones and Metabolism
Obesity is a growing public health problem worldwide, and GH and IGF-1 have been studied as potential therapeutic targets for managing this condition. This review article aims to provide a comprehensive view of the interplay between GH, IGF-1, and metabolism within the context of obesity. We conducted a systematic review of the literature published from 1993 to 2023, using the MEDLINE, Embase, and Cochrane databases. We included studies that investigated the effects of GH and IGF-1 on adipose tissue metabolism, energy balance, and weight regulation in humans and animals. Our review highlights the physiological functions of GH and IGF-1 in adipose tissue metabolism, including lipolysis and adipogenesis. We also discuss the potential mechanisms underlying the effects of these hormones on energy balance, such as their influence on insulin sensitivity and appetite regulation. Additionally, we summarize the current evidence regarding the efficacy and safety of GH and IGF-1 as therapeutic targets for managing obesity, including pharmacological interventions and hormone replacement therapy. Finally, we address the challenges and limitations of targeting GH and IGF-1 in obesity management.
Introduction
Obesity is a prevalent health condition associated with an increased risk of developing several chronic illnesses, including dyslipidemia, type 2 diabetes mellitus (T2DM), hypertension, cardiovascular disease, and certain types of cancer [1]. According to the World Health Organization (WHO), in 2016, over 1.9 billion adults (18 years and older) were overweight, with more than 650 million being classified as obese. This represents a significant increase in the prevalence of obesity, which has more than doubled since 1980. The Centers for Disease Control and Prevention (CDC) report that 42.4% of adults in the United States were classified as obese in 2020. This trend is particularly pronounced among middle-aged, Hispanic, and non-Hispanic black adults. Moreover, childhood and adolescent obesity is a significant public health issue in the United States, with nearly one in five children and adolescents aged two to nineteen years being categorized as obese.
Obesity is a complex condition caused by a combination of genetic, environmental, and behavioral factors, including diet, physical activity, and exposure to endocrine-disrupting chemicals. It is characterized by an excess accumulation of body fat resulting from an ongoing positive energy balance (a higher intake of calories than expenditure) and insufficient physical activity, which disrupts the energy balance and normal physiological homeostasis [2]. The fundamental components of energy balance include energy intake, energy expenditure, and energy storage, which interact in a complex manner to affect body weight and the fat ratio [3,4]. The management of obesity is best approached through a multifaceted strategy.

IGF-1 has a high degree of structural similarity to insulin and binds strongly to the IGF-1 receptor (IGF-1R), activating both the mitogen-activated protein (MAP) kinase and phosphoinositide 3-kinases (PI3K) signaling pathways in the target tissue [19,20]. The majority of circulating IGF-1 originates from hepatic cells, and its production and release are controlled by GH [21].
In addition to its potential to decrease fat mass, GH treatment has been explored for its therapeutic potential in various diseases. For instance, GH deficiency (GHD) has been associated with increased cardiovascular mortality and decreased quality of life [22]. GH replacement therapy has been shown to improve cardiac function, body composition and, to some extent, the overall well-being of GHD patients [1,10]. GH treatment has also been investigated for its potential in managing metabolic disorders such as T2DM and metabolic syndrome [1,10]. Although the effects of GH on glucose metabolism are still controversial, some studies have shown improvements in insulin sensitivity and glucose homeostasis as a result of GH treatment in T2DM patients [23]. Therefore, GH therapy may have potential therapeutic implications in managing various diseases beyond obesity.
Given the rising prevalence of obesity and its related comorbidities, there is an urgent need for effective strategies for managing this condition. Modulating the GH and IGF-1 axis may offer a promising solution, as these hormones have been shown to regulate metabolic processes and body composition. However, further research is required to fully understand the intricate relationship between GH, IGF-1, and metabolism. Furthermore, the safety of GH therapy requires careful consideration due to its ability to induce anabolic effects by stimulating protein synthesis and cell proliferation [11,24,25].
It is important to note that animal models may not necessarily be directly applicable to humans, and more clinical trials are needed to fully evaluate the potential benefits and risks of GH therapy in managing obesity. Therefore, caution should be exercised in extrapolating the results of animal studies to humans, and further research is needed to determine the safety and efficacy of GH therapy in human populations.
This review aims to evaluate the therapeutic potential of targeting the GH and IGF-1 axis in the management of obesity by examining their interaction with metabolism.
Method
In this review, we conducted a comprehensive literature search using electronic databases, including PubMed, Embase, and the Cochrane Library, with the following search terms: "GH", "IGF-1", "obesity", "body weight", "body composition", "energy expenditure/balance", and "metabolic disorders". We limited our search to articles published in English from 1993 to 2023. We included randomized controlled trials, observational studies, transgenic mouse models, and systematic reviews that investigated the effects of GH and/or IGF-1 on body weight, body composition, and metabolic disorders. We excluded studies that focused on populations other than individuals with obesity or that investigated other interventions in addition to GH and IGF-1. In addition to the current literature, we also referenced earlier studies that have significantly shaped the current understanding of the role of GH and IGF-1 in obesity; these references date back to the early studies on mouse models and their application to human subjects. Furthermore, we have incorporated preliminary observations generated from our own newly developed models that are still undergoing thorough investigation. These observations provide additional insights into the effects of GH and IGF-1 on body weight, body composition, and metabolic disorders, augmenting the existing literature. The inclusion of our model-generated data adds to the breadth of evidence considered in this review and contributes to the ongoing exploration of the topic.
The Hypothalamus and Pituitary Gland (Hypothalamic-Pituitary Axis)
The hypothalamus is a crucial regulatory organ that integrates the nervous and endocrine systems; it plays a vital role in mediating physiological processes such as reproduction, somatic growth, energy balance, and metabolic homeostasis [26,27]. Although relatively small, comprising less than 2% of the total brain, the hypothalamus is located in the lower part of the diencephalon. It receives diverse signals from other brain regions and subsequently initiates behavioral responses to various environmental stimuli [4]. The hypothalamus plays a crucial role in regulating the endocrine system by releasing hormones into circulation, which travel to other endocrine glands to control their hormone production. The hypothalamus communicates with the pituitary gland through two distinct pathways. In the first pathway, neurosecretory cells of the hypothalamus synthesize hormones such as oxytocin (OT) and antidiuretic hormone (ADH), which are transported directly to the posterior pituitary gland via extended neuron fibers and axons. In the second pathway, hormones produced and stored in neuroendocrine cells of the hypothalamus are transported to the anterior lobe of the pituitary gland via the hypophyseal portal system [4,28].
The hypothalamus receives and integrates signals from hormones, nutrients, and other factors to regulate appetite and metabolism through the actions of specific neuronal populations. The hypothalamus is a crucial regulatory center in the control of energy balance, and its medial portion, which is composed of the ventromedial nucleus (VMN), arcuate nucleus (ARC), and paraventricular nucleus (PVN), plays a key role in this process [29]. The ARC contains two distinct populations of neurons, one that produces orexigenic peptides such as agouti-related protein (AgRP) and neuropeptide Y (NPY) and another that secretes anorexigenic peptides, including proopiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) [30]. POMC neurons secrete several peptides, including the anorectic peptide alpha-melanocyte-stimulating hormone (α-MSH), which acts via the melanocortin 4 receptor (MC4R) to reduce appetite and food intake [31,32]. The PVN also expresses melanocortin receptors 3 and 4 and NPY receptors, and it secretes neuropeptides such as corticotropin-releasing hormone (CRH), which exerts an anorexigenic action [4,33]. Furthermore, afferent signals, including leptin from adipose tissue, insulin from the pancreas, and ghrelin and peptide YY from the gastrointestinal tract, modulate the anorexigenic centers of the hypothalamus [4]. Output signals from the PVN and lateral hypothalamus (LH) activate the sympathetic nervous system or the vagus nerve via the autonomic nervous system. These signals also include the release of GHRH, SST, and thyrotropin-releasing hormone (TRH), which regulate the metabolism of adipose tissue and the overall metabolic rate by controlling the secretion of pituitary hormones such as adrenocorticotropic hormone, GH, and thyroid-stimulating hormone (TSH) [4].
The hypothalamus is a crucial organ for regulating body weight as it regulates the balance between food intake, energy expenditure, and body fat storage. This is evident, because the majority of genetic syndromes of severe obesity are caused by mutations in genes that are expressed in the hypothalamus [34]. Hypothalamic obesity (HO) develops due to dysfunction in the hypothalamic regulatory centers of body weight and energy expenditure. HO may be due to structural damage to the hypothalamus, radiation therapy, genetic disorders such as Prader-Willi syndrome, and mutations in specific genes, such as LEP, LEPR, POMC, MC4R, and CART [35].
The hypothalamus also regulates energy expenditure by controlling the sympathetic nervous system and the activity of hormones such as thyroid hormone and insulin, thereby playing a crucial role in maintaining energy balance and body weight homeostasis [36].
The pituitary gland, also known as the hypophysis, is a small endocrine gland located at the base of the brain; it plays a crucial role in regulating physiological processes by secreting hormones that travel to other endocrine glands and organs in the body. The pituitary gland is divided into two main regions, the anterior lobe and the posterior lobe. The anterior pituitary develops from the oral ectoderm during embryonic development [37]. This gland is encased by a network of blood capillaries originating from the hypothalamus, forming the hypophyseal portal system. This system is responsible for conveying neuroendocrine signals from the hypothalamus to the anterior pituitary and subsequently from the anterior pituitary to the circulatory system. The anterior lobe, also referred to as the adenohypophysis, produces six hormones: GH, prolactin (PRL), thyroid-stimulating hormone (TSH), melanocyte-stimulating hormone (MSH), follicle-stimulating hormone (FSH), and luteinizing hormone (LH). The posterior lobe, or neurohypophysis, is derived from neuro-epithelial cells and is thus structurally and anatomically separate from the anterior lobe of the pituitary gland [38]. The posterior lobe contains neuro-glial cells and nerve fibers that extend from the hypothalamus, and it is considered an extension of the brain [27]. The posterior lobe secretes two hormones, OT and ADH, which are produced by neurosecretory cells in the hypothalamus and transported via axons to be stored in the posterior lobe. These hormones are then secreted into the circulatory system under the control of the hypothalamus [38]. The pituitary gland plays a role in the regulation of energy expenditure and body weight in obesity [39]. The anterior lobe produces several hormones that are involved in energy metabolism, including GH and TSH. GH is known to increase energy expenditure by promoting the breakdown of fat and by stimulating the production of IGF-1, which also promotes lipolysis under special circumstances [18,40,41]. TSH plays a crucial role in regulating the metabolic processes that are essential for normal growth and development, as well as metabolic homeostasis in adults. Thyroid hormone status is closely linked to body weight and energy expenditure [42]. Hyperthyroidism, characterized by an excess of thyroid hormone, leads to a hypermetabolic state that is characterized by increased resting energy expenditure, weight loss, decreased cholesterol levels, increased lipolysis, and increased gluconeogenesis [43].
OT and ADH, which are released from the posterior lobe of the pituitary gland, are also involved in the regulation of appetite and energy balance. In obesity, the balance between the secretion of these hormones can be disrupted, leading to an increase in appetite and a decrease in energy expenditure, contributing to weight gain [39].
The Liver and Adipose Tissue
The liver and adipose tissue serve as key regulators of whole-body energy homeostasis by coordinating glucose, lipid, and energy metabolism [44]. Insight into the roles of the liver and adipose tissue in metabolic homeostasis may provide a potential intervention strategy to alleviate the detrimental effects of obesity on human health. The liver is a relatively large organ that represents approximately 2-3% of the total body weight in humans, making it the second largest organ after the skin [45]. The liver is protected by the ribcage and is encased by peritoneal reflections [45]. The liver receives its blood supply from two distinct sources. The majority, approximately 80%, is supplied by the portal vein, which carries nutrient-rich blood from the spleen and intestines; the remaining 20% of the blood supply is oxygenated blood that is delivered by the hepatic artery [45,46]. Anatomically, the liver is divided into several functional regions, including the right and left lobes, which are separated by the falciform ligament, and the caudate and quadrate lobes, which are separated by the ligamentum venosum. The liver is also divided into functional units called lobules, which are hexagonal and consist of hepatic cells or hepatocytes, the primary functional cells of the liver [46]. Histologically, the liver is composed of several different cell types, including hepatocytes (HCs), hepatic stellate cells (HSCs), Kupffer cells (KCs), and liver sinusoidal endothelial cells (LSECs). Hepatocytes, which make up about 60-70% of the liver, are the primary functional cells of the liver and are responsible for most of the metabolic activity of the liver [47]. Sinusoidal endothelial cells line the blood sinusoids and regulate the flow of blood and nutrients through the liver [38][39][40].
Adipose tissue, commonly known as fat tissue, is a type of connective tissue with the primary function of storing energy in the form of lipids; it is composed of adipocytes, specialized cells that are capable of expanding and contracting to store and release fatty acids as necessary [48]. Adipose tissue is able to adapt and change in response to internal and external signals and can expand up to 15 times its original size [49]. Adipose tissue is a unique organ in that it possesses the ability for unlimited growth potential at any stage of life. The adipose tissue mass size is determined by the quantity of adipocytes present and the size of each cell. Adipose tissue expansion can occur through two distinct mechanisms: hyperplasia, or an increase in the number of adipocytes; and hypertrophy, an increase in the size of individual adipocytes. While hypertrophy, which is primarily due to lipid accumulation within the cell, is reversible, hyperplasia is a permanent change that persists throughout the individual's lifetime [50]. Adipocytes are primarily composed of triglycerides, which are molecular structures that are formed by bonding three fatty acid molecules to a glycerol molecule. These triglycerides are stored in the cell as droplets, occupying up to 90% of the cell's volume. Adipose tissue is distributed throughout the body, including in subcutaneous tissue, the retroperitoneal space, and in close proximity to organs such as the heart. Additionally, it envelops blood vessels and nerves and performs important functions such as insulation and cushioning [51]. The distribution of adipose tissue can vary among individuals and is influenced by various factors such as diet, genetics, and physical activity. It is also associated with various metabolic disorders such as obesity and T2DM. It has been linked to an increased risk of several chronic diseases, including cardiovascular disease and certain types of cancer [51,52]. In mammals, there are three main classes of adipose tissue: white adipose tissue (WAT), brown adipose tissue (BAT), and beige adipose tissue [53]. WAT is the most abundant and well-known type, composed of white adipocytes that store excess energy as triglycerides and provide insulation to regulate body temperature [54]. Conversely, BAT is composed of brown adipocytes that generate heat through thermogenesis, and they play a crucial role in regulating body temperature [55]. Beige adipose tissue, also known as "brown-in-white", is a newly discovered type of adipose tissue that has the properties of both WAT and BAT. Beige adipocytes, interspersed among white adipocytes, undergo a phenotypic shift from a white-like state to a brown-like state in response to stimuli such as cold exposure or specific hormonal signals [56].
Extensive research has revealed that adipose tissue not only regulates glucose and lipid metabolism but also functions as an endocrine organ by releasing a variety of hormones and signaling molecules that regulate a wide range of physiological processes. These include energy expenditure, appetite control, insulin sensitivity, inflammation, and tissue repair [57]. Both WAT and BAT play important roles in the secretion of hormones and signaling molecules in the form of peptides, lipids, and microRNAs. These include leptin and adiponectin, which have potent roles in mediating metabolic processes [57].
Adipose tissue acts as a communication hub between various organs and the central nervous system, regulating energy supply and demand through hunger and satiety signals. Adipose tissue responds to insulin by converting glucose to lipids and subsequently storing the lipids as a reserve for future energy requirements. Additional signals that functionally modulate adipose tissue include sex hormones, which partially determine fat distribution in the body and mediate inflammatory responses [58]. Dysregulation of these functions leads to the development of metabolic diseases [59]. The accumulation of excess WAT in the abdominal region also plays a role in the development of obesity.
In obesity, adipose tissue becomes enlarged and dysfunctional, leading to a chronic low-grade inflammation state and an increased secretion of pro-inflammatory cytokines such as TNF-alpha and IL-6, which also contribute to the development of metabolic disorders such as T2DM, hypertension, and cardiovascular disease [48,60,61].
In summary, the liver and adipose tissue are two key organs that play a crucial role in the development and progression of obesity. The liver regulates metabolic processes, including glucose and lipid metabolism. Excess nutrient intake results in the development of hepatic steatosis (fatty liver), which is often seen in obesity. The resulting insulin resistance contributes to the development of metabolic diseases such as T2DM and hypertriglyceridemia [62]. Additionally, fat accumulation in the liver may lead to inflammation and oxidative stress, which contribute to the development of liver fibrosis [63].
Advancements in the Understanding of GH Secretion: Mechanisms, Modulators, and Consequences
The development of a reliable immunological assay for measuring circulating hormones represented a crucial advancement in endocrinology. This led to the identification of a previously unknown pulsatile pattern of GH secretion, providing a deeper understanding of GH's physiological role and offering novel therapeutic avenues for GH deficiency-associated conditions [64,65]. The interaction between GH and IGF-1 is intricate and bidirectional, with GH promoting the synthesis and secretion of IGF-1, and IGF-1 in turn inhibiting the secretion of GH from the pituitary gland through a negative feedback mechanism [5,66].
The mechanism of GH production is a complex process that is regulated by a variety of hormones and neurotransmitters. In normal individuals, GH is produced and released by the pituitary gland in a pulsatile manner. The primary triggers for GH release are the hypothalamic hormone GHRH and ghrelin. Ghrelin, a hormone produced by the stomach, promotes hunger and increases GH release, while GHRH stimulates the pituitary gland to release GH [67][68][69]. In addition, other hormones such as IGF-1 and cortisol modulate GH secretion. The release of GH is also suppressed by the hypothalamic hormone SST, which acts as an inhibitor of GH release [70]. Obesity has been associated with alterations in the pattern of GH production; however, the underlying mechanisms are not fully understood. Studies have shown that individuals with obesity have lower GH levels and a less pulsatile pattern of secretion than those with normal body weight [65]. A previous study found that obesity impairs sensitivity to the effects of arginine infusion on human GH levels, with peak levels being significantly lower in obese subjects than in normal subjects, although changes in insulin levels were similar between the two groups [71,72]. A recent study investigated the impact of body composition on GH production in non-obese adults. The results showed that abdominal fat was the major predictor of GH production, with higher levels of abdominal fat being associated with lower GH production; physical fitness, by contrast, was positively associated with GH production. The study concluded that abdominal fat plays a significant role in determining GH secretion in healthy non-obese adults, although the exact mechanisms behind this relationship are not yet understood [73]. In an attempt to understand the mechanisms regulating GH production in obesity, scientists have proposed several hypotheses. One possible explanation is that excess adipose tissue in obesity leads to increased levels of circulating insulin and free fatty acids (FFAs), which can inhibit GH secretion by suppressing its principal stimulatory regulator, GHRH. A previous study investigated the effect of FFAs on GH secretion in response to GHRH in a sample of six young, healthy men. The peak GH response following GHRH administration was significantly reduced in the group that received a lipid-heparin infusion compared with the control group, suggesting that elevations in plasma FFA levels can inhibit GH secretion in response to GHRH. Given the small sample size, further research is necessary to fully understand the underlying mechanisms and the generalizability of these findings [74]. Furthermore, pharmacological interventions that reduce FFA levels have been shown to increase GH secretion. Conversely, elevations in FFA levels have been found to inhibit GH secretion, both at baseline and in response to various stimuli, such as reductions in FFA levels, hypoglycemia, physical exercise, or administration of GHRH or GHRP-6 (a synthetic hexapeptide that specifically triggers the release of GH by pituitary somatotrophs) [75]. Additionally, excess adipose tissue may also lead to increased levels of circulating insulin and FFAs, inhibiting GH secretion [76].
Aging is associated with a decline in the production of GH and IGF-1, which can result in a host of negative consequences. Specifically, the decline in GH and IGF-1 can lead to decreased muscle mass, increased body fat, and reduced bone density [77]. These changes can negatively impact an individual's quality of life, including causing an increased risk of falls and fractures. Additionally, decreased GH and IGF-1 levels have been linked to an increased risk of developing chronic conditions such as diabetes and heart disease [78]. Therefore, understanding the role of GH and IGF-1 in aging and the associated physiological changes are important for developing interventions to promote healthy aging.
The Negative and Positive Feedback Regulating GH Production
The GH-IGF axis is a complex system that plays a crucial role in regulating growth and metabolism [1,17,66]. The axis involves the interplay of several hormones and proteins, including not only GH and IGF-1 but also upstream signaling neuropeptides such as SST and GHRH, as well as downstream signaling targets known as IGF-binding proteins (IGFBPs) [65,[79][80][81]. This section will focus on the mechanisms that regulate the negative and positive feedback effects in the GH-IGF axis.
The GH-IGF axis operates through a positive and negative feedback loop, whereby GH, which is produced by somatotroph cells in the pituitary gland, stimulates the liver to produce circulating IGF-1 (see Figure 1). This process is mediated by a pathway that leads to the activation of the transcription factor signal transducer and activator of transcription 5 (STAT5), which in turn stimulates the expression of IGF-1 [82]. The increased level of IGF-1 promotes cell growth and division by binding to its receptors on the cell surface and activating the PI3K/Akt signaling pathway [5,83,84].
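The logic of this loop can be made concrete with a deliberately simplified two-variable model in which GH drives IGF-1 production and IGF-1 suppresses GH secretion through a Hill-type term. This is a toy sketch for intuition only: the equations, parameter values, and Euler integration below are illustrative assumptions, not fitted physiological constants from the studies reviewed.

```python
# Toy model of the GH -> IGF-1 -> (inhibits) GH negative feedback loop.
s, K, n = 10.0, 1.0, 2          # max GH secretion, inhibition constant, Hill exponent
c_gh, c_igf, a = 1.0, 0.5, 0.8  # clearance rates; IGF-1 output per unit GH

gh, igf = 0.0, 0.0
dt = 0.01
for _ in range(int(50 / dt)):                    # simple Euler integration
    dgh = s / (1 + (igf / K) ** n) - c_gh * gh   # IGF-1 suppresses GH secretion
    digf = a * gh - c_igf * igf                  # GH drives hepatic IGF-1 output
    gh += dgh * dt
    igf += digf * dt

print(f"steady state: GH = {gh:.2f}, IGF-1 = {igf:.2f}")
```

In a model of this shape, weakening the inhibition term (e.g., ablating IGF-1R signaling at the pituitary or hypothalamus) raises the steady-state GH level, which is qualitatively consistent with the elevated circulating GH reported in the feedback-disrupted mouse models discussed below.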
One of the earliest studies aiming to investigate the molecular mechanism by which IGF-1 suppresses GH gene expression at the pituitary level was conducted at Nagoya University in Japan. In that study, the researchers established a rat somatotroph tumor cell line, MtT/S, and transfected plasmids containing the GH 5′ promoter fused to the luciferase reporter gene. They found that IGF-1 suppressed GH promoter activity in a time- and dose-dependent manner, inhibiting GH secretion. Further experiments using deletion mutants of the GH promoter revealed that the negative regulation was maintained on the shortest construct, suggesting that the IGF-1-related factor acts at the region nearest the minimal promoter. The study also found that the negative effect was eliminated by a PI3K inhibitor, indicating that the PI3K-mediated signaling pathway plays a major role in the negative regulation by IGF-1 [85].
A study was conducted to investigate how IGF-1 negatively regulates GH gene expression at the promoter level using a somatotroph cell line. The results revealed that IGF-1 reduces GH mRNA levels by disrupting the binding of POU1F1 to the GH promoter through the inhibition of the cyclic AMP (cAMP) response element binding protein (CBP). To confirm CBP's role as a target of IGF-1R signaling, the researchers used a mutant CBP construct and a knock-in mouse model, which showed elevated serum GH levels, a greater response to GHRH stimulation, lower weight gain, and decreased body fat. These findings suggest that IGF-1R signaling disrupts the POU1F1/CBP complex to inhibit gene expression. Furthermore, chromatin immunoprecipitation assays demonstrated the inhibition of CBP binding to the GH promoter after IGF-1 treatment, and using a mutant CBP construct that lacked a critical phosphorylation site led to the loss of IGF-1 inhibition, supporting the hypothesis. The study thus confirmed the inhibitory effects of IGF-1 on GH expression at the promoter level and provided evidence of CBP's role as a target of IGF-1R signaling [86].
To further understand the role of IGF-1 in negative feedback, our group developed a mouse model, referred to as the SIGFRKO mouse, in which the IGF-1R was ablated from somatotrophs using the GH promoter to drive Cre recombinase. The SIGFRKO mice had increased GH gene expression and secretion and increased serum IGF-1. Additionally, the SIGFRKO mice had decreased GHRH and increased SST mRNA expression levels. These results support the idea that IGF-1 negatively regulates somatotroph GH synthesis and secretion, suggesting that hypothalamic feedback limits the extent of GH release [18].
A recent study investigating the role of IGF-1 in regulating the negative feedback of GH secretion developed a different mouse model in which the IGF-1R was specifically ablated in GHRH-expressing cells [84]. As expected, this mouse model presented an interesting phenotype characterized by increases in GH levels, pulse amplitude, and pulse frequency, as well as increased GHRH mRNA levels, GH mRNA expression, and serum IGF-1 levels, the latter mediated by the pronounced elevation in circulating GH. Despite the established role of GH in promoting lipolysis, this study did not demonstrate any effect on total fat mass, even with the significant elevation in circulating GH levels [18]. Recently, our laboratory has developed a new mouse model, referred to as the S-GIGFRKO mouse, which is characterized by the ablation of the IGF-1R in both the somatotrophs and GHRH-expressing neurons. The results of our study revealed that the S-GIGFRKO mouse line displayed a modest increase in circulating GH levels and circulating IGF-1 levels. Furthermore, the ablation of IGF-1R in this mouse model was associated with increased lipolysis activity and a decrease in total body fat mass. These findings demonstrate the crucial role of IGF-1 in regulating GH production through negative feedback mechanisms [5]. Further details and analyses of these models will be provided in the next section of our study.
In conclusion, the GH-IGF axis is a complex system that is tightly regulated to maintain the balance between growth and metabolic processes. SST, GHRH, and IGFBPs play important roles in regulating the positive and negative feedback in the axis, ensuring a homeostatic response to changing physiological conditions. These mechanisms are mediated by different signaling pathways and receptors that are activated by interacting hormones and proteins. Further research is required to fully understand the intricacies of this system and its potential therapeutic applications.
Overview of the Current Understanding of the Relationship between the GH-IGF-1 Axis and Obesity
GH and IGF-1 are endocrine signaling molecules that are critical for regulating processes such as growth, maturation, and metabolic homeostasis. It has been suggested that deviations from normal GH and IGF-1 levels are linked with the development of obesity; however, the precise mechanisms underlying this relationship have yet to be fully elucidated [65,80,87].
One potential link between the GH-IGF-1 axis and obesity is the molecules' effects on insulin sensitivity. IGF-1 has a nearly 50% amino acid sequence similarity with insulin and elicits an almost identical hypoglycemic response [88,89]. The capability of IGF-1 to bind to insulin receptors suggests its involvement in mediating insulin activity. Several experimental models have been used to demonstrate the effect of IGF-1 on insulin sensitivity and resistance.
Researchers have created a mouse model known as the liver IGF-1-deficient (LID) mouse to investigate the metabolic effects of IGF-1 deficiency. The LID mice showed significantly reduced levels of IGF-1 and elevated levels of GH in their circulation, which is associated with higher insulin levels and abnormal glucose clearance after insulin injection. However, their fasting blood glucose levels and levels after a glucose tolerance test appeared to be normal. This suggests that the LID mice are insulin resistant but can maintain normal blood glucose levels due to the high insulin levels in their circulation. Treatment with recombinant human IGF-1 or a GH-releasing hormone antagonist, which reduces GH levels, improved insulin sensitivity in the LID mice. These findings indicate that circulating IGF-1 plays a role in insulin action in peripheral tissues (Figure 2) [90].
Evidence suggests that GH and IGF-1 could contribute to the onset of obesity by impacting inflammation and oxidative stress. A previous study aimed to investigate the potential role of GH and IGF-1 in the development of obesity and focused on their role in mediating oxidative stress and inflammation. The study determined the effects of long-term HFD-induced obesity on vascular function and metabolic alterations in a Lewis dwarf rat model of GH/IGF-1 deficiency. The results show that GH/IGF-1 deficiency exacerbates vascular dysfunction, inflammation, and oxidative stress when challenged with an HFD. However, GH/IGF-1 deficiency did not affect weight gain or changes in body composition in response to the HFD challenge. Instead, the low insulin levels observed in the GH/IGF-1 deficient rats may be due to compromised β-cell numbers or function and impaired β-cell compensation in response to metabolic challenges. GH/IGF-1 deficiency was associated with increased adiponectin levels but normal serum levels of leptin. In the control animals, the HFD stimulated an inflammatory response to increase circulating levels of multiple inflammatory cytokines, including IL-6 and TNF-α; however, the exact mechanism is not well understood [91].

Figure 2. The role of liver-derived circulating IGF-I in muscle insulin sensitivity. Liver-specific igf-1 gene deletion results in reduced circulating total IGF-I and elevated GH levels, leading to insulin insensitivity in muscle and islet cell hyperplasia with hyperinsulinemia.
Another possible mechanism linking the GH-IGF-1 axis and obesity is through the effects on muscle mass and function and the development of adipose tissue. GH and IGF-1 are known to promote the growth and development of skeletal muscle, and low levels of these hormones have been associated with a reduction in muscle mass and function [92]. This may lead to an impaired ability to burn calories, potentially contributing to weight gain. In addition, previous studies using several transgenic mouse models have demonstrated the crucial role of the GH-IGF-1 axis in regulating adipocyte proliferation and differentiation [66,93]. Adipocyte differentiation is a complex process involving the activation of multiple transcription factors, signaling pathways, and epigenetic modifications [94]. The process starts with the commitment of mesenchymal stem cells to the adipocyte lineage, followed by the formation of preadipocytes. Preadipocytes then undergo a series of morphological and biochemical changes to become mature adipocytes, which are characterized by the accumulation of triglycerides and the expression of adipocyte-specific genes, such as peroxisome proliferator-activated receptor gamma (PPARγ) and adiponectin [95]. Finally, GH and IGF-1 may have effects on brain regions controlling appetite [81,96]. Dysregulation of GH and IGF-1 may therefore disrupt normal appetite control and contribute to weight gain.
The relationship between GH, IGF-1, and obesity is complex and not fully understood (summary in Table 1). Further research is needed to fully understand the mechanisms by which these hormones may be linked to the development of obesity and to identify potential strategies for preventing and treating obesity.

Table 1. Overview of the current understanding of the relationship between the GH-IGF-1 axis and obesity.

GH deficiency and obesity: GH deficiency can lead to an increase in body fat mass, while GH replacement therapy can decrease it. However, the relationship between GH deficiency and obesity is complex and is influenced by various factors. [97,98]

GH excess and obesity: GH excess is associated with decreased body fat mass but also with insulin resistance and glucose intolerance. [99]

IGF-1 and obesity: IGF-1 levels are positively correlated with body fat mass, and low IGF-1 levels have been linked to obesity-related complications such as insulin resistance and T2DM. However, the relationship between IGF-1 and obesity is not fully understood and may be influenced by other factors such as age and gender. [80,100]

GH-IGF-1 axis and adipocyte differentiation: The GH-IGF-1 axis plays a crucial role in regulating adipocyte proliferation and differentiation. GH promotes the differentiation of preadipocytes into mature adipocytes, while IGF-1 promotes the proliferation of preadipocytes. Dysregulation of this process may contribute to the development of obesity. [94]

GH-IGF-1 axis and appetite regulation: GH and IGF-1 have been shown to affect appetite regulation through various mechanisms, such as stimulating the production of leptin and ghrelin. However, the exact role of the GH-IGF-1 axis in appetite regulation and its contribution to obesity is still unclear.
Mouse Models Used to Study the Role of GH and IGF-1 in Obesity
The levels of circulating GH and IGF-1 exhibit an age-dependent pattern, with a gradual increase following birth, reaching a plateau during puberty and a subsequent decline of approximately 14% per decade of life [103,104]. The GH-IGF-1 axis plays a crucial role in the regulation of energy balance in the body, and disruptions in this axis have been linked to the development of obesity. As mentioned earlier, obesity is a complex metabolic disorder that results from an imbalance between energy intake and energy expenditure [105,106]. To gain a better understanding of the role of the GH-IGF-1 axis in obesity, researchers have utilized various mouse models to investigate the effects of alterations in these factors in the tissues and organs that are involved in energy metabolism, such as the liver, adipose tissue, and muscles [66]. These models have been valuable tools for studying the mechanisms by which the GH-IGF-1 axis regulates energy balance and for identifying potential therapeutic targets for obesity.
The Snell Dwarf Mouse
The Snell dwarf mouse is a naturally occurring mouse model first identified at the Bussey Institution at Harvard University in the 1930s [107]. This model is characterized by a phenotype of dwarfism, which is caused by mutations in the Pit-1 gene, a transcription factor that regulates the production of GH, TSH, and PRL. The Snell dwarf mouse serves as a valuable tool for understanding the GH-IGF-1 axis, as studies have demonstrated that the lack of functional Pit-1 results in decreased GH and IGF-1 production, leading to the dwarfism phenotype. This mouse model has been extensively utilized to investigate the role of GH and IGF-1 in various physiological processes such as bone growth, metabolism, and longevity. Additionally, the Snell dwarf mouse has provided valuable insight into the mechanisms of Pit-1-dependent gene expression and its role in the regulation of GH and PRL production. The observations generated from this mouse model have substantially advanced our understanding of the GH-IGF-1 axis and continue to shape current research in the field.
The Ames Dwarf Mouse
The Ames dwarf mouse is a naturally occurring mouse model that was first identified in the 1960s. This model is characterized by a phenotype of dwarfism, which is caused by a mutation in the Prop1 gene. This mutation leads to a severe growth defect that results in proportionate dwarfism, with adult mice that are half the size of their control littermates. Additionally, Ames dwarf mice are sterile and hypothyroid. At the cellular level, Ames dwarf mice have an almost complete absence of pituitary somatotrophs, lactotrophs (PRL-producing cells), and thyrotrophs (TSH-producing cells). The absence of these three cell types causes a lack of GH, PRL, and TSH in the mice. The PROP1 gene encodes a transcription factor that contains a paired-like homeodomain. Ames dwarf mice have a serine-to-proline amino acid substitution in the DNA-binding domain of PROP1. This substitution renders the mutant PROP1 unable to bind DNA effectively; thus, the three pituitary cell types fail to differentiate and proliferate during development [108]. This mouse model is useful for understanding the GH-IGF-1 axis, as it has been shown that the lack of GH results in decreased IGF-1 production, leading to the dwarfism phenotype. In addition, Ames mice have been used to study the role of GH and IGF-1 in bone growth, metabolism, and longevity, as well as the effect of GH on brain development, behavior, and cognitive function. The Ames dwarf mouse has been shown to have an elevation in fat mass, making it a useful model for studying the effects of GH on obesity [109].
The Metallothionein-I Human Growth Hormone Transgenic Mouse Model (MT1-hGH)
In 1983, a collaborative project between the University of Washington and the University of Pennsylvania resulted in the creation of the first transgenic mouse model associated with an alteration in the GH-IGF-1 axis. The scientists fused the promoter, or regulatory region, of the mouse metallothionein-I (MT1) gene with the structural gene coding for human growth hormone (hGH). Microinjection of these fusion genes into fertilized eggs resulted in the generation of transgenic mice (MT1-hGH). These transgenic mice exhibited increased size compared with control mice, attributed to elevated serum levels of GH and a consequent elevation in IGF-1. Additionally, the transgenic mice showed alterations in pituitary function, specifically a dysfunction of the cells involved in the synthesis of GH. This model has been extensively utilized to study the role of the GH-IGF-1 axis in growth and development and has contributed to a deeper understanding of the mechanisms underlying growth and development, as well as the complex interactions between these hormones and their receptors. This mouse model has been instrumental in advancing research in the field and in the development of new therapies for growth disorders, and it has helped to illuminate several concepts that are still widely used in modern research today [110].
The Adult-Onset, Isolated, Growth Hormone Deficiency (AOiGHD) Mouse Model
The AOiGHD mouse model is a genetically engineered mouse model that is characterized by a deficiency of GH in adult mice. This model was developed by crossbreeding rat GH promoter-driven Cre recombinase mice (Cre) with inducible diphtheria toxin receptor mice (iDTR). The resulting Cre+/−, iDTR+/− offspring were then treated with diphtheria toxin to selectively destroy the somatotroph population of the anterior pituitary gland, leading to a reduction in circulating GH and IGF-1 levels. The main goal of the study was to investigate the hypothesis that the decline of GH levels observed with weight gain and normal aging may contribute to metabolic dysfunction. The study aimed to understand the effects of GH on fat accumulation, protein accretion, and insulin sensitivity under different feeding conditions. The results showed that AOiGHD mice had improved whole-body insulin sensitivity in both low-fat and high-fat-fed conditions and that these mice preferentially utilized carbohydrates for energy metabolism. However, in high-fat-fed AOiGHD mice, fat mass increased, hepatic lipids decreased, and glucose clearance and insulin output were impaired. These findings suggest that low GH in the context of excess caloric intake could contribute to the development of diabetes [104].
The GH −/− Mouse Model
GH −/− mice are characterized by a deficiency of GH. The method that was used to create these mice involved the removal of the entire gene coding region of the mouse GH genomic sequence using the VelociGene KOMP definitive null allele design, which involves replacing the removed sequence with a ZEN-UB1 reporter/selection cassette. These mice have a genetic deletion of the GH gene, resulting in a lack of GH production in the pituitary gland. GH −/− mice exhibit a phenotype of reduced body size, muscle mass, and bone density, as well as increased fat mass, similar to observations in human patients with GH deficiency. This mouse model is useful for studying the effects of GH deficiency in various physiological processes, as well as for developing therapies for GH deficiency in humans. Studies using GH −/− mice have been used to investigate the effects of GH on the aging-related decline in muscle mass, bone density, and metabolism. Additionally, this model has been employed to study the impact of GH deficiency on the brain and its effects on cognitive function and behavior [111].
The GHR −/− (Laron) Mouse Model
The Laron syndrome mouse model, created by a team of researchers led by Dr. Kopchick at Ohio University in 1997 [112], is considered one of the major milestones in enhancing our understanding of the role of GH in metabolic homeostasis and growth development. This model was created to mimic Laron syndrome in humans, which is due to mutations in the GH receptor (GHR) gene; the syndrome is unique in that naturally occurring GHR deficiency has only been reported in humans and not in any other mammals. This model presents severe postnatal growth retardation, proportionate dwarfism, decreased IGF-1 levels, and elevated serum GH concentrations. It has been extensively utilized in research to gain insight into the mechanisms of GH in growth and aging. Furthermore, it has won the Methuselah Mouse Prize, an award for the longest-lived mouse model (its lifespan was shown to extend to almost 5 years of age), making the Laron mouse a landmark contribution to aging research [113]. It also serves as a valuable model for understanding the pathogenesis of obesity and the role of the GH-IGF-1 axis in the regulation of energy metabolism.
The Adipocyte-Specific GHR Knockout (AdGHRKO) Mouse Model
Adipocyte-specific GHR KO (AdGHRKO) mice were recently created by the same group of scientists who created the Laron mouse to better understand the role of GH signaling directly in adipose tissue. Using the Cre/Lox strategy to specifically ablate GHR expression from adipose tissue, the authors showed that AdGHRKO mice have increased adiposity but appear healthy with enhanced insulin sensitivity. Additionally, the AdGHRKO mice had increased fat mass; reduced circulating levels of insulin, c-peptide, adiponectin, and resistin; and improved frailty scores, with increased grip strength at advanced ages in both sexes. The study found that disrupting the GHR gene in adipocytes improved insulin sensitivity at an advanced age and increased lifespan in male AdGHRKO mice [111]. Overall, the results indicate that removing GH's action, even in a single tissue, can have observable health benefits, promoting long-term health, reducing frailty, and increasing longevity. By specifically ablating GHR expression from adipose tissue, this mouse model can be utilized by researchers to study the specific effects of GH on energy metabolism and the development of obesity. The results obtained from this model could aid in identifying potential therapeutic targets for the treatment of obesity and related metabolic disorders.
The Somatotroph IGF-1R Knockout Mouse Model (SIGFRKO)
This mouse model was developed in 2010 by a team of scientists at Johns Hopkins University using the Cre/lox system to specifically delete the IGF-1R from somatotroph cells. The SIGFRKO mouse was used to study the role of IGF-1 in regulating the expression and release of GH. The SIGFRKO mouse showed increased GH expression and secretion, as well as increased serum IGF-1 levels. Additionally, there were compensatory changes in the expression of GHRH and SST, and the mice had normal linear growth in adulthood. Metabolic studies also revealed elevated metabolic activity associated with increased energy expenditure, with reduced total fat mass due to increased lipolytic activity. These findings support the notion of negative regulation of GH expression and release by IGF-1 and suggest that hypothalamic feedback plays a role in limiting GH release. The SIGFRKO mouse also serves as a valuable tool for understanding the mechanisms of IGF-1 regulation in the hypothalamic-pituitary axis as well as the compensatory mechanisms that mediate growth and metabolic function in mammals (see Figure 3) [18,40].
The Somatotroph GHRH Neurons IGF-1R Knockout (S-GIGFRKO) Mouse Model
Recently, our laboratory has generated a novel transgenic mouse model that is characterized by the selective deletion of the IGF-1R in GHRH neurons and somatotrophs. This model was designed to investigate the role of IGF-1R signaling in the regulation of GHRH-mediated GH production and growth. The S-GIGFRKO mice exhibited a modest increase in serum GH levels and GH gene mRNA expression, as well as a modest increase in serum IGF-1 levels (Figure 4) [5]. A gene expression analysis revealed that the deletion of IGF-1R resulted in an elevation of GHRH and SST in the hypothalamus, suggesting a compensatory mechanism. The S-GIGFRKO mice appeared to grow normally, but adult mice had a reduction in weight gain compared with control littermates. A body composition analysis showed a reduction in total fat mass but no changes in lean mass. A metabolic analysis revealed an elevation in the metabolic activity associated with increased energy expenditure. These findings provide new insights into the role of IGF-1R signaling in GH production and growth regulation and the potential use of this mouse model for further research on GH-related disorders. This unique mouse model presents a robust system not only for uncovering the functional significance of IGF-1 in somatotrophs and the hypothalamus but also for understanding the role of the IGF-1R-GHRH pathway in the regulation of body weight and energy balance [5,114]. Nevertheless, caution should be exercised in extrapolating the results of animal studies to humans, and further research is needed to determine the safety and efficacy of GH therapy in human populations (Table 2). Table 2. Summary of mouse models used to study the GH-IGF-1 axis and obesity.
Snell dwarf mouse. Characteristics: naturally occurring mouse model with dwarfism caused by mutations in the Pit-1 gene, which regulates the production of GH, TSH, and PRL; lack of Pit-1 results in decreased GH and IGF-1 production. Applications: a valuable tool for understanding the GH-IGF-1 axis; extensively utilized to investigate the role of GH and IGF-1 in various physiological processes, such as bone growth, metabolism, and longevity.

Ames dwarf mouse. Characteristics: naturally occurring mouse model with dwarfism caused by a mutation in the Prop1 gene, resulting in proportionate dwarfism with adult mice that are half the size of their control littermates; almost complete absence of pituitary somatotrophs, lactotrophs, and thyrotrophs. Applications: useful for understanding the GH-IGF-1 axis; used to study the role of GH and IGF-1 in bone growth, metabolism, and longevity, as well as the effect of GH on brain development, behavior, and cognitive function; shown to have an elevation in fat mass, making it a useful model for studying the effects of GH on obesity.

MT1-hGH transgenic mouse. Characteristics: generated by fusing the promoter or regulatory region of the mouse metallothionein-I (MT1) gene with the structural gene coding for human growth hormone (hGH); exhibits increased size compared with control mice due to elevated levels of GH and a subsequent elevation in IGF-1. Applications: extensively utilized to study the role of the GH-IGF-1 axis in growth, development, and metabolism; used to investigate the effects of alterations in GH and IGF-1 in various tissues and organs involved in energy metabolism, such as the liver, adipose tissue, and muscle.

AOiGHD mouse model. Characteristics: developed by crossbreeding rat GH promoter-driven Cre recombinase mice with inducible diphtheria toxin receptor (iDTR) mice to selectively destroy the somatotroph population of the anterior pituitary gland, leading to a reduction in circulating GH and IGF-1 levels. Applications: used in various studies to investigate the role of GH in metabolic regulation and to understand the mechanisms underlying metabolic disorders such as obesity, insulin resistance, and diabetes.

GH −/− mouse model. Characteristics: a genetically engineered mouse model that lacks GH production in the pituitary gland due to a genetic deletion of the GH gene; exhibits reduced body size, muscle mass, and bone density, as well as increased fat mass. Applications: useful for studying the effects of GH deficiency in various physiological processes and for developing therapies for GH deficiency in humans.

GHR −/− (Laron) mouse model. Characteristics: a mouse model with a targeted disruption of the gene encoding the GH receptor (GHR), resulting in a lack of functional GHR. Applications: useful in understanding the role of the GHR in various physiological processes such as growth, metabolism, and immune function; provides insights into the mechanisms of GHR signaling and its downstream effects on energy metabolism.

AdGHRKO mouse model. Characteristics: specifically designed to ablate GHR expression from adipose tissue, causing increased adiposity and fat mass; enhanced insulin sensitivity; reduced circulating levels of insulin, c-peptide, adiponectin, and resistin; improved frailty scores with increased grip strength at advanced ages in both sexes; and increased lifespan in male AdGHRKO mice. Applications: used to study the specific effects of GH on energy metabolism and the development of obesity, and to identify potential therapeutic targets for the treatment of obesity and related metabolic disorders.

Somatotroph IGF-1R knockout (SIGFRKO) mouse model. Characteristics: uses the Cre/lox system to specifically delete the IGF-1R from somatotroph cells; increased GH expression and secretion, increased serum IGF-1 levels, compensatory changes in the expression of GHRH and SST, normal linear growth in adulthood, and elevated metabolic activity and energy expenditure that reduce total fat mass through increased lipolytic activity. Applications: a valuable tool for understanding the mechanisms of IGF-1 regulation in the hypothalamic-pituitary axis and the compensatory mechanisms that mediate growth and metabolic function in mammals.

Somatotroph GHRH neurons IGF-1R knockout (S-GIGFRKO) mouse model. Characteristics: selective deletion of the IGF-1R in GHRH neurons and somatotrophs; modest increases in serum GH levels, GH gene mRNA expression, and serum IGF-1 levels; elevation of GHRH and SST in the hypothalamus; normal growth, but adult mice show reduced weight gain compared with control littermates; reduction in total fat mass with no changes in lean mass; elevated metabolic activity associated with increased energy expenditure. Applications: provides new insights into the role of IGF-1R signaling in GH production and growth regulation; can be used to understand the role of the IGF-1R-GHRH pathway in the regulation of body weight and energy balance.
The Effect of Obesity on GH and IGF-1 Production
Obesity is associated with a marked blunting of GH secretion, both spontaneous and evoked by provocative stimuli. This reduction in GH secretion is observed in response to traditional pharmacological stimuli acting in the hypothalamus, such as insulin-induced hypoglycemia, arginine, galanin, L-dopa, clonidine, and acute glucocorticoid administration, as well as in response to direct somatotroph stimulation by exogenous GHRH [115].
The impact of obesity on serum IGF-1 levels is a matter of controversy within the scientific community. While some studies have shown no alterations in IGF-1 levels in obesity, others have indicated a decrease in IGF-1 levels in the presence of obesity; others have also demonstrated an increase in IGF-1 levels in obese individuals [65,116]. These seemingly contradictory findings may be explained by the high levels of insulin that are present in obesity, which has been shown to increase IGF-1 production in the liver while also reducing the formation of IGF-binding protein 1. This increased availability of free and active IGF-1, sustained by high insulin levels, may explain the decrease in GH secretion through negative feedback mechanisms. Therefore, understanding the mechanisms underlying the altered regulation of GH secretion in obesity is an important area of research as it may have implications for the treatment of obesity and related metabolic disorders.
Clinical trials have investigated the potential use of GH and IGF-1 as interventions for obesity. For example, a meta-analysis of 24 studies involving almost 500 obese individuals found that GH treatment led to a decrease in fat mass of about 1 kg and an increase in lean body mass of about 2 kg over an average of 12 weeks. The outcome of this analysis suggests that treatment with recombinant human GH (rhGH) leads to a reduction in visceral fat and an increase in lean body mass in obese adults without causing weight loss. However, the treatment also leads to increases in fasting plasma glucose and insulin levels. The included studies used relatively high doses of rhGH, and further studies with longer durations and lower doses are needed to better understand the effects of rhGH therapy on obesity and its potential impact on cardiovascular health [117]. In all clinical studies that use rhGH as a therapeutic agent, caution is urged in assessing its therapeutic safety. In summary, more research is needed to fully understand the effects of GH therapy and its potential side effects.
Conclusions and Further Study
This review of the literature highlights the potential therapeutic benefits of targeting GH and IGF-1 in the management of obesity. The complex interplay between these hormones and growth, as well as their roles in regulating energy balance and body composition, suggests that modulating GH and IGF-1 levels may be a promising strategy for combating obesity and its associated comorbidities. However, caution should be exercised, as there are several limitations and challenges associated with their use. The safety and efficacy of GH and IGF-1 as treatments for obesity are not clear, and long-term studies have not been conducted. Furthermore, GH and IGF-1 are both hormones that are naturally present in the body, and their use as therapeutic agents can disrupt the body's homeostatic mechanisms, leading to untoward side effects. These side effects can include joint pain, carpal tunnel syndrome, and an increased risk of diabetes and cancer.
Moreover, it is important to note that animal models may not necessarily be directly applicable to humans, and more clinical trials are needed to fully evaluate the potential benefits and risks of GH and IGF-1 therapy in managing obesity. While the results obtained from animal studies provide valuable insights, caution should be exercised in extrapolating these findings to human populations. Human physiology and response to treatment can differ significantly from animal models, and therefore, it is essential to conduct well-designed clinical trials involving human subjects. These trials will not only help determine the safety and efficacy of GH and IGF-1 therapy in humans but also provide more accurate information on the potential benefits and risks associated with these interventions in the context of obesity management. Only through comprehensive research involving human populations can we confidently assess the feasibility and suitability of GH and IGF-1 therapies as effective strategies for combating obesity.
Therefore, while GH and IGF-1 show promise as therapeutic agents for obesity, more research is needed to fully understand their effects and potential side effects. Randomized, controlled trials are needed to confirm the efficacy of GH and IGF-1 targeted therapies in treating obesity and determine their long-term safety. Additionally, the high cost and complexity of administration present challenges for their practical application. It is important to approach the use of GH and IGF-1 in obesity management with caution, considering the potential risks and limitations that are associated with their use.
In conclusion, GH and IGF-1 may offer a promising avenue for the management of obesity, but caution should be exercised, and more research is needed to fully understand their effects, develop safe and effective treatment strategies, and validate their findings in human populations.
Funding: The work was supported by National Institute for Health Care Management Foundation (U01HD086838-01A1).
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "56ae20fad34fe1ced1e2274ac988062d94138c56",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms24119556",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "263960d4c23a82f84ceaef6811e45bddaa62fc34",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Angiosarcoma of the Descending Aorta, Diagnostic Difficulties
Introduction: Primary angiosarcomas of the aorta are rare and, because of their non-specific presentation, the initial diagnosis is often very difficult. Report: A 66 year old woman, initially suffering from night sweats and general malaise, is presented. A computerized tomography (CT) scan was performed, which showed a filling defect of the descending aorta. This defect later caused embolic occlusion of the celiac vessels. The patient underwent surgical resection of the filling defect of the descending aorta and an embolectomy of the celiac vessels. The defect was histopathologically diagnosed as an angiosarcoma. The clinical presentation, diagnostic pitfalls, histopathological diagnosis, and the therapeutic management are discussed. Discussion: In this case report, the importance of carefully diagnosing an angiosarcoma is highlighted, as the consequences could be rapid metastasization or embolization.
INTRODUCTION
Primary malignant tumors of the aorta, first described in 1873, 1 are extremely rare and exhibit considerable histological heterogeneity. 2 The symptoms and radiological appearance of these tumors are often non-specific, and the diagnosis is often established only after resection of the tumor. 3,4 The most common histological entities that have been described are sarcomas without further classification. 4 This is a case report of a patient who initially presented with night sweats and general malaise, whose CT scan showed a filling defect of the descending aorta, which, after resection, was diagnosed as an angiosarcoma.
CASE REPORT
A 66 year old woman, with no significant medical history, presented to her general practitioner with progressive general malaise that had lasted 6 months. She had intentionally lost 10 kilograms because she was overweight. For 1 week, she had suffered from fever, night sweats, and fatigue in the lower limbs. She had no history of tuberculosis and, although living in an endemic Q-fever area, she had had no contact with sheep or goats. After 2 weeks she was referred to a rheumatologist, who found an increased white cell count of 25.2 × 10⁹/L (normal range 4–11 × 10⁹/L), an increased C-reactive protein (CRP) level of 42 mg/L (normal range <10), and an increased erythrocyte sedimentation rate of 48 mm/hour (normal range <30 mm/hour). With a differential diagnosis of an unknown autoimmune disease she was treated with prednisone. As the symptoms persisted after 3 weeks, she was admitted to the department of internal medicine at a rural hospital. The initial differential diagnosis included an unknown generalized infection. Therefore, blood cultures were taken, which were and remained negative. The blood levels at that time were: an increased CRP of 106 mg/L, an increased white cell count of 20.6 × 10⁹/L, and negative tests for Q fever and lues. A computerized tomography (CT) scan showed a filling defect of the descending aorta and the coeliac trunk (Fig. 1), which was interpreted as a possibly infected thrombosis and was treated with gentamicin and vancomycin. To prevent further growth of a possible thrombus, the patient was treated with therapeutic Nadroparin (Aspen Pharma Trading, Dublin, Ireland). The CT findings prompted positron emission tomography (PET), which showed a hotspot at the filling defect of the descending aorta and a small hotspot in the right tibia, considered to be septic emboli. A magnetic resonance imaging (MRI) scan of the tibia was performed. The differential diagnosis of the lesion in the tibia was inflammation or a malignancy.
The patient was referred to the department of vascular surgery of a tertiary referral hospital, where the antibiotics were discontinued. On admission, no abdominal abnormalities were found on physical examination. The symptoms of the patient were persistent, but unchanged from the initial presentation. A multidisciplinary treatment team, including a cardiothoracic surgeon, a vascular surgeon, and a microbiologist, recommended a magnetic resonance angiogram (MRA). This showed a non-enhancing lesion in the descending aorta without vessel wall involvement, concluding that the filling defect of the descending aorta was less suspicious of malignancy. Seven days after the referral, the patient developed acute abdominal pain. A CT scan was performed, which showed complete occlusion of the celiac trunk as well as infarctions in the left lobe of the liver, the spleen, and both kidneys. To prevent further embolization and to limit any ischemia-reperfusion injury, urgent surgical intervention was performed. Because the MRA showed a non-enhancing lesion without vessel wall involvement, the defect was considered to be resectable; therefore, endovascular recanalization and stenting were not considered before surgery.
A left thoracophrenic-laparotomy was performed using left heart bypass with a Biomedicus pump (Medtronic Inc., Minneapolis, MN, USA). The distal part of the thoracoabdominal aorta, from the 8th thoracic vertebra down to the celiac trunk, was resected (Fig. 2A). An embolectomy of the celiac trunk was performed. An interposition graft (Intergard prosthesis ø 22 mm, Maquet Getinge Group, Intervascular, Athelia, La Ciotat, France) was inserted between the two ends of the aorta with an oblique anastomosis at the celiac trunk (Fig. 2B). The tumor appeared to have spread beyond the vascular wall into some of the intercostal arteries. After 4 days on intensive care, the patient returned to the ward, recovered without complications, and was discharged in good health 10 days after the operation. The total hospital stay was approximately 5 weeks.
Histological examination revealed an epithelioid angiosarcoma. The tumor was present at the proximal resection edge and the embolus was shown to be a malignant tumor of the same kind. The tumor was staged pT4N0M1. Treatment was taken over by an oncologist, who conducted new PET, CT, and MRI scans to determine the course of treatment. In the PET/CT, a few weeks after the surgery, new abnormalities were found in the right femur and os ilium, suspicious of metastasis. The patient underwent palliative treatment.
DISCUSSION
Primary angiosarcomas originate from the heart, aorta, or the great vessels. About 140 cases of primary malignant neoplasia of the aorta have been described in the literature, with the first being documented by Brodowski in 1873. 1,3 Primary malignant neoplasms of the aorta can be classified based on where they appear to arise from: the intima, media, or adventitia. The most common site is the intima, where a polypoid tumor forms with an intraluminal growth pattern, causing obstruction and embolization. Intimal tumors can also grow longitudinally with thickening of the arterial wall. Mural tumors often present later and with extravascular growth; they originate from the media or adventitia. 3 Other neoplasms arising from the adventitia are tumors with extra-arterial growth and arterial obstruction, but these tumors only present in advanced stages. The malignant neoplasms of the aorta that arise from the endothelium are the angiosarcoma and the endotheliosarcoma. The leiomyosarcoma arises from the smooth muscle cells of the media, and the fibrosarcoma arises from the adventitial fibroblasts. 5 These malignancies are very aggressive, with both local and distant recurrence. 6 The most common location of primary malignant neoplasia of the aorta is the descending thoracic aorta. The second most common location is the visceral abdominal aorta, followed by the infrarenal aorta. The treatment of tumors arising from the visceral aorta is most difficult because of the early involvement of the visceral arteries.
Herein, a case is reported of an angiosarcoma of the descending aorta in a 66 year old woman. Although the reported observations involve a female patient, there is a male predominance for these neoplasms. 3 Primary tumors of the veins are more often seen in women.
In this case, the woman suffered from non-specific symptoms. The symptoms that are usually mentioned in patients with malignant neoplasia of the aorta mostly originate from local occlusion of the aorta, such as symptoms of abdominal pain, lower extremity claudication, or secondary hypertension. The clinical presentation could also be related to embolization of the neoplasm, with metastases in the bones and skin, or to mesenteric infarction. The extension of the tumor in the visceral vessels could cause vascular insufficiency in the respective distribution. 4,5 Long-term survival is uncertain. Complete resection is very important, because the prognosis of patients with metastasis is extremely poor, as chemotherapy and radiotherapy are not effective. 7 The 5 year survival is about 8%, with a mean survival of 14 months. The survival time of the patient in this case report is unknown because she is alive at present. As many as 80% of the patients have metastatic disease. 5 In the case presented, the angiosarcoma of the descending aorta was not identified before embolization and metastases had developed.
In conclusion, angiosarcomas of the descending aorta are very rare, but should be suspected in patients in whom a filling defect of the descending aorta is seen on a CT scan. The diagnosis should be made before embolization of the process occurs, preventing metastasization and occlusion of the vessels. Symptoms usually present late. This case highlights the importance of careful consideration of the diagnosis when a filling defect in the descending aorta has been detected. If the present case is critically reviewed, it can be concluded that a pre-operative diagnostic work-up of the filling defect of the descending aorta and the lesion in the tibia should have been performed.
"year": 2016,
"sha1": "a67a1e29da092de4cfdd124b8316b66302bdc764",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ejvssr.2016.04.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a67a1e29da092de4cfdd124b8316b66302bdc764",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Patients with CML in the lymphoid blastic phase have inferior response to anti-CD19 CAR T-cell therapy compared to de novo Ph-positive B cell acute lymphoblastic leukemia
Philadelphia-positive acute B cell lymphoblastic leukemia (Ph-positive B-ALL) is the most common type of adult B-ALL. Although the advent of tyrosine kinase inhibitors (TKIs) with conventional treatment strategies has improved the prognosis, the relapse/refractory (R/R) status is observed in certain patients with Ph-positive B-ALL. Chronic myeloid leukemia in the lymphoid blast phase (CML-LBP) has similar immunophenotype and cytogenetic characteristics with Ph-positive ALL. 1,2 Anti-CD19 chimeric antigen receptor (CAR) T-cell therapy has achieved great success in treating R/R B-ALL. 3-6 Currently, there is a lack of data comparing the efficacy of anti-CD19 CAR T-cell therapy between de novo Ph-positive B-ALL and CML-LBP. 7,8 Here, we performed a post hoc analysis of study NCT03919240, in which all patients received anti-CD19 CAR T-cell therapy between January 2017 and May 2022 at the First Affiliated Hospital of Soochow University. Adult patients with relapsed or measurable residual disease (MRD) positive
Statistical analyses were performed using GraphPad Prism 9.0.0 (GraphPad Software Inc.) and R software, version 4.2.2. Intergroup comparisons were performed using the χ² (and Fisher's exact) test. The probabilities of duration of response (DOR), cumulative incidence rate (CIR), event-free survival (EFS), and overall survival (OS) were estimated by means of the Kaplan-Meier method and were compared with the use of the log-rank test. Significance was set at p < 0.05 for all tests.
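For readers who want to reproduce this kind of analysis, the sketch below shows the same workflow in outline. It is illustrative only: the 2×2 CHR table uses the counts reported later in this letter (4/9 vs. 21/25), the per-patient EFS times are hypothetical placeholders, and Python's scipy/lifelines libraries stand in for the GraphPad Prism and R tools actually used.

```python
# Minimal sketch of the statistical workflow described above. The CHR counts
# are taken from this letter; the EFS times below are hypothetical placeholders.
from scipy.stats import fisher_exact
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Day 28 CHR: rows = cohorts, columns = [responders, non-responders]
odds_ratio, p_chr = fisher_exact([[4, 5], [21, 4]])
print(f"CHR comparison, Fisher's exact p = {p_chr:.3f}")

# Hypothetical event-free survival data (months); event = 1 means relapse/death
t_cml, e_cml = [0.6, 2.3, 2.7, 3.3, 11.6, 27.2], [1, 1, 1, 1, 1, 0]
t_all, e_all = [0.3, 1.3, 6.6, 8.9, 30.0, 79.2], [1, 1, 1, 1, 0, 0]

# Kaplan-Meier estimate of the EFS curve for one cohort
kmf = KaplanMeierFitter()
kmf.fit(t_cml, e_cml, label="CML-LBP")
print(f"median EFS (CML-LBP) = {kmf.median_survival_time_:.1f} months")

# Log-rank test between the two EFS curves, significant at p < 0.05
res = logrank_test(t_cml, t_all, event_observed_A=e_cml, event_observed_B=e_all)
print(f"log-rank p = {res.p_value:.3f}")
```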
The baseline characteristics, disease status, disease burden before CAR T-cell therapy, and the grades of cytokine release syndrome and immune effector cell-associated neurotoxicity syndrome of patients are shown in Table 1; there was no significance between the two cohorts (p > 0.05). 5/9 (56%) of patients with CML-LBP and 11/25 (44%) of patients with de novo Ph-positive B-ALL had BM blasts higher than 5% prior to CAR T-cell infusion. Moreover, 2/9 (22%) of CML-LBP patients had a history of isolated central nervous system leukemia (CNSL). No patients had received other anti-CD19 immunotherapy prior to CAR T-cell treatment. Of the 9 patients with CML-LBP, 7/9 (78%) had ABL1 kinase domain mutations, and 8/9 (89%) were treated with second- or third-generation TKIs. The clinical characteristics of the nine patients with CML-LBP are shown in Table S1.
At the Day 28 evaluation post-CAR T-cell therapy, the complete hematologic remission (CHR) rate was significantly lower in patients with CML-LBP than in those with de novo Ph-positive B-ALL (44% vs. 84%, p = 0.034) (Figure 1A). Two CML-LBP patients with isolated central nervous system involvement showed no response to CAR-T therapy and succumbed to CNSL. Although there was no statistical significance, MRD-negative complete remission (MRD-CR) in patients with CML-LBP was also lower than that in patients with de novo Ph-positive B-ALL (25% vs. 43%, p = 0.627) (Figure 1B). Similarly, MMR in patients with CML-LBP was lower than that in patients with de novo Ph-positive B-ALL (50% vs. 86%, p = 0.166), although with no statistical significance (Figure 1C). Relapse after CAR T-cell therapy has become a key challenging issue to address in patients with B-ALL. 10 Of the responding patients, 3/4 (75%) of CML-LBP patients relapsed at 2.7, 3.3, and 11.6 months, and 8/21 (38%) of patients with de novo Ph-positive B-ALL relapsed in a median time of 6.6 months (range, 1.3-8.9 months) after CAR T-cell therapy. The 2-year DOR of the two cohorts was 25% (1/4) and 43% (9/21), respectively (p = 0.240) (Figure 1D). The 2-year CIR of the two cohorts was 75% (3/4) and 52% (11/21), respectively (p = 0.110) (Figure 1E). The median EFS was 2.3 months (range, 0.6-27.2 months) in patients with CML-LBP and 9.8 months (range, 0.3-9.2 months) in de novo Ph-positive B-ALL, and patients with CML-LBP had a significantly lower 4-year EFS than those with de novo Ph-positive ALL (p = 0.017) (Figure 1F). The median OS was 14 months (range, 0.6-63.8 months) in patients with CML-LBP and 30 months (range, 0.5-79.2 months) in de novo Ph-positive B-ALL, respectively. The 5-year OS was comparable between the two cohorts (p = 0.170) (Figure 1G).
A worse response to anti-CD19 CAR T-cell therapy is independently associated with worse survival in B-ALL patients. 11 In accordance with this report, our data showed that patients with CML-LBP had poorer CHR and worse EFS after anti-CD19 CAR T-cell therapy compared with de novo Ph-positive B-ALL patients, especially those with BM blasts higher than 5% or extramedullary involvement. In an ongoing phase 2 study, 5/6 (83%) of CML-LBP patients achieved response with ponatinib in combination with blinatumomab, but only 2/6 (33%) patients showed molecularly undetectable leukemia. 12 Therefore, the efficacy of CAR T-cells versus ponatinib plus blinatumomab in CML-LBP patients needs to be explored in more patients. Some reports have demonstrated that TKIs and anti-CD19 CAR T-cells could not eliminate the CML stem cell population. 13,14 Therefore, an allogeneic hematopoietic stem cell transplant is necessary for patients with CML-LBP who achieve MMR with anti-CD19 CAR T-cell treatment.
In summary, our data suggest that patients with CML-LBP had an inferior response and EFS to anti-CD19 CAR T-cell therapy compared to those with de novo Ph-positive B-ALL, implying that other immunotherapies are needed for CML-LBP patients. Due to the limited number of CML-LBP patients in this study, our findings need to be validated in multicenter, prospective studies. In addition, the mechanisms underlying the poor response of CML-LBP to anti-CD19 CAR T-cell therapy deserve further investigation.
Table 1. The baseline characteristics, disease status, disease burden before CAR T-cell therapy, and the grades of CRS and ICANS of the two cohorts.
Item CML-LBP De novo Ph+ B-ALL p Value
No. of patients 9 25
"year": 2024,
"sha1": "6a7b62c0fad0f0aa3d9ff3dd89ed3f6a1ef58fa8",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hem3.49",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a7b62c0fad0f0aa3d9ff3dd89ed3f6a1ef58fa8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
SAFETY INSPECTION ON LEVEL CROSSING JPL 727 KM 537 + 453 PIRAK-PATHUKAN ROAD, SLEMAN, YOGYAKARTA
Level crossing (LC) safety inspection between a highway and a railroad on Pirak-Pathukan Road, Sleman, Yogyakarta is necessary because this LC is located near various community centers, has a high traffic volume, intersects with railway double tracks, and has an intersecting angle that is not perpendicular. This study aims to evaluate the LC technical condition, analyse delay time and vehicle queue length, and evaluate pavement structure condition. The research results indicate: 1) the LC technical conditions do not meet the requirements of Regulation of Director General of Land Transport No. 770 Year 2005; 2) the longest duration of LC gate closure occurred on Sunday at 15:05, lasting 360 seconds, the highest traffic flow occurred on Monday (from the South side), equal to 1443 skr/day, and the longest delay time occurred on Sunday at 15:05, lasting 498 seconds; 3) the pavement condition index (PCI) value is 82%, which is classified as very good.
INTRODUCTION
According to Undang-Undang No. 22 Year 2009, traffic safety is a condition in which a person is protected from the risk of accidents while in traffic, whether caused by humans, vehicles, and/or the environment. Trains are one of the transportation modes that provide several benefits to the community, because the travel cost is relatively cheap, they can transport many people or goods according to their capacity, and they are time-efficient, safe, and comfortable (Utomo, 2013). According to Hasan (2009), the factors affecting accidents at highway and railroad crossings are vehicles, drivers, natural and weather conditions, signs and markings, level crossing geometric design, and road pavement conditions. A safety inspection of a level crossing is a systematic examination of the road and railway track at a level crossing to identify hazards, errors, and deficiencies that can cause accidents. Inspection activities are usually conducted by PT. Kereta Api Indonesia or the Ministry of Transportation. The city of Yogyakarta has a number of level crossings, one of which is located on Pirak-Pathukan Road, Sleman, Yogyakarta. This level crossing is located close to various community activity centers, such as Gamping market, Gamping sub-district office, Godean sub-district office, PKU (Public Health Center) of Muhammadiyah Gamping, Tamantirto village hall, Mejing II Primary School, State Junior High School 1 Kasihan, State High School 1 Godean, and Universitas Muhammadiyah Yogyakarta. In addition, this location is adjacent to the national roads of Wates Road on the South side and Godean Road on the North side. The purpose of this study is to inspect the safety of the level crossing on Pirak-Pathukan Road, Sleman, Yogyakarta at JPL 727 KM 537 + 453. This study aims to evaluate the technical condition of the level crossing, analyse the delay time and length of vehicle queues occurring as the impact of level crossing gate closure, and evaluate the pavement structure condition using the Pavement Condition Index (PCI) method. Secondary data in the form of train departure and arrival schedule information and the list of level crossings in the Special Region of Yogyakarta were obtained from PT. KAI DAOP VI Yogyakarta.
LEVEL CROSSING
Based on the Regulation of Director General of Land Transport No. 770 Year 2005 on technical guidelines for level crossings between roads and railway tracks, the requirements for the construction of a level crossing include: 1) The road surface is not higher or lower than the rail head (0.5 cm tolerance).
2) There is a 60 cm long flat surface measured from the outer side of the rail track.
3) The maximum gradient for vehicles passing (calculated from the highest point on the rail head) is 2% measured from the outer side of the flat surface, and 10% for the next 10 meters. 4) The maximum width of a level crossing for one line is 7 meters. 5) The angle of intersection between the rail track and the road is at least 90 degrees, and the length of the straight road must be at least 150 meters from the rail track. 6) It must be equipped with an opponent rail or other construction to ensure a groove for the train wheels. 7) A road segment that can cross a railroad at grade has the following requirements: Class III; at least 2 lanes and 2 directions; not on a curve; vertical alignment less than 5% from the rail track outer point; fulfils the visibility requirement; and in accordance with the General Plan of Spatial Planning (RUTR).
DELAYS
According to the Indonesian Road Capacity Guidance (PKJI, 2014), delay calculations include traffic delays and geometric delays. At a level crossing, the delay consists of the stopped delay as the geometric delay and the congestion delay as the traffic delay. Systematically, this can be expressed as:

D = DT + DG (2)

where DT is the traffic delay and DG is the geometric delay.
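As a minimal illustration of Equation (2), the snippet below simply sums the two delay components; the sample values are hypothetical and are not survey results from this study.

```python
# Equation (2): total delay = traffic delay + geometric delay (seconds).

def total_delay(traffic_delay_s: float, geometric_delay_s: float) -> float:
    return traffic_delay_s + geometric_delay_s

# Hypothetical components for one gate closure:
print(total_delay(traffic_delay_s=250.0, geometric_delay_s=40.0))  # 290.0 s
```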
QUEUE LENGTH
According to the Indonesian Road Capacity Guidance (PKJI, 2014), the queue length is the line of vehicles queuing along the level crossing approach, expressed in meters. The queue length is measured from the moment the level crossing gate is closed until the gate is opened.
PAVEMENT CONDITION INDEX (PCI) METHOD
According to Shahin (1994, in Hadiyatmo, 2007), the assessment of pavement structure condition by the Pavement Condition Index (PCI) method is based on the type of damage and the level of damage, and it can be used as a reference in maintenance efforts.
The stages of the pavement condition assessment calculation are as follows:
Density
Density is the percentage ratio of the damaged area (or length) of one damage type to the total area of the segment unit, measured in square meters or linear meters. Density is calculated with the following formulas:

Density = (Ad / As) × 100% (3) or Density = (Ld / As) × 100% (4)

where: Ad = total damaged area (m2); Ld = total damaged length (m); As = total area of the segment unit (m2).
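The two density formulas translate directly into code, as in the sketch below; only the formulas themselves come from the text, and the example segment-unit area is hypothetical.

```python
# Equations (3) and (4): damage density as a percentage of the segment-unit
# area, computed from either the damaged area (m^2) or the damaged length (m).

def density_by_area(ad_m2: float, as_m2: float) -> float:
    return ad_m2 / as_m2 * 100.0      # Eq. (3)

def density_by_length(ld_m: float, as_m2: float) -> float:
    return ld_m / as_m2 * 100.0       # Eq. (4)

# Hypothetical 1400 m^2 segment unit with 130.6 m^2 of patching:
print(round(density_by_area(130.6, 1400.0), 2))   # 9.33 (%)
```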
Deduct Value
Deduct Value is a reduction value for each type of damage, obtained from the chart that relates density to deduct value.
Total Deduct Value (TDV)
Total Deduct Value is the total of deduct value for each type of damage and level of damage in a segment unit.
Corrected Deduct Value (CDV)
The Corrected Deduct Value is obtained from the chart relating TDV to CDV, by selecting the curve corresponding to the number of deduct values greater than 2.
Pavement Condition Index (PCI)
Once the CDV is known, the PCI value for each research unit or segment can be calculated with the following equation:

PCIs = 100 − CDV (5)

where: PCIs = PCI for each segment unit; CDV = CDV for each segment unit.
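The full assessment pipeline (density, deduct value, TDV, CDV, PCI) can be sketched as below. The deduct-value and CDV relationships are published as charts in Shahin's method and cannot be derived from the text alone, so they are stubbed here with hypothetical lookup functions; the resulting numbers are purely illustrative.

```python
# Illustrative PCI pipeline for one segment unit. The chart lookups are
# hypothetical stand-ins for Shahin's published deduct-value and CDV curves.

def deduct_value(damage_type: str, severity: str, density_pct: float) -> float:
    # Stub: in practice, read the value off the chart for this damage type.
    chart = {("patching", "low"): 0.9, ("depression", "low"): 1.5}
    return chart.get((damage_type, severity), 1.0) * density_pct

def corrected_deduct_value(deduct_values: list[float]) -> float:
    # Stub for the TDV -> CDV chart; uses the TDV itself as a placeholder.
    return sum(deduct_values)

def pci(deduct_values: list[float]) -> float:
    return 100.0 - corrected_deduct_value(deduct_values)   # Eq. (5)

dvs = [deduct_value("patching", "low", 9.33),
       deduct_value("depression", "low", 0.8)]
print(round(pci(dvs), 1))   # ~90.4 with these stub curves
```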
Figure 1 Research sites
The primary data were obtained by conducting the following surveys:
Infrastructure Condition
The survey was conducted by direct observation along the highway segment and railway line, examining the signs, markings and traffic signal lights.
Level Crossing Geometric
The geometric survey was carried out by using the Garmin 76csx GPS tool to obtain the intersection angle of level crossing.
Crossing Door Closure Duration
The survey was conducted to determine the duration of level crossing gate closures.
Vehicle Time-Delay
The vehicle time-delay survey was conducted to obtain the time it takes for a vehicle to pass through the disturbance and obstruction at the level crossing.
Vehicle Queue Length
The queue length survey is intended to obtain the queue length that occurs during a single closure of the crossing gate.
Pavement Structure Deterioration
The survey of road pavement structural damage is done by direct observation along the highway under review.
IMPACT ANALYSIS OF LEVEL CROSSING CLOSURE
The steps required to obtain the delay duration and vehicle queue length due to the closure of the level crossing gate are as follows:
Traffic Flow
The traffic flow survey data at the level crossing are shown in Figure 3.
Delay Time and Vehicle Queue Length
The queue length varies on each approach lane of the level crossing for each gate closure. The delay time data can be seen in Figure 4. The survey results show that the longest delay time occurred at 17:00, lasting 393 seconds, which caused queue lengths at that hour of 31 meters on the North side and 125 meters on the South side of the level crossing.
ANALYSIS OF PAVEMENT STRUCTURE CONDITION
Based on a survey conducted on the road segment (200 meters to the North and 200 meters to the South) of level crossing JPL 727 STA. 537 + 453, road damage was observed from STA. 0 + 125 up to STA. 0 + 200. This damage was analysed using the PCI method; the results of the pavement condition analysis can be seen in Table 3 below. Applying Equation (5), PCI = 100 − 18 = 82. From the PCI values of each segment unit, the overall PCI value of this road segment is summarized in the following figure (Figure 5). Regarding the technical requirements, the level crossing does not meet the Regulation of Director General of Land Transport No. 770 Year 2005 on technical guidelines for level crossings between road and railway track, because the passing trains are often late, the time interval between trains during peak hours is less than six minutes, the number of passing trains exceeds 50 trains/day, the rail head and the road pavement are at different elevations, the distance between consecutive level crossings is less than 800 meters, the average daily traffic is 2956 vehicles/day, signs and markings are incomplete, and the intersection angle between the road and the rail track is not perpendicular, all of which can negatively affect both the vehicle drivers and the train drivers passing this level crossing. The pavement condition index (PCI) is 82; therefore, the road pavement is in very good condition.
Notation: Q = traffic flow (skr/hour); QLV = light vehicle flow; QHV = heavy vehicle flow; QMC = motorcycle flow; ekr = light vehicle equivalent value.
Duration
A survey of level crossing gate closure duration was conducted to find the variation in closure duration caused by passing trains. The gate closure duration data for the two observation days are shown in Figure 2 below.
Figure 2 Level Crossing Gate Closure Duration
Figure 5 PCI Value Chart
2. Several parameters were analysed as impacts of the level crossing gate closure: a. The longest closure duration of the level crossing gate on Sunday, March 19, 2017 occurred at 15:05 and lasted 360 seconds, while on Monday, March 20, 2017 it occurred at 15:00 and lasted 245 seconds. b. The highest traffic flow on Sunday, March 19, 2017 came from the North side, equal to 1932 vehicles/day or 969 skr/day; the highest traffic flow on Monday, March 20, 2017 came from the South side, equal to 2956 vehicles/day or 1443 skr/day. c. The longest delay time on Sunday, March 19, 2017 occurred at 15:05 and lasted 498 seconds, with queue lengths of 65 meters on the North side and 78 meters on the South side. The longest delay time on Monday, March 20, 2017 occurred at 17:00 and lasted 393 seconds, with queue lengths of 31 meters on the North side and 125 meters on the South side. 3. There are 4 types of damage on the road segment JPL 727 KM 537 + 453 along the 200 meters in the North-South direction: patching (9.33%), bleeding (0.22%), depression (0.8%), and railroad crossing (4.8%). The pavement condition value is PCI(S) = 82 (very good). The comparison between the parameters in the Regulation of Director General of Land Transport No. 770 Year 2005 on technical guidelines for level crossings between road and railway track and the existing technical condition of the level crossing on Pirak-Pathukan Road, Sleman, Yogyakarta (JPL 727 KM 537 + 453) can be seen in Table 2 below. | 2018-12-05T14:27:39.538Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "8c065bf282316d36a84bfde1f45c27a36b71035b",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/40/matecconf_istsdc2017_04005.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8c065bf282316d36a84bfde1f45c27a36b71035b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
233237434 | pes2o/s2orc | v3-fos-license | Long-term effects (> 24 months) of multiple lifestyle intervention on major cardiovascular risk factors among high-risk subjects: a meta-analysis
Background The evidence of the long-term effects of multiple lifestyle intervention on cardiovascular risk is uncertain. We aimed to summarize the evidence from randomized clinical trials examining the efficacy of lifestyle intervention on major cardiovascular risk factors in subjects at high cardiovascular risk. Methods Eligible trials investigated the impact of lifestyle intervention versus usual care with minimum 24 months follow-up, reporting more than one major cardiovascular risk factor. A literature search updated April 15, 2020 identified 12 eligible studies. The results from individual trials were combined, using fixed and random effect models, using the standardized mean difference (SMD) to estimate effect sizes. Small-study effect was evaluated, and heterogeneity between studies examined, by subgroup and meta-regression analyses, considering patient- and study-level variables. Results Small-study effect was not identified. Lifestyle intervention reduced systolic blood pressure modestly with an estimated SMD of − 0.13, 95% confidence interval (CI): − 0.21 to − 0.04, with moderate heterogeneity (I2 = 59%), corresponding to a mean difference of approximately 2 mmHg (MD = − 1.86, 95% CI − 3.14 to − 0.57, p = 0.0046). This effect disappeared in the subgroup of trials judged at low risk of bias (SMD = 0.02, 95% CI − 0.08 to 0.11). For the outcome total cholesterol SMD was − 0.06, 95% CI − 0.13 to 0.00, with no heterogeneity (I2 = 0%), indicating no effect of the intervention. Conclusion Lifestyle intervention resulted in only a modest effect on systolic blood pressure and no effect on total cholesterol after 24 months. Further lifestyle trials should consider the challenge of maintaining larger long-term benefits to ensure impact on cardiovascular outcomes. Supplementary Information The online version contains supplementary material available at 10.1186/s12872-021-01989-5.
Introduction
Despite the decreases in cardiovascular mortality in recent decades, cardiovascular diseases (CVD) are still a leading cause of premature mortality [1,2]. The risk of CVD strongly relates to modifiable risk factors [3] and a dominant part (50-70%) of the improvement in cardiovascular mortality can be ascribed to risk factor improvements in the population, while approximately 20-40% can be ascribed to better treatments [4,5].
With convincing evidence relating risk factor levels to CVD morbidity and mortality, most current CVD prevention guidelines include lifestyle intervention as a key element [6][7][8][9][10]. Despite the broad recommendation of lifestyle advice, the long-term effects of multiple lifestyle intervention on cardiovascular risk factors appear sparsely documented.
Systematic reviews and meta-analyses show that lifestyle intervention does result in small reductions in risk factors including blood pressure, cholesterol and smoking when evaluated typically after 3-12 months [11][12][13]. Nevertheless, studies on lifestyle intervention have not been able to demonstrate a clear impact on coronary heart disease mortality or morbidity [11,12]. This could be because the risk factor changes observed in studies of short duration are not maintained in the long term [11].
Most lifestyle studies report the effects of lifestyle intervention after 3-18 months, but few studies evaluate the effects beyond this time range. Since improvements in risk factor levels must be maintained over time, i.e. several years, to have an impact on cardiovascular events, it is of interest to elucidate the long-term effects of lifestyle intervention. The aim of this meta-analysis was to assess the long-term effects (i.e. after 24 months) of multiple lifestyle intervention on major cardiovascular risk factors i.e. total cholesterol (TC), systolic blood pressure (SBP) and smoking habits, in subjects with elevated cardiovascular risk from various causes.
Methods
The study was conducted and reported according to the PRISMA guidelines for meta-analyses and systematic reviews [14]. The protocol was published in the PROSPERO register (https://www.crd.york.ac.uk/prospero/), registration number CRD42018088783.
Eligibility criteria
We included randomized controlled clinical trials (RCTs) of cardiovascular primary prevention with a follow-up period of at least 24 months. To ensure that the findings would be relevant for the present situation, only studies published after 1990 were considered. Participants included had to be individually randomized.
Patients
Eligible studies included patients ≥ 40 years without known CVD, but with at least one cardiovascular risk criterion: hyperlipidemia, hypertension, cigarette smoking, obesity, inactivity, impaired glucose tolerance, the metabolic syndrome, or diabetes mellitus. Studies that included participants with diabetes mellitus were included only if a minority (< 50%) of the participants had diabetes and if the interventions were directed at reducing cardiovascular risk and not primarily blood glucose levels. Studies were excluded if inclusion required the presence of a specific medical condition.
Intervention
The intervention should be a health promotion activity that aimed to reduce total cardiovascular risk; i.e. to reduce more than one cardiovascular risk factor, through behavioral change, primarily related to diet and/or exercise; counseling or educational interventions, and with, or without, stable background pharmacological treatments.
Comparison
The studies should have a control group receiving usual care.
Outcome variables
Studies had to report the most important cardiovascular risk factors included in major algorithms for assessing total cardiovascular risk; i.e. SBP, TC and, if available, smoking habits.
Search strategy
A qualified medical librarian at the Medical Library, Oslo University Hospital, was consulted. RCTs published until April 15, 2020 were searched in PubMed, Embase and Cochrane Central Register of Controlled Trials. There were no language or date restrictions. Additional searches were also conducted in Cochrane Database of Systematic Reviews, UpToDate, NICE, and Prospero for ongoing systematic reviews.
In PubMed, Medical Subject Headings (MeSH Major Topic) and words in title were searched alone, or in combination, including lifestyle, primary prevention, combined modality therapy, risk reduction behavior, smoking cessation, diet, exercise, cardiovascular diseases, cardiovascular risk, hypertension, dyslipidemias and hypercholesterolemia. A more restrictive search strategy, including terms expressing lifestyle and cardiovascular risk, was then performed in Embase and Cochrane Central Register of Controlled Trials.
In addition, we manually screened reference lists of eligible papers and relevant systematic reviews.
Study selection
Two investigators (HB, TOK) independently evaluated studies for possible inclusion. Non-relevant studies were excluded based on title and abstract. The remaining trials were evaluated in full text. Disagreements were resolved by discussion and subsequent consensus.
Data abstraction
Three reviewers (HB, TOK, IS) independently extracted information from each included trial using a pre-made data extraction form. Country of study origin, number of participants in the intervention and comparison groups, baseline frequency of males and current smokers, mean age of participants, body mass index (BMI), SBP, and TC was extracted.
Endpoints variables
Regarding the two main outcomes considered, change in SBP and TC from baseline to follow-up, we extracted the mean difference and the standard deviation of the mean difference in the intervention and comparison groups. For change in smoking habits, we registered the number and frequency of smokers at baseline and follow-up in each trial arm. Investigators were contacted for additional data when necessary.
A secondary outcome was change in estimated total cardiovascular risk from applied algorithms (i.e. Framingham, PROCAM). In interpreting the available data disagreements were resolved by discussion among the three reviewers (HMB, TOK, IS) and subsequent consensus.
Risk of bias in individual studies
Three reviewers (HMB, TOK, IS) independently assessed potential sources of bias specific to RCTs using the Cochrane Collaboration's tool [15]. Trials were classified as having an overall low risk of bias when the following core domains were judged at low risk of bias: concealment of randomization, blinding of the outcome assessor, and intention-to-treat analyses.
Statistical pooling
In order to calculate an overall effect of the intervention, the total standardized mean difference (SMD) with 95% confidence interval (CI) was estimated (Cohen's method). If the value of zero is not included in the 95% CI, the SMD is statistically significant at the 5% level (p < 0.05). The recommended interpretation is that a value of 0.2 indicates a small effect, 0.5 a medium effect, and ≥ 0.8 a large effect. Fixed and random effects model analyses were considered, and in the presence of heterogeneity between trials we used the DerSimonian and Laird method [16].
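For illustration, the standardized mean difference for a single trial can be computed from summary statistics as below (Cohen's d with a pooled standard deviation); the trial values shown are hypothetical, not data from the included studies.

```python
# Cohen's d for one trial: difference in mean change between intervention
# and comparison groups, divided by the pooled standard deviation.
import math

def smd(mean_i, sd_i, n_i, mean_c, sd_c, n_c):
    pooled_sd = math.sqrt(((n_i - 1) * sd_i**2 + (n_c - 1) * sd_c**2)
                          / (n_i + n_c - 2))
    return (mean_i - mean_c) / pooled_sd

# Hypothetical changes in SBP (mmHg):
print(round(smd(-5.3, 14.0, 250, -3.0, 14.5, 245), 3))   # about -0.16
```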
Sources of heterogeneity, evaluation and quantification
Statistical heterogeneity among studies was assessed with Cochran's Q test. The magnitude of heterogeneity was evaluated by the I2 statistic, which describes the proportion of total variation due to heterogeneity rather than chance [17]. Potential sources of heterogeneity were investigated first by subgroup analyses. We stratified our data according to type of intervention (physical activity, diet, and both), and the following study characteristics: concealment of randomization, blinding of the endpoint assessors to allocated treatment and analysis according to an intention-to-treat strategy. We extended the analyses with a random-effects meta-regression where the outcome variable was the observed SMD from every study, indicating treatment effect, and the covariates were the different patient- and study-level characteristics. A source of heterogeneity was considered important if the covariate decreased the between-study variance. Comparing the estimate of τ2 in the presence of a covariate with that when the covariate is omitted allows the proportion of the heterogeneity variance explained by the covariate to be calculated [18].
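A compact sketch of the pooling machinery described above (fixed-effect estimate, Cochran's Q, the I2 statistic and the DerSimonian-Laird between-study variance tau2) is given below; the three example studies are hypothetical.

```python
# Fixed- and random-effects pooling of per-study SMDs with their variances,
# using inverse-variance weights and the DerSimonian-Laird tau^2 estimator.

def pool(effects, variances):
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    random_ = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return fixed, random_, q, i2, tau2

print(pool([-0.25, -0.05, -0.12], [0.010, 0.020, 0.015]))
```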
A sensitivity analysis was undertaken to investigate the influence of each study by omitting each in turn from the meta-analysis, and assessing the degree to which the magnitude and significance of the intervention effect changed [19].
Small-study effect
Small-study effect was evaluated visually by the funnel plot and by Egger's test of asymmetry applied to the funnel plot [20].
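A sketch of Egger's test follows: the standard normal deviate of each effect (effect divided by its standard error) is regressed on precision (one over the standard error), and a non-zero intercept indicates funnel-plot asymmetry. The study data in the example are hypothetical.

```python
# Egger's regression test for funnel-plot asymmetry (hypothetical data).
import numpy as np
import statsmodels.api as sm

effects = np.array([-0.25, -0.05, -0.12, -0.18, 0.02])   # per-study SMDs
se = np.array([0.10, 0.14, 0.12, 0.08, 0.15])             # standard errors

y = effects / se                      # standard normal deviates
x = sm.add_constant(1.0 / se)         # precision plus intercept term
fit = sm.OLS(y, x).fit()
print(fit.params[0], fit.pvalues[0])  # intercept and its p-value
```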
Results
For six trials we obtained additional endpoint measures by communication with an author of these studies [23,24,26,27,29,34]. We estimated the standard deviation of the mean difference from the reported standard error for two trials [31,32] and from the 95% confidence interval for one trial [33]. For one trial we did not obtain clarification from the authors whether the measure of variability presented was the standard deviation or the standard error [25]. We assumed it was the standard error, since their measure was ten times lower than the standard deviations of the eleven other studies included in our meta-analysis.
Risk of bias
As shown in Table 3, randomization and adequate concealment were present in six trials [24-26, 28, 29, 34]. Blinding of participants and health professionals to intervention allocation was impossible throughout the twelve trials, due to the nature of the interventions, and they are classified as open label with potential risk of bias. Blinding of the endpoint assessor to treatment allocation was considered adequate in all trials for the endpoint TC, as laboratory staff presumably were unaware of subject group assignment. For the endpoint SBP, blinding was reported in two trials [29,34]. Drop-out was present in eleven trials, ranging from 3 to 27% (median 11%). An intention-to-treat strategy was reported in ten trials [23][24][25][28][29][30][31][32][33][34]. A priori power estimation was presented in nine trials, but only one was adequate for the endpoints considered in our meta-analysis [29]. In summary, two of the included trials represented high quality according to concealment of randomization, blinding of outcome assessor and intention-to-treat analyses [29,34].

[Figure 1. Study selection flow: 4000 records after duplicates removed; 4000 records screened; 3968 records excluded; 32 full-text articles assessed for eligibility; 20 full-text articles excluded (incomplete reporting of data, 9; study population not a high-risk population, 4; cholesterol not reported, 2; intervention group used more lipid-lowering medication than the usual care group, 1; results not published, 1; majority (> 50%) of the study population had established cardiovascular disease, 1; cluster-randomization, 1; published before 1990, 1); 12 studies included in the quantitative synthesis (meta-analysis).]
Endpoint SBP
The pooled estimate from 12 studies indicated a small effect of lifestyle intervention on SBP (SMD = − 0.13, 95% CI − 0.21 to − 0.04, p = 0.0048), with moderate heterogeneity (I2 = 59%) (Fig. 2a). The absolute difference between the mean value in the intervention versus comparison group was approximately 2 mmHg (MD = − 1.86, 95% CI − 3.14 to − 0.57, p = 0.0046). When stratifying on type of intervention, the difference in the pooled estimates was mostly observed between the two groups consisting of physical activity only (SMD = 0.02, 95% CI − 0.08 to 0.11) or diet only (SMD = − 0.12, 95% CI − 0.31 to 0.06), and the group with a combination of physical activity and diet (SMD = − 0.18, 95% CI − 0.28 to − 0.08). Considering the trials satisfying three major parameters of internal validity, no effect of lifestyle intervention was demonstrated (SMD = 0.02, 95% CI − 0.08 to 0.11), while for the trials not meeting all three conditions a small effect was found (SMD = − 0.17, 95% CI − 0.25 to − 0.09). Extending the analysis with meta-regression, study quality was associated with the effect of lifestyle intervention (p = 0.0136), accounting for 68% of the observed heterogeneity. None of the patient-related variables considered (mean age, mean BMI, frequency of male sex and frequency of smokers) was significantly associated with the intervention effect. There was no indication of small-study effect, as the funnel plot visually appeared symmetrical (Fig. 3a), supported by Egger's test (p = 0.921). The robustness of the pooled effect was demonstrated by influential analysis: whichever study we omitted, the pooled estimate did not change in magnitude, direction or statistical significance.
Endpoint TC
The pooled estimate from ten studies (Fig. 2b) indicated no significant effect of lifestyle intervention on total cholesterol (SMD = − 0.06, 95% CI − 0.13 to 0.00, p = 0.0634), with no heterogeneity (I2 = 0%). The absolute difference between the mean value in the intervention versus comparison group was 0.05 mmol/l (MD = − 0.05, 95% CI − 0.11 to 0.00, p = 0.0495). Subgroup analysis and meta-regression were not indicated since there was no observed heterogeneity between the trials. No indication of small-study effect was found, as the funnel plot visually appeared symmetrical (Fig. 3b), confirmed by Egger's test (p = 0.893). A stable pooled estimate was demonstrated by influential analysis, omitting one study at a time from the meta-analysis.
Smoking habits
After receiving follow-up information, four studies reported on change in smoking habits [24,26,31,34]. Two of these studies [24,26] demonstrated that the proportion of current smokers at follow-up was lower than at baseline in the intervention groups, while there was no change in the control groups.
Total cardiovascular risk
Only one study [26] reported on change in total cardiovascular risk applied from algorithms (Framingham, PROCAM). This trial showed that the 10-year Framingham cardiovascular risk declined from 16 to 14.6% in the intervention group compared to an increase from 14.3 to 19.1% in the control group, bearing in mind that the changes in both groups also were affected by the fact that the participants gained 2 years of age during the study. The PROCAM cardiovascular risk calculations declined in both groups.
Discussion
The main result of this meta-analysis of RCTs examining long-term effects of lifestyle intervention indicates a limited effect on major cardiovascular risk factors SBP and TC. Regarding smoking cessation there was numerically greater reduction in smoking rates in the intervention groups compared to the comparison groups.
The limited long-term effect of lifestyle intervention may seem in contrast to the great importance lifestyle intervention has been given in guidelines on cardiovascular prevention [6][7][8][9][10]. As an example, the 2019 ACC/AHA Guideline reports that lifestyle interventions with 6 different elements, including diet and exercise, may each reduce SBP by at least 4 mmHg in hypertensive subjects and 2 mmHg in normotensives, suggesting larger effects of combined measures [9]. However, the guideline does not specify the duration of the expected effects. From the Look AHEAD study [35], the challenge of maintaining the benefits of lifestyle intervention on blood pressure (and other parameters) is evident: at 12 months, SBP was reduced by 6.8 mmHg in the intensive lifestyle group compared to 2.8 mmHg in the standard care group. However, averaged across the first 4 years, SBP was reduced by 5.33 versus 2.97 mmHg, i.e. a difference of 2.36 mmHg, very similar to the results in our meta-analysis.
A small effect on SBP, but no effect on TC, was similarly reported in the Finnish Diabetes Prevention Study [30], where SBP were lowered by 5 mmHg after 2 years while TC remained unchanged. Also the American Diabetes Prevention Program demonstrated significantly greater decrease in SBP in the lifestyle group (− 3.4 mmHg ± 0.4) compared to control group (− 0.52 mmHg ± 0.4), but no changes in the TC levels [31].
In summary, major lifestyle studies and meta-analyses show that lifestyle intervention result in small, but significant changes in SBP and TC after 6-12 months. However, the benefits gradually attenuate over time, especially regarding TC. Our study confirms this attenuation of benefits and demonstrates further reductions in the period from 12 to 24 months.
Trials examining lifestyle intervention should ideally have clinical endpoints, as the ultimate goal is reducing cardiovascular events. Trials with clinical outcomes are large and costly, however, which explains why most lifestyle intervention trials instead focus on improvements in established cardiovascular risk factors. Reviews so far have reported effects after 3-12 months, and occasionally up to 18 months, motivating our attention to study the effects after longer follow-up. Ideally, however, even longer follow-up, i.e. 5-10 years, would be optimal to substantiate the clinical value of the interventions. From our literature search, such data appear very limited.
The present study demonstrates that a small reduction in SBP of approximately 2 mmHg may be maintained over time with lifestyle intervention. This was observed in a population with a mean baseline SBP of 129 mmHg, and many studies report that greater reductions may be achieved when the baseline blood pressure is higher [9,36]. Although the effect of lifestyle intervention appears small, a reduction of this size may result in a valuable reduction in the risk of future CVD, according to epidemiological evidence and studies evaluating the benefits of sodium reduction in the population [36], which suggested that a reduction of 3.8 mmHg could prevent 1.6 million annual CVD deaths globally.
While a limited effect on SBP was observed, we found no effect on cholesterol levels. The reasons for the lack of efficacy are unclear, but may relate to the intensity of the intervention, the quality of the dietary and exercise advice and to difficulties in maintaining lifestyle changes over time. Since several studies of short duration, i.e. 3-6 months, have reported considerably larger effects on blood pressure [37], and to some extent on cholesterol [38,39], the latter explanation may appear the most important.
Traditional lifestyle interventions have had cholesterol levels and smoking cessation as the main targets [40,41]. However, both smoking rates and cholesterol levels have been significantly reduced in most western populations the last decades, the latter mainly due to general improvements in the population diet, for example with reductions in trans fat [42]. Accordingly, reduction in cholesterol levels through diet modifications may have become harder to achieve. Hence, the most feasible target for risk reduction through lifestyle interventions in non-smokers may now have become a reduction in SBP.
The results in our meta-analysis highlight the need to develop strategies that enable high-risk individuals to maintain the effects on risk factors beyond 3-6 months. At present, we do not know if the attenuation of the effects is entirely related to a reduced compliance/ patient fatigue regarding lifestyle habits, or if it may also represent some sort of physiological adaptation to the obtained lifestyle changes. A further possibility is that the advices used in the included studies are no longer optimal, and need to be revised according to the contemporary risk profile and lifestyle challenges in the population.
Strengths and limitations
This meta-analysis is, to the best of our knowledge, the first addressing the impact of multiple lifestyle intervention on cardiovascular risk factors in the long-term; i.e. after 24 months.
Our review was based on a comprehensive literature search, which reduces the possibility of missing relevant trials [43]. Trial selection was done by two authors, and data extraction by three authors, to minimize transcription errors. As recommended by the Cochrane Collaboration tools for assessing risk of bias in randomized trials [15], we did not use summary scores to identify quality of trials. The components used for quality assessment are validated and reported to be associated with bias [44]. The analysis applied the recommended principles for meta-analysis methodology regarding eligibility criteria for the individual trials, analysis methods to explore sources of heterogeneity between studies and evaluate small-study effect [45].
Small-study effect was unlikely to affect our results. A major limitation was the observed heterogeneity between trials. Our results concerning the efficacy of lifestyle intervention on SBP were altered when stratifying on the components of trial quality, and meta-regression demonstrated that trial quality was an important determinant of the intervention effect. The impact of study-level variables on meta-analysis results has been investigated, indicating true associations between heterogeneous treatment effects and study-level variables [46]. On the other hand, the diversity could be related to differences in the patient populations studied and differences in the interventions. The population in the two trials classified as having an overall low risk of bias [29,34] was normotensive, making a significant SBP reduction less likely. Moreover, the intervention in these two trials consisted of physical activity only, which quite likely reduced the impact of the intervention compared to trials also including dietary advice.
The review considered trials published after 1990 only. Of the 4315 records identified, only 12 trials could be included, illustrating the sparse number of RCTs with follow-up time as long as 24 months and sufficient data to be evaluated.
Future directions
A final answer on the efficacy of lifestyle interventions for reducing cardiovascular risk would require RCTs large enough to evaluate effects on clinical outcomes, but such trials would have to be very large and costly, as exemplified by the Look AHEAD study [35]. A more feasible approach could be the use of proper validated risk algorithms as primary outcomes, as these integrate the effects on multiple risk factors and allow valuable estimates of the intervention on total cardiovascular risk. Meanwhile, further trials should focus on the challenge of maintaining the benefits often reported in studies of 6-12 months duration. In this respect it is worth noting that the most feasible targets for risk reduction may now have become reduction in SBP and smoking cessation, as reductions in cholesterol levels through diet counseling seem hard to achieve.
Conclusion
In conclusion, our results suggest that the effects of lifestyle intervention on major cardiovascular risk factors after 24 months of follow-up are limited, but a modest effect on SBP may be of clinical relevance. Our observations demonstrate the challenge of maintaining benefits during longer follow-up, and suggest a need to develop new strategies to promote durable changes in cardiovascular risk. | 2021-04-15T14:06:54.881Z | 2021-04-15T00:00:00.000 | {
"year": 2021,
"sha1": "e48b58cdd73cab7dc8e5352be5b8350e9c69d706",
"oa_license": "CCBY",
"oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-021-01989-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e48b58cdd73cab7dc8e5352be5b8350e9c69d706",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4914667 | pes2o/s2orc | v3-fos-license | Integrated care of muscular dystrophies in Italy. Part 1. Pharmacological treatment and rehabilitative interventions.
This paper describes the pharmacological therapies and rehabilitative interventions received by 502 patients with Muscular Dystrophies, evaluated in relation to patients' socio-demographic and clinical variables, and geographical areas. Data were collected by the MD-Socio-Demographic and Clinical Schedule (MD-SD-CS) and by the Family Problems Questionnaire (FPQ). Most of the enrolled patients were receiving drug treatment. The number of medications increased in relation to patient age, disability degree and duration of illness, and was higher among patients with Duchenne Muscular Dystrophy (DMD) compared with Becker (BMD) or Limb-Girdle Muscular Dystrophies (LGMD). Steroids (deflazacort or prednisone) were the drugs most frequently used, followed by cardiologic and bone metabolism drugs. In general, patients using steroids were younger and had a shorter duration of illness; patients using cardiac drugs and dietary supplements were older and had a longer duration of illness. Rehabilitative interventions were provided to about 70% (351/502) of patients, mainly those with DMD. Of these, physiotherapy was the most frequent treatment (96.6%) and was performed predominantly in rehabilitation centres (about 70% of patients) and at home in only 30%. Hydrokinetic therapy was practiced by 6.8% of patients. Respiratory rehabilitation was provided to 47.0% of patients (165/351) and assisted mechanical ventilation to 13.1% (46). The amount of rehabilitative interventions increased in relation to patient age, level of disability and duration of illness. Compared to Central and Northern Italy, in Southern Italy there was higher attention to cardiological impairment, as shown by a higher number of patients receiving heart drugs. No statistically significant differences concerning access to rehabilitative interventions were noted among the three geographical areas. However, patients living in Southern Italy tended to receive rehabilitation more often at home.
Introduction
Muscular dystrophies (MDs) include a group of inherited disorders characterized by progressive muscle weakness and wasting, and classified according to pattern of inheritance, age of onset, and involvement of specific skeletal muscles (1,2). The identification of dystrophin (3,4) and the subsequent characterization of the dystrophin-glycoprotein complex (DGC) was the first step towards the clarification of the molecular pathogenesis of MDs (5,6). Several forms of MD arise from primary mutations in genes encoding the components of the DGC complex (7). The most common forms - affecting both children and young adults - are Duchenne (DMD), Becker (BMD) and Limb-Girdle Muscular Dystrophies (LGMDs). Due to the multi-systemic involvement, the management of MDs requires a multifaceted approach and multidisciplinary expertise (8,9). Clinical management is mainly based on the use of drugs [steroids (10)(11)(12), ACE inhibitors (13,14) or beta-blockers (15), followed by other cardiological and/or respiratory medication when appropriate (15,16)] and rehabilitative treatments (9). This integrated approach has been able to improve quality of life and prolong life expectancy even in patients affected by the most severe forms (17,18). As a consequence, DMD should now be considered an "adulthood" disease (17,18) requiring long-term family assistance, which may be very demanding (family burden) when professional and social supports are poor or lacking (19,20).
In 2012, a national study on the families of patients with muscular dystrophies was carried out in Italy with the aim to describe the difficulties of the care-giver experience as well as the professional and social supports the relatives may rely on (21,22). We found that relatives whose children had higher degree of disability, spent more daily hours in caregiving and/or had poor social support experienced a higher burden. Nevertheless, 88% of them reported something positive out of the situation (21,22).
Based on the same data set, in this paper, we report data on the pharmacological and rehabilitative treatments provided to the 502 patients, and investigate differences in relation to demographic and clinical variables, and geographical areas.
Design of the study
The study was carried out in 8 specialized centres for MDs, located in Northern (3 centres), Central (3 centres), and Southern Italy (2 centres). The patients' selection criteria were the following: diagnosis of DMD, BMD, or LGMD confirmed by molecular analysis or muscle biopsy; age between 4 and 25 years; in charge to the participating centres for at least 6 months; living with at least one adult relative. For each patient the key-relative was interviewed if he/she was aged between 18 and 80 years and not suffering from illness requiring long-term intensive care (21,22). The protocol of the study was approved by the Ethic Committee of the Second University of Naples (coordinating centre), and by the Ethical Committee of each participating Centre.
Instruments description
MD-SD-CS collects information on the main socio-demographic characteristics of the patients and their families, and on patients' clinical variables. The Barthel Index (BI) assesses the patient's degree of independence in daily activities. It provides a global 0-100 score (0 "totally dependent"; 100 "totally independent"). Ad hoc questions developed by the researchers for the present study were used to interview the key-relative on the patient's functional autonomy in the previous month. The inter-rater reliability in BI scoring was tested preliminarily (Cohen's kappa coefficient ranging from 1 to 0.90 for 9 BI items and equal to 0.67 for the remaining BI item).
MD-CS collects information on pharmacological therapies received by the patient in the two months preceding the interview and on psycho-educational interventions and social/welfare support provided to patients and their families in the past six months. The schedule also collects information on where each treatment was provided.
FPQ explores relative's burden, attitudes toward the patient, and professional and social network support in emergencies concerning the patient (23). It contains additional items on expenses sustained by the family in the previous 12 months for care.
The psychometric properties of the FPQ was previously tested in this study sample (21).
Statistical analysis
Differences in pharmacological therapies and rehabilitative interventions related to patients' socio-demographic, clinical and geographic variables were explored by analysis of variance and the χ2 test, as appropriate. Correlations between the number of drugs or rehabilitative interventions and patients' age, duration of illness and level of functional abilities (BI global score) were explored by Spearman's r coefficient. Multiple regression analyses were performed to explore the simultaneous effects on drugs and rehabilitative interventions (dependent variables) of patients' socio-demographic and clinical characteristics. Only variables significantly related to drugs or rehabilitative interventions in the univariate analyses were included in the multivariate ones. Statistical significance was set at p < 0.01.
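The sketch below illustrates the two analysis steps just described, a Spearman correlation and a multiple regression of treatment intensity on patient characteristics; the toy arrays are illustrative and are not the study data.

```python
# Spearman correlation and multiple regression, as outlined above, on
# made-up values for number of drugs, age and BI global score.
import numpy as np
from scipy.stats import spearmanr
import statsmodels.api as sm

n_drugs = np.array([0, 1, 1, 2, 3, 4, 2, 5])
age     = np.array([6, 9, 11, 14, 17, 20, 15, 24])
barthel = np.array([95, 90, 80, 60, 40, 20, 55, 10])   # BI global score

rho, p = spearmanr(n_drugs, age)        # correlation with patient age
print(rho, p)

X = sm.add_constant(np.column_stack([age, barthel]))
print(sm.OLS(n_drugs, X).fit().params)  # simultaneous effects
```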
Most of the 502 key-relatives were mothers and were married or cohabiting. Almost half of them had received higher education and were employed (Table 1). They spent on average 5.7 (4.6 sd) daily hours in patient caregiving in the previous two months.
Rehabilitative treatments
Three hundred and fifty-one patients (70%) benefited from rehabilitative interventions. Physiotherapy was the most frequent treatment provided to MD patients, followed by respiratory rehabilitation and assisted mechanical ventilation (Table 3). Hydrokinetic therapy was performed in only 21/502 (4.2%) patients. Rehabilitation was provided at home in about one-third (107/351, 30.5%) of cases. Although the percentage of rehabilitative interventions received by patients did not differ among the three geographical areas, home care rehabilitative treatments were more frequently performed in Southern Italy (24 in the North, 31 in the Centre and 52 in the South; p < 0.003).
The complexity of the rehabilitation treatment, intended as the number of rehabilitative interventions, increased in relation to patient age (r = .33, p < .0001), level of disability (BI global score, r = -.63, p < .0001), and duration of illness (r = .38, p < .0001).
Multiple regression analyses
Socio-demographic and clinical variables accounted for 23% of the variance in pharmacological therapies provided to patients in the previous two months (Table 4). As shown by the standardized beta weights, the number of drugs was significantly higher among patients with a longer duration of illness and those suffering from DMD.
Patient's clinical variables accounted for 42% of variance observed in rehabilitative interventions (Table 4) received by the patients in the previous six months, confirming that the number of the interventions was higher among patients with more severe disabilities, and in those suffering from DMD or LGMDs.
Discussion
The study reveals that about 75% of patients, independently of the type of muscular dystrophy, receive drug treatment. This finding outlines a shift from past views of MDs as "incurable diseases" toward a clinical approach based on effective pharmacotherapy. In line with current clinical guidelines, steroids (deflazacort or prednisone) were the drugs most frequently administered in DMD (8,9); they were more frequently used by patients who were still ambulant (119/205) compared with those who were wheelchair-bound (86/205) (χ2 = 55.7, df = 1, p < .0001). This result can be explained by the current debate on the use of corticosteroids in the wheelchair stage, although recent studies have shown that long-term steroid administration is useful to a) preserve upper limb strength (24), b) reduce the progression of scoliosis and the decline of respiratory function (25,26) and c) delay the onset of heart dysfunction (27).
The higher number of cardiac drugs prescribed in centres located in Southern Italy may be related to the long-term expertise in cardiological monitoring of these centres (28)(29)(30) and to the recent adoption of Treat-NMD and National Council for Rare Diseases guidelines (31,32).
The study also shows that the majority of MD patients in Italy receive rehabilitative interventions, whose complexity increases as the illness progresses. However, some differences exist in the modality of provision, as in Southern Italy a higher number of patients receive domiciliary treatment. This pattern, probably due to the limited availability of rehabilitation centres in Southern Italy, brings an indirect benefit, both for patients and families, in terms of comfort of care, time saving and transfer costs. This study also reveals that in Italy - notwithstanding the known regional shortages - integrated pharmacological/rehabilitative care is guaranteed to the majority of patients with muscular dystrophies. Hopefully the recent changes in Italian health care policy will further facilitate patients' access to evidence-based treatment.
"year": 2017,
"sha1": "a1a1354b61c50a466b1ded8cdacd7735dd07c996",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1e8edc312456eb800d103f0db16272cd6e59f141",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199552007 | pes2o/s2orc | v3-fos-license | Linking Graph Entities with Multiplicity and Provenance
Entity linking and resolution is a fundamental database problem with applications in data integration, data cleansing, information retrieval, knowledge fusion, and knowledge-base population. It is the task of accurately identifying multiple, differing, and possibly contradicting representations of the same real-world entity in data. In this work, we propose an entity linking and resolution system capable of linking entities across different databases and mentioned-entities extracted from text data. Our entity linking/resolution solution, called Certus, uses a graph model to represent the profiles of entities. The graph model is versatile, thus, it is capable of handling multiple values for an attribute or a relationship, as well as the provenance descriptions of the values. Provenance descriptions of a value provide the settings of the value, such as validity periods, sources, security requirements, etc. This paper presents the architecture for the entity linking system, the logical, physical, and indexing models used in the system, and the general linking process. Furthermore, we demonstrate the performance of update operations of the physical storage models when the system is implemented in two state-of-the-art database management systems, HBase and Postgres.
INTRODUCTION
In entity linking and resolution, entities refer to real-world objects (e.g., people, locations, vehicles, etc.) and real-world happenings (e.g., events, meetings, interactions, etc.). Entities are described by data in information systems. However, the descriptions may be repeated and different in these systems. In a database, for instance, a person may have more than one record in a table, and the records may have repeating, differing and contradicting information about the person. Likewise, two different databases may capture different information about the same entity. For example, a medical database only concerns with a person's health related properties, whereas an immigration database only concerns with the truthfulness of a person's identity.
A description of an entity is called a profile, and it can be a record in a relational database or a paragraph of words about an entity in a document. An entity may have multiple profiles in one or more sources. In other words, multiple profiles in one or more databases (or documents) may refer to the same real-world entity.
Once the profiles of entities are captured into a database, the profiles and the entities become separated in that the users of the database know the profiles, but possibly, not the entities. This separation raises a serious issue. Answering the question of whether a given profile refers to a particular real-world entity is non-trivial and challenging. For example, given the profile: {name: Michael Jordan, nationality: American, occupation: athlete}, there are at least four real-world persons whose profiles in Wikipedia match this description (see [1] for details). A dual problem to the above problem is whether two profiles, which may look similar or very different, refer to the same real-world entity. This dual problem is as hard as the above problem.
The goal of entity linking research is to design methods to derive an answer to the dual question: do a pair of given profiles refer to the same entity? When a pair of profiles are found to refer to the same entity, one of two actions may be taken. One is to remove one of the profiles; this is called deduplication. The other is to merge the two profiles, and this is called resolution/linking 1.
Three complications make the linking/deduplication task more difficult. The first complication is from non-alignment of attributes and relationships. That is, different profiles describe entities using different attributes and/or relations. This is illustrated by the profiles p 1 and p 2 in Table 1. The two profiles have different attributes except for the name attribute. The non-aligned attributes make a match between them less likely. The second complication is from the multiplicity of values. For example, compared with the profile p 1 , the profile p 3 has two name values. The third complication is the presence of provenance data. Provenance data describes the background information of a value as well as the validity period(s), security & access restriction(s), source(s), etc., of a value. For example, in p 4 , {since 2005} specifies when the name 'George' started being used, and {2010} indicates when the height valued '160' was taken. Unlike non-alignment, multiplicity and provenance of values can be useful as they provide more information. However, their usefulness comes at a cost: they require more powerful matching algorithms and data structures to enable effective usage.
This paper presents the system supporting our entity linking method Certus [11], the data and index models that enable multiplicity and provenance of attribute and relationship values to be accurately captured and leveraged for effective and efficient entity linking. The contributions of the paper are as follows.
• First, we present the architecture of our entity linking system (Section 3). This architecture enables textual data to be processed and the entities described in the texts can be linked to entity profiles from other data sources. The architecture uses Elasticsearch 2 , an index engine, to increase the linking and search efficiencies. • Secondly, we propose a graph model for entity linking involving multiplicity of attribute and relation values with provenance information (subsection 4.1). In this model, the attributes and relations of profiles are well-represented by lists of sets (of attribute/relation, value, and provenance), instead of dictionaries of attribute-and relation-value pairs. Our model enables provenance and value-multiplicity to be captured, indexed and used correctly. • Thirdly, we propose physical models for the storage of the graph of entity profiles; detail the index structures that support effective search and blocking operations (subsections 4.2 & 4.3); and give the processes in the entity linking component of the system (Section 5). • Lastly, we show experiments about the time performances of our physical model implementations on both relational and non-relational database management systems (Section 6).
RELATED WORK
Entity linking and resolution is a well-known database problem that has attracted volumes of research in the literature, especially in the relational data setting. Readers are referred to [14] for details. In general, the existing works focus on two main directions: accuracy and efficiency. The accuracy concern is on finding true matches of different entity profiles when they refer to the same real-world entity without introducing false matches. A more specific term called efficacy is defined to mean accuracy in [2]. The efficiency issue is about alleviating the infeasible pairwise comparison of profiles, and making the linking process scalable in large data. For accurate entity linking and deduplication, early works on the subject examined many methods such as cosine similarity match, distance-based match, TF/IDF, and Soundex. The well known similarity measures for entity linking are summarized and reviewed in [10]; and the work in [9] presents a comparative evaluation of some existing works.
The efficiency problem has also drawn significant research attention. The complexity of calculating the exact similarity between profile pairs is O(n 2 ). Given a large number of entity profiles, say n = 100 million, the time for computing similarity is too long to be practical. Thus, several ideas have been introduced in the literature to address the problem, like canopy (sorting and moving window), hierarchical, bucketing (clustering), and indexing approaches. In practice, the indexing approach has been found to be more useful, resulting in the proposal of a plethora of indexing methods in the literature (see [4,17] for surveys of techniques).
In recent years, there has been increasing research interest in linking entity-mentions in texts to existing entities in knowledge-bases. Since Wiki Miner [13], many works have been produced in this area; they are reviewed in [5,20]. The fundamental steps in text-based linking include: entity-mention detection, candidate matching-entity generation, and candidate matching-entity ranking. The work in [5] reviews the methods for detecting entity-mentions in texts, whereas the review paper [20] summarizes the details of how features (such as the mentions, types, contexts, etc.) and models (e.g., unsupervised, supervised, probabilistic, graph-based, and combined methods) are used in the ranking of candidate matching-entities. The efforts toward ranking are continuing, and the work in [12] aims to identify effective relationship words among entity-mentions to increase the accuracy of linking.
Most data management and software companies claim to support entity linking in structured data, but the systems are often not available for evaluation. In contrast, a number of open source research frameworks are available on entity linking in text data. For example, [13] proposes a method to extend terms in texts using Wikipedia pages. [6] is a framework tagging terms in short texts by Wikipedia pages, which is then followed by the works in [8,19] for software improvement. [19] and [3] are other tools that contain a three step implementation for linking entity-mentions in text to Wikipedia pages. The work in [22] sets up a framework for entity linking work to be tested and evaluated.
There exists works in the literature on the support and use of provenance for entity linking. For example, [15] is on provenance modeling and capture for entity linking whereas [23] presents a provenance-aware framework for improving entity linking results. Our work models, supports, and leverages provenance as well as attribute-and relation-value multiplicity for accurate entity linking in both structured and text data.
SYSTEM ARCHITECTURE & FUNCTIONS
This section covers the architecture of our entity linking system, and outlines the functions of the components of the system. Figure 1 presents an overview of the architecture of our system. Central in the system is the Knowledge-Base (KB) which is a graph of entity profiles (details in Section 4). The profiles in the KB come from three sources: (a) ingested profiles from different data sources (through the Ingester) with no restriction on model; (b) extracted profiles from user-supplied textual documents (via the Text Parser); and (c) profiles created from the User-Interface (UI). The profiles are linked and indexed by the Entity Linking & Resolution (ELR) and Indexer components respectively. And, all user interactions with the system are via the UI, mediated by the Query Processor.
The following are brief details and functions of the components. The Ingester: maps entity descriptions from various data sources into graph-modelled profiles in the knowledge-base. Its operation is straightforward and depends on the respective models (or lack thereof) of the various sources of data. The Text Parser: reads textual user inputs (e.g., documents, reports, etc.), extracts mentioned-entities and their relationships from the texts, and stores the extracted entity profiles into the knowledge-base. In our implementation, we use Stanford NER [7], the Stanford POS tagger [21], and Open IE 4.x [16] for this purpose.
The problem with the above-mentioned packages is that they may produce many triples (subject, relation, object) that do not reflect the original intention of authors in the writings. For example, extractions for the sentence "John said that, Peter has taken away the mobile phone", include the triple: ("Peter", "has taken away", "the mobile phone"). This extract is only syntactically correct. The semantic correctness of this extraction is, however, dependent on John's credibility/position. If John is a Police spokesman, for instance, then the chance of semantic-correctness would be high. However, if John is an adversary of Peter, for example, then the chance for the extraction to be correct would be low.
Therefore, we developed heuristic rules to filter ambiguous extractions. The rules: (a) replace coreferences (pronouns) with the actual entity-mentions; (b) remove extractions that are conditional or indirect speech; and (c) filter extractions that describe feelings and emotions. The inputs to the rule-based filtering system (RbFS) are the text and the labelling from Stanford NLP. The rules improve the F1-score of extractions by 18% on average on our test datasets. The details of the RbFS are beyond the scope of this paper. The Query Processor: receives requests from the user-interface and responds according to the request type. An insert or update request is sent directly to the knowledge-base. For a query by profile identifier or keywords, the index is searched and answers are then retrieved from the knowledge-base. The Indexer: keeps the indexes up to date with the current system state. Whenever the knowledge-base is updated, this component sends the update to the indexes. The indexes support users' queries and the ELR component. We use Elasticsearch, an open-source distributed search and index engine, as our index management system. Later on (in Subsection 4.3), we present details of the structure of the indexes in our system. The ELR component: as the name suggests, is the main component of the system. In principle, for every entity profile p in the knowledge-base, the indexes are read for candidate matching profiles; the similarity of p to each of the candidates is calculated; and the knowledge-base is updated to store the similarities. A candidate p′ of p from the indexes is a profile that is roughly similar to p, i.e., the pair share some similar attribute and/or relationship values. We remark that the fact that p and p′ share a similar 'word' does not necessarily mean that they refer to the same real-world entity. For instance, if p is a male named Pete and p′ is a female who has a friend called Peter, then p′ can be a candidate of p because they share a similar value (i.e., Pete and Peter), but they do not match. The indexes therefore merely give a set of possible matching profiles that require further evaluation.
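As a rough illustration of this flow, the following is a minimal sketch of the per-profile ELR loop; knowledgeBase, index, and similarity are assumed interfaces introduced for exposition, not the system's actual API.

```java
// Hypothetical outline of the ELR loop: block candidates from the indexes,
// score each candidate pair, then persist the similarity for later prediction.
for (Profile p : knowledgeBase.allProfiles()) {
    for (Profile candidate : index.blockCandidates(p)) { // roughly similar profiles
        double score = similarity(p, candidate);         // scoring from Section 5
        knowledgeBase.storeSimilarityEdge(p, candidate, score);
    }
}
```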
MODELS
In this section, we present the logical, physical, and indexing models used in our entity linking system.
Modelling of Profiles
A real-world entity naturally has many attributes (or properties) and relates to other entities in multiple ways. An entity profile (simply, profile) captures some of the attributes and relationships of an entity; another profile may capture different attributes and relationships of the same entity, with possible overlaps and contradictions. For example, a person may have multiple profiles in the same or different sources of data (e.g., databases, knowledge-bases, and social networking sites). Entity profile structure. The data structure of profiles should be able to capture multiple values of the same attribute or relationship, as well as their provenance information. This is necessary because of the ever-changing, evolving nature of the properties and relationships of real-world entities. For example, a person may change their name, live at different addresses over time, have multiple marriages spanning different periods, and so on. These changes lead to multiple values for attributes and relations, and these values may be associated with provenance information.
We represent an entity profile as a triple p = ⟨ id, A, R ⟩, where: id is the identifier of the profile, A = [a1, · · · , an] is a list of attribute-objects, R = [r1, · · · , rm] is a list of relationship-objects; and each a ∈ A and r ∈ R is an ordered set of key-value pairs. Four example profiles are given in this structure in Table 2. Profile p1 describes a person entity: a male called Peter up to 1991 and now called John. He lived_at location L1 from 1989 to 1995, has owned L1 since 1989, and has a friend named Bob. Our data structure for profiles is thus able to capture the multiplicity and provenance of values. Entity profiles graph. We use a graph model for modelling profiles, as it is capable of representing any number of attributes and relationships. Moreover, since A and R are defined as lists instead of dictionaries, value multiplicity can be represented easily. Furthermore, the edges of the profiles graph allow traversal of the profiles.
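For illustration, the triple structure can be rendered as plain Java types; the names below are assumptions for exposition, not the system's actual schema.

```java
import java.util.LinkedHashMap;
import java.util.List;

// One attribute- or relationship-object: an ordered set of key-value pairs,
// e.g., {value: "Peter", to: "1991"}, capturing a value with its provenance.
record ValueObject(LinkedHashMap<String, String> pairs) {}

// A profile p = <id, A, R>: A and R are lists, so the same attribute or
// relation may appear several times with different values and provenance.
record Profile(String id, List<ValueObject> attributes, List<ValueObject> relations) {}
```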
Formally, we use the following definition of an entity profiles graph, G = (V, E, F_A), where: (i) V is a finite set of nodes; (ii) E is a finite set of edges, given by E ⊆ V × V; (iii) each node v ∈ V (resp. edge e ∈ E) has a label L(v) (resp. L(e)); and (iv) each node v ∈ V has an associated list F_A(v) = [a1, · · · , an] of attribute-objects.
A node in an entity profiles graph represents a profile p, identified by the profile id, and is associated with A and R as defined. Two types of edges exist in the graph. The first, called a relation-edge, is derived from R: the edge (rel, P1, P2) is in the graph iff P2 is the value of relation rel in P1, where P1 and P2 are profile/node identifiers. The second type, called a similarity-edge, is derived from the profile-pair similarity and has the form (sim, P1, P2, score, cfm), where sim is a fixed label, score is the similarity score (defined later in Section 5), and cfm is a binary indicator showing whether the link-state of the profile pair has been confirmed by a user. The indicator is necessary because, in sensitive systems such as policing, we want 100% precision when two profiles are linked. Thus, cfm requires user interaction (discussed further in Section 5). Figure 2 is an example of the graph of the profiles in Table 2. Note that each node in the graph carries its attribute list (not shown in the diagram).
Physical Model for Profiles
Profiles are modelled as a graph; the nodes and relation-edges are stored in one structure, while the similarity-edges are stored in a separate structure. Similarity-edges are updated frequently, as any profile change triggers a re-computation of similarities for that profile and other affected profiles. Storing similarity-edges in a separate structure therefore improves update efficiency. The resulting physical model is shown in Figure 3.
The model in Figure 3 is largely self-explanatory; the data in its two tables are derived from some of the example profiles in Table 2. The table in Figure 3(a) stores the nodes (profile ids and attributes) and relation-edges (relationships). Each node occupies multiple rows, one per attribute or relationship value pair, together with its provenance details. The table in Figure 3(b), on the other hand, stores the similarity-edges, with one entry per profile pair. Here, simsc and rejsc denote the similarity score and rejection score, respectively (details in Section 5). Since the size of the table in Figure 3(b) grows with the square of the number of nodes/profiles, a threshold may be used to filter out very low-scoring entries and so reduce its size.
Because of its frequent update operations, the performance of accessing the similarity-edge table plays a crucial role in the overall linking time. We therefore report empirical results for three different implementation options of the physical model in Section 6.
Index Mappings
Our aim is to design index structures that support users' searches for profiles and the candidate matching-profile generation (a.k.a. blocking) of the ELR component. We use Elasticsearch, a distributed index management system that can support multiple indexes with various structures.
Recall that, logically, each profile in the knowledge-base is a triple of the form ⟨ id, A, R ⟩. We consider two options for building indexes over the profiles (discussed below); both are configured with the double-metaphone phonetic analyzer [18] and custom-built synonym and alias transformers. Keyword search & blocking indexes. The indexes that support keyword search of profiles and blocking for entity linking use a set of words generated from the profiles. The word set consists of the values of the profile without provenance. That is, the provenance values, the structure, and the attribute and relation names are all ignored; relationship targets (i.e., other profile ids) are replaced by a summary of the target; and all duplicate words are removed. For example, the target summary for p1 in Table 2 is a bag of the following values: "John, Peter, m, 1, brown, 2000".
The 'loose' structure of these indexes guarantees high recall for search and blocking results. Structured search indexes. The indexes for structured search take the structure of the profiles into account. For example, if a user wants to find a person with "name : John, lives_at : 1 Brown street − {until : 2000}", the index should enable p1 in Table 2 to be found. To support such structured search, we build indexes with nested mappings in Elasticsearch, with each profile structured as a JSON object in which the A mapping nests every attribute value together with its provenance fields (where t0, t1, t2 are date-time values); the R mapping is defined similarly.
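As an illustration of this idea, a nested mapping for the attribute list could look like the following Java text block holding the mapping JSON; the field names (name, value, from, to) are assumptions for exposition, not the paper's exact schema.

```java
// A minimal nested-mapping sketch: each entry in A is indexed as its own
// nested document, so a value stays tied to its own provenance dates.
String profileMapping = """
    {
      "mappings": {
        "properties": {
          "A": {
            "type": "nested",
            "properties": {
              "name":  { "type": "keyword" },
              "value": { "type": "text" },
              "from":  { "type": "date" },
              "to":    { "type": "date" }
            }
          }
        }
      }
    }
    """;
```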
We remark that the use of nested mappings is critical to preserving the correct semantics of the multiplicity of values and their associated provenance information in Elasticsearch. Otherwise, Elasticsearch indexes a profile in a 'flat format' of the form {name : [John, Peter], from : [t0, t1], to : [t1, t2]}, which loses the association between each value and its provenance, leading to errors and very low precision.
In settings where smaller and more precise blocking results are required, the nested-mapped indexes should be used.
LINKING OF PROFILES
This section describes our profile comparison and linking processes. First, we highlight some relevant preprocessing steps. Then, we detail the profile-pair comparison and evaluation; finally, we give a brief overview of match prediction and confirmation.
Preprocesses. Prior to calculating the similarity between profile pairs, some preprocessing is necessary. Consider, for example, person and location entities: it is important to tackle the disparate representations of the same names and addresses, respectively. The name Richard is often aliased as Dick, and the street type Boulevard is often shortened to BLVD. To enable Dick to match Richard, a dictionary of name aliases of people is created (and similarly for addresses). Each name/address in a profile is checked against the dictionary. If the name has an alias, it is expanded into the form "name alias", e.g., "Richard Dick". Similar operations are performed on the initials and the pre-/postfixes of names. Similarity evaluation. Given two profiles p1 and p2, our entity linking method uses two scoring processes and one decision process to determine whether they refer to the same real-world entity. The two scores are the similarity score, simsc, and the rejection score, rejsc; the decision process is a data-dependency-based prediction model. We discuss the scoring here.
Given a profile p, we use the notation X ∈ p to represent either an attribute X in p[A] or a relation X in p[R]. The similarity score, simsc, of two profiles p1, p2 is calculated by aggregating, over each attribute/relation X present in both profiles, the product of the match and information levels: simsc(p1, p2) = Σ_{X ∈ p1, X ∈ p2} M(X) · I(X), where M is a function that returns a value indicating the level of approximate match between a pair of values for the same attribute/relation X, and I returns the level of information supplied by the match M for the values of X.
The function M considers many factors, depending on the attribute/relation; the values of an attribute/relation take the form of a bag of words, after synonym/alias expansion, together with provenance data. For example, to evaluate a name match for person entities, M considers the initials, ordering, post-/prefixes, aliases, and phonetics of names, as well as n-gram matching of character/word sequences. Edit distance is applied after n-gram matching to improve accuracy and efficiency. If two values match within a user-specified threshold, the provenance information is then considered.
The function I returns the highest information level among the matching values. For example, for the name pair "John Smith White" and "Jones Smiths Green", the I_name weight is derived from the inf values of the matching words, i.e., I_name = max{inf(John), inf(Smith)}. The function inf(w) indicates the probability that two profiles are linked if they match on the value w. Note that, in this example, the names "Green" and "White" are not considered in the evaluation of I, as they are dissimilar (i.e., have a low M value). Intuitively, if a word w is rare, it has a high inf(w) value. Consider the two first names 'John' and 'Cherith'. When two profiles share the name 'Cherith', the probability that the two profiles are linked is much higher than when two profiles share the name 'John'. The inf(w) value of a word w is controlled by two factors: the number m(w) of profiles sharing the word, and the number k(w) of real-world entities shared by the profiles sharing the word. If k(w) is large, the fact that m(w) profiles share the same word contributes very little to linking, and inf(w) should be small. When the total number of profiles increases, the chance of two profiles sharing the same word becomes larger, but the probability is still crucially controlled by k/m. That is, inf(w) ∝ m/k; m can easily be obtained from word statistics, but k is often unknown, and a large m does not imply a large m/k ratio. We use a variation of the sigmoid function to estimate inf(w), in the decaying form inf(w) = 1/(1 + e^(α(m(w) − β))), where α and β control the steepness of the decay curve and its midpoint, respectively. For a given w, k(w) can be empirically estimated and linked to β. In general, our empirical results suggest that α = 0.1 and β = 60 are suitable settings for our applications.
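Under the decaying sigmoid form given above (a reconstruction from the surrounding text; the paper's exact expression may differ), inf(w) can be computed directly from the word statistic m(w):

```java
// inf(w) = 1 / (1 + e^(alpha * (m - beta))): rare words (small m) score near 1,
// common words decay toward 0; alpha sets the steepness, beta the midpoint.
static double inf(int m, double alpha, double beta) {
    return 1.0 / (1.0 + Math.exp(alpha * (m - beta)));
}
```

With the suggested settings α = 0.1 and β = 60, a rare word shared by 5 profiles yields inf ≈ 0.996, whereas a common word shared by 500 profiles yields inf ≈ 0.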
The rejsc score, on the other hand, is based on a simple penalty system. Given two profiles with a high overall simsc score, a penalty of 1 is added to their rejsc score if the pair are dissimilar on a key attribute/relation (determined by the application and domain). For example, in a law enforcement context, one such key attribute for person and location entities is birth date and zip code, respectively. Match prediction & confirmation. As mentioned earlier, we use a data-dependency-aided decision model to predict whether a given pair of similar profiles refers to the same real-world entity. This decision model is a major topic in its own right, and we refer interested readers to the paper on it [11]. The approach eliminates the challenge of, and the need for, fine-tuning dis/similarity thresholds for approximate matching through a discovery algorithm that learns matching rules from labeled data. The match prediction model achieves high precision without significantly compromising recall.
In some applications, even accurate predictions of the linked status of two profiles require human confirmation. Our entity linking system supports this scenario, keeping domain experts in the loop. Indeed, every similarity-edge between profiles carries the data structure for the confirmation of predicted matches (when needed). For example, in Figure 2, the similarity-edge between nodes L1 and L2 is not confirmed (i.e., cfm: false).
EXPERIMENTS
In this section, we empirically evaluate the performance of the three different implementations of the physical model. We remark that the accuracy of the ELR system has already been evaluated in [11].
We note that updating the pairwise similarities of profiles is a major performance bottleneck: for every 1,000 profiles, around 20,000-100,000 similarity entries are updated. We therefore examine the time efficiency of accessing the similarity structure (Figure 3(b)).
All procedures in this work are implemented in Java, and the entity linking system runs on Ubuntu 18.04 machine(s). For single-machine tests, the experiments were run on an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz computer with 32 GB of memory. Where multi-node HBase clusters are required, an Intel(R) Core(TM) i7 CPU @ 2.30 GHz computer with 16 GB of memory is added. The versions of Postgres and HBase used are 9.6.12 and 1.4.8, respectively.
Efficiency of Similarity Storage Structure
We present our experimental results on the efficiency of accessing the similarity structure on different platforms with different implementations. We tested implementations in Postgres and HBase, with schemas summarized in Figure 5.
The operations on the similarity structure include search, insertion, update, and deletion. Since a similarity is stored for a pair, search must be supported from either ID; we created two indexes for this purpose in the relational (Postgres) option. In the HBase options, CF denotes a column family, a dictionary of key-value pairs whose keys are listed in the brackets. The 'id-pair' is constructed as ID1+"-"+ID2. In the case of HTable3 in Figure 5(c), the second id-pair is ID2+"-"+ID1.
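In code, the row-key construction is a simple concatenation; storing the reversed pair as well, as HTable3 does, allows the similarity of a pair to be looked up starting from either profile ID.

```java
// Row keys for the similarity table: HTable3 stores both orderings so that a
// prefix lookup on either ID finds all of its pairs.
String rowKey         = id1 + "-" + id2;
String reversedRowKey = id2 + "-" + id1;   // second entry, HTable3 only
```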
Performance of the physical model on small to large data. In this experiment, we examine the relative update-transaction (involving search, insert, and delete operations) time performance of the three physical model implementations (in Figure 5) over small to large datasets. The results for (a) small- to medium-sized data (i.e., 23K to 23M profile pairs) and (b) medium- to large-sized data (i.e., 23M to 468M profile pairs) are presented in Figure 4. For case (a), the HBase-1 and HBase-2 implementations run on a single-node cluster for a fair comparison with the Postgres implementation; for case (b), the HBase implementations run on a two-node cluster. The results show that, in all cases, the relational option (Postgres) is significantly slower than its HBase counterparts (the performance difference of the Postgres implementation between the AUTOCOMMIT ON and OFF settings is marginal; we report the best, i.e., AUTOCOMMIT OFF), and the HBase-1 implementation (i.e., option (b) in Figure 5) is the better of the two HBase options. It is also noteworthy that there is no significant performance difference between the HBase implementations on single-node and two-node clusters. Stress test of HBase implementations. In this experiment, we perform further tests to examine the insertion and update (replacement) operations of the best-performing models (i.e., the two HBase models). We consider three data sizes: 6, 30, and 54 billion profile pairs. As the results in Figure 4(c) show, the insertion operations are, as expected, more efficient than the update operations for both models over the three datasets. Moreover, both the insertion and update operations scale for both implementations even on a very small (two-node) cluster.
CONCLUSION
In this paper, we present the details of the entity linking system that powers our entity linking method, Certus. We describe the architecture of the system, the graph and data models, and the index structures used to support the multiplicity and provenance of attribute and relation values for effective entity linking and resolution. Further, we give the details of the physical model for storing the entity profiles graph, and discuss three different implementations of the structure for storing the similarity-edges. Because similarity-edges are updated frequently, we perform experiments to evaluate the time performance of accessing the similarity structure on two state-of-the-art database management systems (HBase and Postgres), demonstrating the relative performance of the three implementations. The empirical results show generally good performance for all implementation options. In particular, the HBase implementation options, even with just one- or two-node clusters, scale very well to huge data sizes.
Breeding Site Characteristics and Associated Factors of Culex pipiens Complex in Lhasa, Tibet, P. R. China
Characterizing the breeding sites of the Culex pipiens complex is of major importance for the control of West Nile disease and other related diseases. However, little information is available about the characteristics and associated factors of the breeding sites of the Cx. pipiens complex in Lhasa, a representative high-altitude region in Southwestern China. In this study, a cross-sectional study of the breeding site characteristics and associated factors of the Cx. pipiens complex was carried out in Lhasa, Tibet from 2013-2016. Chi-square analysis and binary logistic regression analysis were applied to identify the key factors associated with the presence of Cx. pipiens complex larvae. Using a standard dipping method, 184 water bodies were examined, and Cx. pipiens complex larvae were observed in 36 (19.57%) of them. There were significant differences in the composition of Cx. pipiens complex larvae with respect to breeding site stability (χ2 = 19.08, p = 0.00) and the presence or absence of predators (χ2 = 6.986, p = 0.008). Binary logistic regression analysis indicated that breeding site stability and the presence or absence of predators were significantly associated with the presence of Cx. pipiens complex larvae in Chengguan District, Lhasa. Relatively permanent water bodies, such as water bodies along river fringes, ponds, and puddles, and water bodies with no predators should receive more attention in future Cx. pipiens complex larvae abatement campaigns in Lhasa, China.
Lhasa, the capital of the Tibet Autonomous Region (TAR) of China, is an international tourist city with plateau and national characteristics. It is one of the highest cities in the world, with an elevation of about 3650 meters, and lies in the center of the Tibetan Plateau. Evidence has shown that mosquitoes of the Cx. pipiens complex have already settled in urban Lhasa, TAR, and that the local Cx. pipiens complex comprises the subspecies Cx. pipiens pipiens, Cx. pipiens pallens, Cx. pipiens quinquefasciatus, and hybrids of these subspecies. In addition, climate change may have played a role in the establishment of mosquitoes in Lhasa [9]. At present, transmission of diseases by the Cx. pipiens complex has not been reported in Lhasa. However, further warming raises the risk of outbreaks of mosquito-borne diseases in the future [10,11], which already constitutes a potential public health threat to local residents [12].
Each species of mosquito has its preferred breeding site for oviposition [13], depending on climate conditions, physical geography, and human activity [14]. Breeding sites can be natural or artificial, shaded or sunny, permanent or temporary, of various sizes, and found in running or stagnant water bodies, among others. Globally, many studies have shown that the breeding habits of the Cx. pipiens complex are similar, the main breeding sites being water bodies that are not seriously polluted, such as sinkholes, sewer ditches, cesspits with clear water, and stagnant water in low-lying land [15]. The Cx. pipiens complex adapts well to diverse breeding sites. In the Wroclaw area of Poland, evidence has shown that Cx. pipiens s.l. (L.) was well adapted to various breeding site types, including ditches, catch basins, flowerpots, and buckets, with diverse water quality [16].
Informed larval interventions that target the most productive breeding sites have enormous potential for combating Cx.-pipiens-complex-related diseases, especially at a regional scale. Though some studies have examined the types of Culex breeding sites and their characterization in high-altitude regions [17], little information is available concerning high-altitude regions in China. This poses a serious challenge for the prevention and control of potential mosquito-borne diseases in the future. Therefore, this study aims to explore the breeding site characteristics of the Cx. pipiens complex and the related environmental and physico-chemical parameters in urban Lhasa, to determine which breeding site characteristics best explain the presence of Cx. pipiens complex larvae. The results of this study provide a first-hand scientific assessment of Cx. pipiens complex breeding sites and carry implications for developing intervention measures to control mosquito-borne diseases in Lhasa in the future.
Study Area
Lhasa City is an international tourist city with plateau and ethnic characteristics and is the administrative capital of the Tibet Autonomous Region of the People's Republic of China, consisting of one municipal district (Chengguan District) and seven counties (Linzhou county, Dangxiong county, Nimu county, Qushui county, Duilongdeqing county, Dazi county, and Mozhugongka county).
This study was conducted in selected sites of Chengguan District, Lhasa from 2013-2016 ( Figure 1). Chengguan District was the only municipal district in Lhasa city during the study period, with a population of 279,074 in 2013. By 2012, Chengguan District covered an area of 523 square km, but the municipal district only accounts for about 10% of the total area of Chengguan District. These sites were selected mainly according to the geographic and socio-economic characteristics of urban Lhasa. The selected research sites from 2013-2016 mentioned above are summarized in Table 1.
Mosquito Larvae Sampling and Identification
Based on our previous research [9], the mosquito species of Chengguan District, Lhasa belong to subspecies of the Cx. pipiens complex. Evidence has shown that the breeding sites of mosquitoes of the Cx. pipiens complex mainly include sinkholes, sewer ditches, cesspits, low-lying land, and so on, which generally exist in outdoor environments [15]. Therefore, the selection of potential breeding sites in this study mainly focused on outdoor surroundings.
The larval sampling was conducted using a standard dipping method [18]. In the outdoor surroundings, all the potential breeding sites were located and inspected. When mosquito larvae were present, 10 dips were taken with a dipper in each breeding site. When a breeding site was too small to make 10 dips, water was dipped as many times as possible. In large water bodies, dipping was carried out 100 m apart [14].
To further identify the species of the collected mosquito larvae in Lhasa, the late instars of mosquito larvae were immediately preserved in 90% absolute ethanol and then taken to the laboratory of the National Institute for Communicable Disease Control and Prevention (ICDC), the Chinese Center for Disease Control and Prevention (China CDC) [19]. A multiplex PCR protocol was adopted to identify the subspecies of mosquitoes using polymorphisms in the second intron of the acetylcholinesterase-2 (ace-2) locus, developed by Smith and Fonseca [20]. For the polymerase chain reaction (PCR) identification, the method used in this study was the same as in our previous research [9].
Breeding Site Characterization
Prior to the survey of potential breeding sites, information about the research sites was recorded, including geographic location, population, economic development level, water bodies, parks, housing conditions, and land utilization. The larval breeding sites were characterized either visually or using hand-held equipment.
Some key breeding site characteristics, such as breeding site type, location of the water body, distance to the nearest household, perimeter of the water body, breeding site stability, substrate type, predators, vegetation, nature (artificial or natural), flowing or static water, shade, water depth, pH, water temperature, dissolved oxygen, turbidity, soluble solids, conductivity, salinity, and resistance, were recorded or measured in this study.
Identified water bodies were classified according to their nature as: river fringes (breeding sites formed along riverbanks when the water level drops), ponds (water area larger than 50 m²), puddles (water area less than 50 m²), irrigation or drainage ditches, and ground pools [21]. The perimeter of each breeding site was categorized by estimation as shorter than 1 m, 1-10 m, or longer than 10 m. Substrate types were classified into cement or concrete, soil, metal, and others. Distance to the nearest house was measured by GPS and classified as less than 10 m, 10-100 m, or greater than 100 m. Water depth was classified as greater than or equal to 0.5 m or less than 0.5 m. The stability of mosquito larval breeding sites was classified as either temporary or permanent. Temporary breeding sites held water for a short period of time (approximately two weeks after the rainy season ended) and stemmed mainly from rain showers; when rain ceased, these breeding sites dried out. The permanent breeding sites, on the other hand, held water for a longer period (approximately two to three months after the rain ended, or were fed by natural underground sources) and were hence more stable.
pH, water temperature, dissolved oxygen, turbidity, soluble solids, conductivity, salinity, and resistance were recorded with handheld equipment. pH was recorded with a Waterproof pHTestr 30 (OAKTON Instruments, Vernon Hills, IL, USA) [22]. Dissolved oxygen was recorded with a portable dissolved oxygen meter (SG6-FK2 CN, Mettler-Toledo, LLC, Columbus, OH, USA). Turbidity was measured with a turbidity meter. Soluble solids, conductivity, salinity, and resistance were recorded with a portable multiparameter tester (SG23-FK-CN, Mettler-Toledo, LLC, Columbus, OH, USA).
Temperature (°C) and relative humidity (%) data were obtained from the China Weather Website (http://www.weather.com.cn). During collections, ambient outdoor air temperature and relative humidity were recorded by a portable weather station (Davis Weather Link 6.0.3, Davis, CA, USA).
Ethics Statement
This study was approved by the Ethics Committee of China CDC (No. 201214). Ethical approvals were also obtained from the Lhasa Health Bureau, the Chengguan District CDC, and the Tibet CDC in the Tibet Autonomous Region.
Statistical Analysis
The chi-square test was applied to determine the importance of factors in explaining the presence or absence of Cx. pipiens complex larvae. Factors with statistical significance in the chi-square analysis were selected for further binary logistic regression analysis to calculate odds ratios (OR) and 95% Wald confidence intervals. Presence of larvae was coded as one, and absence of larvae as zero, in the logistic regression model. Statistical analysis was carried out using SPSS software (Version 19.0 for Windows, SPSS Inc., Chicago, IL, USA). p < 0.05 was considered statistically significant, and all tests were two-tailed.
The Potential Mosquito Breeding Sites in Lhasa, 2013-2016
In this study, 184 potential mosquito breeding sites were examined at the sampled locations from 2013-2016. Representative water bodies in Lhasa are shown in Figure 2.
The Positive Breeding Sites and Species of Mosquito Larvae in Lhasa
Among the potential mosquito breeding sites, Cx. pipiens complex larvae were observed in 37 water bodies, accounting for 20.1% of all water bodies (Table 1). In total, 180 Culex larvae collected in 2013-2016 from 36 of these water bodies were further identified to subspecies by multiplex PCR. All the identified mosquito larvae belonged to subspecies of the Cx. pipiens complex, including 63 pure mosquitoes (35%) and 117 hybrids (65%). The pure mosquitoes included 22 Cx. pipiens pipiens, 11 Cx. pipiens quinquefasciatus, and 30 Cx. pipiens pallens. Possible hybrids consisted of 80 Cx. pipiens pipiens × Cx. pipiens pallens, 26 Cx. pipiens pallens × Cx. pipiens quinquefasciatus, and 11 Cx. pipiens pipiens × Cx. pipiens quinquefasciatus. Sequence analysis confirmed the accuracy of the multiplex PCR in this study.
The Main Characteristics of 184 Potential Mosquito Breeding Sites in Lhasa
Among the 184 sites which contained water, 36 (19.57%) were productive for Cx. pipiens complex larvae, including 12 puddles, 6 sewer or tube wells, 5 ponds, 4 temporary ground pools, 3 river fringes, 3 irrigation or drainage ditches, and 3 other water bodies. The frequency and percentage composition of the main characteristics of the 184 potential mosquito breeding sites in Lhasa, Tibet are shown in Table 2.
Positive Breeding Sites and Key Factors Associated with the Presence of Mosquito Larvae
There were significant differences in the composition of Cx. pipiens complex larvae with respect to breeding site stability (χ2 = 19.08, p = 0.00) and the presence or absence of predators (χ2 = 6.986, p = 0.008, Table 3). Based on the results of the chi-square analysis, no significant differences in the presence of Cx. pipiens complex larvae were observed for breeding site type, distance to the nearest house, artificial or natural origin, flowing or static water, perimeter, pH, dissolved oxygen, soluble solids, salinity, substrate type, vegetation, shade, water depth, water temperature, turbidity, conductivity, or resistance.
The Findings of Binary Logistic Regression Analysis
To further exclude confounding factors from the chi-square results, binary logistic regression analysis was adopted. Breeding site stability and the presence or absence of predators were found to be the key factors determining the presence of mosquito larvae (Table 4).
Discussion
The precise identification of mosquito larvae species in Lhasa is of great importance not only for the study of mosquito ecology but also for the prevention and control of mosquito-borne diseases in the future. In this study, a multiplex PCR method revealed that all identified mosquito larvae belonged to subspecies of the Cx. pipiens complex. This is consistent with the results of previous reports in Tibet [9].
This study found that the larvae of the Cx. pipiens complex mainly occurred in river fringes, puddles, sewers, and temporary ground pools, although there were no significant differences in mosquito larvae among these water body types. Except for the breeding sites along the river fringe, these water bodies were mainly artificial and represented the major types of breeding sites in Lhasa. This points to potentially important infection locations for mosquito-borne diseases such as filariasis, West Nile disease, and other potential diseases in the future. The findings of this study are similar to those of Savage and Miller and of Hribar in the USA [15,23,24]. According to their findings, members of the Cx. pipiens complex readily breed in storm sewer catch basins, clean and polluted ground pools, ditches, animal waste lagoons, effluent from sewage treatment plants, and other sites that are slightly to very eutrophic or polluted with organic wastes. Generally, Cx. pipiens quinquefasciatus is associated with more eutrophic water than Cx. pipiens. Habitats along river fringes were more important during the dry season, when water levels fell and stagnant pools of water suitable for mosquito breeding were created [25].
In this study, among the 184 breeding sites which contained water, nearly one-fifth were productive for mosquito larvae. Based on the literature review and our previous findings, the subspecies of the Cx. pipiens complex established their populations in Lhasa only a short time ago. Furthermore, evidence has shown that adult mosquito density in Lhasa has been low in recent years [26]. This may be the main reason for the low proportion of larvae-positive water bodies examined during the study period.
Regarding the subspecies of the Cx. pipiens complex, we found that both pure Culex mosquitoes (35%) and their hybrids (65%) existed in the study sites of Lhasa city. These included pure Culex mosquitoes (Cx. pipiens pipiens, Cx. pipiens quinquefasciatus, and Cx. pipiens pallens) and their hybrids (Cx. pipiens pipiens × Cx. pipiens pallens, Cx. pipiens pallens × Cx. pipiens quinquefasciatus, and Cx. pipiens pipiens × Cx. pipiens quinquefasciatus). The findings mentioned above were identical to those in the previous research in this region [9,27].
In the present study, the stability of mosquito larval breeding sites was identified as the key factor associated with the presence of Cx. pipiens complex larvae in Lhasa. We found that the majority of Cx. pipiens complex larvae were observed in permanent breeding sites, including those along river fringes, ponds, and puddles. These results are similar to the findings of two studies in western Kenya. Fillinger et al. found that semi-permanent and permanent habitats were suitable for the proliferation of culicines and Anopheles gambiae sensu lato [28]. A study at Fort Ternan found that permanent habitats held water for a long period of time and that, after the rain, these habitats were preferred by culicine and anopheline mosquitoes [25].
Other factors potentially affecting the presence of Cx. pipiens complex larvae included distance to the nearest house, artificial or natural origin, flowing or static water, perimeter, pH, dissolved oxygen, soluble solids, salinity, substrate type, vegetation, shade, water depth, water temperature, turbidity [38], conductivity, and resistance [39]. However, there were no marked differences in the presence of Cx. pipiens complex larvae across these variables, and further study is needed.
To date, many studies have found that some biological and physicochemical characteristics of larval habitats, such as pH, water temperature, dissolved oxygen, turbidity, soluble solids, conductivity, salinity, and resistance, were correlated with the presence of mosquito larvae [14,17,32,[40][41][42]. One study in Egypt examined the effects of environmental parameters on larval population density [43], including pH, biological and chemical oxygen demands, daytime water temperature, plant growth, salinity, total organic matter, and concentrations of heavy metals. It found that Cx. pipiens larvae displayed high tolerance to elevated levels of heavy metals in sewage water and sewage or domestic waste. In addition, these breeding sites had compensatory effects, probably owing to their high nutrient levels. Muturi et al. found that Cx. quinquefasciatus was associated with turbid water in the USA [44]. However, no significant association was detected between the presence of Cx. pipiens complex larvae and habitat-related variables in this study; the potential reasons still need to be investigated in further studies.
This study mainly examined potential breeding sites in outdoor environments; however, discarded containers indoors and tap-water drain pits in courtyards pose a potential threat under suitable conditions. Further study could also focus on variations in the breeding habits of the Cx. pipiens complex along with the changes in the ecological environment caused by urbanization in Lhasa in recent years. Other factors, such as heavy metallic elements and their compounds [45], orthophosphates, biochemical oxygen demand (BOD), radioactive substances, the contents of minerals and their compounds [46], and some microbial contents, were not measured in the current study, and similar studies could focus on these.
Furthermore, we cannot ignore possible errors inherent in identifying subspecies of the Cx. pipiens complex by the multiplex PCR method itself. Some research has used DNA barcoding [47] and protein profiling [48] to distinguish the subspecies of the complex. Since the primers adopted in this study comprised only three forward primers (ACEquin, ACEpall, and ACEpip) and one reverse primer (B1246s), and given the limitations of sampling, Cx. pipiens molestus was not detected among the larvae in Lhasa in this study [49].
Conclusions
The present study found that breeding site stability and the presence or absence of predators were two key factors significantly related to the presence of Cx. pipiens complex larvae. Mosquito larvae of the subspecies of the Cx. pipiens complex mainly bred in permanent water bodies, and the absence of predators may increase the probability of finding them. Therefore, permanent water bodies with no predators should be strongly emphasized in future Cx. pipiens complex control campaigns in Lhasa.
Study on Data Selection Method of Historical Operation Data for Large Scale Power System
A data selection method based on similarity measurement and support vector machine (SVM) is proposed. First, the critical clearing time (CCT) is used as the class label, and features that are strongly correlated with the class label are extracted. Second, an SVM classifier is trained on the initial training instances with the extracted features, and misclassified instances are removed. Third, the concept of the most similar instance pair is proposed, in which the two instances with the minimum distance are selected and the qualifying noisy and redundant instances are then removed. The proposed method, which can prune data in the horizontal and vertical directions simultaneously, is tested on online historical data of an actual large-scale power system. Experimental results demonstrate that more than 70% of the features and 30% of the instances are removed, while accuracy and storage reduction are also improved. The method performs well and can be readily applied to large-scale power systems.
Introduction
With the rapid development of hybrid DC and AC grids, the power system has become larger and more complicated, which profoundly affects the operational stability of the power grid [1]. It is necessary to continuously expand the performance of online security analysis in order to meet the requirements of dispatching operations in large-scale power systems. Recently, big data technology has developed rapidly, providing more means to solve problems in many technical fields [2]-[4]. At present, a large amount of dispatching operation data has accumulated day by day in dispatching institutions. For example, the calculated data and result data produced by online security analysis can amount to 1 GB every 15 minutes. Such huge amounts of data must be dealt with in the learning process, which costs massive computational resources and leads to long runtimes. Thus, data selection can be applied to reduce the data to a manageable size, removing useless, erroneous, or noisy data before applying learning algorithms [5]-[8]. Data selection is one of the important preprocessing steps in many data mining tasks [9]. This step has two aims: (1) the data size can be reduced, shortening training time and improving learning efficiency; and (2) noisy or erroneous data can be removed, improving accuracy in classification problems. There are two common approaches to data selection: feature selection and instance selection. Feature selection reduces the number of power grid characteristics along the longitudinal dimension of the dataset, while instance selection reduces the number of instances along the latitudinal dimension, choosing a subset of the total available data that achieves the original purpose of the data mining application as if the whole data had been used. In recent years, several approaches for data selection have been proposed in the power system area. Most methods aim to preserve the boundaries between different classes in the dataset, because border instances provide relevant information for discriminating between classes; however, this destroys the structure of the dataset and hampers further analysis [10]-[11]. According to [12], [13], SVMs (support vector machines) are commonly used for classification with good performance. A segmentation method based on a two-level SVM is proposed in [14], which reduces the negative effects of misclassified instances, but most useless and redundant instances are preserved, giving low reduction efficiency. Hybrid methods (e.g., RIPE) use ENN (Edited Nearest Neighbour) [15] to remove noisy border points, thereby smoothing the boundary, and use the concept of the nearest similar pair within the same class to remove redundant instances. Such algorithms make a better compromise between classification accuracy and storage compression ratio, but ignore the problem of nearest similar pairs belonging to different classes. This paper is organized as follows: Section 2 presents the method of feature selection based on the correlation coefficient; Section 3 presents the method of instance selection combining repetitive screening, SVM, and similarity measurement; Section 4 presents the experimental results, including the evaluation of the method and a comparison of three similarity measurements; finally, Section 5 presents the main conclusions of our work.
Feature selection based on the correlation coefficient
When selecting the features to be processed, it is necessary to select the features that are most related to the class label. To find the features highly related to the class label, the correlation coefficient between them is calculated. In probability theory and statistics, the correlation coefficient reflects the strength and direction of the linear relationship between two variables; the most commonly used is the Pearson correlation coefficient, which is defined as follows.
Cor(A, B) = Σ_{i=1..N} (a_i − Ā)(b_i − B̄) / ( √(Σ_{i=1..N} (a_i − Ā)²) · √(Σ_{i=1..N} (b_i − B̄)²) ), where A and B are two variables, N is the number of elements in A (or B), a_i is a value in A, b_i is a value in B, Ā is the average of A, and B̄ is the average of B. The range of the correlation coefficient is [-1, +1]. A correlation coefficient greater than 0 indicates that the two variables are positively correlated, with the correlation stronger the closer it is to +1; conversely, a coefficient less than 0 indicates negative correlation, stronger the closer it is to -1; and 0 indicates that the two variables are uncorrelated. The correlation degree is determined by the ranges of the correlation coefficient shown in Table 1.
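A direct implementation of this coefficient is straightforward; the sketch below assumes the two series have equal length and non-zero variance.

```java
// Pearson correlation: covariance of A and B divided by the product of their
// standard deviations; the result lies in [-1, +1].
static double pearson(double[] a, double[] b) {
    int n = a.length;
    double meanA = 0, meanB = 0;
    for (int i = 0; i < n; i++) { meanA += a[i]; meanB += b[i]; }
    meanA /= n;
    meanB /= n;
    double cov = 0, varA = 0, varB = 0;
    for (int i = 0; i < n; i++) {
        double da = a[i] - meanA, db = b[i] - meanB;
        cov  += da * db;
        varA += da * da;
        varB += db * db;
    }
    return cov / Math.sqrt(varA * varB);
}
```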
Instance selection based on SVM and similarity measurement
One instance selection method alone often has limitations; combining several methods exploits their complementary advantages and compensates for each method's shortcomings. The instance selection method proposed in this paper combines SVM and similarity measurement, removing both noisy and useless instances. The method is divided into three steps: repetitive screening, selection based on SVM, and selection based on similarity measurement.
Repetitive screening
The operation of the power system is periodic. This periodicity is relatively stable over short periods, but as the time scale grows, differences in the operation mode of the power system inevitably increase. After feature selection, the features of the initial dataset are greatly reduced, so differences between instances may disappear and two or more instances may end up with identical characteristics. Thus, repetitive instances are dealt with first: one of each group of identical instances is kept, and the remaining instances are deleted.
Instance selection based on SVM
The support vector machine (SVM) is a widely used tool for classification problems. It trains a classifier by finding an optimal separating hyperplane that maximizes the margin between two classes of data in the kernel-induced feature space. SVM has excellent generalization capability and can be extended to nonlinear problems by the kernel trick [13]-[14]. Instances are trained by the SVM algorithm with the kernelized decision function f(x) = sgn( Σ_i α_i y_i K(x_i, x) + b ), where sgn is the sign function that determines the classification of the instance (for example, a negative or positive result represents one class or the other), K is the kernel function, α_i and b are parameters obtained in the training process, x_i is a support vector, y_i is the class label, and x is the instance to be classified.
The Gaussian kernel is commonly used: K(x_i, x) = exp(−γ ||x_i − x||²), where the parameter γ > 0. In addition to the kernel parameters, a cost parameter c, a positive constant, must be specified in the training process. The cost parameter, which denotes the penalty on slack variables, can enhance the generalization ability of the SVM algorithm. It should be noted that the parameters of the decision function take default values during model training when mature data analysis tools such as the R programming language or MATLAB are used: generally, γ is set to the reciprocal of the number of features, and c is set to 1 in the SVM decision function.
The steps of instance selection based on SVM are described below.
Step 1: The labelled training set S is trained by SVM, finding an optimal separating hyperplane and learning a classification model. Step 2: S, used as the test set, is classified by the classification model learned in Step 1, obtaining the classification result R. Step 3: If an instance in the test set is assigned by the SVM to a class differing from that given in the training data, the instance is removed. Step 4: Steps 1-3 are repeated until there are no incorrect classifications. The method is an iterative instance-removal process; it removes noise instances and makes the decision boundary clearer.
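A minimal sketch of this loop is given below; SvmModel, Instance, and their methods are assumed interfaces standing in for whatever SVM library is used, not a specific API.

```java
import java.util.List;

// Iteratively retrain and drop misclassified training instances until the
// classifier agrees with every remaining label (Steps 1-4).
static void svmFilter(List<Instance> trainingSet) {
    boolean removedAny = true;
    while (removedAny) {
        SvmModel model = SvmModel.train(trainingSet);        // Step 1
        removedAny = trainingSet.removeIf(
                s -> model.predict(s) != s.label());         // Steps 2-3
    }                                                        // Step 4: repeat
}
```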
Instance selection based on similarity measurement
The method in the previous subsection can effectively remove noise instances near the boundary, but it retains all the internal instances, leading to an unsatisfactory storage compression ratio. The instance selection method based on similarity measurement is described in this section.
The Euclidean distance between two instances a and b is defined as d(a, b) = ||a − b|| = √( Σ_i (a_i − b_i)² ), where ||a − b|| denotes the Euclidean norm of their difference.
The correlation distance is defined as d_cor(A, B) = 1 − Cor(A, B), where Cor(A, B) is the correlation coefficient defined in Section 2.
The steps of instance selection based on similarity measurement are described below. Step 1: For each instance x in dataset S, its most similar instance x′ is calculated and marked; this step is repeated until the most similar instance of every instance in S has been calculated and marked. Step 2: All the most similar instance pairs are extracted after traversing each instance x and its most similar instance x′. Step 3: The classes of the two instances in each most similar pair are compared. If they belong to the same class, either of the two instances is deleted; if not, both instances are deleted. The method removes both instances close to the boundary and internal instances far from the boundary; it can thus remove both noise instances and redundant instances.
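The following is a simplified sketch of these three steps; Instance and distance are assumed types and functions (the distance being any of the measures above), and the pair handling is condensed for brevity.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Step 1: find each instance's most similar neighbour; Steps 2-3: for every
// most similar pair, drop one instance if the labels agree, both if they differ.
static void mostSimilarPairFilter(List<Instance> data) {
    Set<Instance> toDelete = new HashSet<>();
    for (Instance x : data) {
        Instance nearest = null;
        double best = Double.MAX_VALUE;
        for (Instance y : data) {
            if (y == x) continue;
            double d = distance(x, y);
            if (d < best) { best = d; nearest = y; }
        }
        if (nearest == null) continue;
        if (x.label() == nearest.label()) {
            toDelete.add(nearest);              // same class: keep one of the pair
        } else {
            toDelete.add(x);
            toDelete.add(nearest);              // different classes: drop both
        }
    }
    data.removeAll(toDelete);
}
```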
Initial Dataset
State variables of the power system were selected as the analysis attributes in the dataset, including the electrical variables of equipment in the static state and statistics of areas, power plants, and stations. Selecting variables under the static state shortens the time needed for stability judgement; the speed of stability judgement would be reduced if transient variables were selected, since these require a period of transient stability simulation. There is no definite conclusion about the relation between static variables and power system stability, so as many static variables as possible were chosen under the constraint of computing resources. The static variables and statistics are shown in Table 2. The critical clearing time (CCT), which represents the boundary between stable and unstable system states, was selected as the class label for analysing the dataset. It characterizes the degree of stability when a specified fault occurs in the power system: the longer the critical clearing time, the smaller the impact on the power system and the more stable the system. The specified fault will cause system instability if the critical clearing time is less than the normal protection operation time. Historical data from online security and stability analysis in June were selected as the instances of the initial dataset, which ultimately contained 2484 instances and 9815 attributes. Information about the initial dataset is shown in Table 3. To verify the validity of the method, the dataset was divided into a training set and a test set before applying the algorithm: 80% of the instances in each class were randomly selected as the training set, and the remaining instances formed the test set. Ten different random seeds were used, and the averaged results are reported as the final result of this method.
Results of Feature Selection
Features correlated with the CCT beyond the middle degree were selected for subsequent analysis by the method in Section 2, and features weakly correlated or uncorrelated with the CCT were removed. Then, correlation coefficient thresholds of 0.4, 0.5, and 0.6 were compared with respect to the number of features retained and the accuracy after feature selection, as shown in Figure 1. Figure 1 shows that as the correlation coefficient threshold increases, the number of features drops sharply, while the change in classification accuracy is more complex. Thus, 0.5 was used as the threshold to select features, and 574 features were ultimately selected for further analysis.
Results of Instance Selection
The instance selection method proposed in this paper was applied, and Table 4 shows the results. Besides accuracy and reduction, effectiveness was introduced in this paper as a comprehensive assessment of instance selection performance; we define effectiveness as accuracy multiplied by reduction. In Table 4, No. denotes the ten random seeds; Accuracy is the classification result on the dataset without instance selection; Accuracy1 is the classification result after applying the instance selection based on SVM, with Reduction1 its selection effect; and Accuracy2 is the classification result after applying the combined instance selection based on SVM and similarity measurement, with Reduction2 its selection effect. Table 4 shows that instance selection based on SVM performs well in terms of accuracy, and the combined instance selection based on SVM and similarity measurement performs well in terms of reduction. A comprehensive comparison of the effectiveness of the two methods shows that the second is better than the first, proving the validity and efficiency of the method proposed in this paper. To find a better similarity measurement for instance selection, the performance of three measures (Euclidean distance, Hausdorff distance, and correlation distance) was compared with respect to accuracy, reduction, and effectiveness. Figures 2, 3, and 4 show the resulting indexes of each measure. Considering accuracy, although there is little difference among the three measures, Hausdorff distance and correlation distance perform slightly better than Euclidean distance. Considering reduction, however, Euclidean distance is much better than the others. Thus, Figure 4 shows that, after a comprehensive comparison of effectiveness, Euclidean distance is the best suited to instance selection.
Conclusions
In this paper, we have presented a data selection method based on similarity measurement and SVM. The methodology efficiently handles both feature selection and instance selection. The experiments have shown that the method removes internal redundant instances and noisy instances, both preserving the distribution of the dataset and meeting the requirements of data selection. The comparison of three similarity measurements shows that Euclidean distance is well suited to instance selection. In conclusion, the proposed data selection method is valid and effective, with the additional advantage of being able to scale up to large datasets.
238988306 | pes2o/s2orc | v3-fos-license | Electrophysiological correlates of interference control in the modified emotional Stroop task with emotional stimuli differing in valence, arousal, and subjective significance
The role of emotional factors in maintaining cognitive control is one of the most intriguing issues in understanding emotion-cognition interactions. In the current experiment, we assessed the role of emotional factors (valence, arousal, and subjective significance) in perceptual and conceptual inhibition processes. We operationalised both processes with classical cognitive paradigms, i.e., the flanker task and the emotional Stroop task merged into a single experimental procedure. The procedure was based on the presentation of emotional words displayed in four different font colours, flanked by the same emotional word printed in the same or a different font colour. We expected to find distinct effects of both types of interference: earlier for perceptual and later for emotional interference. We also predicted that an increased arousal level would disturb the effectiveness of inhibitory control, while an increased level of subjective significance would improve this process. As we used orthogonal manipulations of emotional factors, our study allowed us, for the first time, to assess interactions among emotional factors and between types of interference. On the behavioural level, we found main effects of flanker congruency as well as effects of emotionality. On the electrophysiological level, we found effects for the EPN, P2, and N450 components of ERPs. The exploratory analysis revealed that effects due to perceptual interference appeared earlier than the effects of emotional interference, but they lasted for an extended period of processing, causing perceptual and emotional interference to partially overlap. Finally, in terms of emotional interference, we showed an effect of subjective significance: a reduction of the interference cost in the N450 for highly subjectively significant stimuli. This study is the first to allow the investigation of two different types of interference in a single experiment, and it provides insight into the role of emotion in cognitive control.
Introduction
Each day, we are flooded by an 'informational tsunami' and forced to process a multitude of stimuli. However, our cognitive capabilities have their limits [1]. To achieve our goals and focus on important stimuli, we need a special mechanism: cognitive control. It is a mental ability composed of distinct and autonomous, yet partially correlated, subcomponents [2,3]. Cognitive control helps us to concentrate on relevant incentives while ignoring non-significant ones. A recent review of brain imaging data suggests that the essential factors of cognitive control are shifting, the ability to flexibly change between task-sets or goals; updating, the monitoring and changing of working memory contents; and inhibition, which helps to suppress automatic or prepotent responses [2]. Among the critical inhibitory processes, interference control deserves particular attention. Generally speaking, this mechanism is responsible for suppressing stimuli that entail a competing response and for suppressing distractions that can slow down ongoing working memory operations.
In the current study, we distinguished between two types of interference control measured in different paradigms [4]. Interference control may be perceptual, i.e., based on an object's perceived characteristics (e.g., the physical similarity of letter shapes). This type of interference is present, for example, in the flanker task [5]. However, interference control may also be associated with a stimulus's meaning (e.g., a word eliciting high arousal), in which case it is inevitably bound up with emotional functioning. This type of interference control can be measured, for example, in the emotional Stroop task (EST) [6]. In the current study, we wanted to examine whether emotional factors such as valence, arousal (the physical form of activation associated with emotions), and subjective significance (the reflective form of activation related to emotions) influence interference control at the perceptual or conceptual level [7]. Emotional interference is in this case indistinguishable from conceptual interference, as the words used as experimental stimuli were loaded with a particular emotional charge. Event-related potential (ERP) measures are suitable for this aim because they allow careful investigation of changes associated with specific emotional factors.
Emotional factors influencing interference control
A stimulus's emotional load can be described along several dimensions, among which valence and arousal seem to be two critical and separate components [8]. Valence refers to the experienced pleasantness vs. unpleasantness of a stimulus, resulting in an approach or avoidance reaction, whereas arousal indicates the level of bodily activation induced by stimulus exposure [9][10][11] and the appraisal of its biological aspects [12]. Interestingly, both components can be treated as orthogonal elements constituting an emotional experience [8,13]. Therefore, manipulating one factor while controlling the other is possible and can be used in cognitive control tasks (e.g. [14,15]). Valence describes the evaluation of a stimulus as negative, positive, or neutral; in other words, the unpleasantness vs. pleasantness of the emotional reaction. Research has shown that emotionally loaded stimuli are processed faster than neutral ones, as they engage more attentional resources [16]. Additionally, human perception is biased toward detecting possibly dangerous stimuli; negatively loaded stimuli therefore capture more attention than positive ones, as this is crucial for survival [17,18]. Arousal manifests the level of energy induced by an emotional reaction and is related to the automatic, experiential system in Epstein's typology [19]. Arousal tends to be a more biologically driven, survival-oriented reaction. Therefore, highly arousing stimuli can impair complex cognitive processing, such as cognitive control [20].
However, the relationship between valence and arousal should be considered more carefully. Data suggest that the reported emotional load of words on the arousal and valence scales is related in a manner best described by a quadratic function [20,21]. Stimuli characterised as highly positive or highly negative are perceived as more arousing than neutrally valenced ones (e.g. [22,23]). These findings question the assumption of an orthogonal relationship between the two components. It therefore seems crucial to select verbal materials with caution, respecting the association between both factors and adjusting the arousal level to ensure uniform comparisons, such as highly arousing negative stimuli compared to highly arousing positive stimuli [24].
Another component describing the emotional experience is subjective significance, comparable to arousal in that it is a type of activation, but one resulting from reflective processes [20,25]. Subjective significance refers to the importance assigned to the stimulus from the perspective of the individual's goals and needs. It may evoke more demanding and energy-consuming systematic cognitive processing [25]. Subjective significance is a phenomenon similar to will-power [26] or the concept of salience [27]. Just as arousal activates the experiential mind system, subjective significance is associated with the second, rational (or reflective) system [19,25].
Studies of the influence of valence and arousal in cognitive control tasks (like the flanker task or the EST) have shown that emotionally loaded words tend to provoke longer response times than neutral ones [6,28]. First, in the classical flanker task, eliciting a positive mood produced greater interference and slower reaction times than negative or neutral mood conditions [29]. Additionally, some research has used an emotionally modified flanker task in which participants were asked to distinguish between flanker and target stimuli with different arousal levels. It was found that reaction times were faster in the congruent condition (flanker and target stimuli expressing the same affect) than in incongruent trials (see for example [30][31][32]). Past results suggest that valence provokes interference in this type of task. The influence of valence and arousal on cognitive control was observed in Van Steenbergen, Band, and Hommel's experiment [33], in which different moods were induced by listening to music and evoking positive memories. Moreover, valence significantly affects cognitive control in tasks such as the EST, the flanker task, and the Simon task. In the case of arousal, it was observed that highly arousing verbal stimuli produce emotional interference independently of the valence effect [13]. Additionally, research has shown that arousal induced by physical activity supports cognitive processing in congruent trials but undermines it in incongruent trials [34]. The influence of arousal on cognitive control was also demonstrated in a study using the recall of emotionally loaded words [35,36]. At the same time, data illustrating the effect of subjective significance on cognitive control are still missing [29].
The N200 component is commonly regarded as an indicator of visual conflict in the flanker task [40,42,43]. Others argue that in the early period of 200-350 milliseconds after stimulus onset, another component may index the visual processing of the task, namely the P2 (or sometimes P200) component [44]. P2 is a positive potential observed from the frontal to parietal areas of the scalp. Differences between the processing of congruent and incongruent trials have frequently been reported within this component [42,[44][45][46][47]. The results of one experiment using the flanker task showed that incongruent trials evoked larger amplitudes than congruent ones in the P2 component, which in that study was identified as the signal recorded over fronto-central areas in the 150-250 ms period after stimulus onset [48].
The late positive complex (LPC) has also been observed to vary according to flanker congruency. The LPC is a positive-going wave observed over the parietal parts of the scalp from about 400 milliseconds after stimulus onset. This component was originally tied to memory-related processes [49,50], but effects within this component in the flanker task have been frequently reported [51,52]. Incongruent trials have been reported to evoke more positive potentials than congruent ones [53]. Given the late timing of the differences between incongruent and congruent trials in the LPC and the component's association with memory processes, it could be argued that the LPC effects observed in the flanker task are caused by the decision-making process, which requires comparing the viewed stimuli with the previously memorised task rules [54,55].
Manipulating the flanker task's affective properties has frequently been reported to affect behavioural performance [29,36,38,56,57]. Where emotional valence is concerned, negative affect has been reported to decrease accuracy in the task [56,58], while positive affect has been reported to slow down reaction times [29,57]. High arousal has been reported to decrease accuracy in the flanker task [56,59]. However, some studies report that both high and low arousal can cause interference in processing the task [39]. Some studies report that high arousal may increase the speed of solving the task [59-61], while others report that both high and low arousal can slow down processing [39,62]. Subjective significance is an emotional dimension that has not yet been extensively explored. However, a study using emotional stimuli significant to the participants showed that such stimuli could increase interference in the flanker task [63].
Some authors suggest a difference between the influence of task-relevant vs. task-irrelevant emotional stimuli on the flanker task [64]. The most common way of using task-relevant stimuli is the emotional flanker task [30,65], which uses emotional stimuli instead of the arrows or geometric shapes of the classic flanker task [5]. The stimuli used in the emotional flanker task can be pictures [30,65] or emotional words [66][67][68]. Analogously to the classic flanker task, negative stimuli have been reported to decrease performance in the task [68].
Emotional factors also influence ERP components during the processing of the flanker task or emotional flanker task. The valence dimension has been reported to differentiate amplitudes within the P200 component; namely, positive stimuli evoke more positive potentials than negative ones [69]. A context of safety, which can be connected to positive valence and relatively low arousal, was also reported to evoke more positive P200 amplitudes in the flanker task than a threat context, which can be interpreted as negative and highly arousing [70]. Stimuli presenting people of the same race as the participant, which can be interpreted as more positive and more significant, were also reported to evoke more positive amplitudes than stimuli presenting people of other races [71]. Valence has also been reported to influence the early posterior negativity (EPN) component, a negative-going wave observed in the posterior parts of the scalp at around 200-350 ms after stimulus onset; specifically, negative stimuli have been reported to evoke more negative potentials than neutral ones [72]. Also, in the LPC component, positively valenced stimuli have been reported to evoke more positive potentials than negative ones [69].
Emotional Stroop task.
The EST is a modification of the classical Stroop task [73], which measures interference control [4]. As in the classical Stroop task, the participant's primary task in EST is to name the font color of the presented word. The difference is the nature of the interference, which in this case is caused by the emotional load of the word [4,74]. The target words in the EST are carefully chosen to differ only in emotional factors (e.g., valence, arousal, or subjective significance) while being matched concerning other relevant properties (e.g., frequency, length, or grammatical class). This allows us to infer that the behavioral slowdown and ERP effects observed reflect the automatic attraction of attention caused by the word's emotional load [24,75].
The EST is a useful procedure in studies involving participants with clinical disorders such as depression, anxiety, and PTSD [76]. The procedure is based on recognising the emotional traits of a word, which causes interference in processing. The emotional charge causing the interference may be amplified in clinical groups when the meaning of the word is related to objects evoking anxiety, stress, or depressive states [13,77,78]. EST results may differ between typical participants and trauma survivors even if the trauma itself did not cause clinical disorders, as was observed in the difference between adults raised in biological families and those raised in orphanages [79]. Some researchers even suggest that, with a particular choice of words, the EST may be used to predict the risk of self-harm and suicide [80].
Past ERP studies investigating the effects of the emotionality of words in EST focused primarily on valence. It is first worth noting that emotional words are processed differently from other, more salient emotional stimuli. While processing emotional scenes and faces modulates very early ERP components, emotional word processing has a much more pronounced effect on later ERP components associated with semantic analysis [81]. As one comprehensive review of EEG and fMRI data [82] reports, most cited studies on emotional word processing show emotional effects (e.g., more negative amplitudes for negative relative to neutral words) starting at the 200-300 ms time-window.
The first effect, reported in studies investigating the effect of valence on ERPs, is the increase in occipitotemporal negativity for both negatively and positively valenced words relative to neutral words called the early posterior negativity (EPN) effect. This effect has been reported in silent word reading [83][84][85] and lexical decisions [84,85].
Valence effects on EPN have been reported to start as early as 100 ms (e.g., [86]). However, as such early potentials are mainly influenced by orthographic features rather than semantic analysis [87], these findings may only reflect conditioned associations with the visual characteristics of valenced, high-frequency words [88], or represent a spillover effect of the block design, in which past conditions influence early potentials on the next stimulus [89].
The P2 component presents a less regular pattern of valence effects, as it has been reported to be sensitive to negative words only [90], positive words only [91], or both positive and negative words [92,93], generally producing a more positive amplitude relative to neutral words.
The N450 is the first component specific to the EST and is associated with cognitive control. It is observed in a 350-500 ms time window in fronto-central areas, sometimes taking the shape of global negativity [94,95]. It was found to be sensitive to the valence of presented words, enhancing negative amplitude for negative words [94,95], and correlating to a more general increase in amplitude while experiencing emotional interference [96][97][98]. Both P2 and N450 components tend to reaffirm and resemble the behavioral results in EST, presenting a more sensitive measure of inhibitory control [98,99].
The effects of valence on the LPC are somewhat inconsistent. Studies of silent word reading and lexical decisions report either that the processing of negative words evokes more positive LPC amplitudes than neutral words [7,83,88,93,100,101], or just the opposite pattern of results [84,85,102,103]. However, as the LPC is claimed to be a manifestation of later stages of semantic processing [104,105] associated with attention and conscious recognition of the stimulus [106], one regularity seems to find more and more support in the literature: the LPC becomes more emotionally modulated as the level of attention to the valence of the word increases [107,108]. Specifically, González-Villar et al. [107] conducted a study using the EST and a task in which participants had to judge the emotionality of words, and found that the LPC was modulated by valence only in the latter. This finding was later replicated by Delaney-Busch, Wilkie, and Kuperberg [109] under different task demands. Similarly, our previous EST study [98] found no valence effect on the LPC.
The literature on the effects of arousal on LPC is less robust than that of valence. The study mentioned above by Delaney-Busch et al. [109] found that while valence did not modulate the LPC in a task in which participants judged whether a word denotes an animal, an increase in the amplitude of the LPC was observed for high arousal words, relative to words with a low level of arousal.
As for earlier components, our previous ERP study in the EST paradigm [98] found that the P2 component was modulated by arousal, exhibiting a more positive amplitude for high arousal words when compared to moderate arousal words. Similar previous results include those of Thomas et al. [99], who found that threat-related words elicit more positive amplitude P2 responses than neutral words.
Subjective significance, a factor that has only recently begun to be examined experimentally, most lacks systematic empirical study. Research has already indicated its influence on ERP components. For example, Herbert et al. [110] showed that the self-referentiality of emotional stimuli correlated with greater LPC amplitudes. Recently, we demonstrated [111] that subjective significance influences even earlier components, such as the FN400: in a lexical decision task, amplitudes for highly significant words were more positive than for words of low subjective significance while controlling for arousal and valence. Because such a task involves deeper processing of words than the EST and, as discussed, the strength of ERP effects tends to track the depth of processing and attention to emotional factors, it remains an empirical question whether subjective significance modulates components related to inhibitory control in the EST. Preliminary behavioural results showed that subjective significance shaped reaction times [25]. Even more promisingly, in an EST experiment with a much shorter word list and not examining valence effects [98], we found that subjective significance modulated the N450 and even the P2 component. The P2 amplitude was more positive for moderately significant words relative to highly significant words. The N450 amplitude was more positive for highly significant words as compared to minimally significant words.
Merging the flanker task with the emotional Stroop task.
Many procedures have been proposed that merge the emotional flanker task with the EST [33,112]. In different variations of this task, the emotional word, written in a specific colour, is either placed next to the flanker word [113] or surrounded by flanker words [113][114][115][116][117]. Some researchers also use separate screens, including and excluding flankers with a short break between them [117]. The words used as flankers could be the names of the colours [33,113] or the same words as the target written in either the same colour (congruent condition) or a different colour (incongruent condition) [113,[115][116][117]. No matter the exact structure of the displayed trials, the participant's task is always to name the colour of the target word, as in the EST.
Combining the two tasks has been confirmed as a procedure revealing the influence of emotional charge on cognitive control. It has been reported that emotions speed up the processing of incongruent trials, both negative [114,118] and positive ones [116,119]; some suggest that negative emotions may be more impactful [33,117]. These results suggest that both valence and arousal may speed up processing in this kind of task. As for the ERP results, the N200 component, which usually shows differences in the flanker task, also turned out to be modulated by emotions in the combination of flanker and Stroop tasks [43,118,119]; however, some studies also observed differences in the N450 component [113] typical of the EST. We hypothesise that, in this combination of two tasks, effects related to processing both the flanker task and the EST can be expected, as the task includes incongruence at the purely visual level and emotional stimulation at the level of word meaning.
Aim and hypothesis
In the current experiment, we aimed to investigate the electrophysiological correlates of two different types of cognitive control that can be measured in the combined flanker and Stroop tasks: perceptual and conceptual (meaning-associated) interference control [7]. In the flanker task, interference is based on the perceptual features of the stimuli; in the EST, by contrast, interference is based on the stimuli's meaning and emotional connotations. A high emotional load is thought to capture attention for stimulus processing, making it harder to respond correctly in an untrained task (naming the colour of the text). On the behavioural level, we expected both flanker incongruence and the affective features of words to impose interference costs and thus lengthen reaction times. Considering emotional factors, we predicted that increasing the arousal level would make it increasingly difficult to maintain interference control (hence longer reaction latencies would be observed). We also predicted that increasing the level of subjective significance, a factor introduced as an activation mechanism for reflective and effortful processing, would increase the effectiveness of interference control, thus reducing reaction times.
The measurement of ERP correlates of cognitive processing gives us a unique chance to investigate, precisely in time, the course of changes evoked by the different types of cognitive interference (perceptual and conceptual). Since the analysed issue is relatively new in the literature, we decided to analyse the data using two different approaches: exploratory and classical component-based. In the exploratory approach, we generally expected the effect of flanker congruency (inhibition of perceptual features) to appear earlier than the effects of emotional factors (interference from conceptual meaning features). The amplitude in conditions characterised by greater interference (e.g., incongruent flanking stimuli) was expected to be augmented (i.e., more negative or more positive, depending on the amplitude's general tendency) in comparison to conditions characterised by lower interference (e.g., congruent).
Considering the classical component-based approach, we expected to find the flanker congruency effect in the EPN, i.e., more negative amplitudes for incongruent stimuli than for congruent stimuli. We also expected to find arousal and subjective significance effects in the P2 and N450 components. Arousal was expected to impair the effectiveness of inhibitory control, with augmented amplitudes (more positive in P2 and more negative in N450) for high arousal words compared to low arousal words. Subjective significance was expected to improve the effectiveness of inhibitory control; thus, amplitudes were expected to be less augmented (less positive in P2 and less negative in N450) for conditions with highly subjectively significant words than for those with low subjectively significant words. Finally, we expected to find a valence effect in the LPC component amplitude, i.e., differentiation between the negative and positive valence categories of words, since the LPC indexes word meaning processing and the discrimination of different categories [82].
We also expected interactions between the manipulated variables. However, because the current experiment was the first in the field allowing us to investigate this, we had no specific expectations, and we have treated this part of our work as pure exploration.
Participants
The participants were recruited from various faculties of Warsaw universities. To be included in the experimental group, they had to be right-handed, native Polish speakers without chronic clinical issues that might affect EEG recording directly or through medication (e.g., neurological and mental disorders). The participants had vision that was intact or corrected to normal with glasses. They received a small compensation for taking part in the experiment. The entire experimental group consisted of 36 participants (18 men and 18 women), from 19 to 28 years old (M = 21.78; SD = 2.40). After collecting the data, participants were excluded from the EEG analyses if, due to excessive artefacts or extremely short or long response times, they had more than 50% of trials rejected. Effectively, 34 participants were included in the further analysis. We estimated that the expected effect sizes could range from η² = .1 to η² = .21 for simple effects and from η² = .1 to η² = .28 for interaction effects. We conducted an a priori power analysis using G*Power software (Faul et al., 2009), estimating that at least 18 participants would be needed to achieve high statistical power (at least .8) for the interaction of two factors. The design of the study, with a large number of repeated measures for each emotional factor, ensures high statistical power, while the larger number of participants (initially double the estimated number) allows us to observe interactions of more than two factors; such interactions, however, carry smaller statistical power.
We did not collect any personal data that would allow for the identification of the participants. The participants provided written informed consent to participate in the experiment, which was documented and stored in the research diary. The bioethical committee of the Faculty of Psychology at the University of Warsaw approved the design, experimental conditions, and procedure. All of the procedures involving human participants were done following the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Design
We investigated the behavioural and electrophysiological measures related to interference control while performing a flanker task combined with EST. We manipulated the factors of valence (3 levels), arousal (3 levels), and subjective significance (3 levels) while controlling the following properties of words: concreteness, frequency of appearance in language, and length. There were also two levels related to the flanker task, i.e., the congruent or incongruent colours of stimulus vs. flankers.
Linguistic materials
2.3.1 Word selection.
As the word stimuli, we used 405 nouns acquired from the Affective Norms for Polish Words Reload database [120]. In the process of validating this database, each word had been assessed on a self-assessment manikin scale [121] by 50 participants (25 women and 25 men) on eight different dimensions (Valence, Arousal, Dominance, Origin, Significance, Concreteness, Imageability, and Age of Acquisition). Mean values for every dimension were calculated for each of these words.
Words were divided into 27 groups (15 words each) by their valence (negative, neutral, and positive), arousal (low, moderate, and high), and subjective significance (low, moderate, and high). We also controlled for two other factors, namely the length of the words (the number of letters) and the frequency of usage in the Polish language, transformed into natural logarithms [122].
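For illustration, a rough analogue of this 3 x 3 x 3 grouping could be built with tercile splits of the normative ratings; note that the authors additionally matched words on length and frequency, which this sketch omits, and the column names are assumptions:

```python
# Rough analogue of the 27-cell (3 x 3 x 3) stimulus grouping: tercile
# splits of valence, arousal, and subjective significance ratings.
# `norms` is a hypothetical DataFrame with one row per word; the real
# selection also matched word length and frequency, omitted here.
import pandas as pd

labels3 = {"valence": ["negative", "neutral", "positive"],
           "arousal": ["low", "moderate", "high"],
           "significance": ["low", "moderate", "high"]}

def bin_words(norms: pd.DataFrame) -> pd.DataFrame:
    out = norms.copy()
    for dim, labs in labels3.items():
        out[dim + "_level"] = pd.qcut(out[dim], q=3, labels=labs)
    # a cell id such as ("negative", "high", "moderate")
    out["cell"] = list(zip(out["valence_level"], out["arousal_level"],
                           out["significance_level"]))
    return out
```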
For the dimension of valence, mean ratings of the experimental stimuli were M = 3.98, SD = 0.54 for negative, M = 5.12, SD = 0.22 for neutral, and M = 6.15, SD = 0.46 for positive words. As for arousal, stimuli of low arousal had mean ratings of M = 3.34, SD = 0.26, moderate arousal words M = 3.98, SD = 0.15, and words of high arousal M = 4.75, SD = 0.41. For subjective significance, words of low significance had M = 3.00, SD = 0.28, those of moderate significance M = 3.62, SD = 0.14, and stimuli of high subjective significance M = 4.36, SD = 0.39. The list of words used in the experiment may be found in S1 Appendix.
We conducted an ANOVA in a 3 (levels of valence) x 3 (levels of arousal) x 3 (levels of subjective significance) model for all dimensions (including the two controlled ones), verifying the accuracy of the experimental stimuli selection. To justify the stimuli selection, we should obtain significant effects of valence levels on valence ratings only (treated as the dependent variable), effects of arousal on arousal ratings, and effects of significance on significance ratings. There should be no other significant effects, indicating that the groups differed on the experimental dimensions only.
Procedure
The participants sat in a comfortable chair. The words were displayed on a 17.3-inch diagonal LCD, at a distance of approximately 1 m from the participant's eyes. The font was Helvetica, at a size of 10 percent of the screen height. Participants were encouraged to respond as quickly and as accurately as possible.
The task was to assess the font colour of the middle word by pressing tagged keys on the keyboard. Above and below the target stimulus, the same word was displayed, printed in either a congruent or an incongruent font colour. The content and latency of each response were recorded. A single experiment consisted of two runs of 810 trials each, i.e., 15 words in each of 27 categories (3 arousal levels x 3 valence levels x 3 subjective significance levels), repeated in two conditions (congruent and incongruent). The categories were presented in blocks, and the order of the categories in each run was randomised. The order of words within a block and the order of congruent/incongruent conditions were also randomised, subject to the condition that the same word could not be presented in successive trials. A trial proceeded as follows:
1. A fixation cross was displayed for a randomly varied interval between 400-500 ms.
2. The stimulus was presented until the participant responded, but for no less than 300 ms.
3. A blank screen was displayed for a randomly varied interval between 1000-1100 ms.
The experimental protocol provided three-second breaks for normal blinking every 30 trials. A break self-regulated by the participant separated the runs of the experiment. The procedure is outlined in Fig 1.
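The paper does not name the stimulus-presentation software, but the trial timeline above can be sketched, for example, in PsychoPy; the window setup, stimulus geometry, and response-key mapping below are assumptions, not the authors' implementation:

```python
# Illustrative PsychoPy sketch of a single trial's timeline: jittered
# fixation, response-terminated stimulus with a 300 ms minimum display,
# then a jittered blank. Keys and layout are assumptions.
import random
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="grey", units="height")
fix = visual.TextStim(win, text="+", height=0.05)
clock = core.Clock()

def run_trial(word, colour, flanker_colour, keys=("z", "x", "n", "m")):
    fix.draw(); win.flip()
    core.wait(random.uniform(0.40, 0.50))           # fixation 400-500 ms
    for dy, col in ((0.1, flanker_colour), (0.0, colour),
                    (-0.1, flanker_colour)):
        visual.TextStim(win, text=word, color=col, pos=(0, dy),
                        height=0.10).draw()          # flankers above/below
    win.flip(); clock.reset(); event.clearEvents()
    resp = event.waitKeys(keyList=list(keys), timeStamped=clock)
    core.wait(max(0.0, 0.30 - clock.getTime()))      # enforce 300 ms minimum
    win.flip()                                       # blank screen
    core.wait(random.uniform(1.00, 1.10))
    return resp[0]                                   # (key, latency)
```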
EEG recording
2.5.1 Apparatus.
The stimuli were displayed on a standard personal computer monitor. The stimuli were synchronised to the EEG recording by a circuit that recorded changes in the brightness of a small rectangle on the display, hidden from the participant's view; its brightness changed synchronously with the content of the screen. We recorded EEG signals from 19 electrode sites: Fz, Cz, Pz, Fp1/2, F7/8, F3/4, T7/8, C3/4, P7/8, P3/4, O1/2, referenced to linked earlobes. The ground electrode was placed at the AFz position. All impedances were kept at similar values below 5 kOhm. The signal was acquired using a Porti7 (TMSI) amplifier, sampled at 1024 Hz.
2.5.2 Offline EEG signal processing.
We conducted offline signal processing utilising MATLAB with the EEGLAB toolbox [123] and custom-made scripts. Offline, the signal was zero-phase filtered. We used second-order Butterworth filters with a 12 dB/octave roll-off; the high-pass filter cut-off was 0.1 Hz and the low-pass cut-off was 30 Hz. Additionally, we used a notch filter for the 49.5-50.5 Hz band, also implemented as a second-order Butterworth filter.
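A close Python analogue of this filtering chain, using SciPy instead of MATLAB/EEGLAB, might look as follows; note that filtfilt applies each filter forward and backward, which is one common reading of "zero-phase second-order":

```python
# Zero-phase Butterworth filtering matching the description above:
# second-order 0.1 Hz high-pass, 30 Hz low-pass, and a 49.5-50.5 Hz
# notch, all applied with filtfilt at the 1024 Hz sampling rate.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1024.0

def filter_eeg(sig: np.ndarray) -> np.ndarray:
    b, a = butter(2, 0.1, btype="highpass", fs=FS)
    sig = filtfilt(b, a, sig)                        # zero-phase high-pass
    b, a = butter(2, 30.0, btype="lowpass", fs=FS)
    sig = filtfilt(b, a, sig)                        # zero-phase low-pass
    b, a = butter(2, [49.5, 50.5], btype="bandstop", fs=FS)
    return filtfilt(b, a, sig)                       # 50 Hz notch
```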
We extracted intervals ranging from -200 to 800 ms, with 0 being the onset of the stimulus. The signals were baseline corrected to the interval -200 to 0 ms. We removed from further analysis trials in which the participant did not correctly identify the colour of the presented word.
Additionally, we removed trials with a response time shorter than (Q1 − W) or longer than (Q3 + W), individually for each participant, where Q1 is the 25th percentile, Q3 is the 75th percentile, and W = 1.5 × (Q3 − Q1). These operations were performed on data transformed by the natural logarithm. Effectively, the response times for the analysed data across all participants fall within 295-5700 ms. The mean number of trials per condition was M = 28.59, SEM = 0.03.
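This per-participant trimming rule is easy to express directly; the function below returns a boolean mask of trials to keep, assuming response times given in milliseconds:

```python
# Per-participant outlier rule from the text: log-transform response
# times, then reject trials outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
import numpy as np

def rt_mask(rts_ms: np.ndarray) -> np.ndarray:
    log_rt = np.log(rts_ms)
    q1, q3 = np.percentile(log_rt, [25, 75])
    w = 1.5 * (q3 - q1)
    return (log_rt >= q1 - w) & (log_rt <= q3 + w)   # True = keep trial
```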
We prepared the data in the following way. Bad channels were identified as those with normalised kurtosis greater than 5. They were removed and interpolated. Because the stimulus consisted of three lines of text, there were many saccades, which could influence the analysis. Therefore, the signals were decomposed into independent components using the runica algorithm. Components related to blinks and saccades were identified and removed using the MARA procedure [124]. The remaining components were used to reconstruct the clean signal at the electrodes.
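MARA itself is an EEGLAB (MATLAB) plugin, so this step cannot be reproduced verbatim in Python; the sketch below uses MNE's ICA with an EOG-correlation heuristic as a stand-in for the automatic labelling of blink and saccade components, and the channel name is an assumption:

```python
# Rough Python analogue of the ocular-artefact pipeline: extended-infomax
# ICA (comparable to runica) followed by automatic rejection of components
# correlating with a frontal channel. This stands in for MARA, which is
# only available as an EEGLAB plugin.
import mne
from mne.preprocessing import ICA

def clean_ocular(raw: mne.io.Raw, eog_ch: str = "Fp1") -> mne.io.Raw:
    ica = ICA(n_components=15, method="infomax",
              fit_params=dict(extended=True), random_state=0)
    ica.fit(raw)
    # flag components resembling blinks/saccades via the frontal channel
    bads, _ = ica.find_bads_eog(raw, ch_name=eog_ch)
    ica.exclude = bads
    return ica.apply(raw.copy())  # reconstruct the signal without them
```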
Statistical procedures
The distribution of response accuracy was not Gaussian; therefore, the significance of effects concerning this variable was assessed using the Kruskal-Wallis test.
The effects concerning other variables, with approximately normal distributions, were assessed using repeated-measures ANOVA in a hierarchical procedure. We investigated behavioural effects (the logarithm of reaction time) and the classical EEG component amplitude effects. Significant main effects were analysed with post-hoc paired t-tests with Holm's correction for repeated comparisons [125]. Significant two-way interactions were similarly investigated using post-hoc paired t-tests with Holm's correction (we report the corrected p-values). Significant three-way interactions were further analysed by a series of two-way ANOVAs with the levels of a selected variable set iteratively to subsequent levels; the selected variables were permuted. The significance of effects repeatedly appearing in the series was assessed taking into account the Bonferroni correction for the number of multiple comparisons, but note that for these analyses we report the uncorrected p-values. Significant two-way interactions were further investigated using post-hoc t-tests with Holm's correction. In cases where an effect could be obtained by different paths in the hierarchical analysis, we report the most conservative result.
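For reference, Holm's step-down correction used for these post-hoc tests can be implemented in a few lines:

```python
# Holm's step-down correction: sort p-values, multiply the i-th smallest
# by (m - i), and enforce monotonicity of the adjusted values.
import numpy as np

def holm(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * p[idx]))
        adj[idx] = running_max
    return adj  # compare against alpha (e.g., .05)
```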
We also performed an exploratory analysis of the EEG effects. In this case, there were additionally two factors that we had to consider: time-window and region of interest (ROI). On the first level of the procedure, we performed a four-way ANOVA with repeated measures, one for each time window. The significance of the effects repeatedly appearing in the series was corrected for multiple comparisons by the Bonferroni correction.
The mean ERP amplitude within a given time window was the dependent variable, and the independent variables were valence, arousal, significance, and ROI. Similarly, we investigated the interaction effects occurring between the factors at subsequent steps through analyses of variance that took the interacting factors from the previous step as independent variables, as for the behavioural and classical component-based ERP analyses. We continued the investigation down to a level at which the interactions could be understood in terms of differences in the effects of simple factors, or in terms of an interaction of two factors under specific conditions determined by particular levels of the other factors. We performed the post-hoc analysis using pairwise t-tests and handled the problem of multiple comparisons with the Holm procedure. We checked sphericity with Mauchly's test and applied the Greenhouse-Geisser correction where necessary. The analyses were implemented in the R statistical package [126].
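Although the original analyses were run in R, a schematic Python version of this window-by-window procedure, assuming a long-format table with hypothetical column names and the fully balanced design that statsmodels' AnovaRM requires, might look as follows:

```python
# Window-by-window repeated-measures ANOVA on mean amplitude, with a
# Bonferroni correction across time windows. `df` is a hypothetical
# long-format table: one row per participant x condition x window.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def windowed_anova(df: pd.DataFrame, windows):
    results = {}
    for win in windows:
        sub = df[df["window"] == win]
        res = AnovaRM(sub, depvar="amplitude", subject="participant",
                      within=["valence", "arousal", "significance", "roi"],
                      aggregate_func="mean").fit()
        results[win] = res.anova_table
    alpha = 0.05 / len(windows)   # Bonferroni over the series of windows
    return results, alpha
```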
Moreover, a three-way interaction between arousal, valence, and subjective significance (F(8, 264) = 3.21, p = .002; η² = 0.09) was found. Further ANOVA tests within each level of subjective significance showed that, for the moderate level, there was an interaction between valence and arousal (F(4, 132) = 5.63, p < .001; η² = 0.15); post-hoc tests revealed that the amplitude for neutral, high arousal words (M = 5.00, SEM = 0.58) was more positive than in the comparison conditions.
In the 220-290 ms time window, we did not find any significant main effects. However, an effect of the three-way interaction between valence, arousal, and subjective significance was found (F(8, 264) = 2.972, p < .003; η² = 0.08). Further analysis within each level of arousal showed that, in the case of high arousal words, there was an interaction between valence and subjective significance (F(4, 132) = 4.39, p = .002; η² = 0.12); the post-hoc tests revealed a difference in amplitude for moderately significant words (Fig 5E). Furthermore, a main effect of subjective significance (F(2, 66) = 8.46, p = .001; η² = 0.20) was found, with a distinct amplitude for words of high subjective significance (Fig 5I). The main effects of the other two factors, i.e., valence and arousal, turned out to be non-significant.
Further analysis within the levels of arousal showed that the effect of valence was significant for low arousal words (F(2, 66)).
Classical approach.
Besides the exploratory approach, we also analysed the relevant components known from the literature to be possibly related to the tasks in the current experiment. We analysed the mean amplitude in the EPN, P2, N450, and LPC components. In the following subsections, we describe the details of the effects found for each of them. Additionally, in S2 Appendix we report the descriptive statistics and ANOVA results, including the non-significant main effects.
We analysed the mean P2 amplitude in the 160-250 ms time window in the region of interest characteristic of this component (ROI P2), i.e., F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4. The grand mean for the P2 component is shown in Fig 7A. We found a main effect of valence (Fig 7B) and a main effect of flanker congruency (Fig 7C), as well as an effect of valence for moderate arousal words (Fig 7D).
[Fig 7. a) Grand mean for the P2 component; in the inset, the topography of the mean amplitude in this time window is shown, with the enlarged dots marking the channels constituting the ROI P2. b) Main effect of valence. c) Main effect of flanker congruency. d) Effect of valence for moderate arousal words. e) Interaction between valence and subjective significance for high arousal words. f) Interaction between valence and arousal for moderately significant words. https://doi.org/10.1371/journal.pone.0258177.g007]
Furthermore, we obtained a three-way interaction between valence, arousal, and subjective significance (F(8, 264) = 4.12, p < .001; η² = 0.11). We analysed it further with a series of two-way ANOVAs, each time keeping successive levels of one of the factors constant. The analysis within the levels of arousal showed that, for the high arousal condition, there was an interaction between valence and subjective significance (Fig 7E). For the neutral valence condition, we obtained an interaction of arousal and subjective significance (F(4, 132) = 3.01, p = .020; η² = 0.08). The post-hoc tests showed that the amplitude for moderately significant, high arousal words (M = 5.81, SEM = 0.65) was more positive than for moderately significant but low arousal words (M = 4.90, SEM = 0.63; t(33) = -3.77, p = .023, d = -1.31) (Fig 7E).
Discussion
This experiment was the first to investigate the role of emotional factors such as valence, arousal, and subjective significance in the effectiveness of inhibitory control, measured for both perceptual and conceptual control. We expected two distinct effects corresponding to the two types of inhibition. On the behavioural level, we observed them all simultaneously, but the ERP analyses differentiated perceptual inhibition from conceptual inhibition.
Behavioural results
At the behavioural level, the current study revealed a significant difference in reaction times between congruent and incongruent trials. Such an effect is consistent with the congruency effect observed both in the classical Stroop task [37,40,41,113,127,128] and in studies employing the flanker task [29,37,38,40,41,57,116,129]. Taking this into account, we may conclude that the experiment's paradigm engaged interference control as intended and thus delivers a useful modification and synthesis of the flanker task and EST paradigms.
The second observed result was the effect of subjective significance: words low on this dimension elicited longer reaction times than those with moderate significance. This effect is partially congruent with previous studies employing the EST and a combined Stroop task (with emotional words displayed below and above the colour-meaning word of the classical Stroop task), where mildly significant words evoked longer reaction times than highly significant ones [25,98]. Typically, such effects arise because some stimuli are less commonly used and thus require more time to be processed, but in the current experiment the words were matched for frequency of usage in the Polish language, so this explanation is not valid here. The effect observed in this study, together with those from previous studies, suggests that high subjective significance can speed up reactions in tasks requiring cognitive control.
The last behavioural result was an interaction between valence and arousal. The arousal effect was found for neutrally valenced stimuli: reaction latencies in trials with high arousal stimuli were longer than in both low and moderate arousal trials, which is in line with the current state of knowledge about the influence of arousal on cognitive control [6,7,13,14,28,36,[130][131][132][133][134]. A high load of arousal slows down reactions, as the words bring a high emotional charge into processing. In contrast to previous studies (e.g., [14,58,94,129,135,136]), we did not observe a valence effect in the current study. However, some studies have revealed a leading role of arousal in tasks employing cognitive control when valence and arousal are precisely controlled in orthogonally crossed manipulations [57,114,116,119,127].
Electrophysiological results
We will discuss the EEG results starting from the earliest effects after stimulus onset. In the exploratory analysis, we note that the perceptual effect of flanker incongruence started as early as 60 ms after onset. In the first, 60-120 ms window, the amplitude for incongruent colours was more negative than for congruent colours. The effect of the flanker persisted throughout the analysis; however, from the second window (120-220 ms) onward, the direction of the effect flipped, with incongruent colours producing a more positive amplitude than congruent colours. The effect of incongruent contextual information observed as an EPN manifestation has been revealed experimentally [137]. This global effect may be seen as a reflection of the general slow-down in the processing of incongruent trials that was reflected in the behavioural results. However, early after stimulus onset we also observed some more localised effects of incongruent colours, which indicated early preferential processing of conflict between possible responses in the frontal regions. Specifically, in the 120-220 ms time window, there was an interaction between ROIs and congruency. The post-hoc analysis indicated that it stemmed not from a change in the direction of the effect, but from a larger effect size in the frontal compared to the central electrodes. This result is congruent with previous research indicating that in such tasks the resolution of flanker conflict is associated with components in the 120-220 ms time range [48], and it extends that finding by suggesting that this resolution may be performed primarily in the frontal regions.
The first component in the classical analysis was the EPN, a negative deflection of amplitude observed in occipito-temporal regions, commonly occurring between 200 and 300 ms after stimulus presentation [82]. However, our analysis indicated an EPN component occurring before 200 ms, that is, before the start of the first electrophysiological signs of processing of the word's meaning [87]. We obtained a main effect of congruency; a higher amplitude was observed for the incongruent colour of a word than for the congruent condition.
Despite such an early EPN time range, we observed some emotional modulation: we obtained a third-order interaction between colour congruence, valence, and subjective significance. In the moderate significance condition, there was an interaction between valence and congruence. For both positive and negative valence, the congruence conditions differed in amplitude in the same way as in the main effect (a more positive amplitude for incongruent stimuli). This effect, however, disappeared in the neutral valence condition. Research has shown that a larger EPN amplitude occurs for emotionally charged words [83,100,105,138]. However, as the present effect occurred before the analysis of the word's meaning, it cannot be attributed directly to emotional word processing. As emotional effects have repeatedly been reported to start only after 200 ms [100], two possible explanations of this effect emerge. First, the effect could be a sign of some rapid decoding of the word's emotional load. This is extremely unlikely, as such rapid reactions seem possible only for high-frequency words, yet we controlled for frequency while picking the word stimuli for this experiment; such an effect would also be unlikely to take the form of a complex interaction like the one we observed. The effect becomes theoretically interesting, however, when we consider that the present study employed a block design, which makes a second explanation more credible: namely, that such early emotional interference stems from the affect induced in previous trials with the same levels of emotional factors, i.e., a spillover effect. The influence of the emotional load of the previous trial on responses has been shown even at the level of behavioural effects [89]. Thus, the subsequent presentation of many stimuli with the same emotional load has all the more power to influence early potentials, which presents a prime opportunity to observe the interaction between perceptual and emotional interference. Interpreting the effect in this light, we see that no congruence effect appears in blocks of sufficiently neutral emotional load (moderate significance, neutral valence). Thus, employing a crossover design with both emotional and perceptual interference allowed us to observe that perceptual interference is exacerbated under affect induced by previous stimuli.
Coming to the P2 component, we observed an apparent effect of the incongruent colours of flanker words. This modulation continues from the modulation observed in the EPN and exhibits a pattern repeated throughout the flanker literature, in which congruent trials produce a greater amplitude than incongruent ones [42,[44][45][46]48].
It is at this component that we started to observe a simultaneous, parallel occurrence of perceptual and emotional interference, indexed by the start of emotional modulation. Precisely, we observed a main effect of valence: neutral words produced a larger P2 amplitude than negative words. Moreover, we saw an interaction between valence and arousal. The shape of this interaction showed that the main effect of valence was exacerbated in the high arousal condition. There, both positive and neutral words produced larger P2 amplitudes than negative words. A valence effect was expected; the direction of differences, however, is different from that commonly reported [91][92][93][94]. Three key things may help us to understand this effect: behavioural results, task differences, and the role of subjective significance.
Firstly, this pattern may reflect the fact that no behavioural effect of valence was found. P2 amplitudes tend to resemble behavioural responses and serve as an even more sensitive measure of inhibitory control [99]. For this reason, even when a pattern of behavioural slow-down for valenced words is not present, we may still observe emotional modulation; the direction of amplitude differences may differ, however, revealing the underlying cognitive control mechanism involved in the task [98]. Secondly, the shape of the observed pattern may come down to the specifications of the task [82], as our task differs from previous EST studies in the inclusion of flanker words and of subjective significance as a controlling factor. If subjective significance is a factor that partly explains previously seen emotional effects, its inclusion may have contributed to a change in the contribution of valence to emotional interference, indexed by the lack of a behavioural effect of valence.
The third-order interaction between valence, arousal, and subjective significance may give us a clue about the role that controlling arousal and subjective significance plays in the shape of this component. Notably for this discussion, we observed that the amplitude for positive words differed significantly between levels of subjective significance for high arousal words, and the amplitude for neutral words differed between levels of arousal in the moderate subjective significance condition. As the amplitude for positive and neutral words, relative to negative words, is the source of the different patterns of valence effects, this points to the crucial role of controlling these factors and examining their interaction with each other. Crucially, this also suggests that all three factors contribute, in some regard, to modulating the processing of words and inhibitory control in this time window.
Interestingly, in the exploratory analysis, valence effects failed to reach significance in the two corresponding time windows of 120-220 ms and 220-290 ms. Only in the third window of 290-390 ms was there significant emotional modulation of the ERP amplitude. The factor that instigated this modulation, however, was subjective significance.
The N450 component is connected with conflict detection and conflict monitoring in interference control [139]. For this component, we observed a more positive amplitude for incongruent trials in comparison with congruent ones, which puts the present study in line with effects reported in the flanker literature [140]. Moreover, a subjective significance effect was observed, in that the amplitude for highly significant words was more positive than for words with a low score of this factor. This result indicates a clear difference in the processing of low and highly significant stimuli, pointing to the prominent role of this factor for cognitive control. As before, we noted the absence of an arousal effect; together, these results indicate that subjective significance may be a factor that better explains the previous effects of the emotional load of words seen in studies of control inhibition.
Comparing these results to the exploratory analysis, we see that the flanker effect, which started at around 60 ms, persists in the 390-550 ms window, as the amplitude was significantly more positive in the incongruent condition than in the congruent condition. However, there is no corresponding effect of subjective significance in this window. We may now see that the subjective significance modulation we had seen in the previous window was not associated with the emotional modulation of the P2 component, but with the modulation of N450. Thus, we may speculate that the process responsible for modulation in the N450 time window started even earlier, maybe as early as 290 ms after stimulus onset.
Lastly, we should discuss the lack of the predicted valence effect on the LPC. While early studies of emotional word processing reported emotional modulation of the LPC in tasks such as the EST [83,84,94,104], a difference between these and later studies that could have caused this result is that the early studies frequently did not control for factors such as arousal. However, a more general principle may better explain the observed lack of LPC modulation. As the LPC is associated with later stages of semantic processing [105,106] and conscious attention to the word [107], our result may be seen as supporting the more recent proposition that the LPC becomes more emotionally modulated as the level of attention to the word's emotionality increases [108,109], and thus that we should not expect the LPC to be emotionally modulated in tasks such as the EST, which do not draw the participant's attention to the word's meaning. Concretely, our result mirrors the mentioned results of González-Villar et al. [108], who compared LPC modulation in the Emotional Stroop Task and the Emotional Decision Task and found it to be modulated by valence only in the latter, as well as corroborating the results of our previous EST study [99].
Limitations
There are several limitations of the study. The first one refers to the study design. Three dimensions were crossed orthogonally and used to create the study design: valence, arousal, and subjective significance. In general, affective dimensions are correlated to each other. For example, a U-shaped relation naturally occurs between valence and arousal, i.e., highly negative and positive stimuli are characterised by high arousal [20]. On the other hand, the advantages of the applied design are more significant than potential risks: the orthogonal design enables (1) the identification of effects that may work in opposite directions in the natural world (i.e., increasing arousal and increasing subjective significance levels), and (2) controlling for factors such as word length and frequency of usage in language.
Another limitation is related to sample selection. The study was run on a group of students. Although the sample was highly homogeneous and specific, this type of sample selection allowed us to avoid confounding factors related to the diversification of cognitive abilities with age and education, and it was congruent with the sample that assessed the word stimuli in the affective norms study used to select stimuli for the affective manipulation.
Conclusions
The current electrophysiological experiment investigated the role of three emotional factors simultaneously, namely valence, arousal, and subjective significance, in processes of inhibition control. The advantage of our approach was the use of orthogonal manipulation, providing the opportunity to precisely separate the effect of each emotional dimension on control effectiveness. We also used a paradigm merging two types of interference: perceptual (associated with interference between physical features in vision) and conceptual (associated with interference between meanings). We have shown that: (1) to some extent, interference at the perceptual and conceptual levels is distinct, i.e., the flanker congruency effect (more polarised amplitude for the incongruent condition in comparison to the congruent condition) was present only in the EPN component, while in the later component the effect was significant but, surprisingly, reversed (more polarised amplitude for the congruent condition in comparison to the incongruent condition). (2) Emotional factors shape each interference type in a different way, i.e., valence was found to interact with earlier ERP components (EPN, P2), while subjective significance was found to interact with the later component (N450). (3) Once more [24,25,99,112,121,[141][142][143][144], subjective significance was found to reduce cognitive control costs, both in behavioural measures and as indexed by the amplitude of the N450 component.
YouTube-8M: A Large-Scale Video Classification Benchmark
Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no video classification datasets of comparable size. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of ~8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and made both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report the results as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.
INTRODUCTION
Large-scale datasets such as ImageNet [6] have been key enablers of recent progress in image understanding [20,14,11]. By supporting the learning process of deep networks with millions of parameters, such datasets have played a crucial role in the rapid progress of image understanding to near-human level accuracy [30]. Furthermore, intermediate layer activations of such networks have proven to be powerful and interpretable for various tasks beyond classification [41,9,31]. In a similar vein, the amount and size of video benchmarks is growing with the availability of Sports-1M [19] for sports videos and ActivityNet [12] for human activities. However, unlike ImageNet, which contains a diverse and general set of objects/entities, existing video benchmarks are restricted to action and sports classes.
In this paper, we introduce YouTube-8M 1 , a large-scale benchmark dataset for general multi-label video classification. We treat the task of video classification as that of producing labels that are relevant to a video given its frames. Therefore, unlike Sports-1M and ActivityNet, YouTube-8M is not restricted to action classes alone. For example, Figure 1 shows random video examples for the Guitar entity.
We first construct a visual annotation vocabulary from Knowledge Graph entities that appear as topic annotations for YouTube videos based on the YouTube video annotation system [2]. To ensure that our vocabulary consists of entities that are recognizable visually, we use various filtering criteria, including human raters. The entities in the dataset span activities (sports, games, hobbies), objects (autos, food, products), scenes (travel), and events. The entities were selected using a combination of their popularity on YouTube and manual ratings of their visualness according to human raters. They are an attempt to describe the central themes of videos using a few succinct labels.
We then collect a sample set of videos for each entity, and use a publicly available state-of-the-art Inception network [4] to extract features from them. Specifically, we decode videos at one frame-per-second and extract the last hidden representation before the classification layer for each frame. We compress the frame-level features and make them available on our website for download.
Overall, YouTube-8M contains more than 8 million videos (over 500,000 hours of video) from 4,800 classes. Figure 2 illustrates the scale of YouTube-8M, compared to existing image and video datasets. We hope that the unprecedented scale and diversity of this dataset will be a useful resource for developing advanced video understanding and representation learning techniques.
Towards this end, we provide extensive experiments comparing several state-of-the-art techniques for video representation learning, including Deep Networks [26] and LSTMs (Long Short-Term Memory Networks) [13], on this dataset. In addition, we show that transferring video feature representations learned on this dataset leads to significant improvements on other benchmarks such as Sports-1M and ActivityNet.
In the rest of the paper, we first review existing benchmarks for image and video classification in Section 2. We present the details of our dataset including the collection process and a brief analysis of the categories and videos in Section 3. In Section 4, we review several approaches for the task of multi-label video classification given fixed frame-level features, and evaluate the approaches on the dataset. In Section 5, we show that features and models learned on our large-scale dataset generalize very well on other benchmarks. We offer concluding remarks with Section 6.
RELATED WORK
Image benchmarks have played a significant role in advancing computer vision algorithms for image understanding. Starting from a number of well labeled small-scale datasets such as Caltech 101/256 [8,10], MSRC [32], and PASCAL [7], image understanding research has rapidly advanced to utilizing larger datasets such as ImageNet [6] and SUN [38] for the next generation of vision algorithms. ImageNet in particular has enabled the development of deep feature learning techniques with millions of parameters, such as the AlexNet [20] and Inception [14] architectures, due to its number of classes (21,841), the diversity of the classes (27 top-level categories), and the millions of labeled images available.
A similar effort is in progress in the video understanding domain, where the community has quickly progressed from small, well-labeled datasets such as KTH [22], Hollywood 2 [23], and Weizmann [5], with a few thousand video clips, to medium-scale datasets such as UCF101 [33], Thumos'14 [16] and HMDB51 [21], with more than 50 action categories. Currently, the largest available video benchmarks are the Sports-1M [19], with 487 sports-related activities and 1M videos; the YFCC-100M [34], with 800K videos and raw metadata (titles, descriptions, tags) for some of them; the FCVID [17] dataset of 91,223 videos manually annotated with 239 categories; and ActivityNet [12], with ∼200 human activity classes and a few thousand videos. However, almost all current video benchmarks are restricted to recognizing action and activity categories, and have less than 500 categories.
YouTube-8M fills the gap in video benchmarks as follows:
• A large-scale video annotation and representation learning benchmark, reflecting the main themes of a video.
• A significant jump in the number and diversity of annotation classes-4800 Knowledge Graph entities vs. less than 500 categories for all other datasets.
• A substantial increase in the number of labeled videos-over 8 million videos, more than 500,000 hours of video.
• Availability of pre-computed state-of-the-art features for 1.9 billion video frames.
We hope the pre-computed features will remove computational barriers, level the playing field, and enable researchers to explore new technologies in the video domain at an unprecedented scale.
YOUTUBE-8M DATASET
YouTube-8M is a benchmark dataset for video understanding, where the main task is to determine the key topical themes of a video. We start with YouTube videos since they are a good (albeit noisy) source of knowledge for diverse categories including various sports, activities, animals, foods, products, tourist attractions, games, and many more. We use the YouTube video annotation system [2] to obtain topic annotations for a video, and to retrieve videos for a given topic. The annotations are provided in the form of Knowledge Graph entities [3] (formerly, Freebase topics [1]). They are associated with each video based on the video's metadata, context, and content signals [2].
We use Knowledge Graph entities to succinctly describe the main themes of a video. For example, a video of biking on dirt roads and cliffs would have a central topic/theme of Mountain Biking, not Dirt, Road, Person, Sky, and so on. Therefore, the aim of the dataset is not only to understand what is present in each frame of the video, but also to identify the few key topics that best describe what the video is about. Note that this is different from typical event or scene recognition tasks, where each item belongs to a single event or scene [38,28]. It is also different from most object recognition tasks, where the goal is to label everything visible in an image. This would produce thousands of labels on each video but without answering what the video is really about. The goal of this benchmark is to understand what is in the video and to summarize that into a few key topics. In the following sub-sections, we describe our vocabulary and video selection scheme, followed by a brief summary of dataset statistics.
Vocabulary Construction
We followed two main tenets when designing the vocabulary for the dataset: namely, 1) every label in the dataset should be distinguishable using visual information alone, and 2) each label should have a sufficient number of videos for training models and for computing reliable metrics on the test set. For the former, we used a combination of manually curated topics and human ratings to prune the vocabulary into a visual set. For the latter, we considered only entities having at least 200 videos in the dataset.
The Knowledge Graph contains millions of topics. Each topic has one or more types that are curated with high precision. For example, there is an exhaustive list of animals with type animal and an exhaustive list of foods with type food. To start with our initial vocabulary, we manually selected a whitelist of 25 entity types that we considered visual (e.g. sport, tourist_attraction, inventions), and also blacklisted types that we considered non-visual (e.g. music artists, music compositions, album, software). We then obtained all entities that have at least one whitelisted type and no blacklisted types, which resulted in an initial vocabulary of ∼50,000 entities.
Following this, we used human raters in order to manually prune this set into a smaller set of entities that are considered visual with high confidence, and are also recognizable without very deep domain expertise. Raters were provided with instructions and examples. Each entity was rated by 3 raters and the ratings were averaged. Figure 4a shows the main rating question. The process resulted in a total of ∼10,000 entities that are considered visually recognizable and are not too fine-grained (i.e. can be recognized by non-domain experts after studying some examples). These entities were further pruned: we only kept entities that have more than 200 popular videos, as explained in the next section. The final set of entities in the dataset are fairly balanced in terms of the specificity of the topic they describe, and span both coarse-grained and fine-grained entities, as shown in Figure 4b.
Collecting Videos
[Figure 4: (a) Screenshot of the question displayed to human raters. (b) Distribution of vocabulary topics in terms of specificity. Rater guidelines assessed how specific and visually recognizable each entity is, on a discrete scale of 1 to 5, where 1 is most visual and easily recognizable by a layperson. Each entity was rated by 3 raters. We kept only entities with a maximum average score of 2.5, and categorized them by specificity into coarse-grained, medium-grained, and fine-grained entities, using equally sized score range buckets.]

Having established the initial target vocabulary, we followed these steps to obtain the videos:
• Collected all videos corresponding to the 10,000 visual entities that have at least 1,000 views, using the YouTube video annotation system [2]. We excluded videos that are too short (< 120 secs) or too long (> 500 secs).
• Randomly sampled 10 million videos among them.
• Obtained all entities for the sampled 10 million videos using the YouTube video annotation system. This completes the annotations.
• Filtered out entities with fewer than 200 videos, and videos with no remaining entities. This reduced the size of our data to 8,264,650 videos.
• Split our videos into 3 partitions, Train : Validate : Test, with ratios 70% : 20% : 10%. We publish features for all splits, but only publish labels for the Train and Validate partitions.
Features
The original size of the video dataset is hundreds of terabytes, and covers over 500,000 hours of video. This is impractical to process by most research teams (using a real-time video processing engine, it would take over 50 years to go through the data). Therefore, we pre-process the videos and extract frame-level features using a state-of-the-art deep model: the publicly available Inception network [4] trained on ImageNet [14]. Concretely, we decode each video at 1 frame-per-second up to the first 360 seconds (6 minutes), feed the decoded frames into the Inception network, and fetch the ReLu activation of the last hidden layer, before the classification layer (layer name pool_3/_reshape). The feature vector is 2048-dimensional per second of video. While this removes motion information from the videos, recent work shows diminishing returns from motion features as the size and diversity of the video data increases [26,35]. The static frame-level features provide an excellent baseline, and constructing compact and efficient motion features is beyond the scope of this paper. Nonetheless, we hope to extend the dataset with audio and motion features in the future. We cap processing of each video at the first 360 seconds for storage and computational reasons. For comparison, the average video length is 10-15 seconds in UCF-101, 336 seconds in Sports-1M, and 230 seconds in this dataset. Afterwards, we apply PCA (+ whitening) to reduce feature dimensions to 1024, followed by quantization (1 byte per coefficient). These two compression techniques reduce the size of the data by a factor of 8. The mean vector and covariance matrix for PCA were computed on all frames from the Train partition. We quantize each 32-bit float into 256 distinct values (8 bits) using optimally computed (non-uniform) quantization bin boundaries. We confirmed that the size reduction does not significantly hurt the evaluation metrics. In fact, training all baselines on the full-size data (8 times larger than what we publish) increases all evaluation metrics by less than 1%.
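To make the compression pipeline concrete, here is a minimal numpy sketch of the PCA + whitening + 8-bit quantization steps. The use of empirical quantiles for the non-uniform bin boundaries is our assumption (the text says only that boundaries were optimally computed), and all function and variable names are illustrative.

```python
import numpy as np

def fit_pca_whitening(frames, out_dim=1024, eps=1e-8):
    """Fit PCA + whitening on training frames (n x 2048)."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    cov = centered.T @ centered / (len(frames) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = np.argsort(eigvals)[::-1][:out_dim]             # leading components
    proj = eigvecs[:, top] / np.sqrt(eigvals[top] + eps)  # whitening scaling
    return mean, proj

def fit_bin_boundaries(column, n_bins=256):
    """Non-uniform boundaries for one whitened dimension (255 cut points)."""
    return np.quantile(column, np.linspace(0, 1, n_bins + 1)[1:-1])

def compress(frames, mean, proj, boundaries):
    """Project + whiten each 2048-d frame, then quantize to 1 byte per dim.
    boundaries: (out_dim, 255) array of per-dimension cut points."""
    whitened = (frames - mean) @ proj
    codes = np.stack([np.searchsorted(boundaries[d], whitened[:, d])
                      for d in range(whitened.shape[1])], axis=1)
    return codes.astype(np.uint8)   # values in 0..255
```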
Note that while this dataset comes with standard frame-level features, it leaves a lot of room for investigating video representation learning approaches on top of the fixed frame-level features (see Section 4 for approaches we explored).
Dataset Statistics
The YouTube-8M dataset contains 4,800 classes and a total of 8,264,650 videos. Table 2 shows the number of videos for which we are releasing features, across the three dataset partitions.
We processed only the first six minutes of each video, at 1 frame-per-second. The average length of a video in the dataset is 229.6 seconds, which amounts to ∼1.9 billion frames (and corresponding features) across the dataset.
We grouped the 4,800 entities into 24 top-level categories to measure statistics and illustrate diversity. Although we do not use these categories during training, we are releasing the entity-to-category mapping for completeness. Table 1 shows the top entities per category. Note that while some categories themselves may not seem visual, most of the entities within them are visual. For instance, Jobs & Education includes universities, classrooms, lectures, etc., and Law & Government includes police, emergency vehicles, military-related entities, which are well represented and visual. Figure 5 shows a log-log scale distribution of entities and videos. Figures 6a and 6b show the size of categories, respectively, in terms of the number of entities and the number of videos.
Human Rated Test Set
The annotations from the YouTube video annotation system can be noisy and incomplete, as they are automatically generated from metadata, anchor text, comments, and user engagement signals [2]. To quantify the noise, we uniformly sampled over 8000 videos from the Test partition, and used 3 human raters per video to exhaustively rate their labels. We measured the precision and recall of the ground truth labels to be 78.8% and 14.5%, respectively, with respect to the human raters. Note that typical inter-rater agreement on similar annotation tasks with human raters is also around 80% so the precision of these ground truth labels is perhaps comparable to (non-expert) human-provided labels. The recall, however, is low, which makes this an excellent test bed for approaches that deal with missing data. We report the accuracy of our models primarily on the (noisy) Validate partition but also show some results on the much smaller human-rated set, showing that some of the metrics are surprisingly similar on the two datasets.
While the baselines in section 4 show very promising results, we believe that they can be significantly improved (when evalu-ated on the human-based ground truth), if one explicitly models incorrect [29] (78.8% precision) or missing [40,25] (14.5% recall) training labels. We believe this is an exciting area of research that this dataset will enable at scale.
Models from Frame Features
One of the challenges with this dataset is that we only have video-level ground-truth labels. We do not have any additional information that specifies how the labels are localized within the video, nor their relative prominence in the video, yet we want to infer their importance for the full video. In this section, we consider models trained to predict the main themes of the video using the input frame-level features. Frame-level models have shown competitive performance for video-level tasks in previous work [19,26]. A video $v$ is given by a sequence of frame-level features $x^v_{1:F_v}$, where $x^v_j$ is the feature of the $j$-th frame of video $v$.
Frame-Level Models and Average Pooling
Since we do not have frame-level ground-truth, we assign the video-level ground-truth to every frame within that video. More sophisticated formulations based on multiple-instance learning are left for future work. From each video, we sample 20 random frames and associate all frames with the video-level ground-truth. This results in about 120 million frames. For each entity $e$, we get 120M instances of $(x_i, y^e_i)$ pairs, where $x_i \in \mathbb{R}^{1024}$ is the Inception feature and $y^e_i \in \{0, 1\}$ is the ground-truth associated with entity $e$ for the $i$-th example. We train 4800 independent one-vs-all classifiers for each entity $e$. We use the online training framework after parallelizing the work for each entity across multiple workers. During inference, we score every frame in the test video using the models for all classes. Since all our evaluations are based on video-level ground truths, we need to aggregate the frame-level scores (for each entity) to a single video-level score. The frame-level probabilities are aggregated to the video-level using a simple average. We choose average instead of max pooling since we want to reduce the effect of outlier detections and capture the prominence of each entity in the entire video. In other words, let $p(e|x)$ be the probability of existence of $e$ given the features $x$. We compute the video-level probability as the average over frames, $p(e|v) = \frac{1}{F_v} \sum_{j=1}^{F_v} p(e|x^v_j)$.
Deep Bag of Frame (DBoF) Pooling
Inspired by the success of various classic bag-of-words representations for video classification [23,36], we next consider a Deep Bag-of-Frames (DBoF) approach. Figure 7 shows the overall architecture of our DBoF network for video classification. The N-dimensional input frame-level features from k randomly selected frames of a video are first fed into a fully connected layer of M units with RELU activations. Typically, with M > N, the input features are projected onto a higher dimensional space. Crucially, the parameters of the fully connected layer are shared across the k input frames. Along with the RELU activation, this leads to a sparse coding of the input features in the M-dimensional space.
The obtained sparse codes are fed into a pooling layer that aggregates the codes of the k frames into a single fixed-length video representation. We use max pooling to perform the aggregation. We use a batch normalization layer before pooling to improve stability and speed up convergence. The obtained fixed-length descriptor of the video can now be classified into the output classes using a Logistic or Softmax layer, with additional fully connected layers in between. The M dimensions of the projection layer can be thought of as M discriminative clusters which can be trained in a single network, end to end, using backpropagation.
The entire network is trained using Stochastic Gradient Descent (SGD) with logistic loss for a logistic layer and cross-entropy loss for a softmax layer. The backpropagated gradients from the top layer train the weight vectors of the projection layer in a discriminative fashion in order to provide a powerful representation of the input bag of features. A similar network was proposed in [26], where the convolutional layer outputs are pooled across all the frames of a video to obtain a fixed-length descriptor. However, the network in [26] does not use an intermediate projection layer, which we found to be a crucial difference when learning from input frame features. Note that the up-projection layer into sparse codes is similar to what Fisher Vector [27] and VLAD [15] approaches do, but the projection (i.e., clustering) is done discriminatively here. We also experimented with Fisher Vectors and VLAD but were not able to obtain competitive results using comparable codebook sizes.
Hyperparameters: We considered values of {2048, 4096, 8192} for the number of units in the projection layer of the network and found that larger values lead to better results. We used 8192 for all datasets. We used a single hidden layer with 1024 units between the pooling layer and the final classification layer in all experiments. The network was trained using SGD with AdaGrad, a learning rate of 0.1, and a weight decay penalty of 0.0005.
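A minimal sketch of this architecture with the quoted hyperparameters, assuming a Keras-style TensorFlow API, is given below; the weight-decay penalty and the distributed training setup are omitted for brevity, so this is an illustration rather than the exact training configuration.

```python
import tensorflow as tf

def build_dbof(num_frames=20, feat_dim=1024, proj_dim=8192,
               hidden_dim=1024, num_classes=4800):
    frames = tf.keras.Input(shape=(num_frames, feat_dim))
    # Shared fully connected up-projection: the same weights are applied
    # to every frame, and the ReLU yields a sparse code per frame.
    codes = tf.keras.layers.Dense(proj_dim, activation="relu")(frames)
    codes = tf.keras.layers.BatchNormalization()(codes)   # before pooling
    pooled = tf.keras.layers.GlobalMaxPooling1D()(codes)  # bag-of-frames pool
    hidden = tf.keras.layers.Dense(hidden_dim, activation="relu")(pooled)
    # Independent per-class sigmoid outputs for multi-label prediction.
    probs = tf.keras.layers.Dense(num_classes, activation="sigmoid")(hidden)
    model = tf.keras.Model(frames, probs)
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
                  loss="binary_crossentropy")
    return model
```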
Long Short-Term Memory (LSTM)
We take a similar approach to [26] to utilize LSTMs for video-level prediction. However, unlike that work, we do not have access to the raw video frames. This means that we can only train the LSTM and Softmax layers.
We experimented with the number of stacked LSTM layers and the number of hidden units. We empirically found that 2 layers with 1024 units provided the highest performance on the validation set. Similarly to [26], we also employ linearly increasing per-frame weights going from 1/N to 1 for the last frame.
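For illustration, a minimal Keras-style sketch of this configuration is shown below; the linearly increasing per-frame weights and the truncated unrolling described next are training details that this sketch deliberately omits.

```python
import tensorflow as tf

def build_lstm(feat_dim=1024, units=1024, num_classes=4800):
    feats = tf.keras.Input(shape=(None, feat_dim))  # variable frame count
    # Two stacked LSTM layers with 1024 units each, as in the text.
    x = tf.keras.layers.LSTM(units, return_sequences=True)(feats)
    x = tf.keras.layers.LSTM(units)(x)  # final state summarizes the video
    probs = tf.keras.layers.Dense(num_classes, activation="sigmoid")(x)
    return tf.keras.Model(feats, probs)
```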
During training, the LSTM was unrolled for 60 iterations; the gradient horizon for the LSTM was therefore 60 seconds. We experimented with a larger number of unroll iterations, but that slowed down the training process considerably. In the end, the best model was the one trained for the largest number of steps (rather than for the most real time).
In order to transfer the learned model to ActivityNet, we used a fully-connected model which uses as inputs the concatenation of the LSTM layers' outputs as computed at the last frame of the videos in each of these two benchmarks. Unlike traditional transfer learning methods, we do not fine-tune the LSTM layers. This approach is more robust to overfitting than traditional methods, which is crucial for obtaining competitive performance on ActivityNet due to its size. We did perform full fine-tuning experiments on Sports-1M, which is large enough to fine-tune the entire LSTM model after pre-training.
Video level representations
Instead of training classifiers directly on frame-level features, we also explore extracting a task-independent fixed-length video-level feature vector from the frame-level features $x^v_{1:F_v}$ for each video $v$. There are several benefits of extracting fixed-length video features:
1. Standard classifiers can apply: Since the dimensionality of the representations is fixed across videos, we may train standard classifiers like logistic regression, SVM, or mixtures of experts.
2. Compactness: We get a compact representation for the entire video, thereby reducing the training data size by a few orders of magnitude.
3. More suitable for domain adaptation: Since the video-level representations are unsupervised (extracted independently of the labels), they are far less specialized to the labels associated with the current dataset, and can generalize better to new tasks or video domains.

Formally, a video-level feature $\phi(x^v_{1:F_v})$ is a fixed-length representation (at the video level). We explore a simple aggregation technique for getting these video-level representations. We also experimented with Fisher Vector (FV) [27] and VLAD [15] approaches for task-independent video-level representations but were not able to achieve competitive results for FV or VLAD representations of similar dimensionality. We leave it as future work to come up with compact FV or VLAD type representations that outperform the much simpler approach described below.
First, second order and ordinal statistics
From the frame-level features $x^v_{1:F_v}$, where $x^v_j \in \mathbb{R}^{1024}$, we extract the mean $\mu^v \in \mathbb{R}^{1024}$ and the standard deviation $\sigma^v \in \mathbb{R}^{1024}$. Additionally, we also extract the top 5 ordinal statistics for each dimension. Formally, $\mathrm{Top}_K(x^v(j)_{1:F_v})$ returns a $K$-dimensional vector whose $p$-th entry contains the $p$-th highest value of the feature vector's $j$-th dimension over the entire video. We denote by $\mathrm{Top}_K(x^v_{1:F_v})$ the $KD$-dimensional vector obtained by concatenating the ordinal statistics for each dimension. Thus, the resulting feature vector for the video becomes
$$\phi(x^v_{1:F_v}) = \left[\mu^v;\ \sigma^v;\ \mathrm{Top}_5(x^v_{1:F_v})\right].$$
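A compact numpy sketch of this aggregation is given below; it assumes each video has at least k frames, and the ordering of the flattened top-k block is our own implementation choice.

```python
import numpy as np

def video_level_features(frames, k=5):
    """Mean, std and top-k ordinal statistics over frames (F x D, F >= k)."""
    mu = frames.mean(axis=0)                # first-order statistics
    sigma = frames.std(axis=0)              # second-order statistics
    topk = -np.sort(-frames, axis=0)[:k]    # k highest values per dimension
    return np.concatenate([mu, sigma, topk.reshape(-1)])  # (2 + k) * D dims
```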
Feature normalization
Standardization of features has been proven to help with online learning algorithms [14,37] as it makes the updates using Stochastic Gradient Descent (SGD) based algorithms (like Adagrad) more robust to learning rates, and speeds up convergence.
Before training our one-vs-all classifiers on the video-level representation, we apply global normalization to the feature vectors $\phi(x^v_{1:F_v})$ (defined above). Similar to how we processed the frame features, we subtract the mean of $\phi(\cdot)$ and then use PCA to decorrelate and whiten the features. The normalized video features are then approximately multivariate Gaussian with zero mean and identity covariance. This makes the gradient steps across the various dimensions independent, and the learning algorithm gets an unbiased view of each dimension (since the same learning rate is applied to each dimension). Finally, the resulting features are L2 normalized. We found that these normalization techniques make our models train faster.
Models from Video Features
Given the video-level representations, we train independent binary classifiers for each label using all the data. Exploiting the structure information between the various labels is left for future work. A key challenge is training these classifiers at the scale of this dataset. Even with a compact video-level representation for the 6M training videos, it is infeasible to train batch optimization classifiers like SVM. Instead, we use online learning algorithms, and use Adagrad to perform model updates on the weight vectors given a small mini-batch of examples (each example is associated with a binary ground-truth value).
Logistic Regression
Given $D$-dimensional video-level features, the parameters $\Theta$ of the logistic regression classifier are the entity-specific weights $w_e$. During scoring, taking $x \in \mathbb{R}^{D+1}$ to be the video-level feature of the test example, the probability of the entity $e$ is given as $p(e|x) = \sigma(w_e^\top x)$. The weights $w_e$ are obtained by minimizing the total log-loss on the training data, given as
$$\sum_{i=1}^{N} \mathcal{L}\big(g_i, \sigma(w_e^\top x_i)\big),$$
where $\sigma(\cdot)$ is the standard logistic function, $\sigma(z) = 1/(1 + \exp(-z))$, and $\mathcal{L}$ is the log-loss between the prediction and the ground-truth $g_i$.
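For illustration, a minimal numpy sketch of training one such one-vs-all classifier with Adagrad updates follows; the per-example (rather than mini-batch) updates and the learning rate here are illustrative choices, not the exact training configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_adagrad(X, y, lr=1.0, epochs=1, eps=1e-8):
    """One-vs-all logistic regression for a single entity, trained online.
    X: (N, D) video-level features; y: (N,) binary ground truth."""
    w = np.zeros(X.shape[1])
    g2 = np.zeros_like(w)                        # accumulated squared gradients
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            grad = (sigmoid(w @ x_i) - y_i) * x_i   # log-loss gradient
            g2 += grad ** 2
            w -= lr * grad / (np.sqrt(g2) + eps)    # Adagrad step
    return w
```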
Hinge Loss
Since training batch SVMs on such a large dataset is impossible, we use an online SVM approach. As in the conventional SVM framework, we use $-1$ and $+1$ to represent negative and positive labels, respectively. Given binary ground-truth labels $y$ (0 or 1) and predicted labels $\hat{y}$ (positive or negative scalars), the hinge loss is
$$\mathcal{L}(y, \hat{y}) = \max\big(0,\; b - (2y - 1)\hat{y}\big),$$
where $b$ is the hinge-loss parameter, which can be fine-tuned further or set to 1.0. Due to the presence of the max function, there is a discontinuity in the first derivative. This results in the subgradient being used in the updates, slowing convergence significantly.
Mixture of Experts (MoE)
Mixture of experts (MoE) was first proposed by Jacobs and Jordan [18]. The binary classifier for an entity $e$ is composed of a set of hidden states, or experts, $\mathcal{H}_e$. A softmax is typically used to model the probability of choosing each expert. Given an expert, we can use a sigmoid to model the existence of the entity. Thus, the final probability for entity $e$'s existence is
$$p(e|x) = \sum_{h \in \mathcal{H}_e} p(h|x)\, \sigma(u_h^\top x),$$
where $p(h|x)$ is a softmax over $|\mathcal{H}_e| + 1$ states, i.e. $p(h|x) = \exp(w_h^\top x) \big/ \sum_{h'=1}^{|\mathcal{H}_e|+1} \exp(w_{h'}^\top x)$. The last, $(|\mathcal{H}_e| + 1)$-th, state is a dummy state that always results in the non-existence of the entity. Denote $p_{y|x} = p(y = 1|x)$, $p_{h|x} = p(h|x)$ and $p_h = p(y = 1|x, h)$. Given a set of training examples $(x_i, g_i)_{i=1 \ldots N}$ for a binary classifier, where $x_i$ is the feature vector and $g_i \in [0, 1]$ is the ground-truth, let $\mathcal{L}(p_i, g_i)$ be the log-loss between the predicted probability and the ground-truth:
$$\mathcal{L}(p, g) = -g \log p - (1 - g) \log(1 - p).$$
The derivatives of $\mathcal{L}(p_{y|x}, g)$ with respect to the softmax weights $w_h$ and the logistic weights $u_h$ follow directly from the chain rule. We use Adagrad with a learning rate of 1.0 and batch size of 32 to learn the weights. Since we are training independent classifiers for each label, the work is distributed across multiple machines. For MoE models, we experimented with a varying number of mixtures (1, 2, 4), and found that performance increases by 0.5%-1% on all metrics as we go from 1 to 2, and then to 4 mixtures, but the number of model parameters correspondingly increases by 2 or 4 times. We chose 2 mixtures as a good compromise and report numbers with the 2-mixture MoE model for all datasets.
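As a concrete illustration, here is a minimal numpy sketch of MoE scoring for one entity. Treating the dummy state as a fixed zero logit is our assumption; the text only states that the softmax runs over $|\mathcal{H}_e| + 1$ states and that the last state always predicts non-existence.

```python
import numpy as np

def moe_probability(x, W, U):
    """p(e|x) for a binary mixture-of-experts classifier.
    W: (H, D) gating weights for the H real experts; the (H+1)-th dummy
    state is given a fixed logit of 0 here (an assumption) and always
    predicts non-existence. U: (H, D) per-expert logistic weights."""
    gate_logits = np.concatenate([W @ x, [0.0]])     # H experts + dummy state
    gate = np.exp(gate_logits - gate_logits.max())
    gate /= gate.sum()                               # softmax over H + 1 states
    expert_probs = 1.0 / (1.0 + np.exp(-(U @ x)))    # sigmoid per expert
    return float(gate[:-1] @ expert_probs)           # dummy contributes 0
```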
EXPERIMENTS
In this section, we first provide benchmark baseline results for the above multi-label classification approaches on the YouTube-8M dataset. We then evaluate the usefulness of video representations learned on this dataset for other tasks, such as Sports-1M sports classification and ActivityNet activity classification.
Evaluation Metrics
Mean Average Precision (mAP): For each entity, we first round the annotation scores into buckets of $10^{-4}$ and sort all the non-zero annotations according to the model score. At a given threshold $\tau$, the precision $P(\tau)$ and recall $R(\tau)$ are given by
$$P(\tau) = \frac{\sum_i I(s_i \geq \tau)\, g_i}{\sum_i I(s_i \geq \tau)}, \qquad R(\tau) = \frac{\sum_i I(s_i \geq \tau)\, g_i}{\sum_i g_i},$$
where $s_i$ and $g_i$ are the score and binary ground-truth of the $i$-th annotation and $I(\cdot)$ is the indicator function. The average precision, approximating the area under the precision-recall curve, can then be computed as
$$AP = \sum_{j=1}^{10000} P(\tau_j)\,\big[R(\tau_{j-1}) - R(\tau_j)\big],$$
where $\tau_j = \frac{j}{10000}$. The mean average precision is computed as the unweighted mean of all the per-class average precisions.
Hit@k: This is the fraction of test samples that contain at least one of the ground truth labels in the top $k$ predictions. If $\mathrm{rank}_{v,e}$ is the rank of entity $e$ on video $v$ (with the best scoring entity having rank 1), and $G_v$ is the set of ground-truth entities for $v$, then Hit@k can be written as
$$\mathrm{Hit@}k = \frac{1}{|V|} \sum_{v \in V} \bigvee_{e \in G_v} I(\mathrm{rank}_{v,e} \leq k),$$
where $\vee$ is logical OR. Precision at equal recall rate (PERR): We measure the video-level annotation precision when we retrieve the same number of entities per video as there are in the ground-truth. With the same notation as for Hit@k, and averaging over videos with at least one ground-truth entity, PERR can be written as
$$\mathrm{PERR} = \frac{1}{|V|} \sum_{v \in V} \left[ \frac{1}{|G_v|} \sum_{e \in G_v} I(\mathrm{rank}_{v,e} \leq |G_v|) \right].$$
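Both video-level metrics are straightforward to compute from a score matrix. Below is an illustrative numpy implementation of Hit@k and PERR; the variable names are our own, and ties in scores are broken arbitrarily by the sort.

```python
import numpy as np

def hit_at_k(scores, labels, k=1):
    """scores: (V, C) model scores; labels: (V, C) binary ground truth."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean([labels[v, topk[v]].any()
                          for v in range(len(scores))]))

def perr(scores, labels):
    """Precision at equal recall rate over videos with >= 1 label."""
    vals = []
    for v in range(len(scores)):
        g = int(labels[v].sum())
        if g == 0:
            continue
        top_g = np.argsort(-scores[v])[:g]   # retrieve |G_v| entities
        vals.append(labels[v, top_g].sum() / g)
    return float(np.mean(vals))
```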
Results on YouTube-8M

Table 3 shows results for all approaches on the YouTube-8M dataset. Frame-level models (row 1), trained on the strong Inception features and logistic regression, followed by simple averaging of predictions across all frames, perform poorly on this dataset. This shows that the video-level prediction task cannot be reduced to simple frame-level classification.
Aggregating the frame-level features at the video-level using simple mean pooling of frame-level features, followed by a hinge loss or logistic regression model, provides a non-trivial improvement in video-level accuracies over naive averaging of the frame-level predictions. Further improvements are observed by using mixture-of-experts models and by adding other statistics, like the standard deviation and ordinal features, computed over the frame-level features. Note that the standard deviation and ordinal statistics are more meaningful in the original RELU activation space, so we reconstruct the RELU features from the PCA-ed and quantized features by inverting the quantization and the PCA using the provided PCA matrix, computing the collection statistics over the reconstructed frame-level RELU features, and then re-applying PCA, whitening, and L2 normalization as described in Section 4.2.2. This simple task-independent feature pooling and normalization strategy yields some of the most competitive results on this dataset.
Finally, we also evaluate two deep network architectures that have produced state-of-the-art results on previous benchmarks [26]. The DBoF architecture ignores sequence information and treats the input video as a bag of frames, whereas LSTMs use state information to preserve the video sequence. The DBoF approach with a logistic classification layer produces 2% (absolute) gains in Hit@1 and PERR metrics over using simple mean feature pooling and a single-layer logistic model, which shows the benefits of discriminatively training a projection layer to obtain a task-specific video-level representation. The mAP results for DBoF are slightly worse than the mean pooling + logistic model, which we attribute to slower training and convergence of DBoF on rare classes (mAP is strongly affected by results on rare classes, and the joint class training of DBoF is a disadvantage for those classes).
The LSTM network generally performs best, except for mAP, where the 1-vs-all binary MoE classifiers perform better, likely for the same reasons of slower convergence on rare classes. LSTM does improve on Hit@1 and PERR metrics, as expected given its ability to learn long-term correlations in the time domain. Also, in [26], the authors used data augmentation by sampling multiple snippets of fixed length from a video and averaged the results, which could produce even better accuracies than our current results.
We also considered Fisher vectors and VLAD given their recent success in aggregating CNN features at the video-level in [39]. However, for the same dimensionality as the video-level representations of the LSTM, DBoF and mean features, they did not produce competitive results.
Human Rated Test Set
We also report results on the human rated test set of over 8000 videos (see Section 3.5) in Table 4 for the top three approaches. We report PERR, Hit@1, and Hit@5, since the mAP is not reliable given the size of the test set. The Hit@1 numbers are uniformly higher for all approaches when compared to the incomplete validation set in Table 3 whereas the PERR numbers are uniformly lower. This is largely attributable to the missing labels in the validation set (recall of the Validation set labels is around 15% compared to exhaustive human ratings). However, the relative ordering of the various approaches is fairly consistent between the two sets, showing that the validation set results are still reliable enough to compare different approaches.
Results on Sports-1M
Next, we investigate generalization of the video-level features learned using the YouTube-8M dataset and perform transfer learning experiments on the Sports-1M dataset. The Sports-1M dataset [19] consists of 487 sports activities with 1.2 million YouTube videos and is one of the largest benchmarks available for sports/activity recognition. We use the first 360 seconds of a video sampled at 1 frame per second for all experiments.
To evaluate transfer learning on this dataset, in one experiment we simply use the aggregated video-level descriptors, based on the PCA matrix learned on the YouTube-8M dataset, and train MoE or logistic models on top using target domain training data.

[Table 5b (excerpt): ActivityNet results, including rows for [24] (53.8) and Heilbron et al. [12] (43.0). Caption: since the dataset is small, we see a substantial boost in performance by pre-training on YouTube-8M or using the transfer-learnt PCA versus the one learnt from scratch on ActivityNet.]
For the LSTM networks, we have two scenarios: 1) we use the PCA transformed features and learn a LSTM model from scratch using these features; or 2) we use the LSTM layers pre-trained on the YouTube-8M task, and fine-tune them on the Sports-1M dataset (along with a new softmax classifier). Table 5a shows the evaluation metrics for the various video-level representations on the Sports-1M dataset. Our learned features are competitive on this dataset, with the best approach beating all but the approach of [26], which learned directly from the pixels of the videos in the Sports-1M dataset, including optical flow, and made use of data augmentation strategies and multiple inferences over several video segments. We also show that even on such a large dataset (1M videos), pre-training on YouTube-8M still helps, and improves the LSTM performance by ∼1% on all metrics (vs. no pre-training).
Results on ActivityNet
Our final set of experiments demonstrate the generality of our learned features for the ActivityNet untrimmed video classification task. Similar to Sports-1M experiments, we compare directly training on the ActivityNet dataset against pre-training on YouTube-8M for aggregation based and LSTM approaches. As seen in Table 5b, all of the transferred features are much better in terms of all metrics than training on ActivityNet alone. Notably, without the use of motion information, our best feature is better by up to 80% than the HOG, HOF, MBH, FC-6, FC-7 features used in [12]. This result shows that features learned on YouTube-8M generalize very well to other datasets/tasks. We believe this is because of the diversity and scale of the videos present in YouTube-8M.
CONCLUSIONS
In this paper, we introduce YouTube-8M, a large-scale video benchmark for video classification and representation learning. With YouTube-8M, our goal is to advance the field of video understanding, similarly to what large-scale image datasets have done for image understanding. Specifically, we address the two main challenges of large-scale video understanding: (1) collecting a large labeled video dataset with reasonable quality labels, and (2) removing computational barriers by pre-processing the dataset and providing state-of-the-art frame-level features to build from. We process over 50 years' worth of video, and provide features for nearly 2 billion frames from more than 8 million videos, which enables training a reasonable model at this scale within 1 day, using an open source framework on a single machine! We expect this dataset to level the playing field for academic researchers, bridge the gap with large-scale labeled video datasets, and significantly accelerate research on video understanding. We hope this dataset will prove to be a test bed for developing novel video representation learning algorithms, and especially approaches that deal effectively with noisy or incomplete labels.
As a side effect, we also provide one of the largest and most diverse public visual annotation vocabularies (consisting of 4800 visual Knowledge Graph entities), constructed from popularity signals on YouTube as well as manual curation, and organized into 24 top-level categories.
We provide extensive experiments comparing several strong baselines for video representation learning, including Deep Networks and LSTMs, on this dataset. We demonstrate the efficacy of using a fairly unexplored class of models (mixture-of-experts) and show that they can outperform popular classifiers like logistic regression and SVMs. This is particularly true for our large dataset where many classes can be multi-modal. We explore various video-level representations using simple statistics extracted from the framelevel features and model the probability of an entity given the aggregated vector as an MoE. We show that this yields competitive performance compared to more complex approaches (that directly use frame-level information) such as LSTM and DBoF. This also demonstrates that if the underlying frame-level features are strong, the need for more sophisticated video-level modeling techniques is reduced.
Finally, we illustrate the usefulness of the dataset by performing transfer learning experiments on existing video benchmarks: Sports-1M and ActivityNet. Our experiments show that features learned on this dataset generalize well on these benchmarks, including setting a new state-of-the-art on ActivityNet.
Maternal nutrition: how is Eastern and Southern Africa faring and what needs to be done?
BACKGROUND
The progress in key maternal health indicators in the Eastern and Southern Africa Region (ESAR) over the past two decades has been slow.
OBJECTIVE
This paper analyzed available information on nutrition programs and nutrition-specific interventions targeting maternal nutrition in the ESAR and proposes steps to improve maternal nutrition in this region.
METHODS
Searches were conducted in relevant databases. Meta-analysis was performed where there was sufficient data, while data from the nutrition programs was abstracted for objectives, settings, beneficiaries, stakeholders, impact of interventions and barriers encountered during implementation.
RESULTS
Findings from our review suggest that multiple nutrition programs are in place in the ESAR, including programs that directly address nutrition indicators and those that integrate corresponding sectors like agriculture, health, education, and water and sanitation. However, their scale and depth differ considerably. These programs have been implemented by a diverse range of players, including respective government ministries, international agencies, non-governmental organisations and the private sector in the region. Most of these programs are clustered in a few countries like Kenya, Uganda and Ethiopia, while others, e.g. Comoros, Somalia and Swaziland, have only had a limited number of initiatives.
CONCLUSION
These programs have been associated with some improvements in overall maternal health and nutritional indicators; however, these are insufficient to contribute significantly to progress in the region. Efforts should be prioritized in countries with the greatest burden of maternal undernutrition and associated risk factors, with a focus on existing promising interventions to improve maternal nutrition.
Introduction
The nutritional status of the mother, both undernutrition (body mass index (BMI) < 18.5 kg/m2) and overweight (BMI > 25 kg/m2), is intricately linked to pregnancy outcomes and, later on, to infant and child nutrition and health 1 . Maternal malnutrition is a key contributor to poor fetal growth, intrauterine growth restriction (IUGR) and consequent low birth weight (LBW). These can in turn contribute to infant and child undernutrition, increased morbidity and mortality, and are also associated with long-term, irreversible cognitive, motor and health impairments [2][3][4][5] . Likewise, short maternal stature, often a result of childhood stunting in girls, is a significant risk factor for obstructed labour and caesarean delivery. On the other hand, maternal overweight and obesity are associated with excess maternal morbidity, preterm birth and increased infant mortality 3 , as well as an increased risk of childhood obesity continuing into adolescence and early adulthood 5 .
Besides malnutrition, maternal infections, especially malaria and HIV/AIDS, also contribute majorly towards maternal undernutrition and adverse pregnancy outcomes. According to the Global Burden of Diseases 2010, child and maternal undernutrition risk factors, including maternal micronutrient deficiencies, suboptimal breastfeeding and childhood underweight, are collectively accountable for almost 7% of the global disease burden 6 , contributing to at least a fifth of maternal deaths along with the increased probability of poor pregnancy outcomes 7 . These are most prevalent in the regions of South East Asia, South America and Africa, with some countries in the Eastern and Southern Africa Region (ESAR) having maternal undernutrition prevalence rates as high as 35% 8 . Despite the declining trend over the past few decades, the prevalence of low BMI (<18.5 kg/m2) among women of reproductive age (WRA) in Africa and Asia still looms higher than 10% 1 . The eastern, northern and western African regions have shown some improvements; however, in southern Africa the situation has not improved or might have even worsened 8 . Simultaneously, the prevalence of overweight (BMI ≥ 25 kg/m2) and obesity (BMI ≥ 30 kg/m2) among WRA has been rising in all regions of the world, reaching more than 30% globally and 10% in Africa 1 .
Most countries in the region belong to the group of lower-income countries, while Angola, Lesotho and Swaziland are among the middle-income countries and Botswana, Namibia and South Africa are in the upper-middle-income group 9 . These middle-income countries in southern Africa are amongst the countries with the highest gross national income; however, severe income inequalities exist in the middle-income group countries as well.
The progress in key maternal health indicators over the past two decades has been painfully slow in the ESAR, with merely a 1% reduction in maternal anemia and a decline in maternal mortality from 638 to 409 per 100,000 live births. The available data from the recent demographic health surveys (DHS) and multiple indicator cluster surveys (MICS) for ESAR countries suggests high prevalence of both undernutrition (BMI <18.5 kg/m2) and overweight (BMI ≥25 kg/m2) at 14% and 15% respectively, with a prevalence of undernutrition as high as 40% in some countries like Eritrea. Recent analysis reveals that globally, Africa has the highest proportion of pregnant women with iron deficiency anemia (haemoglobin <110 g/L) at 20.3% and vitamin A deficiency (serum retinol <0.70 μmol/L) at 14.3% 1 . Coverage rates for the evidence-based interventions for maternal health across the life cycle, including contraceptive prevalence (36.5%), family planning need satisfaction (48.7%), intermittent preventive treatment for malaria in pregnant women (IPTp) (28.3%), postnatal care (32.5%), iron/folic acid supplementation (53.2%), antenatal care (ANC) (1 visit: 82%, 4 visits: 51.5%) and skilled attendance at delivery (55.9%), also remain low. There are wide disparities across the extreme equity strata, and coverage for contraceptive prevalence, ANC (at least 1 visit) and skilled delivery are well below 40% for the poorest quintiles of the population.
We undertook this review to assess available information on nutrition programs in the region and systematically review the data from studies focusing on nutritional interventions aimed at improving maternal nutrition in the ESAR countries. Our review was guided by a conceptual framework based on various interventions to improve maternal nutrition status either directly or indirectly (Figure 2). Nutrition specific interventions or programs include micronutrient supplementation, food fortification, food distribution, nutrition education and counseling, and disease prevention and control including deworming, malaria and HIV; while nutrition sensitive interventions or programs include agricultural interventions, water, sanitation and hygiene (WASH) interventions, reproductive health, women's empowerment and social protection. All indexed publications including trials, quasi-experimental studies and pre-post evaluations were identified and relevant papers were selected for abstraction. Hand searching was done through the reference lists of the existing reviews and meta-analyses to identify relevant papers from the region.
We performed both qualitative and quantitative analysis for the included papers. Meta-analysis was done where there was sufficient data to do so, while qualitative analysis was done for all the identified papers. Data for program objectives, settings, beneficiaries, stakeholders, impact of interventions and barriers encountered during implementation was abstracted from the included programs. Indexed papers for the meta-analysis were abstracted for study design, study setting, target population, allocation concealment, blinding, randomization, generalizability of the intervention(s) and outcomes of interest. All the data was recorded in a standardized abstraction table. Two reviewers were responsible for data abstraction and any discrepancies found were resolved with consultation from the third reviewer.
Data extracted from articles included for meta-analysis was pooled and the results were expressed as relative risks (RR) for discrete data and mean differences (MD) for continuous variables, with respective 95% confidence intervals (CI). The generic inverse variance (GIV) method was used to calculate pooled estimates for discrete outcomes. Assessment of statistical heterogeneity was done using p values and I2 values; a p value of <0.05 and an I2 of >30% were indicative of high heterogeneity, and in these instances the cause was sought and a random effects model was used. All analyses were performed using Review Manager 5.2.
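To make the pooling procedure concrete, here is an illustrative Python sketch of generic inverse-variance pooling of log relative risks with an I2 heterogeneity check. The analyses in this review were run in Review Manager 5.2, not with this code, and the DerSimonian-Laird estimator for the between-study variance is our assumption about the random-effects method.

```python
import numpy as np

def pool_log_rr(log_rr, se):
    """Inverse-variance pooling of study-level log relative risks.
    log_rr, se: arrays of point estimates and standard errors (>= 2 studies)."""
    w = 1.0 / se ** 2                              # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)          # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance (assumed estimator).
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (se ** 2 + tau2)                  # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    ci = (np.exp(pooled - 1.96 * se_p), np.exp(pooled + 1.96 * se_p))
    return np.exp(pooled), ci, i2
```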
Results
We report the findings from the nutrition specific and sensitive programs and meta-analysis separately below: Findings from the nutrition programs: Most of the programs in the ESAR were focused on agricultural interventions, followed by interventions to prevent and manage infection (including HIV/AIDS and malaria), family planning and reproductive health. A substantial number of large-scale programs were also found on food fortification and food provision, along with post-crisis rehabilitation. There were programs on financial incentives, nutrition education, micronutrient supplementation, gender-based violence prevention, and very few on substance abuse. Most of these programs were implemented in collaboration with the respective government ministries and international agencies, whereby the agencies provided support via funding or technical assistance, while governments were responsible for the implementation. Some of the programs were completely assisted by the international agencies alone or in partnership with the for-profit private sector. A few programs were solely run by government or through the support of NGOs. Figures 3a and 3b show the distribution of programs in the ESAR countries.
Nutrition specific programs: Programs aimed at directly improving nutrition status included micronutrient supplementation, food fortification and prevention and management of infections like malaria and HIV/AIDS. A greater number of such programs were seen in Ethiopia, Kenya, and Uganda, while Swaziland and Comoros had relatively fewer micronutrient supplementation and food fortification programs. Most of the supplementation and fortification programs focused on nutrition specific objectives, while infection control programs majorly focused on prevention and management and lacked nutrition specific evaluations; however, some of these infection control programs included micronutrient supplementation as a part of the package.
Micronutrient supplementation programs, especially iron/folic acid supplementation for pregnant women, have led to increased coverage and compliance, with a resultant decline in anemia among pregnant women and WRA 10 . National level data indicates that in the year 2008, Kenya had 69% coverage for maternal iron/folic acid supplementation, while the coverage in Malawi and Zambia was 76% and 84% respectively 11 . For some countries like Ethiopia, despite a large number of existing micronutrient supplementation programs, the coverage remains low (17% in 2008) 11 , perhaps suggesting the need for more targeted delivery of the interventions. Maternal anemia rates in these countries have also shown declining trends, as in Kenya, where anemia declined from 54% in 1990 to 34% in 2012; similar trends have been seen in Uganda (a decrease from 49% to 33%) and Ethiopia (a decline from 37% to 22%) during the same time period 12 . However, Swaziland achieved only a minor decline of about 6% during the same period, reflecting a need for more attention in some countries 12 .
In 2002, the health ministries of ESAR countries passed a resolution to initiate and promote a food fortification initiative, which was subsequently launched in 2004. Based on the latest data available from the flour fortification initiative (FFI), four countries have mandatory iron fortification of wheat and maize, i.e. Kenya, Rwanda, South Africa and Uganda 13 . The activities of the fortification projects are directed towards producing enhanced quality of food available to the public. The universal salt iodization program has proved a great success in the sustainable elimination of iodine deficiency. Countries such as Burundi (98%), Kenya (98%), Rwanda (99%), Uganda (96%) and Zimbabwe (94%) have greatly benefited from salt iodization and achieved almost universal coverage for households consuming adequately iodized salt (15 parts per million or more) in 2012 14 .
Other countries in the region reporting encouraging coverage in 2012 include Comoros (82%), Lesotho (84%), Botswana (95%) and Namibia (63%); however, much remains to be done for countries such as Djibouti, Ethiopia and Somalia that currently have very low coverage for salt iodization 14 . Some programs have also used iron-fortified bread as well as vitamin A-fortified cooking oil 15 , and surveillance data have reported a decrease in neural tube defects due to folate fortification 16 . Moreover, there has been a steady increase since 2002 in the number of African countries that have passed legislation to make food fortification of commodities such as flour mandatory or at least voluntary 17 . More recently, bio-fortification programs have appeared in the region, whereby foods enriched with vitamin A, vitamin B complex, zinc, iron and folic acid are being produced [18][19][20][21][22] . Food fortification initiatives have increased, and coverage of a few, like salt iodization, has improved significantly in several countries, but there is a lack of evidence on whether the quality was sufficient and met the standards required for such a strategy to have maximal impact.
Infection control programs mainly involved malaria and HIV/AIDS preventive and control measures. Malaria control programs in the region comprise free distribution of insecticide-treated nets (ITNs), redeemable coupons/vouchers for ITNs, insecticide spraying, IPTp and environmental vector control measures 23,24 . These programs have increased ITN ownership and usage to above 90% in countries like Tanzania, Uganda, Kenya, Zambia, Malawi, Ethiopia and Mozambique, and have also led to substantial improvements in IPTp coverage, which reached 69% and 63% in Zambia and Tanzania respectively in 2012 7 . However, these coverage rates are still inadequate to meet the endemic burden of malaria in some countries of the region, like Somalia and Swaziland, where coverage still remained at 1% and 9.9% respectively in 2012 7 . Various HIV/AIDS targeted programs have been launched, varying in the interventions and services delivered, ranging from policy making, HIV prevention and treatment, monitoring and evaluation of HIV/AIDS services, and nutrition in HIV, to providing support to orphans and vulnerable children (OVC) affected by HIV/AIDS. These programs have significantly increased HIV testing during pregnancy and have led to an increase in the number of women having access to HIV testing and receiving antiretroviral therapy (ART) 25 . HIV prevention programs have also led to a decrease in risky sexual behaviors, as well as an increase in the number of people reporting condom use in their last sexual encounter 6 .
HIV incidence declined dramatically from 2001 to 2011: by 41% in South Africa, 73% in Malawi, 71% in Botswana, 68% in Namibia, 58% in Zambia, 50% in Zimbabwe and 37% in Swaziland, which has the highest HIV prevalence in the world 26,27 . Ethiopia achieved a 90% reduction in the rate of new HIV infections in the last decade. However, countries such as Tanzania, Kenya, Mozambique and Uganda still have some of the highest HIV prevalence rates in the ESAR 26 , reflecting the need for more programs in these countries. The coverage of effective antiretroviral regimens for preventing mother to child transmission (MTCT) in low- and middle-income countries was 57% in 2011, and much remains to be done to eliminate MTCT completely. A recent report suggests that, on average, nearly half of all children newly infected with HIV in the 20 African countries surveyed acquired HIV during breastfeeding, due to low antiretroviral coverage during this period. Nevertheless, in 2012, 375,000 more pregnant women living with HIV received antiretroviral medicines than in 2009 27,28 . Three countries in the region (Botswana, Namibia and Rwanda) have attained universal access to ART, defined as more than 80% coverage, while Botswana, Namibia and Zambia have further met their goal of providing antiretroviral medicines to 90% of eligible women in 2012 25 .
Nutrition sensitive programs: Among the programs that broadly addressed nutrition, a large number involved family planning and reproductive health initiatives that led to increases in knowledge and in the contraceptive prevalence rate, along with an increasing number of women completing at least three antenatal visits during pregnancy and delivering in institutions [29][30][31] . Kenya, Tanzania and Uganda have successfully improved the proportion of pregnant women receiving at least one antenatal visit, reaching 92%, 94% and 88% respectively in 2012 7 . However, contraceptive prevalence in some countries remains disappointing, with Kenya, Swaziland and Ethiopia reporting contraceptive prevalence rates of 46%, 65% and 29% respectively in 2012 7 .
Africa lags behind on WASH measures: safe drinking water was available to 66% of the population in 2010, up from 61% in 2000, with a wide equity gap (urban: 85%; rural: 54%), while the population with access to an improved sanitation facility was 40% in 2010, up from 37% in 2000 (urban: 54%; rural: 31%) 7 . Programs to improve WASH measures included rehabilitation or construction of new water points and wells along with protection from groundwater pollution, provision of piped water to households, and community mobilization and training to empower the target population to maintain the newly constructed or rehabilitated water infrastructure. Sanitation was improved by the construction of latrines in various settings: within households, in schools as well as in public places. New sewage disposal and treatment plants were a feature of a small number of the programs. Hygiene was improved by rural community education sessions, which entailed information dissemination on hand washing, hygienic food preparation and safe waste disposal. Many encouraging results have been reported, such as an increasing number of rural households having access to improved water and sanitation sources as well as piped water delivered to their homes 32 . Moreover, a substantial number of community members have been educated on the importance of hygiene, reflected in the large number of households that have incorporated these practices into their daily routines as a result of the community-led total sanitation (CLTS) program [33][34][35] . However, these programs need to be implemented at a larger or national scale in order to achieve wider gains. Steady progress has been made in providing access to safe drinking water, with Ethiopia and Malawi reporting 93% and 84% access respectively in 2012 7 . Uganda and Tanzania have also been making progress, with encouraging coverage of 70% and 59% respectively in 2012 7 . However, countries such as Kenya, Malawi, Uganda, Somalia and Mozambique require renewed efforts, as coverage was still below the 50% mark in 2012 7 .
Investment in the agricultural sector aims to strengthen the economy and foster poverty alleviation, household financial security, food security and food diversity in the countries of the ESAR. The interventions employed in these programs include smallholder farmer training in the latest farming techniques and technology, rehabilitation of irrigation infrastructure, provision of improved seeds and fertilizer, as well as instituting loan systems to help smallholder farmers start and sustain their businesses. Another area of focus was to provide farmers with access to markets and equip them with the tools necessary to be competitive in their trade. The impact of these programs has been manifold in the areas where they were implemented, as evidenced by the increase in hectares of unused land brought under cultivation as a result of the introduction of farmers to new farming techniques, the provision of cultivating tools such as tractors, and the construction of new irrigation systems 36 . The provision of improved seeds and fertilizers to farmers has resulted not only in a higher yield of cash crops such as cotton, but has also contributed to crop diversity and quality, all of which has had a beneficial impact on smallholder farmers in the trade market 36 . Moreover, the effect on household security was evident in the boost in monthly income earned by the farmers 37 . Established in 2003 by the African Union, the Comprehensive Africa Agriculture Development Program (CAADP) is an African-owned initiative to boost agricultural productivity by extending the area under sustainable land management and reliable water control, improving rural infrastructure and trade-related capacities for market access, increasing food supply, reducing hunger, improving responses to food emergencies, and improving agricultural research, technology dissemination and adoption. National governments have committed to increasing their agricultural investment to at least 10% of the national budget, targeting average annual growth of 6% in agriculture by 2015. However, most of these programs have reported direct agricultural impacts and do not clearly outline impacts on nutrition indicators. Notably, most such programs did not include nutrition impacts as part of their overall objectives.
Implementation barriers:
Most of the programs reported administrative barriers in implementation, including delays in procurement, lack of communication and coordination between implementing agencies, lack of project monitoring and evaluation, high staff turnover, challenges in obtaining transportation to the project area and inability to meet the demands of the target population. Lack of political will and good governance has also been reported, as evidenced by delays due to lack of authorization from local authorities and lack of support from government organizations. The most common financial problem identified was the lack of long-term financial sustainability and delays in the release of funds, leading to interruptions in service. Environmental issues were mostly related to natural disasters such as floods or droughts, leading to delays in product distribution to the target population due to physical inaccessibility. A major setback of these programs is the lack of rigorous evaluation designs and monitoring mechanisms. Most of the nutrition sensitive programs, including agricultural programs, WASH strategies, sexual and reproductive health, and infection prevention and control, did not include nutrition specific objectives and indicators in the evaluation plan, making it more challenging to draw conclusions on their effectiveness on maternal nutrition indicators.
Findings from the Meta-analysis
We included 39 indexed studies in the systematic review and meta-analysis conducted in ESAR countries, of which 37 were randomized controlled trials and 2 were pre-post intervention studies. Ten studies were conducted in Tanzania, 6 each in Uganda and Kenya, 5 each in Malawi and South Africa, 3 each in Zimbabwe and Mozambique, and 1 in Ethiopia. Five studies involved interventions in adolescent girls aged 12 to 18 years, while the rest involved adult pregnant or lactating women. Table 1 summarizes the findings from the systematic review.
Our analysis showed that daily or weekly iron supplementation versus no supplementation significantly improved serum ferritin and hemoglobin among pregnant women [38][39][40][41][42] . Vitamin A supplementation among HIV positive women during pregnancy significantly improved birth weight, with non-significant impacts on infants' weight and length at 6 months 43-47 . Using everyday food items including carrots, papayas, sunflower oil, red palm oil and β-carotene among postpartum women significantly increased serum retinol levels 48,49 . Agricultural interventions promoting the production and use of vitamin A rich sweet potatoes significantly increased vitamin A intake from orange sweet potato sources and serum retinol levels 50,51 . Multiple micronutrient supplementation (MMN) among pregnant and lactating women significantly improved hemoglobin, serum ferritin, birth weight, small-for-gestational-age (SGA) rates and anemia in children, with non-significant impacts on serum retinol levels and preterm birth [52][53][54][55][56][57][58][59][60][61] . Malaria prevention in pregnancy showed that the use of ITNs in pregnancy significantly reduced the risk of peripheral parasitemia [62][63][64][65][66][67] . These findings should be interpreted with caution, as a limited number of studies were pooled for each intervention due to the restricted region and population for inclusion. In addition, the studies pooled in the meta-analysis were heterogeneous in the dose, duration and follow-up period of the intervention. These findings were compared with recent reviews of the global evidence from the Lancet series on maternal and child nutrition detailing the effectiveness of nutrition specific interventions to improve maternal nutrition and birth outcomes 68 .

Vitamin A supplementation:
Five studies evaluated the impact of supplementation with synthetic vitamin A in doses of 5000 IU, 10 000 IU and 25 000 IU for pregnant and lactating women. Three of these studies were done among HIV positive women.
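To make the pooling step concrete, here is a minimal sketch of a DerSimonian–Laird random-effects meta-analysis in Python. The effect sizes and variances are hypothetical placeholders, not data from the included studies:

```python
import numpy as np

# Hypothetical per-study effects (e.g., mean hemoglobin differences, g/dL)
# and their variances; these numbers are illustrative only.
effects = np.array([0.40, 0.55, 0.20, 0.35])
variances = np.array([0.010, 0.020, 0.015, 0.012])

# Fixed-effect weights and Cochran's heterogeneity statistic Q.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and its standard error.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```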
Discussion
Findings from our review suggest that multiple programs impacting nutrition are in place in the ESAR, including programs that directly address nutrition indicators and those that integrate corresponding sectors like agriculture, health, education, and water and sanitation. However, their scale and depth differ considerably. Nutrition directed programs in the region include micronutrient supplementation, food fortification, direct food provision and prevention and control of infectious diseases including HIV and malaria, while broader programs focus mainly on agricultural interventions along with family planning, reproductive health, water and sanitation, reduction of gender-based violence, women's empowerment and cash incentives. These programs have been implemented by a diverse range of players in the region, including the respective government ministries, international agencies, NGOs and the private sector. Most of these programs are clustered in a few countries like Kenya, Uganda and Ethiopia, while others, e.g., Comoros, Somalia and Swaziland, have had only a limited number of initiatives. Over the last couple of years, many programs have been initiated and implemented in the region; however, these do not appear in our review since they started recently and their evaluations have not yet been reported.
These programs have been associated with some improvements in overall maternal health and nutritional indicators; however, these are insufficient to contribute significantly to progress in the region. A major shortcoming is the lack of rigorous evaluation plans and of nutrition specific objectives to measure the actual impact on relevant maternal nutrition indicators. Nutrition sensitive programs like agricultural and financial initiatives supporting women's empowerment, and even infection control programs, may broadly impact maternal nutrition in the region. However, it is challenging to gauge their contribution towards reducing maternal undernutrition due to the lack of nutrition specific program objectives and evaluations. Key barriers and bottlenecks identified were weak health systems, poor governance, limited financial and human resources, limited supplies and competing priorities, which hinder not only the implementation but also the scale-up and sustainability of maternal nutrition programs.
Recommendations
• Focus should be on existing promising interventions to improve maternal nutrition including simple interventions like periconceptional folic acid supplementation/fortification, maternal balanced energy protein, vitamin A, multiple micronutrient and calcium supplementation, breast feeding promotion, appropriate complementary feeding, preventive zinc supplementation and management of acute malnutrition in children.
• Country specific plans should be devised which should be led by respective governments in collaboration with international agencies and other stakeholders.
• Targeted nutrition specific programs should be prioritized towards the countries with the greatest burden of maternal undernutrition in the region, including Swaziland, Comoros, Djibouti, Ethiopia and Somalia.
• Countries with the highest HIV prevalence rates in the ESAR, like Tanzania, Kenya, Mozambique and Uganda, should be prioritized for implementing infection prevention and management programs.
• These interventions need to be properly packaged and delivered at scale through appropriate delivery channels in order to reach the masses.
• Nutrition-sensitive programs in ESAR need to become more genuinely nutrition sensitive by explicitly incorporating nutrition issues in the initial design, monitoring and evaluating nutrition impacts with specific and appropriate nutrition indicators, and ensuring that the programs have no unintended negative consequences for nutrition, such as inconsistent strategies or duplication of efforts.
• Evaluations should follow rigorous study designs and gather high quality data with enhanced estimation methods for reliable outcome measures in order to gauge the actual impact of these intervention packages on maternal nutrition and identify the region specific best buys.
Conclusion
Maternal nutrition in ESAR is affected by a range of factors including chronic conflicts and emergencies, droughts and famines, periodic outbreaks and endemic infectious diseases, rampant poverty, and inequities in access and utilization. Political stability and country leadership play a pivotal role in allowing any measures to have maximal impact and in guaranteeing long-term sustainability. If ensured, this could go a long way towards reducing the funding gap and promoting domestic investment, especially in public sector programs. Public-private partnerships should be enhanced or scaled up at all levels, with better coordination between the various stakeholders in order to avoid duplication of effort and promote synergies. With firm commitment and implementation of evidence based interventions, attention to region specific contextual factors, and transparent monitoring and evaluation, the ESAR region can move a step closer to achieving the goal of reducing maternal undernutrition.
"year": 2015,
"sha1": "5e8250479fe2b45f0eaf3549b846030485705a41",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/ahs/article/download/117572/107140",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "93750b3d507a16d64a901e9440d9fa76bf8f386f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Modified Gauss-Bonnet theory as gravitational alternative for dark energy
We suggest a modified gravity where an arbitrary function of the Gauss-Bonnet (GB) term is added to the Einstein action as gravitational dark energy. It is shown that such a theory may pass solar system tests. It is demonstrated that modified GB gravity may describe the most interesting features of late-time cosmology: the transition from deceleration to acceleration, the crossing of the phantom divide, and current acceleration with an effective (cosmological constant, quintessence or phantom) equation of state of the universe.
1. The explanation of the current acceleration of the universe (the dark energy problem) remains a challenge for theoretical physics. Among the number of approaches to dark energy, a very interesting one is related to modifications of gravity at large distances. For instance, adding a 1/R term [1,2] to the Einstein action leads to a gravitational alternative for dark energy where late-time acceleration is caused by the universe expansion. Unfortunately, such 1/R gravity contains instabilities [3] of gravitationally bound objects. These instabilities may disappear with the account of higher derivative terms, leading to consistent modified gravity [4]. Other proposals for modified gravity suggest ln R [5] or Tr 1/R terms [6], the account of inverse powers of Riemann invariants [7], or some other modifications [8]. The one-loop quantization of general f(R) in de Sitter space has also been done [9]. In addition to the stability condition, which significantly restricts the possible form of f(R) gravity, another restriction comes from the study of its Newtonian limit [10]. Passing these two solar system tests leads to the necessity of fine-tuning the form and coefficients in the f(R) action, as in consistent modified gravity [4]. That is why it has even been suggested to consider such alternative gravities in the Palatini formulation (for a recent discussion and list of references, see [11]).
In the present paper we suggest a new class of modified gravity, where the Einstein action is modified by a function f(G), G being the Gauss-Bonnet (GB) invariant. It is known that G is a topological invariant in four dimensions, while it may lead to a number of interesting cosmological effects in the higher-dimensional brane-world approach (for a review, see [12]). It naturally appears in the low-energy effective action from string/M-theory (for a recent discussion of late-time cosmology in stringy gravity with a GB term, see [13]). As we demonstrate below, modified f(G) gravity passes solar system tests for reasonable choices of the function f. Moreover, it is shown that such modified GB gravity may describe the late-time (effective quintessence, phantom or cosmological constant) acceleration of the universe. For a quite large class of functions f it is possible to describe the transition from deceleration to acceleration or from non-phantom phase to phantom phase in the late universe within such a theory. Thus, modified GB gravity represents a quite interesting gravitational alternative for dark energy, with more freedom compared with f(R) gravity.

2. Let us start from the following action:
$$S = \int d^4x \sqrt{-g}\left(\frac{R}{2\kappa^2} + f(G)\right). \qquad (1)$$
Here G is the GB invariant: $G = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\xi\sigma}R^{\mu\nu\xi\sigma}$. By introducing two auxiliary fields A and B, one may rewrite the action (1) as
$$S = \int d^4x \sqrt{-g}\left(\frac{R}{2\kappa^2} + B(G - A) + f(A)\right). \qquad (2)$$
Varying over B, it follows that A = G. Using this in (2), the action (1) is recovered. On the other hand, by variation over A in (2), one gets B = f'(A). Hence,
$$S = \int d^4x \sqrt{-g}\left(\frac{R}{2\kappa^2} + f'(A)(G - A) + f(A)\right). \qquad (3)$$
The scalar A is not dynamical: it has no kinetic term and is introduced for simplicity. Varying over A, the relation A = G is obtained again.
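To make the auxiliary-field trick concrete, here is a minimal symbolic check (our sketch, not part of the original derivation) that stationarity of the action density in (3) with respect to A enforces A = G whenever f''(A) ≠ 0:

```python
import sympy as sp

A, G = sp.symbols("A G")
f = sp.Function("f")

# GB sector of the action density after eliminating B, as in Eq. (3):
L = sp.diff(f(A), A) * (G - A) + f(A)

# Stationarity with respect to the auxiliary field A:
# d/dA [f'(A)(G - A) + f(A)] = f''(A)(G - A) - f'(A) + f'(A) = f''(A)(G - A)
print(sp.simplify(sp.diff(L, A)))  # -> (G - A)*f''(A): vanishes iff A = G
```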
The spatially flat FRW universe metric is chosen as
$$ds^2 = -dt^2 + a(t)^2 \sum_{i=1}^{3} \big(dx^i\big)^2. \qquad (4)$$
The first FRW equation then has the following form:
$$0 = -\frac{3}{\kappa^2}H^2 - f(A) + A f'(A) - 24\,\dot{A}\, H^3 f''(A). \qquad (5)$$
Here the Hubble rate H is defined by $H \equiv \dot a / a$. For (4), the GB invariant G (= A) has the following form:
$$G = 24\left(\dot H H^2 + H^4\right) = 24 H^2\,\frac{\ddot a}{a}. \qquad (6)$$
In general, Eq. (5) has a deSitter universe solution, where H and therefore A = G are constants. If H = H_0 with constant H_0, Eq. (5) reads
$$0 = -\frac{3}{\kappa^2}H_0^2 + 24 H_0^4\, f'\big(24 H_0^4\big) - f\big(24 H_0^4\big). \qquad (7)$$
For a large number of choices of the function f, Eq. (7) has a non-trivial ($H_0 \neq 0$) real solution for H_0 (deSitter universe). Hence, such a deSitter solution may be applied to the description of the early-time inflationary as well as the late-time accelerating universe. Let us check now how modified GB gravity passes the solar system tests. The GB correction to the Newton law may be found by coupling matter to the action (1). Varying over $g_{\mu\nu}$, we obtain the field equations (8), where $T_{\mu\nu}$ is the matter energy-momentum tensor. In the expression (8), terms including f'''(A) appear; in the equation (5) corresponding to the first FRW equation, however, the terms including f''' do not appear. When $T_{\mu\nu} = 0$, the (t,t)-component of Eq. (8) reproduces Eq. (5) upon identifying G with A. The perturbation around the deSitter background, which is a solution of (7), may be easily constructed. We now write the deSitter space metric as $g_{(0)\mu\nu}$, which gives the following Riemann tensor:
$$R_{(0)\mu\nu\xi\sigma} = H_0^2\left(g_{(0)\mu\xi}\,g_{(0)\nu\sigma} - g_{(0)\mu\sigma}\,g_{(0)\nu\xi}\right). \qquad (9)$$
The flat background corresponds to the limit $H_0 \to 0$. Represent $g_{\mu\nu} = g_{(0)\mu\nu} + h_{\mu\nu}$. For simplicity, the gauge condition $g^{\mu\nu}_{(0)} h_{\mu\nu} = 0$, $\nabla^{\mu}_{(0)} h_{\mu\nu} = 0$ is chosen. In the resulting linearized equation (10), the GB term contribution does not appear, except through the length parameter $1/H_0$ of the deSitter space, which is determined with the account of the GB term. Eq. (10) proves that there is no correction to the Newton law in the deSitter and even in the flat background corresponding to $H_0 \to 0$, whatever the form of f. We should note that the expression (10) is valid only in the deSitter background. In a more general FRW universe, there could appear corrections coming from the f(G) term. We should also note that in deriving (10) we have used the gauge condition $g^{\mu\nu}_{(0)} h_{\mu\nu} = 0$; if we include the mode corresponding to $g^{\mu\nu}_{(0)} h_{\mu\nu}$, there might appear corrections from the f(G) term. Eq. (10) only shows that, for the mode corresponding to the usual graviton, no correction coming from f(G) appears in the deSitter background.
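As a numerical illustration of how (7) can fix H_0, the following sketch solves the deSitter condition, in the reconstructed form given above, for the toy choice f(G) = f_0 G^2 with κ = f_0 = 1. Both the functional form and the parameter values are our illustrative assumptions, not choices made in the paper:

```python
import numpy as np
from scipy.optimize import brentq

kappa, f0 = 1.0, 1.0

def desitter_condition(H0: float) -> float:
    # 0 = -(3/kappa^2) H0^2 + A f'(A) - f(A), with A = 24 H0^4.
    # For f(A) = f0 A^2 one has A f'(A) - f(A) = f0 A^2.
    A = 24.0 * H0**4
    return -3.0 * H0**2 / kappa**2 + f0 * A**2

H0 = brentq(desitter_condition, 0.1, 1.0)
# Closed form for this toy model: H0^6 = 3 / (576 f0 kappa^(-2)) = 1/192.
print(H0, (1.0 / 192.0) ** (1.0 / 6.0))  # both ~0.416
```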
In the case of f(R)-gravity [2,4], for the most interesting choices of the function f an instability was observed in [3]. The instability is generated because the FRW equation contains derivatives of fourth order, so the scalar curvature propagates. This may cause the appearance of a growing force between galaxies. Only special forms of f(R) may be free of such an instability [4].
In the f(G)-gravity case (1), the scalar field denoted by A in (3) has no kinetic term. Then the scalar field, or the curvature itself, does not propagate, and therefore there is no such instability as in [3], which is also clear from the absence of a correction to the Newton law in (8). Thus, modified GB gravity may pass the solar system tests (at least for some functions f(G)).
Having in mind this fundamental property of the current accelerating universe, it is interesting to study the possible transition between deceleration and acceleration of the universe for the action (1). First, Eq. (6) can be rewritten as $G = A = 24 H^2 \ddot a / a$. Then at the transition point, where $\ddot a = 0$, the Gauss-Bonnet term vanishes: G = A = 0. Let us assume the transition occurs at t = t_0. The Hubble rate H may be expanded as
$$H = H_0 + H_1 (t - t_0) + H_2 (t - t_0)^2 + \cdots \qquad (11)$$
At the transition point t = t_0 one finds $\dot H(t_0) = H_1$, and since G = 0 there, it follows from (6) that $H_1 = -H_0^2$. Hence G (= A) vanishes linearly in (t − t_0) near the transition, giving the expansion (14) of G (A). We now also assume that f(A) can be expanded as
$$f(A) = \sum_{n} f_n A^n. \qquad (15)$$
Then, substituting (11), (14), and (15) into (5), we find Eq. (16). Combining (13) and (16), one can show that the Hubble rate can be determined consistently, which suggests the existence of the transition between deceleration and acceleration of the universe. We should note that H_0 can be determined by a proper initial condition. The transition condition is then $f_2 \neq 0$, that is, f(A) contains a quadratic term in A. Let us now consider the possible transition between the non-phantom phase and the phantom phase of the universe (if the current universe is a phantom one). We now assume that the transition occurs at t = t_1.
Since $\dot H = 0$ at the transition point, it is natural to assume that near t = t_1 the Hubble rate behaves as a constant plus a term quadratic in (t − t_1) (Eq. (17)), so that $\dot H \propto (t - t_1)$ changes sign there. Hence A (= G) behaves as in (18), and Eq. (5) yields the corresponding consistency condition (19). The transition between the non-phantom phase and the phantom phase can be regarded as a perturbation of the deSitter solution in (7). The following perturbation may be suggested:
$$H = H_0 + \delta H, \qquad (20)$$
where H_0 satisfies (7). Eq. (5) then gives a linear equation (21) for δH. Here it is supposed that $f''(24 H_0^4) \neq 0$. Let λ satisfy the associated characteristic equation (22). Then δH behaves as $\delta H \sim e^{\lambda t}$. If λ is real, δH is monotonically increasing or decreasing, $\dot H$ does not vanish, and therefore there is no transition. If λ acquires an imaginary part, δH oscillates. Therefore the transition between the phantom phase, where $\dot H = \delta \dot H > 0$, and the non-phantom phase, where $\dot H = \delta \dot H < 0$, can be repeated in an oscillation regime. The amplitude of the oscillation of δH decreases as $|\delta H| \sim e^{-3 H_0 t / 2}$, and the universe asymptotically approaches deSitter space.
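The dichotomy between real and complex λ can be illustrated numerically. The quadratic form λ² + 3H_0λ + C = 0 used below is our assumption: it is the simplest characteristic polynomial whose damped branch reproduces the decay rate $|\delta H| \sim e^{-3H_0 t/2}$ quoted above, with C standing in for the model-dependent coefficient coming from (21):

```python
import numpy as np

def perturbation_modes(H0: float, C: float) -> np.ndarray:
    # Roots of the hypothetical characteristic polynomial
    # lambda^2 + 3*H0*lambda + C = 0; real part is always -3*H0/2
    # in the oscillatory regime, matching the quoted decay rate.
    return np.roots([1.0, 3.0 * H0, C])

H0 = 1.0
for C in (1.0, 2.25, 4.0):  # discriminant 9*H0^2 - 4*C changes sign at C = 2.25
    lam = perturbation_modes(H0, C)
    regime = ("oscillatory (transition can repeat)"
              if np.iscomplex(lam).any() else "monotonic (no transition)")
    print(f"C = {C}: lambda = {lam} -> {regime}")
```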
3. To show that modified GB gravity may lead to quite rich and realistic cosmological dynamics, we consider some explicit examples of the function f(G). The following solvable model, which does not belong to the above class, may be discussed:
$$f(G) = f_0 \sqrt{|G|}, \qquad H = \frac{h_0}{t},$$
where f_0 is a constant. With this ansatz, Eq. (5) gives an algebraic equation for h_0. The solution differs for the h_0 > 1 and h_0 < 0 cases. A simple analysis shows that when $f_0^2 > 3/2\kappa^4$, there are two negative solutions, which describe an effective phantom universe with w < −1. When $f_0^2 < 3/2\kappa^4$, we have one h_0 > 1 solution, which describes effective quintessence with −1 < w < −1/3, and one h_0 < 0 solution describing an effective phantom. If $16 f_0 / 3\kappa^4 < 1/\kappa^8$, there are two real solutions which satisfy 0 < h_0 < 1.
The above results show that an asymptotic solution behaving as (28) with α = 0 can be obtained only when β = 1/2 or 1/2 < β < 2/3 as t → ∞, or when β = −1/3 or 1/4 < β < 1/2 as t or $t_0 - t \to 0$. In other situations, the asymptotic solution corresponds to the deSitter space (7). The following model may be taken as the next example (29). Then Eq. (7) gives the condition (30). It could be difficult to solve this equation exactly, but if the curvature, and therefore H_0, is small, one gets ...
"year": 2005,
"sha1": "aa3d8b1f676f84003ba098872f805e015b531393",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0508049",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aa3d8b1f676f84003ba098872f805e015b531393",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Protocol for assessing translation in living Drosophila imaginal discs by O-propargyl-puromycin incorporation
Summary: Translation is a fundamental process of cellular behavior. Here, we present a protocol for measuring translation in Drosophila epithelial tissues using O-propargyl-puromycin (OPP), a puromycin derivative. We detail steps for larval dissection, OPP incorporation, fixation, OPP labeling, immunostaining, and imaging. We also provide details of the quantification analysis. Significantly, OPP addition to methionine-containing media enables polypeptide labeling in living cells. Here, we study wing imaginal discs, an excellent model system for investigating growth, proliferation, pattern formation, differentiation, and cell death. For complete details on the use and execution of this protocol, please refer to Lee et al. (2018), Ji et al. (2019), and Kiparaki et al. (2022). 1,2,3
Highlights
• Visualizing protein synthesis in Drosophila tissues by click-based OPP labeling
• Compare translation rates of different genotypes within mosaic Drosophila tissues
• OPP incorporation can be combined with immunostaining
• OPP incorporation into newly synthesized proteins in live epithelial cells

Before you begin
1. Ensure that all the necessary buffers for the protocol are freshly prepared. These buffers include 1× PBS, Schneider medium supplemented with serum, fixatives, blocking agents, and antibody solutions.
2. It is recommended to conduct control experiments in parallel to the main experiments to ensure reproducibility of the OPP protocol. For example, in our original manuscript we used cycloheximide as a negative control to confirm that OPP incorporation in cells of the imaginal discs depends on active translation. 1 In addition, we confirmed that cells with reduced levels of Tor, myc or eIF2α activity present reduced OPP incorporation. In that work, we showed that cells that have heterozygous mutations in ribosomal protein genes (Rp+/− cells) have reduced OPP incorporation compared to wild-type cells. Another negative control that we suggest is a sample without OPP, to distinguish the OPP signal from background cytoplasmic staining.
Note: Although in this study we used wing imaginal discs from Drosophila melanogaster third instar larvae for OPP staining, this protocol can be used for other types of imaginal discs and tissues as well.
In this step, the larvae are inverted to make the imaginal discs accessible to the labeling medium, which is supplemented with O-propargyl-puromycin (OPP). The labeling time is short (only 15 min) but sufficient to allow OPP incorporation into newly synthesized proteins. Following the OPP labeling, the Drosophila tissues are fixed.
1. To begin, transfer the flies to a fresh vial and allow them to lay eggs for one day. Depending on the genotype, proceed with the dissection of third instar larvae after 4–5 days, which marks Day 1 of this protocol.
Note: The dissection time depends on the developmental growth of the flies. For example, Minute flies require 1–2 more days to reach the same developmental stage compared to wild-type flies.
Note: During this period, it is possible to generate clones by utilizing heat shock inducible flippase and FRT recombination.
2. Dissection. a. Collect wandering third instar larvae of the desired genotype from the fly food and wash them twice with 1× PBS in a 9-well glass plate.
Note: This is important to remove any excess food particles that may adhere to the cuticles and interfere with the subsequent reactions.
Note: 1× PBS is freshly prepared from a 10× PBS stock solution.
b. Transfer the clean larvae (usually 20 in number) to a new well containing Schneider medium (∼500 μL) supplemented with 10% FBS (Schneider/FBS medium).
Note: FBS is important to ensure that the larvae have the appropriate nutrients and growth factors to support survival and growth of the tissues during the protocol.
c. Using a pair of forceps, dissect the larvae. i. Grab the mouth hooks of the larvae and, with microdissection spring scissors, cut and remove the posterior one-third of the larvae (usually around 20 larvae). ii. Invert the dissected larvae using forceps. iii. Quickly remove the largest portion of unrelated tissues (i.e., fat body, salivary glands, gut).
Note: Complete this step within 10–15 min of the initial dissection, at room temperature (RT; 22°C–25°C).
Note: Practice may be required to invert the larvae fast enough. Untrained users can refer to published videos demonstrating dissections of wing and eye imaginal discs. 5,6

3. OPP incorporation.
a. Remove the medium used during dissection and add fresh Schneider/FBS medium supplemented with Component A.
Note: The concentration of Component A is in the range of 10–20 μM, which corresponds to a dilution of 1:1,000–1:2,000.
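A quick consistency check of these numbers (a hypothetical helper, not part of the kit instructions): the stated dilutions imply a stock concentration of about 20 mM, since 20 mM/1,000 = 20 μM and 20 mM/2,000 = 10 μM.

```python
# Back-of-the-envelope dilution check; the ~20 mM stock is an inference
# from the stated numbers, not a value quoted in the protocol.
def working_concentration_uM(stock_mM: float, dilution: int) -> float:
    return stock_mM * 1000.0 / dilution  # convert mM -> uM, then dilute

for dilution in (1000, 2000):
    c = working_concentration_uM(20.0, dilution)
    print(f"1:{dilution} of a 20 mM stock -> {c:.0f} uM")
# 1:1000 -> 20 uM and 1:2000 -> 10 uM, matching the 10-20 uM range above.
```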
b. Gently shake the samples for 15 min at RT, at 100 rpm.
CRITICAL: OPP becomes covalently attached to the nascent amino acid chains and blocks further elongation. Longer incubation times should be avoided due to potential toxicity, and additionally because they increase the dependence of the OPP signal on the turnover rate of the truncated polypeptides. Differences in proteasome function between samples could then affect the signal as much as differences in translation rate. 4,7

4. Fixation. a. Wash the samples once with 1× PBS and immediately proceed to sample fixation.
CRITICAL: Follow safety measures when handling formaldehyde, since it is highly toxic.
Wearing gloves and working in a chemical fume hood are good safety practices to prevent accidents.
Note: At this point, thaw the Click Components.
c. Block with 3% BSA in 1× PBS for 10 min.
Note: While blocking with 3% BSA, prepare the Click reaction mix according to the proportions in the Materials section.
Note: The mix should be used within 10 min.
d. Remove the blocking solution and add 300–400 μL of Click reaction mix to each sample in the 9-well glass plate. e. Incubate the samples in the dark at RT, on a shaker, for 30 min. f. Remove the reaction mix and wash once with the Rinse buffer (Component F). g. Wash with 3% BSA in PBS for 2 min.
Note: If there are no primary antibodies, proceed to NuclearMask staining (step 8).
Note: The intensities of all the lasers and detectors were set to avoid excessive or weak signal. The Quick LUT feature in the Leica software was used to assess intensity saturation and optimize the settings for image capture.
12. Save all images in the original format as well as in any other format appropriate for subsequent analysis. For ImageJ quantification, .tif images of a single section were used. For details, see the ''quantification and statistical analysis'' section below.
EXPECTED OUTCOMES
OPP was originally used as a click chemistry reagent for fluorescent labeling of nascent protein synthesis in tissue culture cells (Liu et al. 2012). 8 OPP labeling is detected both in the cytoplasm and in the nucleus. If we aim to compare nascent protein synthesis between different cells in mosaic tissues, or between different genotypes, it is important to determine the signal intensity at the level where nuclei are present in both cell types. Comparison of cytoplasmic labeling in one cell type with nuclear labeling in another cell type can lead to inaccurate conclusions. Importantly, control experiments should be done in parallel. For example, in the original paper (Lee et al., 2018), where we first used OPP to measure bulk translation in Drosophila mosaic wing imaginal discs, we showed that OPP incorporation is reduced in cells that have mutations in genes that affect translation, including Tor, myc and Gadd34 (Figures 4 and S4 in 1 ). By using this protocol we confirmed that cells that have heterozygous mutations in ribosomal protein genes (Rp+/− cells) have reduced bulk translation rates compared to wild-type cells. Unexpectedly, we showed that the transcription factor Xrp1, and not a reduced number of ribosomal subunits, was responsible for the reduced translation in the Rp+/− cells. This finding then led to the discovery that Xrp1 reduces global translation through PERK-dependent phosphorylation of eIF2α, a key mechanism of global regulation of CAP-dependent translation. 3 Figure 1 shows a mosaic wing disc containing cells with a heterozygous mutation in the ribosomal protein gene RpL27A (RpL27A+/− cells) together with wild-type cells. RpL27A+/− cells (labeled green) present reduced OPP incorporation compared to wild-type cells. Control clones that differ only in the presence of the Ubi-GFP transgene do not present differences in OPP incorporation (Figures 1K–1Q). Notice that OPP labeling in imaginal discs from wild-type larvae is intrinsically patchy, likely as a consequence of the patchy activation of the Tor pathway (Figure 1M). 1,9

QUANTIFICATION AND STATISTICAL ANALYSIS
We recommend using the ImageJ analysis tool for quantification of microscopic images. For ImageJ quantification, .tif images of a single section were used. For consistent and reliable comparison between different tissues, imaging should be performed with the same confocal settings, such as laser power, gain, and objective. For quantification of the OPP signal, the .tif image of a single confocal section showing the overlapping staining for both NuclearMask and β-gal was opened in the ImageJ software, and multiple regions of interest (ROIs) were drawn in each disc for each genotype (Figure 1E). We selected regions that contained nuclei of both genotypes and avoided cytoplasmic regions. In our case we focused our analysis on the pouch region of the wing discs, but a similar analysis can be performed for the other areas of the wing, such as the notum. For example, in the disc shown in Figures 1A–1D, using the ''Freehand selection'' drawing tool we drew 4 ROIs for regions containing RpL27A+/− cells (labeled green) and 4 different ROIs for regions containing wild-type cells (unlabeled) (Figure 1E). By pressing Ctrl+T, we added each ROI to the ROI Manager tool. We opened the .tif image showing the NuclearMask staining and transferred all saved ROIs from the ROI Manager to the NuclearMask image (Figure 1F). By selecting each ROI separately and pressing Ctrl+M, we calculated the mean fluorescence intensity of each ROI for the NuclearMask staining (Figure 1I). We verified that our areas contained a similar percentage of nuclei by checking the mean intensity value of the NuclearMask fluorescence (Figure 1I). We performed a similar analysis for the .tif image of the β-gal channel, to confirm the genotype of the areas. For example, in our case β-gal labels the RpL27A+/− cells; therefore we confirmed that areas #1, #2, #3 and #4 contain RpL27A+/− cells, while areas #5, #6, #7 and #8 contain wild-type cells (Figures 1G and 1I). Finally, we opened the .tif image showing the OPP staining and in the same way measured the mean value of the OPP fluorescence for each ROI (Figures 1H and 1J). Using Microsoft Excel, we calculated the average and the standard deviation (SD) for each genotype. We performed statistical comparisons between RpL27A+/− mutant areas and wild-type areas by Student's t-test. A similar analysis of neutral clones, differing only in the presence of the Ubi-GFP transgene, did not reveal different levels of OPP incorporation. We performed the statistical comparison between double GFP positive clones (GFP+/+), GFP negative clones (GFP−/−) and the background unrecombined cells (GFP+/−) by a one-way ANOVA test. Notice that, due to the inherent patchiness of the OPP signal in the wing imaginal disc, in addition to the quantification we find the visual comparison important.
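For readers who prefer a scripted version of the ImageJ workflow above, the following minimal Python sketch performs the same ROI-based quantification and t-test. File names and ROI masks are hypothetical placeholders (in practice the masks would be exported from the ROI Manager), so this is an outline of the computation rather than a drop-in script:

```python
import numpy as np
from scipy import stats
import tifffile  # pip install tifffile

# Single confocal sections exported from the microscope (hypothetical paths).
opp = tifffile.imread("opp_channel.tif").astype(float)
bgal = tifffile.imread("bgal_channel.tif").astype(float)

# Boolean ROI masks, one per region drawn in the pouch (hypothetical files;
# each array has the same shape as the images).
mutant_rois = [np.load(f"roi_mut_{i}.npy") for i in range(4)]
wt_rois = [np.load(f"roi_wt_{i}.npy") for i in range(4)]

# Confirm the genotype of each "mutant" ROI via the beta-gal channel
# (beta-gal marks RpL27A+/- cells), then measure mean OPP per ROI.
mut_means = [opp[m].mean() for m in mutant_rois if bgal[m].mean() > bgal.mean()]
wt_means = [opp[m].mean() for m in wt_rois]

t, p = stats.ttest_ind(mut_means, wt_means)
print(f"mutant OPP = {np.mean(mut_means):.1f}, "
      f"wild type OPP = {np.mean(wt_means):.1f}, p = {p:.4f}")
```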
LIMITATIONS
OPP, as a puromycin analog, is known to be cytotoxic, since it inhibits protein synthesis by disrupting peptide transfer and causing premature chain termination during translation. Therefore OPP labeling can lead to cell death and stress responses, which can interfere with the interpretation of the results. Minimize the labeling time so as not to affect cell viability or trigger secondary effects.
In addition, we favor comparing the cellular translation rates of distinct genotypes within the same tissue, using mosaic analysis tools, as opposed to comparing the translation rates of distinct genotypes from different imaginal discs. If it is necessary to compare cells from different imaginal discs, we recommend co-labeling control samples together with the experimental samples. We use the OPP labeling of these control discs as a way to normalize the OPP signal between genotypes and ensure that the treatment of different samples is consistent and comparable.
We have observed that translation rates vary depending on the developmental stage of the larva (unpublished observations). Therefore, researchers should be cautious when comparing translation rates between discs that may differ in their developmental stage. Our suggestion is co-staining with developmental markers (such as Senseless for the wing disc) to ensure that the translation rates of distinct genotypes are compared at the same developmental stage.
Importantly, OPP incorporation into polypeptides leads to their release from the ribosome and diffusion over long distances from the site of synthesis, even after short OPP incubations. 10 Therefore, OPP labeling cannot faithfully detect the site of polypeptide synthesis. 10

TROUBLESHOOTING

Problem 1
Black areas, indicating the absence of OPP signal in those regions (step 2).
Potential solution
The appearance of black areas in parts of the tissue can be caused by mechanical damage during dissection. Cells that are damaged during dissection do not incorporate OPP, resulting in an absence of signal in those areas. Additionally, we have noticed that dead cells (e.g., apoptotic cells) do not incorporate OPP.
Problem 2
Uneven signal is detected in some areas of the imaginal discs (step 3).
Potential solution
This could be due to unequal access of Component A throughout the tissues. To avoid this, remove any larval tissues that are attached to the imaginal discs (such as the fat body and gut) and ensure that there is adequate shaking during the incubation period with Component A and the subsequent incubation steps.
Problem 3
The positive control experiment does not show differences in OPP incorporation.
Potential solution
Ensure that all buffers are prepared according to the provided instructions.
Use multiple positive control experiments.
Figure 1. Protein synthesis in mosaic wing imaginal discs. (A–D) Protein synthesis in mosaic wing imaginal discs containing wild-type cells and RpL27A+/− mutant cells. RpL27A+/− mutant cells (labeled with m-β-gal, panel B) present reduced OPP incorporation compared to wild-type cells (unlabeled). Genotype: hsFLP; alz RpL27A-FRT40/FRT40. (E–J) Regions of interest were drawn to quantify the differences in OPP incorporation between wild-type cells and RpL27A+/− mutant cells (p < 0.001 by Student's t test). Figure reprinted with permission from Lee et al., 2018. (K–Q) Protein synthesis in mosaic wing imaginal discs containing control clones differing only in levels of the Ubi-GFP transgene. Cells do not present different levels of OPP incorporation (p > 0.1, one-way ANOVA). Genotype: hsFLP; Ubi-GFP FRT40/FRT40.
"year": 2023,
"sha1": "ff63a6e532095b11458202cd0b166754331dc9bd",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xpro.2023.102653",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "eef19f7983aa2d752422866a75a9e7bcdf552e49",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Scaling limit of graph classes through split decomposition
We prove that Aldous' Brownian CRT is the scaling limit, with respect to the Gromov--Prokhorov topology, of uniform random graphs in each of the three following families of graphs: distance-hereditary graphs, $2$-connected distance-hereditary graphs and $3$-leaf power graphs. Our approach is based on the split decomposition and on analytic combinatorics.
INTRODUCTION
In the present article we obtain scaling limit results for large graphs taken uniformly at random in the class of distance-hereditary graphs (DH graphs for short) and in two interesting subclasses: 2-connected distance-hereditary graphs and 3-leaf power graphs. In all cases, the limit is the celebrated Brownian continuum random tree (Brownian CRT for short). We start by giving some background on these graph classes.
1.1. Distance-hereditary graphs and interesting subclasses. DH graphs are the connected graphs for which the distances in any connected induced subgraph are the same as in the original graph. They enjoy many other characterizations, for instance by avoidance of induced subgraphs. Among other properties, they form a subclass of perfect graphs and have clique-width at most three. They have been widely studied in the algorithmic literature: in particular, it has been proved that many NP-hard problems can be solved in polynomial time for DH graphs (see e.g. [CT05]); additionally, DH graphs can be recognized efficiently, both in the static and dynamic framework (see [GP12], and references therein).
To establish such algorithmic properties, a key feature of distance-hereditary graphs is that they are nicely decomposable for the so-called split decomposition. More recently, this split decomposition has also been used to give precise enumerative results and sampling algorithms on the class of distance-hereditary graphs and some of its subclasses [CFL17,BL18]. The analysis of distance-hereditary graphs (and subclasses) via the symbolic method, as done by Chauve, Fusy and Lumbruso [CFL17] (and reviewed in Section 3 below) is a starting point for the present paper. More precisely, in our work, we aim at illustrating the usefulness of the split decomposition (combined with symbolic and analytic combinatorics) to study large random DH graphs.
Let us comment on the choice of graph classes considered in this article, in addition to the DH graphs already discussed. The class of 3-leaf power graphs has been studied in [GP12] (resp. [CFL17]) to illustrate the versatility of algorithmic (resp. enumerative) results obtained through the split decomposition. It is therefore natural for us to use it to illustrate as well the versatility of the probabilistic approach through the split decomposition. Since 3-leaf power graphs are defined via trees (see Definition 7.1), their convergence to an infinite tree might seem expected. On the contrary, conditioning random DH graphs to be 2-connected makes them further from being trees. Our result indicates that, nevertheless, at the level of scaling limits, 2-connected DH graphs are tree-like and converge to the Brownian CRT.
Another motivation for considering 3-leaf power graphs and 2-connected DH graphs is that, unlike unconstrained DH graphs, they do not form what is called a block-stable class of graphs. Indeed, such block-stable graph classes have already been studied in the discrete probability literature [DFKKR11,DN13]. In particular, a scaling limit result for random graphs in such classes (under an additional subcriticality hypothesis) is provided in [PSW16], covering the case of unconstrained DH graphs. It is therefore important to show that our approach through split decomposition works also for classes which are not block-stable; and an obvious way to obtain a class of graphs which is not block-stable is to impose the constraint of being 2-connected.
1.2. The results. A standard question in the theories of random trees, random maps and more recently random graphs is to look for limits of random graph sequences, for various topologies. To this end, we consider graphs as discrete metric measure spaces. A metric measure space (mmspace for short) is a triple (X, d, µ), where (X, d) is a complete and separable metric space and µ a probability measure on X. A finite connected graph can be seen as a mm-space, where X is the vertex set of the graph, d the graph distance, and µ the uniform distribution on X. In this setting scaling limits of random graphs correspond to the convergence of random mm-spaces, after renormalization of the distances.
For metric measure spaces there are two classical topologies used in the literature, the Gromov-Prohorov (GP) topology and the stronger Gromov-Hausdorff-Prohorov (GHP) topology. Our result holds with respect to the GP topology (see Section 2 for the definition). We believe that it could be extended to the GHP topology, using a criterion provided by Athreya-Löhr-Winter [ALW16]. However, this would likely require tools and methods very different from those of the present paper, and is therefore beyond its scope.
We also denote by $(T_\infty, d_\infty, \mu_\infty)$ the Brownian CRT equipped with the mass measure $\mu_\infty$. The Brownian CRT was introduced by Aldous in [Ald93] and is now a standard object in the discrete probability literature (for details and references, see Section 2).

Theorem 1.1. For f ∈ {d, 2, 3}, let $G^{(n)}_f$ be a uniform random graph with n vertices in the corresponding class (f = d: DH graphs; f = 2: 2-connected DH graphs; f = 3: 3-leaf power graphs), seen as a metric measure space with the uniform measure $\mu_n$ on its vertices. Then there is an explicit constant $c_f > 0$ such that
$$\Big(G^{(n)}_f,\ \frac{c_f}{\sqrt{n}}\, d_{G^{(n)}_f},\ \mu_n\Big) \;\xrightarrow[n\to\infty]{(d)}\; (T_\infty, d_\infty, \mu_\infty) \qquad (1)$$
for the Gromov–Prohorov topology; in the 3-leaf power case the constant is $\gamma_{E,3} \approx 0.9266$, where $\gamma_{E,3}$ is defined in Eq. (67) p. 38.
Figs. 1 to 3 show two realizations of uniform distance-hereditary graphs with a few hundred vertices, respectively in the unconstrained, 2-connected and 3-leaf power graph cases. FIGURE 1. Two samples of uniform random DH graphs of respective sizes n = 290 and n = 388. Both graphs were generated with a Boltzmann sampler (see [DFL+04]) using the combinatorial specification given in Eq. (4) p. 14 and plotted with python library networkx. As mentioned above, in the case f = d (i.e. random unconstrained DH graphs), Theorem 1.1 is not new. Indeed, DH graphs form a subcritical block-stable class of graphs, and it is proved in [PSW16] that uniform random graphs in such classes converge to the Brownian CRT 1 . On the contrary, 3-leaf power graphs and 2-connected DH graphs are not block-stable graph classes, and Theorem 1.1 is new in these cases. The stronger connectivity of 2-connected DH graphs is reflected in the value of the renormalizing constant, which is smaller in the 2-connected case than in the unconstrained and 3-leaf power cases.
We can restate Theorem 1.1 in more concrete terms, which actually describe how we intend to prove it. It is known (see [GPW09] or Section 2 below) that convergence in distribution in the Gromov-Prohorov sense is equivalent to the convergence in distribution of the relative distances between k uniform vertices in the graph, for every k. For k = 2, Theorem 1.1 says that if $v_0, v_1$ are uniform i.i.d. vertices in $G^{(n)}_f$, then
$$\frac{c_f}{\sqrt{n}}\, d_{G^{(n)}_f}(v_0, v_1) \;\xrightarrow[n\to\infty]{(d)}\; d_\infty(v_0, v_1), \qquad (2)$$
where, on the right-hand side, $v_0, v_1$ are independent and $\mu_\infty$-distributed in $T_\infty$. It turns out that the random variable $d_\infty(v_0, v_1)$ is known to follow the Rayleigh distribution, i.e. it has density $x e^{-x^2/2}$ on $\mathbb{R}_+$. More generally, Theorem 1.1 amounts to saying that (2) holds jointly for k uniform i.i.d. vertices in $G^{(n)}_f$. The joint limiting distribution, i.e. the distribution of the distances between k random points in the CRT, is given below in Lemma 2.4 (see also [Ald93]).
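The k = 2 statement can be checked empirically on the simplest model in this universality class: uniform labeled (Cayley) trees, for which the distance between two uniform vertices, divided by √n, is classically asymptotically Rayleigh. The sketch below (an illustration on trees, not a sampler for DH graphs) generates such trees from uniform Prüfer sequences and compares the empirical mean of the rescaled distance with the Rayleigh mean $\sqrt{\pi/2} \approx 1.2533$:

```python
import math
import random
import networkx as nx

def rescaled_distance(n: int, rng: random.Random) -> float:
    # A uniform Pruefer sequence of length n-2 encodes a uniform labeled tree.
    prufer = [rng.randrange(n) for _ in range(n - 2)]
    tree = nx.from_prufer_sequence(prufer)
    v0, v1 = rng.randrange(n), rng.randrange(n)
    return nx.shortest_path_length(tree, v0, v1) / math.sqrt(n)

rng = random.Random(0)
samples = [rescaled_distance(2000, rng) for _ in range(300)]
print(sum(samples) / len(samples), math.sqrt(math.pi / 2))  # both ~1.25
```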
We finish by discussing how our result fits in the literature on convergence of discrete graph models to the CRT. It is now well established that the Brownian CRT is the universal limit of many important families of random trees, see, e.g., [LG05]. In addition, a few families of graphs which are not trees are also known to converge towards the CRT, although such results are less common in the literature. We can cite some models of random planar maps [AM08, Bet15, Car16, JS15], some models of random dissections [CHK15], and random graphs in subcritical block-stable graph classes [PSW16], as mentioned above. Our paper exhibits two new families of non-tree graph classes converging to the CRT (and an alternative proof for a third class).

1 In [PSW16], the convergence is proven only for the Gromov-Hausdorff (GH) topology, which is incomparable with the GP topology we use here. We believe however that, without much further effort, their argument in fact proves convergence in the stronger GHP topology; see Appendix C for details.
The general strategy used in this paper is similar, albeit with important novelties. As said above, the Gromov-Prohorov convergence is equivalent to the convergence, for all k, of the matrix of the distances between k uniform random vertices in the graph. This criterion resembles those for permuton and graphon convergence, except that, for a fixed k, distance matrices live in a continuous space (real-valued k × k matrices), while patterns or induced subgraphs belong to a finite set. Hence, to prove Gromov-Prohorov convergence, it is not possible to simply consider the probability that the distance matrices are equal to a given matrix, and study its asymptotics through analytic combinatorics. To overcome this difficulty, we need to consider multivariate generating series, where the additional parameters encode various distances in the graph induced by k random points. Then, instead of the classical transfer theorem, we use a slightly generalized version of the Semi-large powers Theorem (see Appendix A); this theorem is known to explain from an analytic point of view the appearance of the Rayleigh distribution, it is therefore not surprising that we use it here.
As a final note, let us mention that we are not aware of other works, where convergence to the Brownian CRT is proved through the same set of tools. We hope that this method will prove useful in other contexts in the future.
Remark 1.2. A natural alternative strategy to prove our main result would be the following: first prove that the split decomposition tree T (n) f are close, up to some scaling factor, for the GP topology. This is in essence the strategy used in [PSW16] for subcritical block-stable classes of graphs, except that the block-decomposition tree is used instead of the split decomposition tree. There are however important (though not necessarily impossible to overcome) difficulties to follow this route in our case.
First, the split decomposition trees $T^{(n)}_f$ associated to our three models can be represented as multitype Galton-Watson trees conditioned to have a given number of leaves (as witnessed by the systems of equations (4), (47) and (64)). Convergence results to the CRT for conditioned multitype Galton-Watson trees are available in the literature (see, e.g., [Mie08]). However, such results are usually obtained for trees conditioned to have a given number of vertices, and in the irreducible case. Here we want to condition on the number of leaves, and, in one of our models, namely for 3-leaf power graphs, the system of equations defining the class is not irreducible, see Eq. (64). Therefore proving the convergence of the split decomposition trees to the CRT would require some work on models of random trees.
A second difficulty is that the convergence of the split decomposition trees does not imply directly the convergence of the associated graphs. For this, we would need to prove that distances in the graph are close, up to a constant factor, to that in the tree. But distances in the graph are determined by the decoration of vertices in the split decomposition (see Section 3.2). One would therefore need to understand the distribution of such decorations (i.e. of types in our multitype model) on paths between marked leaves and branching points in split decomposition trees. Again, this might be feasible but certainly requires work.
We have preferred to develop an approach via analytic combinatorics, as explained above, which is in some sense more direct and more original.
1.4. Outline of the paper. In order to simplify the presentation of the proofs we chose to focus first on the class of unconstrained DH graphs. We explain later (in Sections 6 and 7) how to adapt the result to 2-connected DH graphs and to 3-leaf power graphs.
• In Section 2 we state a criterion for the convergence towards the Brownian CRT w.r.t. the Gromov-Prohorov topology. This criterion essentially follows from [GPW09,Loh13] and from Aldous' construction of the Brownian CRT [Ald93]. • In Section 3 we give the necessary background of graph theory. We will see that there is a correspondence between DH graphs and certain clique-star trees. Section 3 ends with exact and asymptotic enumerative formulas for DH graphs. The material of this section is mainly taken from papers of Gioan-Paul [GP12] and Chauve-Fusy-Lumbroso [CFL17]. • Section 4 is devoted to the combinatorial and analytic study of clique-star trees with a marked leaf. These are building blocks for the combinatorial decomposition of trees with several marked leaves done in Section 5, keeping track of distances in the graph between the corresponding vertices. The convergence of a uniform random unconstrained DH graph to the Brownian CRT, i.e. the case f = d in Theorem 1.1, is proved at the end of Section 5. • In Sections 6 and 7 we extend the main result to 2-connected DH graphs and to 3-leaf power graphs, respectively. • In Appendix A we give a complete proof of a (minor) generalization of the Semi-large powers Theorem ([FS09, Theorem IX.16]), which is central in our proofs. • Appendix B and Appendix C clarify the relation between the present work and the paper [PSW16].
Note: Some computations in the proofs of our main results require the use of a computer algebra system. To help the reader, we provide a companion Maple worksheet, both in mw and pdf formats. These files are embedded into this pdf (alternatively you can download the source of the arXiv version to get the files). : THE GROMOV-PROHOROV TOPOLOGY AND THE BROWNIAN CRT 2.1. A criterion for Gromov-Prohorov convergence.
TOOLBOX
Definition 2.1. A metric measure space (called mm-space for short) is a triple (X, d, µ), where (X, d) is a complete and separable metric space and µ a probability measure on X.
Gromov-Prohorov distance. We let M be the set of all mm-spaces 2 , modulo the following relation: (X, d, µ) ∼ (X , d , µ ) if there is an isometric embedding Φ : X → X such that Φ * (µ) = µ . Note that Φ does not need to be surjective, so that we need to consider the transitivity and reflexivity closure of that relation. In particular one always has (X, d, µ) ∼ (Supp(µ), d, µ), where Supp(µ) is the support of µ.
On the set M, one can define a distance as follows. First we recall the notion of Prohorov distance: for Borel probability measures µ and ν on the same metric space Y , we set where A ε is the ε-halo of A, i.e. the set of all points at distance at most ε of A. This distance metrizes the weak convergence of probability measures. Then, given two mm-spaces (X, d, µ) and (X , d , µ ), we set where the infimum is taken over isometric embeddings Φ : X → Y and Φ : X → Y into a common metric space (Y, d Y ). One can prove [GPW09, Section 5] that d GP is a distance on M and that the resulting metric space (M, d GP ) is complete and separable.
Criterion of convergence. Let X = (X, d, µ) be an mm-space and fix an integer k ≥ 0. We let x 1 , . . . , x k be i.i.d. random elements of X, with law µ. We record their pairwise distances in a matrix, namely we set A X k = d(x i , x j ) 1≤i,j≤k . This is a random k × k square matrix, whose law depends on the mm-space X we start with.
We will also consider random mm-spaces, which we denote with boldface. In this case, conditionally on X = (X, d, µ), we let x 1 , . . . , x k be i.i.d. random elements of X, with law µ and we define as above A X k to be their distance matrix. We have the following characterization of convergence in distribution in (M, d GP ), essentially given in [GPW09,Loh13].
Theorem 2.2. Let X n = (X n , d n , µ n ) for any n ≥ 1 and X = (X, d, µ) be random mmspaces. Then the following properties are equivalent: i) X n converges in distribution to X for the Gromov-Prohorov distance d GP as n → +∞. ii) For any fixed k ≥ 1, the random distance matrix A X n k converges in distribution to A X k as n tends to +∞.
2
To avoid Russell's paradox, throughout the section, we actually take the set of mm-spaces whose elements are not themselves metric spaces.
Proof. In [GPW09, Theorem 5], it is proved in the deterministic setting that convergence for Gromov-Prohorov distance is equivalent to the convergence of the so-called polynomial functions, i.e. of bounded continuous functions of (the entries of) distance matrices. It is then observed in [Loh13, Corollary 2.8] that polynomial functions are convergence-determining, i.e. one has convergence in distribution of random mm-spaces if the expectations of all polynomial functions converge. On the other hand since polynomial functions are the continuous bounded functions of distance matrices, the convergence of expectations of polynomial functions is equivalent to the convergence in distribution of the distance matrices. This completes the proof.
2.2. Distance matrix of the CRT. The Brownian CRT (T ∞ , d ∞ , µ ∞ ) is a random variable taking values in the set of compact metric measure spaces (see [Ald93]).
Informally, the mutual distances of k points in (T ∞ , d ∞ , µ ∞ ) have the same distribution as the distances between the k leaves of a uniform random k-proper tree (defined below) in which edges have random length distributed according a multivariate Rayleigh distribution. This actually characterizes the distribution of T ∞ , as stated below.
Definition 2.3. A k-proper tree t 0 is an (unrooted) nonplane tree with k + 1 leaves where each internal node has degree 3. One of the leaves is considered as the root-leaf ( 0 ) and the other leaves are identified with { 1 , . . . , k }.
Lemma 2.4. For every k ≥ 2 and every k-proper tree t 0 , we fix a labeling of its edges e 0 , . . . , e 2k−2 .
The distribution of (T ∞ , d ∞ , µ ∞ ) is characterized by the property that for every k ≥ 2, if we take v 0 , . . . , v k uniform and independent in T ∞ with distribution µ ∞ , then where the RHS is a random matrix whose distribution is defined as follows (where (2k − 3)!! is the product of all odd positive integers less than or equal to 2k − 3): • t 0 is a uniform k-proper tree; • (X 0 , . . . , X 2k−2 ) have joint distribution and are independent from t 0 ; • The sum in (3) runs over the set of edges e r of the path P t 0 i,j joining leaves i and j in t 0 . Aldous proved that this object exists and indeed properly defines a random metric space [Ald93, Lemma 21]. The reader may be more familiar with an alternative and more constructive definition of the CRT which we briefly recall. Starting from a normalized Brownian excursion e, T ∞ is defined as the quotient [0, 1]/ ∼ e where ∼ e is the "gluing" procedure which identifies any two points of e at the same height having only higher points of e between them (see [LG05, Section 2]). Aldous [Ald93,Cor. 22] proved that both constructions coincide. Through the latter construction, the mass measure µ ∞ is defined as the push-forward of the Lebesgue measure by the quotient map associated with ∼ e .
COMBINATORIAL ANALYSIS OF DISTANCE-HEREDITARY TREES
In this section, we first recall the encoding of distance-hereditary graphs by clique-star trees (which is a special case of the encoding of general graphs by split decomposition trees). This is done in Section 3.1 and largely follows [GP12, Sections 2.1-2.2] (itself inspired by [Cun82]). We then explain how distances in a DH graph can be recovered from the associated clique-star tree (Section 3.2). We could not find this result in the literature, though this might be known to experts. The last two sections provide a combinatorial and analytic study of the generating series of DH graphs (or rather of the associated trees); this mainly follows the work of Chauve-Fusy-Lumbroso [CFL17]. This whole section can be seen as combinatorial preliminaries for the proof of the convergence of unconstrained DH graphs to the Brownian CRT (case f = d in Theorem 1.1).
Definition 3.1. A graph-decorated 3 tree is a (nonplane unrooted) tree τ in which every internal node v of degree k is decorated with a graph Γ v with k vertices; moreover, for each v, we fix a bijection ρ v from the tree-edges incident to v to the vertices of Γ v .
We fix some terminology and conventions. To avoid confusion between decoration graphs Γ v and other graphs, we use the term decoration for Γ v and marker vertices for its vertices. An edge e of τ between two nodes v and v is sometimes seen as connecting the marker vertices q = ρ v (e) to q = ρ v (e). In particular in graphical representations, we draw an edge e of the tree between nodes v and v from q = ρ v (e) to q = ρ v (e). When we refer to the bijection ρ v , we say that an edge e incident to v is attached to the corresponding marker vertex (say, x) of Γ v . When e is incident to v and to a leaf , we make a small abuse of notation by saying that is attached to x.
Let τ be a graph-decorated tree and , be leaves of τ . We consider the (unique) path p from to in τ . For any node v on this path, we denote e in (v) (resp. e out (v)) the edge of p entering (resp. leaving) v. Then is said to be accessible from (or equivalently accessible from ) if, for every node v on p, the pair {ρ v (e in (v)), ρ v (e out (v))} is an edge of the decoration Γ v . With this notion in hand, we can associate to τ a graph Gr(τ ), whose vertex set is the leaf set of τ , and where { , } is an edge in Gr(τ ) if and only if is accessible from in τ . This construction is illustrated on Fig. 4.
In the sequel, we only consider graph-decorated trees τ where all decorations Γ v are either cliques or stars -following [CFL17], we speak of clique-star trees. It is known (see [GP12, Section 3.1]) that the graphs which can be obtained as Gr(τ ) where τ is a clique-star tree, are precisely the distance-hereditary graphs (DH graphs). By convention the graph with a single vertex and the connected graph with two vertices are DH graphs.
We note that a DH graph G can possibly be obtained as Gr(τ ) for several clique-star trees τ . Uniqueness can nevertheless be ensured adding extra conditions on τ . In [GP12], the term graph-labeled tree is used; we prefer here to speak of graph-decorated tree to avoid confusion with labeling in the sense of labeled combinatorial classes [FS09], a notion that we will use throughout the article. . Left: A clique-star tree τ with n = 9 leaves drawn with its 4 decorations. Right: The corresponding graph Gr(τ ). To illustrate the construction of Gr(τ ), we have highlighted two pairs of leaves and the paths between them. Following the blue path, we say that 3 is accessible from 2 in τ ; accordingly {2, 3} is an edge in Gr(τ ). On the opposite, 6 is not accessible from 7 in τ ; accordingly {6, 7} is not an edge in Gr(τ ). (Jumps are defined in Section 3.2.) Definition 3.2. A clique-star tree τ is called reduced if it satisfies the following conditions: i) every internal node v has degree at least 3; ii) no edge of τ connects two internal nodes both decorated with cliques; iii) no edge of τ connects marker vertices q and q where q is the center of a star Γ v and q a leaf of another star Γ v .
Then uniqueness follows directly from [GP12, Theorem 2.9] (which considers all graphs, not only DH graphs). Namely, the following holds.
Proposition 3.3. For every labeled DH graph G of size at least 3, there exists a unique reduced clique-star tree τ such that G = Gr(τ ).
3.2.
Distances in DH graphs through their clique-star trees. Let τ be a clique-star tree and G = Gr(τ ) be the corresponding graph (which is a DH graph as we have seen). We denote by d G the graph distance in G. In this section, we explain how d G can be read on the tree τ . We recall that the leaves of τ are identified with the vertices of G.
For a path p in τ , the jumps of p are defined as follows. When p goes through a node v, it enters and exits through edges e in (v) and e out (v) (both incident to v). If {ρ v (e in (v)), ρ v (e out (v))} is not an edge in Γ v , we say that v is a jump of p. (In particular, and unless otherwised specified, the starting and ending points of p are not jumps of p.) Now, for two leaves and of τ , letting p be the unique path from to in τ , the number of jumps of p is denoted by jp(τ, , ).
Example 3.5. Consider the clique-star tree τ of Fig. 4, and its leaves 6 and 7. The path from 6 to 7 (in red on the picture) has exactly two jumps. Accordingly, the distance between vertices 6 and 7 in the associated DH graph (also drawn on Fig. 4) is 3.
Remark 3.6. According to Lemma 3.4, is accessible from in τ (i.e. jp(τ, , ) = 0) if and only if { , } is an edge of G (i.e. d G ( , ) = 1). In other words, the lemma superseeds and generalizes the definition of the edge set of Gr(τ ).
Proof. We proceed by induction. If τ has a single internal node, then G is isomorphic to the decoration of that node (hence, either a clique or a star), and the statement holds trivially.
Let τ have k > 1 internal nodes and assume that the statement holds for all clique-star trees with fewer internal nodes.
Consider a node v of τ , all of whose neighbors but one are leaves (such a node always exists). Denote by d ≥ 3 the degree of v, by 1 , . . . , d−1 the leaves adjacent to v, and by u the internal node of τ adjacent to v. We also denote by Γ v the decoration of v, and by x the marker vertex of Γ v corresponding to the edge (v, u). We let τ be the clique-star tree obtained by replacing v and Illustration of the proof of Lemma 3.4.
As we shall see, G can be obtained by performing some local modifications on G , which depend on Γ v and x. First note that leaves of τ and τ different from , 1 , . . . , d−1 are the same and are therefore vertices in both G and G ; we will call them old vertices, refering to 1 , . . . , d−1 as new. By construction, adjacency relations between old vertices are identical in G and G . So, knowing G , to know G entirely, we just have to describe the adjacency relations among new vertices, and between the new vertices and the old ones. To this end, we distinguish several cases.
• If Γ v is a clique, then the definition of the construction Gr implies that G is obtained from G by replacing with d − 1 vertices 1 , . . . , d−1 , which form a clique of size d − 1, and such that the old neighbors of each i are the neighbors of in G . • If Γ v is a star with x the center of the star, then similarly G is obtained from G by replacing with d − 1 vertices 1 , . . . , d−1 , which form an independent set of size d − 1, and such that the old neighbors of each i are the neighbors of in G .
• Finally, assume that Γ v is a star and x is not the center of the star. Let j be the leaf of τ attached to the center of Γ v . Here, G is obtained from G by keeping the vertex (with its adjacent edges) but renaming it j , and adding d − 2 vertices 1 , . . . , j−1 , j+1 , . . . , d−1 , which form an independent set of size d − 2, and all connected only to j . In particular, G always contains at least one vertex with exactly the same old neighbors as in G ; call such vertices copies of . Moreover, new vertices of G which are not copies of are pendant vertices incident to a copy of .
With this remark, it becomes clear that distances between old vertices are the same in G and G . Moreover, the path between any two old leaves and in τ also matches the path between and in τ , so that we have d G ( , ) = d G ( , ) = jp(τ , , ) + 1 = jp(τ, , ) + 1 as claimed. When and are both new vertices, their distance d G ( , ) is either 1 or 2, depending on whether the corresponding marker vertices in Γ v are connected or not. Thus, in this case also, we have d G ( , ) = jp(τ, , ) + 1. The interesting case is when is a new vertex and an old vertex. Again, we proceed by case analysis. Denote by p the path from to in τ and by p the path from to in τ . The path p is obtained from p by replacing the first edge ( , u) by the two edges ( , v), (v, u). (Recall that u is the only nonleaf node of τ adjacent to v, corresponding to the marker vertex x of Γ v .) • Assume first that is a copy of . Note that this happens when Γ v is a clique, or when Γ v is a star with attached to the center of Γ v , or when Γ v is a star with x the center of the star. Since is a copy of , of course d G ( , ) = d G ( , ). On the other hand, in all cases, the marker vertices of Γ v attached to and u are adjacent. Therefore, we have jp(τ , , ) = jp(τ, , ), and it follows that d G ( , ) = jp(τ, , ) + 1. • The last case to consider is when Γ v is a star with x an extremity of the star, and attached to another extremity of the star. In this case, p has one more jump than p , since the marker vertices to which x and are attached are not adjacent in Γ v . On the other hand, the only neighbor of in G is the leaf of τ attached to the center of Γ v , previously denoted j . Since j is a copy of , we have d G ( , ) = 1 + d G ( j , ) = 1 + d G ( , ), which gives d G ( , ) = jp(τ, , ) + 1 as desired.
3.3. Clique-star trees as a labeled combinatorial class. In Section 3.1, we have seen that DH graphs are in bijection with reduced clique-star trees. We recall that the latter are nonplane unrooted trees. To use the symbolic method and tools of analytic combinatorics, it is more convenient to deal with rooted trees. Starting from a DH graph with vertex set {0, 1, . . . , n}, we consider the reduced clique-star tree associated with it by Proposition 3.3 and see the leaf with label 0 as the root.
Definition 3.7. A distance-hereditary tree (DH-tree for short) of size n ≥ 2 is a reduced cliquestar tree with n + 1 leaves labeled from 0 to n, where the leaf 0 is seen as the root, therefore called the root-leaf.
By construction, DH-trees of size n are in bijection with DH graphs with vertex set {0, 1, . . . , n}. Most of the time, we forget the root-leaf and think at the tree as rooted in the internal node to which the root-leaf is attached; this node is referred to as root-node below. The root-leaf is represented by the symbol ⊥ in pictures.
Having broken the symmetry when selecting a root, a node v decorated with a star can be of two types.
• Either the path from v to the root 4 exits v through an edge attached to an extremity of the star Γ v . In this case, we say that v is of type S X . Note that one of the children of v is attached to the center of the star. We see this child as distinguished. • Or the path from v to the root exits v through the edge attached to the center of the star. In this case, we say that v is of type S C . Note that all children of v are attached to extremities of the star so that there is no distinguished child in this case.
A node decorated with a clique is of type K.
With this in mind, and recalling the conditions of Definition 3.2, one can describe DH-trees directly as follows. A DH-tree is a nonplane rooted tree T such that i) T has n leaves labeled 1, . . . , n; ii) internal nodes of T (including the root) carry decorations, called types, taken from the set {K, S C , S X }; iii) every node of type K has at least 2 children, none of which can be of type K; iv) every node of type S C has at least 2 children, none of which can be of type S C ; v) every node of type S X has at least 2 children, one of which is distinguished; the distinguished child cannot be of type S X , while other are forbidden to be of type S C . Root-node or root-leaf, equivalently, unless v is the root-node; in this latter case, the type of v is defined in the same way considering the path (of length 1) from v to the root-leaf.
We will now translate this description into the framework of labeled combinatorial classes (see [FS09] for an introduction). We recall that + is used for the disjoint union of combinatorial classes; A × B is the set of pairs (a, b) where a is in A and b in B (with the convention that the label sets of a and b are disjoint; we refer to [FS09] for details on how to deal with labelings in combinatorial classes). Also, if C is a combinatorial class with no element of size 0, then Set(C) is the class of (unordered) sets of elements of C. An index on Set indicates restrictions on the number of elements in the set.
We say that a DH-tree is of type t if its root-node is of type t. We let D K (resp. D S C , D S X ) be the (labeled) combinatorial class of DH-trees of type K (resp. S C , S X ). As usual, we use the symbol Z to represent the trivial tree reduced to one vertex (which is a leaf).
Proposition 3.8 (Chauve-Fusy-Lumbroso 5 [CFL17]). The combinatorial classes D K , D S C , D S X have the following specification: The class D of all DH-trees is simply the disjoint union of the three classes above, i.e.
3.4. Singularity analysis of the specification. We associate to each combinatorial class of DHtrees a generating function By loose estimates on the number of DH-trees, it is easy to see that each of the above series has a positive radius of convergence. A key step in the proof of our main theorem will be given by the singularity analysis of the above series. A similar analysis is provided in [CFL17] in the unlabeled case, we here give all the details of the labeled case. We first note that using Eqs. (4) and an immediate induction on i ≥ 0, we have [z i ]D K = [z i ]D S C for all i ≥ 0, i.e. D K = D S C as formal power series. We will therefore drop D S C and use only D K . Eqs. (4) yield: where exp ≥r (y) = ≥r y / !. The system (5) satisfies the assumptions of the Drmota-Lalley-Woods Theorem (see [BBF+19, Theorem A.6] 6 ). It follows that the series D K , D S X have the same radius of convergence ρ and 5 The equation given for D SX in [CFL17, Theorem 3] is different from the one given here. The one given here can however be found in the proof of [CFL17, Theorem 3]. 6 More classical references for variants of this theorem are [FS09, Section VII.6] and [Drm09, Section 2.2.5], but the first one assumes that we have a polynomial system, while the second one has a different well-posedness condition, which is not satisfied here (and uses extra parameters which are not needed here).
both have a square-root singularity at ρ. Moreover they are ∆-analytic, meaning that they are defined and analytic on some set of the form {z ∈ C, |z| < R 1 and | Arg(z − ρ)| > θ}, for some R 1 > ρ and θ > 0, where Arg is the principal determination of the logarithm. The notion of ∆-analyticity is standard in analytic combinatorics, see [FS09,Chapter VI]. Let us introduce an auxiliary series Lemma 3.9. We have Using F , we can rewrite the system (5) as We solve this linear system for D K and D S X , seeing F as a parameter. This gives the formulas of the lemma.
Proposition 3.10. The series F is ∆-analytic at ρ and admits the following singular expansion around ρ: is the unique positive root of 2F (ρ) 2 + 2F (ρ) − 1 = 0; The expression for γ F is computed in the companion Maple worksheet. This also holds for other constants γ H , γ H,2c , γ E,3 arising later.
Throughout the paper, when a series S has a square-root singularity, we denote by γ S the coefficient of the square-root term in the singular expansion of S near its radius of convergence, with the same sign convention as above. Also, for a variable x and a (multivariate) function Proof. By Eq. (6) the series F is ∆-analytic at ρ and has a square-root singularity at ρ, therefore the expansion of F around ρ is given by Eq. (8) for some F (ρ), γ F which are to be determined. In addition, since F is a series in z with nonnegative coefficients, the transfer theorem ensures that γ F > 0.
Thanks to Eqs. (7) one can eliminate D K and D S X in Eq. (6). We obtain that F is the solution of the equation Plugging Eq. (8) into F = G(z, F ) and comparing the expansions of both sides show that necessarily (10) F (ρ) = G(ρ, F (ρ)) and G w (ρ, F (ρ)) = 1.
(These equations are usually referred to as the characteristic system [FS09, Section VII.4].) Observing that w)), the characteristic system yields the following equation whose only positive solution is 3−1 2 , we can solve for ρ the first of Eqs. (10), giving an explicit expression for ρ and the numerical estimate ρ ≈ 0.1597 (see Maple worksheet). Furthermore, using the Singular Implicit Functions Lemma [FS09, Lemma VII.3], the constant γ F is given by where the last equality is justified in the companion Maple worksheet.
Remark 3.11. Since F is the solution of the implicit equation F = G(z, F ), it is tempting to use the smooth implicit-function schema [FS09, Theorem VII.3] to find its dominant singularity and asymptotic expansion. We can however not proceed like this since the expansion of G contains negative coefficients, contradicting [FS09, Hypothesis (I 2 ) p. 468]. This explains the indirect path used here. In short, the system (5) has the advantage of having nonnegative coefficients: it is used to prove without effort that all series have square-root singularities. On the other hand, F is defined by a single equation, giving simpler computations to determine explicitly the coefficients in its singular expansion.
In the sequel we also need the asymptotic expansion of D K , D S X and its derivative. Using D S X = F 2 1+F , Proposition 3.10 and singular differentiation ([FS09, Theorem VI.8]) we get that D S X and D S X are ∆-analytic and that where: Similarly we obtain that D K is ∆-analytic and that where Summing, we have for D the following expansion:
DH-TREES WITH A MARKED LEAF
In this section, we introduce and analyze combinatorial classes of DH-trees with a marked leaf and certain conditions. This is a first step in the proof of Theorem 1.1 for unconstrained DH graphs (f = d). Indeed, the classes studied here are building blocks in the decomposition of trees with several marked leaves, which we will consider in the next section in order to study distance matrices of uniform random DH graphs. 4.1. A combinatorial system of equations for DH-trees with a marked leaf.
Definition 4.1. Let T be a DH-tree and v a vertex of T different from its root (note that v may be a leaf). Let p be the parent of v in T . Informally, the cotype of v is the type that p would have if the root were in v. More precisely, • if p is of type K, then v is of cotype K; • if p is of type S C , then v is of cotype S X ; • if p is of type S X and v is the distinguished child of p, then v is of cotype S C ; • if p is of type S X and v is not the distinguished child of p, then v is of cotype S X .
The reader is invited to look at the example of Fig. 7.
Definition 4.2. Let a, b ∈ {K, S C , S X }. We define D b a as the set of DH-trees with one marked leaf of cotype b, whose root-node is of type a.
We further set, for a, b ∈ {K, S C , S X }, In other words, a bullet as index (resp. exponent) denotes an unconstrained type of the root-node (resp. cotype of the marked leaf).
We now introduce the following statistics. Let (T, ) be a DH-tree with one marked leaf. We denote jp(T, ) the number of jumps on the path from the marked leaf to the root-leaf in T (in particular, the root-node might be a jump in this path, see Fig. 8).
v 1 2 FIGURE 7. In this DH-tree, the leaf 1 has cotype K, the leaf 2 has cotype S C and the node v has cotype S X (its parent has type S C but if we would re-root the tree in v, it would have type S X ).
We consider (exponential) bivariate generating series of families of DH-trees with one marked leaf with respect to the size (variable z) and to the number of jumps (variable u). Namely, for a, b ∈ {•, K, S C , S X }, We take the convention that for a DH-tree with a marked leaf (T, ), its size |T | is the number of unmarked leaves of T (in other words, the marked leaf is not counted).
Proposition 4.3. The bivariate series D b a (z, u) for a, b ∈ {K, S C , S X } are solutions of the following systems of equations: Proof. We prove in details the case of D S C S X (second equation in the system (18)). The eight other equations are proved in a similar way.
Hence we consider a DH-tree T of type S X with a marked leaf of cotype S C . We can decompose T as a root-node v, to which several subtrees are attached. (Notations are summarized in Fig. 8.) The subtrees attached to v are: • The subtree T containing the marked leaf. In order to keep track of variable u we need to consider two cases.
-First, T may be attached to the center of Γ v (Case A of Fig. 8). In this case, there is no jump in v. Also, T (if not reduced to a leaf) is of type K or S C . Note that T may also be reduced to a leaf (hence, the marked leaf), since a leaf attached to the center of Γ v has indeed cotype S C . -Otherwise, T is attached to an extremity of Γ v (Case B of Fig. 8). In this case, there is a jump in v. Here, T can be of type K or S X , and T cannot be reduced to a leaf since a leaf attached to an extremity of Γ v would have cotype S X . • Attached to every (other) extremity of v one has a tree of type K or S X or a leaf.
• Attached to the center of v (if this is not where T is attached, i.e. in Case B in Fig. 8) there is a tree of type K or S C or a leaf.
We now translate this decomposition on generating functions. According to the case analysis above, T is counted by: • in Case A: 1 + D S C S C + D S C K (since a single marked leaf, counted by 1, is allowed for T ); • in Case B: D S C K + D S C S X . The remaining trees attached to v are counted by: • in Case A: exp ≥1 (D S X + D K + z) (because they form a nonempty unordered sequence of trees in D S X + D K + Z) • in Case B: (D S C + D K + z) exp(D S X + D K + z) (with a distinguished tree attached to the center which is either in D S C or in D K or a leaf, and other trees which form an unordered sequence of trees in D S X + D K + Z).
Finally, a factor u appears in Case B to take into account the jump in v. Hence 4.2. Resolution of the system. Recall (see Section 3.4) that It implies (D S C + D K + z) exp(D S X + D K + z) = F , allowing to simplify the systems (16) to (18) as follows: Solving the system 7 gives the following formulas (put under a suitable form for the subsequent asymptotic analysis): Remark 4.4. Symmetries D b a = D a b in above equations can easily be explained combinatorially. Indeed, we can see a DH-tree with root-leaf r and a marked leaf as a DH-tree rooted in where r is a marked leaf; doing so, the type of the (old) root becomes the cotype of r and the cotype of becomes the type of the (new) root.
Recalling that F depends only on z (not on u), in each case, the series can be written under the form , 7 see Maple worksheet.
where Q b a , M b a and H b a are rational functions in F . For example, looking at Eq. (22), we have .
Moreover all these formulas immediately extend to the case where a or b or both is/are equal to • (unconstrained type of the root-node or cotype of the marked leaf), with the natural convention that and conventions similar to the second line for M • a , Q • a and Q b • . Since F has nonnegative coefficients and 2F (ρ) = √ 3 − 1 < 1, the denominators of Q b a (z), M b a (z), Λ b a (z) and H(z) are positive for z in [0, ρ] and thus, the series Q b a , M b a , Λ b a (z) and H all have radius of convergence ρ and a square-root singularity in ρ, inherited from that of F . Later (in the proof of Proposition 5.7) the function H will play a particular role in the asymptotic analysis so let us now compute its expansion at ρ: √ ρ whose numerical estimate is γ H ≈ 3.9258 (see Maple worksheet).
k-POINT DISTANCES AND INDUCED SUBTREES
The goal of this section is to obtain the joint convergence in distribution of distances between marked leaves in a uniform DH-tree (see Corollary 5.8 below). This allows us to complete the proof of Theorem 1.1 in the case of unconstrained DH graphs (f = d).
Marked leaves and induced subtrees.
In this section, we consider DH-trees with k marked leaves ( 1 , . . . , k ) with the following convention.
Definition 5.1. A DH-tree of size n with k marked leaves is a nonplane rooted tree T such that • T has n nonmarked leaves labeled 1, . . . , n; • additionally, T has k ordered leaves carrying marks ( 1 , . . . , k ); • items ii) to v) p. 13 are satisfied.
Equivalently, it is a DH-tree of size n + k where leaves with labels n + 1, . . . , n + k are seen as marked and get marks 1 , . . . , k , respectively. These marked leaves are not counted in the size. With this convention, the exponential generating series of DH-trees with k marked leaves is D (k) (z) (the k-th derivative of D). Proposition 3.3 is immediately rephrased as follows.
Proposition 5.2. Labeled DH graphs of size n + k + 1 are in bijection with DH-tree of size n and k marked leaves (when n + k + 1 ≥ 3).
We recall the definition of induced subtree.
Definition 5.3. Let T be a DH-tree with k marked leaves . We call essential vertices of T (w.r.t the marked leaves ) its root-leaf, its k marked leaves and their first common ancestors. Then, the subtree of T induced by is obtained as follows: • its vertices are the essential vertices of T ; • its genealogy (ancestor/descendant relation) is inherited from that of T . Fig. 9 illustrates this definition. We remark that the subtree of T induced by k marked leaves is naturally rooted at the vertex corresponding to the root-leaf of T . This vertex is always of degree 1, and will be called root-leaf of the induced subtree.
We now enrich the notion of induced subtrees to record the number of jumps along some paths of T . Consider a DH-tree T with k marked leaves . Let t 0 be the associated induced subtree. Each edge e in t 0 corresponds to a path between two consecutive essential vertices of T . We define jp e (T ; ) as the number of jumps of the path corresponding to e, with the convention that essential vertices are not counted as jumps (but note that the root-node of T can be a jump). We call enriched induced subtree of (T, ) the induced subtree t 0 , with the quantities jp e (T ; ) attached to its edges. It will be convenient to fix for each tree t with k leaves an enumeration (e 0 , e 1 , . . . ) of its edges such that e 0 is the edge adjacent to the root-leaf (for instance a breathfirst traversal of the tree with an arbitrary planar embedding). Then the enriched induced subtree of (T, ) can be written as a tuple (t 0 , a 0 , . . . , a m ), where t 0 is the induced subtree of (T, ) and a i = jp e i (T ; ). In the following we denote r(T, ) := (t 0 , a 0 , . . . , a m ).
Combinatorial decomposition.
Recall that a k-proper tree is an (unrooted) nonplane tree with k + 1 leaves where each internal node has degree 3 (with one leaf considered as the root-leaf and the other leaves denoted { 1 , . . . , k }). It is easily observed that a k-proper tree has k + 1 leaves, k − 1 internal vertices and 2k − 1 edges. It is also a standard fact (see, e.g., [Ald93]) that the cardinality of the set of k-proper trees is exactly (2k − 3)!! , where we recall that (2k − 3)!! is the product of all odd positive integers less than or equal to 2k − 3. Indeed, a k-proper tree can be obtained in a unique way from a (k − 1)-proper tree by selecting one of its 2k − 3 edges and grafting in the middle a new edge with a leaf k at its extremity. Let us fix a k-proper tree t 0 . We consider the following class of marked DH-trees.
Definition 5.4. We let D t 0 be the labeled combinatorial class of DH-trees T with k marked leaves such that: i) the subtree of T induced by is t 0 ; ii) no two essential vertices of T are neighbors of each other.
Item ii) is a technical condition to have a nicer combinatorial decomposition in Eq. (32) below. Recall that we have fixed an enumeration (e 0 , e 1 , . . . , e 2k−2 ) of the (2k − 1) edges of our kproper tree t 0 , in which e 0 is the edge attached to the root-leaf of t 0 . Consider the following multivariate generating series for D t 0 : In order to compute the series D t 0 we introduce the following new classes of DH-trees. For a, b, c ∈ {•, K, S X , S C }, let J b c a be the set of DH-trees T with two (ordered) marked leaves such that • the two marked leaves are children of the root-node; • if T 1 is a DH-tree of type b, one can glue T 1 on the first marked leaf of T (merging the marked leaf and the root-node of T 1 ) without violating the adjacency restrictions defining DH-trees (conditions iii) to v) p. 13); • the same condition holds with gluing a DH-tree of type c on the second marked leaf; • additionally, if T 0 is a DH-tree with a marked leaf of cotype a, one can glue T on the marked leaf of T 0 without violating the adjacency restrictions defining DH-trees.
Proof. We consider different cases depending on the type of the root-node. The trees of J b c a having a root-node of type K are counted by 1 K / ∈{a,b,c} exp(D S X + D S C + z). The ones having a root-node of type S C are counted by 1 A exp(D S X + D K + z). Finally the ones having a root-node of type S X are counted by since the center of the star may be connected to the first marked leaf, to the second marked leaf or to neither of them.
To conclude the proof, we use that D K = D S C , and Eqs. (6) and (7).
For 0 ≤ i ≤ 2k − 2 let v i (resp. w i ) be the vertex incident to e i in t 0 closest to (resp. farthest from) the root-leaf of t 0 . In particular, some v i 's are equal to each other, v 0 is the root-leaf and some w i are leaves (see Fig. 10, right).
If w i is not a leaf, let s i (resp. g i ) be the smallest (resp. greatest) index of the edges from w i to its (two) children.
Proposition 5.6. We have Proof. We shall build a size-preserving bijection from D t 0 to the disjoint union Then T is a DH-tree. Letv i (resp.w i ) be the essential vertex of T corresponding to v i (resp. w i ). We set tp 0 = •, and ct i = • when w i is a leaf of t 0 . Otherwise, we denote by ct i the cotype ofw i , and by tp i the type of the child ofv i which is the root of the subtree containingw i (this type is well-defined thanks to item ii) of Definition 5.4). We then have (tp i , ct i ) 0≤i≤2k−2 ∈ E. For example with (T, ) given in Fig. 10, We decompose (T, ) as follows. For each i such that w i is an internal node of t 0 , we cut the parent edge fromw i , as well as the two edges incident tow i which are the start of a path going to a marked leaf (since t 0 is a k-proper tree, there are always exactly two such edges). This operation turns (T, ) into a disjoint union of trees, which we call pieces. Each edge that is cut is replaced by a marked leaf (in the piece closer to the root of T ) and a root-leaf (in the piece further away from the root of T ). Then the piece containingw i belongs to J tps i tpg i ct i (for every internal node w i ). Moreover, the pieces containing none of thew i are in bijection with the edges of t 0 , and the piece corresponding to e i belongs to D ct i tp i . By decomposing (T, ) we have indeed obtained an element of (tp,ct)∈E Conversely, let (tp i , ct i ) 0≤i≤2k−2 ∈ E and take a tuple From these trees, we build a tree uniquely as follows. For every internal node w i of t 0 , we glue the root-leaf of T w i to the marked leaf of T e j , where e j is the edge from w i to its parent (when gluing, the two edges from the root-leaf and from the marked leaf become one edge, and the leaves disappear). Moreover, we glue the first marked leaf of T w i to the root-leaf of T es i and we glue the second marked leaf of T w i to the root-leaf of T eg i (recall that s i (resp. g i ) is the smallest (resp. greatest) index of the edges from w i to its children).
Since t 0 is a k-proper tree, once these gluings are done, we obtain only one tree T , with one root-leaf (the one of T e 0 ) and k marked leaves (those of T e i where e i is incident to a leaf of t 0 ) which are in one-to-one correspondence with the leaves of t 0 . By construction (recalling also the definition of J b c a ), T is a DH-tree, whose k marked leaves induce t 0 , and which satisfies item ii) of Definition 5.4 (since elements of D b a are DH-trees thus have one or more internal node(s)). All together, we have T ∈ D t 0 .
Finally, we have a size-preserving bijection, since the size of T is the sum of the sizes of the T e i and of the T w i . Indeed, for D t 0 , D b a and J b c a , the root-leaf and marked leaves are not counted in the size, and the leaves which have disappeared when gluing are all marked leaves or root-leaves. 5.3. Asymptotic analysis. Recall the notation r(T, ) from the end of Section 5.1, denoting the enriched induced subtree of (T, ).
Proposition 5.7. Let (T n , ) be a uniform random DH-tree of size n with k marked leaves (not counted in the size). Fix a k-proper tree t 0 and real numbers x 0 , . . . , x 2k−2 > 0. We set a i = x i √ n . Then where s = i x i . Moreover, this estimate is uniform for (x 0 , . . . , x 2k−2 ) in any compact subset of (0; +∞) 2k−1 .
Proof. We first note that, for n large enough, r(T n , ) = (t 0 , a 0 , . . . , a 2k−2 ) implies that (T n , ) is in D t 0 . Indeed, item i) of Definition 5.4 comes from the definition of t 0 ; item ii) follows from the fact that for every i > 0, we have a i > 0 (for n large enough): so, there must be some jumps between each pair of essential vertices, and thus they cannot be neighbors. Therefore, writing p(n) := P r(T n , ) = (t 0 , a 0 , . . . , a 2k−2 ) , we have, for n large enough, We first analyze the denominator. From Eq. (15) and singular differentiation, we have Applying the transfer theorem then yields Consider now the numerator of Eq. (34). We start from Eq. (32) and use that from Eq. (28) all D b a are of the form .
To this end we simplify the quantity κ = (tp,ct)∈E M (tp,ct) (ρ). Since M b a = Λ a Λ b , we have The first product runs over edges of t 0 . We can rearrange its terms according to vertices. Namely, we get a term Λ • (ρ) for the root-leaf of t 0 and one for each leaf of t 0 (the type of the root-leaf and the cotypes of the leaves are •; see the definition of E in Proposition 5.6). Additionally, for each i such that w i is an internal vertex, we get a factor Λ ct i (ρ) from the parent edge e i of w i , and two factors Λ tps i (ρ) and Λ tpg i (ρ) from the children edges e s i and e g i of w i . The above display therefore rewrites as We now want to sum this quantity over (tp, ct) in E. Note that choosing an element of E consists in choosing ct i , tp s i and tp g i for each internal vertex w i . The sum κ = (tp,ct)∈E M (tp,ct) (ρ) therefore factorizes over internal vertices of t 0 (there are k − 1 of them) and we get We can write κ = µν k , with Then Eq. (39) holds for any k ≥ 1 if γ 2 H = 2ρ ν and γ H µ = γ D , which we verify using Maple, from the definitions of the Λ α and Lemma 5.5 for the J b,c a (observing that J b,c a = J c,b a = J a,c b for all a, b, c). Proposition 5.7 is a kind of local limit theorem for r(T n , ). It is rather standard that such statements imply convergence in distribution statements. We now state the convergence in distribution of r(T n , ) (after normalization), which we prove for completeness.
Corollary 5.8. Recall that (T n , ) denotes a uniform random DH-tree of size n with k marked leaves (not counted in the size). We set (t n 0 , A 0 , . . . , A 2k−2 ) = r(T n , ).
has determinant 1): The inner integral is equal to s 2k−2 (2k−2)! . Thus we get where the second inequality is obtained by setting y = γ H s/ √ 2 and the third by repeating integration by part. Summing up, for any k-proper tree t 0 , we have Since there are (2k − 3)!! k-proper trees t 0 , the infimum limit needs to be an actual limit and the inequality is an equality. Therefore, we have proved that t n 0 converges in distribution to a uniform k-proper tree. Now, Eqs. (44) and (45) imply that Consequently, for any t 0 , conditioning on "(T n , ) induces t 0 " the vector converges in distribution to a vector (X 0 , . . . , X 2k−2 ) with density given by (42), whose expression does not depend on t 0 . This ends the proof of the corollary.
5.4.
Gromov-Prohorov convergence of DH graphs. Let G (n) be the uniform DH graph of size n. We want to deduce from Corollary 5.8 the convergence in distribution of the marginals of the distance matrix of G (n) . To do this recall that Lemma 3.4 allows us to estimate distances in G (n) in terms of jumps in the associated DH-tree. We first reformulate Lemma 3.4 with the vocabulary of induced subtrees. For k ≥ 2 let G be a DH graph of size n + k + 1 (whose vertex set is therefore {1, . . . , n + k + 1}). Let v 0 be the vertex of G with label n + k + 1 and, for 1 ≤ i ≤ k, let v i be the vertex of G with label n + i. Denote by (T, ) the DH-tree associated to G in the following way: the tree T , whose rootleaf 0 corresponds to v 0 , has size n and k marked leaves 1 , . . . , k respectively corresponding to vertices v 1 , . . . , v k . We denote by (t 0 , α 0 , . . . , α 2k−2 ) the enriched induced subtree r(T, ) (defined at the end of Section 5.1).
Lemma 5.9. For 0 ≤ i, j ≤ k, let P t 0 i,j be the path joining leaves i and j in t 0 . Then, for some ζ(G, k) such that 1 ≤ ζ(G, k) ≤ k, where e 0 , . . . , e 2k−2 is the enumeration of edges of t 0 .
Proposition 5.10. Let (G (m) ) m≥3 be a sequence of uniform random labeled DH graphs of size m. Let k ≥ 1 and V 0 , V 1 , . . . , V k be uniform i.i.d. vertices in G (m) . Then we have the joint convergence in distribution: where the right-hand side denotes the marginals of distances in the Brownian CRT defined by Eq. (3).
Proof. We fix k ≥ 1. We first observe that with probability 1 − O(k 2 /m) we have that k i.i.d. uniform vertices in G (m) are distinct. Therefore we can prove (46) where (V 0 , V 1 , . . . , V k ) is a uniform k + 1-tuple of distinct vertices.
Since the distribution of G (m) is invariant by relabeling of vertices, we have that where H m−k−1 is a uniform DH graph of size m − k − 1 with k + 1 marked vertices W 0 , . . . , W k not counted in the size. Using Lemma 5.9 with G = H m−k−1 yields We finally use the convergence obtained in Corollary 5.8 (put n = m − k − 1) and the criterion of Lemma 2.4 . From Theorem 2.2, Proposition 5.10 implies the convergence of uniform DH graphs of size n towards the Brownian CRT w.r.t. the Gromov-Prohorov topology. Thus this concludes the proof of Theorem 1.1 in the case f = d.
THE CASE OF 2-CONNECTED DH GRAPHS
The goal of this section is to prove the convergence of a uniform random 2-connected DH graph to the Brownian CRT, i.e. the case f = 2c in Theorem 1.1. We start by giving a characterization of 2-connected DH graphs through the associated (reduced) clique-star tree. The proof of the case f = 2c in Theorem 1.1 then follows essentially the same steps as that of the case f = d (unconstrained DH graphs). We shall indicate all necessary modifications.
6.1. Combinatorial characterization. Recall that a vertex v in a connected graph G is called a cut-vertex if removing v (and edges incident to v) disconnects G. A connected graph G without cut-vertices is said to be 2-connected. Cut-vertices in DH graphs, and hence 2-connected DH graphs, are easily characterized through the associated reduced clique-star tree.
Lemma 6.1. Let G be a DH graph and let τ be a clique-star tree such that G = Gr(τ ). A vertex in G is a cut-vertex if and only if the associated leaf in τ is connected to the center of a star. Consequently, a distance-hereditary graph G = Gr(τ ) is 2-connected if and only if no leaf of τ is connected to the center of a star.
Proof. We abusively call also the leaf of τ corresponding to the vertex of G, and v the unique vertex of τ adjacent to . We also denote by Γ v the decoration of v, and by x the marker vertex of Γ v corresponding to the edge (v, ). Finally we denote G\ the graph obtained by removing (and its incident edges) from G.
By construction G\ = Gr(τ \ ), where τ \ is the decorated tree obtained from τ by erasing the leaf and replacing in v the decoration Γ v by Γ v \x (note that τ \ might not be a clique-star tree). By [GP12, Lemma 2.3], G\ is connected if and only if all decorations of τ \ are connected. The only potentially non-connected decoration is Γ v \x and it is disconnected precisely when Γ v is a star, and x its center. This proves the characterization of cut-vertices given in the lemma. The characterization of 2-connected graphs follows immediately.
By abuse of terminology, we say that a clique-star tree, or a DH-tree, is 2-connected if the associated DH graph is 2-connected , i.e. if it does not contain a leaf (including the root-leaf in the case of DH-trees) linked to the center of a star. Specializing the bijection between DHgraphs and DH-trees to 2-connected objects, the above lemma allows to easily adapt the system of equations (4) to this setting: Here D is the class of all 2-connected DH-trees, while D K and D S X are the subclasses of D, consisting of trees with root of type K or S X , respectively. The class D S C is the class of DH-trees with a root of type S C , such that no other leaf than the root-leaf is connected to the center of a star. DH-trees in D S C are not 2-connected DH-trees since one of their leaves, namely the rootleaf, is connected to a center of a star. This explains why D S C does not appear in the equation defining D above. We nevertheless need to introduce this auxiliary class to write a full system of equations.
6.2. Singularity analysis of the system. As usual, for a class D α , we denote by D α its exponential generating function. From Eq. (47), we immediately check that, as in the unconstrained case, we have D K = D S C . Also, from the Drmota-Lalley-Woods theorem, all series D, D K and D S X have the same radius of convergence ρ 2c and square root-singularities. Again, it is useful to introduce the series The system 47 is then rewritten as which is easily solved as This implies that F 2c is solution of an equation of the type F 2c = G(z, F 2c ), with Arguing as in Proposition 3.10, we find, after some elementary computations (the last equality being computed in the companion Maple worksheet), that: • ρ 2c = 2 (F 2c (ρ 2c )) 2 + 2 F 2c (ρ 2c ) − 1; • F 2c (ρ 2c ) is the unique positive solution of the equation s = exp ≥1 (2s − 1 2 ); 6.3. 2-connected DH-trees with a marked leaf. As in the case of general DH graphs, the next step is to analyze families of 2-connected DH-trees with a marked leaf. For a, b in {K, S X , S C }, we denote D b a the class of DH-tree with a root of type a, a marked leaf of cotype b, and such that no leaf is connected to the center of a star, except possibly the root-leaf or the marked leaf (when a and/or b is equal to S C ). Moreover, we let D b a (z, u) be the corresponding bivariate (exponential) generating series, where the exponent of the variable z is the size (number of nonmarked nonroot leaves) of the tree and the exponent of u is the number of jumps on the path from the root-leaf to the marked leaf.
These nine series satisfy the following system of equations, whose proof is similar to that of Proposition 4.3: After these simplifications, the system is similar to that of Eqs. (19) to (21), except that F is replaced by F 2c and uF by u (F 2c − z). This system is solved as follows (either directly or by substituting F with F 2c and uF with u (F 2c − z) in Eqs. (22) to (27)): Recalling that F 2c depends only on z (not on u), we note that in each case, the series can be written under the form and .
6.4. 2-connected DH-trees with marked leaves inducing a given subtree. Using the same terminology as in Section 5, we define D t 0 to be the class of 2-connected DH-trees with k marked leaves inducing a given k-proper tree t 0 . Furthermore, we let D t 0 (z, u 0 , ..., u 2k−2 ) be the multivariate (exponential) generating series of D t 0 , where the exponent of z is the size of the tree and the exponent of u i the number of jumps in the path corresponding to e i (in the fixed enumeration (e 0 , e 1 , . . . , e 2k−2 ) of the edges of t 0 ). To write a combinatorial decomposition for D t 0 (z, u 0 , ..., u 2k−2 ), we need to introduce a subclass J b c a of J b c a , where the nonmarked (and nonroot) leaves are not allowed to be attached to the center of a star. The generating function of this auxiliary class is given by , where A, B, C are given in Lemma 5.5.
At this stage, there is a small difference with the case of unconstrained DH-trees. In a 2connected DH-tree, the root cannot have type S C and no leaves (in particular the marked ones) can have cotype S C . Therefore in the combinatorial decomposition of Fig. 10, the piece corresponding to e 0 has a root type different from S C , and pieces corresponding to leaf-edges of t 0 have a marked leaf with a cotype different from S C as well. This is easily captured in equations by defining In the unconstrained case, each of these equations had an extra term corresponding to the type (or cotype) S C . With these definitions, Proposition 5.6 is still valid when replacing each series by its 2-connected counterpart. The asymptotic analysis in the 2-connected case is then identical to that of the unconstrained case, up to the verification of the identities γ 2 H,2c = 2ρ 2c ν 2c and γ H,2c µ 2c = γ D,2c , where ν 2c and µ 2c are defined via the obvious analogs of Eqs. (40) and (41). Verifying these identities is done in the companion Maple worksheet. We therefore have the following analog of Proposition 5.7.
Proposition 6.2. Let (T n , ) be a uniform random 2-connected DH-tree of size n with k marked leaves (not counted in the size). Fix a k-proper tree t 0 and real numbers x 0 , . . . , x 2k−2 > 0. We set a i = x i √ n . Then where s = i x i . Moreover, this estimate is uniform for x 0 , . . . , x 2k−2 in any compact subset of (0; +∞) 2k−1 .
From here, the convergence to the Brownian CRT in Gromov-Prohorov topology, i.e. the second case of Theorem 1.1, follows using the same arguments as in the case of unconstrained DH graphs.
THE CASE OF 3-LEAF POWER GRAPHS
The goal of this section is to prove the convergence of a uniform random 3-leaf power graph to the Brownian CRT, i.e. the case f = 3 in Theorem 1.1. We start by recalling a characterization of 3-leaf power graphs through their associated (reduced) clique-star tree, given in [GP12]. The proof of convergence then follows essentially the same steps as in the two other cases. There is however one notable difference. As we shall see, in this model, first common ancestors of marked leaves are of type and cotype S X with probability tending to 1; therefore, we only need to consider two types of trees with one marked leaf, simplifying significantly the analysis. 7.1. Definition and combinatorial analysis of 3-leaf power graphs. This section follows closely [CFL17, Section 2].
Definition 7.1. Let T be a tree and L its set of leaves. The k-leaf power graph G of T has by definition vertex set L, and and are connected in G if they are at distance at most k in T . And a graph is a k-leaf power graph if it is the k-leaf power graph of some tree.
We are interested in the case k = 3. It is known, see e.g., [CFL17, Section 2] that 3-leaf power graphs form a subclass of distance-hereditary graphs, and that they can be characterized on the clique-star trees as follows (see [GP12,Section 3.3]).
Proposition 7.2. A distance hereditary graph G is a 3-leaf power graph if and only if its reduced clique star-tree τ satisfies the following properties: • the set of star nodes forms a connected subtree of τ ; • no edge connects two centers of star nodes.
In the sequel, we call 3-leaf power trees the DH-trees corresponding to (rooted) 3-leaf power graphs. Let E be the combinatorial class of 3-leaf power trees. To get a combinatorial decomposition of this class, it is convenient to introduce the following subclasses: • E S X , E S C and E K are the subclasses of E, where the root-node is required to have type S X , S C and K respectively; • L is the class containing the tree restricted to a single leaf, and trees consisting of a single internal node, of type K, with at least two pending leaves. Recall also that Z is a class with only one element, which is of size 1, representing a leaf. The following set of equations, characterizing all these classes, is obtained easily: In terms of generating series (with the usual convention that Y (z) is the exponential generating function of a class Y), the first equation implies L = e z − 1. The second equation yields: We note that other equations of the system Eq. (64) are nonrecursive and simply express E S C , E K and E in terms of E S X . It is thus not surprising that most of the asymptotic analysis reduces to that of E S X . We first prove the following result.
Proof. As in the proof of Proposition 3.10 we use the smooth implicit-function schema. We write which is analytic in z, w on the whole complex plane and which has nonnegative coefficients.
The characteristic system {G(r, s) = s; G w (r, s) = 1} is easily solved and the unique solution is We apply [FS09, Theorem VII.3] and obtain Eq. (66).
With system 64 and the singular expansion of E S X given above, we find that of the other series. In particular, which will be useful later.
7.2. 3-leaf power trees with a marked leaf. We now consider families of 3-leaf power trees with a marked leaf. It turns out that the only classes relevant for the asymptotic analysis are the classes $E_{S_X}^{S_X}$ and $E_{\bullet}^{S_X}$, defined as follows: we let $E_{S_X}^{S_X}$ (resp. $E_{S_X}^{\bullet}$ or $E_{\bullet}^{S_X}$) be the subclass of 3-leaf power trees with a marked leaf such that the root has type $S_X$ and the marked leaf has cotype $S_X$ (resp. with no constraint on the type of the root or on the cotype of the marked leaf). As above, we consider the associated exponential bivariate generating series $E_{S_X}^{S_X}$, $E_{S_X}^{\bullet}$ and $E_{\bullet}^{S_X}$, where the exponent of z is the size of the tree (number of nonmarked nonroot leaves) and that of u is the number of jumps on the path from the root-leaf to the marked leaf. By symmetry, we have $E_{S_X}^{\bullet} = E_{\bullet}^{S_X}$. We also let $L^{\bullet}$ be the class of objects in L with a marked leaf. Its generating series is $L^{\bullet} = e^z$ (there are no jumps in such objects). An easy combinatorial analysis yields the following equations: This is a 2×2 linear system of equations in the unknown series $E_{S_X}^{S_X}$ and $E_{\bullet}^{S_X}$. The solutions can be put in a form similar to Eq. (28): where $P = P(z) = (e^z - 1)\exp(e^z - 1 + E_{S_X})$.
Using Eq. (66), we immediately see that P has radius of convergence $\rho_3$, is ∆-analytic, and admits the following singular expansion for z near $\rho_3$:
7.3. 3-leaf power trees with k marked leaves. Let us fix a k-proper tree $t_0$. We consider the following class of marked 3-leaf power trees.
Definition 7.4. We let $E_{t_0}$ be the labeled combinatorial class of 3-leaf power trees T with k marked leaves such that: i) the subtree of T induced by the marked leaves and the root-leaf is $t_0$; ii) no two essential vertices of T are neighbors of each other; iii) every internal essential vertex of T has type and cotype $S_X$, and its children which are roots of subtrees containing marked leaves are also of type $S_X$.
As above, we fix an enumeration $(e_0, \dots, e_{2k-2})$ of the edges of $t_0$ such that the edge adjacent to the root-leaf is labeled $e_0$; here, we additionally require that the edges incident to leaves of $t_0$ get labels $e_1, \dots, e_k$. Recall that we defined $\mathrm{jp}_e(T; \cdot)$ as the number of jumps on the path corresponding to e, with the convention that essential vertices are not counted as jumps (but the root-node of T can be a jump). We consider the following multivariate generating series for $E_{t_0}$: Moreover, let $I_a^{b\,c}$ be the set of 3-leaf power trees T with two marked leaves such that:
• the two marked leaves are children of the root-node;
• if $T_1$ is a 3-leaf power tree of type b, one can glue $T_1$ on the first marked leaf of T (merging the marked leaf and the root-node of $T_1$) in such a way that the resulting tree is a 3-leaf power tree;
• the same condition holds when gluing a 3-leaf power tree of type c on the second marked leaf;
• additionally, if $T_0$ is a 3-leaf power tree with a marked leaf of cotype a, one can glue T on the marked leaf of $T_0$, obtaining a 3-leaf power tree.
Lemma 7.5. The generating function of $I_{S_X}^{S_X S_X}$ is
(70) $I_{S_X}^{S_X S_X}(z) = L \exp(E_{S_X} + L)$.
Proof. Because of the allowed adjacencies between nodes of various types in 3-leaf power trees, a 3-leaf power tree in $I_{S_X}^{S_X S_X}$ necessarily has a root-node r of type $S_X$. The factor L then accounts for the tree pending under the center of the star labeling r, and $\exp(E_{S_X} + L)$ accounts for the trees pending under its extremities which do not correspond to the marked leaves.
Proposition 7.6. We have

Sketch of proof. We use the same decomposition as in the proof of Proposition 5.6. The main difference is that, because of item iii) in Definition 7.4 above, all types $\mathrm{tp}_i$ and cotypes $\mathrm{ct}_i$ which do not correspond to the marked leaves or the root-leaf must be equal to $S_X$. Consequently, the tuple $(\mathrm{tp}_i, \mathrm{ct}_i)_{0 \le i \le 2k-2}$ can only take one possible value, and we get the stated formula. We conclude using the symmetry $E_{\bullet}^{S_X} = E_{S_X}^{\bullet}$.

Proposition 7.7. Let $(T_n, \cdot)$ be a uniform random 3-leaf power tree of size n with k marked leaves (not counted in the size). Fix a k-proper tree $t_0$ and real numbers $x_0, \dots, x_{2k-2} > 0$. We set $a_i = x_i \sqrt{n}$. Then where $s = \sum_i x_i$. Moreover, this estimate is uniform for $x_0, \dots, x_{2k-2}$ in any compact subset of $(0, +\infty)^{2k-1}$.
APPENDIX A. THE SEMI-LARGE POWERS THEOREM
In order to prove Propositions 5.7, 6.2 and 7.7, we need to estimate quantities of the form $[z^n] M(z) H(z)^p$, where p is of order $\sqrt{n}$. The following statement is essentially the semi-large powers theorem ([FS09, Theorem IX.16] for λ = 1/2; see also [BFSS01] for the original reference), which deals with the case M(z) = 1. As we will see, there are no particular difficulties in generalizing the proof.
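To make the regime $p \approx x\sqrt{n}$ concrete before the formal statement, here is a small numerical experiment of ours (not from the paper), using the toy square-root-singular function $H(z) = 1 - \sqrt{1-z}$; it illustrates only the scaling in x and n, not the theorem's exact constants.

```python
# Numerical illustration (not from the paper): coefficients [z^n] H(z)^p in the
# semi-large powers regime p ~ x*sqrt(n), for the toy square-root-singular
# function H(z) = 1 - sqrt(1 - z) (radius rho = 1, H(1) = h = 1). For fixed x,
# the rescaled quantity n * [z^n] H^p should stabilize as n grows, with a
# Rayleigh-type dependence on x.
import numpy as np
from math import isqrt

def h_coeffs(n_max):
    # Taylor coefficients of 1 - (1 - z)**0.5, via a_{k+1}/a_k = (k - 1/2)/(k + 1).
    c = np.zeros(n_max + 1)
    term = 0.5
    c[1] = term
    for k in range(1, n_max):
        term *= (k - 0.5) / (k + 1)
        c[k + 1] = term
    return c

def power_coeff(c, p, n):
    # [z^n] H(z)^p by exponentiation-by-squaring on series truncated at order n.
    result = np.zeros(n + 1)
    result[0] = 1.0
    base = c[: n + 1].copy()
    while p:
        if p & 1:
            result = np.convolve(result, base)[: n + 1]
        base = np.convolve(base, base)[: n + 1]
        p >>= 1
    return result[n]

x = 1.5
for n in (100, 400, 1600):
    p = round(x * isqrt(n))  # n chosen as perfect squares, so p = x*sqrt(n) exactly
    print(n, p, n * power_coeff(h_coeffs(n), p, n))
```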
Theorem A.1. Let ρ > 0 and let M, H be ∆-analytic functions at ρ. Assume that:
i) H has a square-root singularity at ρ: for some σ, h > 0,
ii) M converges at ρ.
Let us fix a compact subset K of (0, +∞) and a constant C > 0. Take $(a_n)$ a sequence of real numbers satisfying (77), where $\mathrm{Ray}(x) = x^2 e^{-x^2/4}$ is the Rayleigh density. The error term in the above convergence is uniform for all x ∈ K and all sequences $(a_n)$ satisfying (77), but depends on M, H, K and C.

Proof. We express $[z^n] M(z) H(z)^p$ as a Cauchy integral over a contour γ, where $\gamma = \gamma_0 \cup \overline{\gamma_0} \cup \gamma_1 \cup \gamma_2$ is a closed counter-clockwise contour surrounding 0, consisting of the following pieces (see Fig. 11):
• $\gamma_0$ is a line segment starting at ρ + i/n, with a slope θ, stopping when it reaches the circle {z : |z| = R};
• $\overline{\gamma_0}$ is its complex conjugate (in reverse direction);
• $\gamma_1$ is a semi-circle centered at ρ of radius 1/n, from ρ − iρ/n to ρ + iρ/n;
• $\gamma_2$ is an arc of circle of radius R closing γ.
The modulus of the integral on $\gamma_2$ is easily bounded by $O(B^{-n})$ for some B > 1.

APPENDIX B. SUBCRITICALITY OF THE CLASS OF DH GRAPHS

We now check that the class of DH graphs is subcritical. For this, we recall that, in general, blocks are either 2-connected graphs, or restricted to a single vertex or to two vertices with a single edge. Hence the series B of rooted blocks of DH graphs coincides, up to the coefficients of 1 and z, with the generating series D of rooted 2-connected DH graphs. In particular, $\rho_B = \rho_{2c}$, and our analysis in Section 6.2 shows that $B(\rho_B) = +\infty$. Consequently, there exists $\tau < \rho_B$ such that $\tau B(\tau) = 1$. This implies that D(z) belongs to the smooth inverse-function schema in the sense of [FS09, Definition VII.3, p. 453]. From [FS09, Theorem VII.2], we have that D(ρ) = τ. Therefore D(ρ) < $\rho_B$, as wanted, and the class of DH graphs is indeed subcritical.

APPENDIX C. GROMOV-HAUSDORFF-PROHOROV CONVERGENCE IN [PSW16]

The main result of [PSW16] is the convergence, for the Gromov-Hausdorff topology, of a uniform random graph in a subcritical block-stable class to the Brownian CRT. We argue here that, without further effort, the authors could have proven convergence for the stronger Gromov-Hausdorff-Prohorov topology. We use here the notation from [PSW16]. The proof compares a uniform random graph $C^{\bullet}_n$ in the class and its block decomposition tree $T_n$. It uses the fact that the identity map from $T_n$ to $C^{\bullet}_n$ does not modify distances much. Obviously, this identity map brings the uniform distribution on the vertices of $T_n$ to that on the vertices of $C^{\bullet}_n$. Therefore, using [Mie09, Prop. 6, p. 763], we see that $C^{\bullet}_n$ and $T_n$ are close for the GHP topology. Besides, since $T_n$ has the distribution of a conditioned Galton-Watson tree, it is known that $T_n$ converges to the Brownian CRT for the GHP topology (for the GH topology, a classical reference is [LG05]; for the GHP topology, a much stronger result is given in [HW19]). We conclude that $C^{\bullet}_n$ also converges to the Brownian CRT for the GHP topology, as claimed. | 2022-07-26T01:16:23.080Z | 2022-07-25T00:00:00.000 | {
"year": 2022,
"sha1": "e206f840881875a9b49741e65bdddc8a6f59f3cf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e206f840881875a9b49741e65bdddc8a6f59f3cf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
248499035 | pes2o/s2orc | v3-fos-license | VDI pacing with temporary esophageal and transvenous pacemaker leads to treat post-cardiac surgery cardiogenic shock
Background Post-operative atrio-ventricular (AV) block after cardiac surgery is not uncommon in high-risk patients. Case presentation Our case highlights the management of a 62-year-old female with cardiogenic shock post-cardiac surgery with concomitant complete heart block. With VVI pacing proving ineffective, it was postulated that the patient may benefit hemodynamically from AV sequential pacing, re-establishing her atrial kick. We describe a novel technique of attaching a temporary pacemaker wire to an orogastric tube to sense atrial p-waves and pace the ventricle transvenously to perform AV sequential pacing. This was done temporarily to stabilize the patient’s hemodynamic status while awaiting a permanent pacemaker implantation. Conclusions In hemodynamically unstable post-cardiac surgery patients with complete heart block in whom VVI pacing fails to improve their clinical status, clinicians should consider VDI pacing with an orogastric atrial sensing pacemaker lead, in consultation with the cardiac surgeon and the electrophysiology team. Of note, the patient needs to have underlying organized atrial activity for this setup to work.
Introduction
Case
A 63-year-old female with heart failure, coronary artery disease, mitral regurgitation, bicuspid aortic valve, and a left atrial (LA) myxoma underwent coronary artery bypass grafting surgery in addition to the myxoma resection and mechanical aortic and mitral valve replacements. She had a complicated intra-operative course whereby she decompensated after she was taken off cardiopulmonary bypass and her chest was closed. She was put back on bypass and air was removed from her right coronary artery; at this juncture, a transesophageal echocardiogram showed grossly normal left ventricular function with reduced right ventricular function. She was admitted to the cardiac surgery intensive care unit (ICU) post-operatively and required mechanical support with an intra-aortic balloon pump (IABP). She also required norepinephrine 0.5 μg/kg/min, epinephrine 0.5 μg/kg/min, vasopressin 2 units/hour, and milrinone 0.50 μg/kg/min infusions to support her blood pressure and cardiac output (see Fig. 1A). Furthermore, she was pacer-dependent for complete heart block and was being paced via epicardial leads (VVI). Her total time on cardiopulmonary bypass was 321 min and her aortic cross-clamp time was 277 min.
On postoperative day (POD) one, cardiogenic shock persisted and the patient demonstrated significant hypoxemia. A transthoracic echocardiogram post-operatively displayed a left ventricular ejection fraction of 31% with a severely impaired right ventricle and a patent foramen ovale (PFO) with a right-to-left shunt. Prosthetic and native valve function were normal. To further aid with her hemodynamic status and manage her acidosis, she was started on inhaled nitric oxide (iNO) as well as continuous renal replacement therapy (CRRT).
With her underlying hypoxemia and the new evidence of a PFO, we removed the patient's IABP in an attempt to improve the right atrium-to-left atrium pressure gradient driving interatrial shunting. Without mechanical support, the patient's hemodynamic status remained tenuous and she still required pacing, as her underlying heart rate was in the 40s (see Fig. 1B). We subsequently inserted a transvenous right ventricular pacemaker wire, as the epicardial temporary wires inserted at the time of surgery were inconsistently capturing despite increased output settings. Despite pacing at a higher rate than the patient's intrinsic ventricular escape rate, the patient's cardiac indices remained poor (see Fig. 1C); of note, the patient was on similar doses of vasopressors and inotropes at this juncture.
In an effort to further augment the patient's cardiac output on POD 4, we attached a stiff temporary pacemaker wire to a standard orogastric (OG) tube and inserted it into the patient's mid-esophagus. This allowed for sensing of atrial activity (p-waves) and thus permitted atrio-ventricular (AV) synchronized pacing (see Fig. 2A-F) with the goal of increasing stroke volume due to reestablishing consistent atrial contribution to ventricular filling (atrial kick). With this VDI pacing instituted, the patient's cardiac indices improved significantly and she was able to be weaned off all of her inotropes and vasopressors (see Fig. 2G). Shortly thereafter, we replaced the transvenous pacemaker from her left internal jugular vein with a transvenous pacemaker from the left femoral vein to facilitate the insertion of a permanent pacemaker.
Several days later, the patient had a cardiac resynchronization device inserted by the Electrophysiology team, and she was subsequently transferred out of the ICU 8 days after her initial admission, in stable condition.
Discussion
Postoperative AV block is the most common conduction abnormality after cardiac surgery [1]. Its incidence is 1-4% depending on the procedure but can be as high as 25% after aortic valve replacement [1]. Overall, 1-5% of cardiac surgery patients require permanent pacemaker implantation [1]. Our patient had both her aortic and mitral valve replaced placing her at higher risk for developing AV block.
Post-surgery low cardiac output syndrome (LCOS) is defined as a monitored cardiac index < 2.2 L/min/m² with adequate or elevated cardiac filling pressures; it is often secondary to left and/or right ventricular dysfunction, but arrhythmias or valvular heart disease may also contribute [2]. Its incidence varies between 3 and 45% and is associated with the following risk factors: advanced age, prolonged bypass time, urgent surgery, and impaired left ventricular function [2]. Furthermore, myocardial revascularization, either isolated or accompanied by valve intervention, is the most frequent operation leading to LCOS [2]. Our patient had all of these risk factors. Specifically, her poor ventricular contractility, rhythm disturbance, and the air found in the right coronary artery, which required a return to bypass, were the primary contributors to her LCOS.
Despite being ventricularly paced with temporary epicardial wires, our patient deteriorated, requiring a transvenous pacer to maintain an adequate heart rate as her epicardial wires stopped capturing consistently. There are limited and conflicting data on the use of epicardial pacing wires after cardiac surgery, as their electrical performance deteriorates over time [3]. In fact, failure to pace is observed in > 60% of right and > 80% of left atrial wires after 5 days [3]. The transvenous approach to pacing is often preferred as it provides superior pacing and sensing thresholds, lower lead current, and longer lead functionality [4].
This case highlights the potential importance of the atrial contribution to ventricular filling, or so-called atrial kick, in patients with a low cardiac output. This atrial kick contributes about 20-30% of left ventricular end-diastolic volume. As such, the loss of atrial kick can result in adverse hemodynamic consequences with a reduction in cardiac output. With the failure of conventional methods of temporary pacing in a hemodynamically unstable patient, we opted to trial a novel technique using a pacemaker wire attached to an OG tube to sense atrial electrical activity in order to perform AV sequential pacing, immediately resulting in significant increases in the patient's cardiac index from 1.8 to 2.5 L/min/m² (see Fig. 2G). Transesophageal atrial stimulation is sometimes used in the evaluation and treatment of supraventricular arrhythmias [5].
However, its use to sense the atria and pace ventricularly through a transvenous lead has not been described in the literature.
Our case is one of the first described uses of a pacemaker lead to sense the atrial activity through the esophagus and pace the ventricle transvenously in a synchronized fashion. It is important to note that a patient needs to have underlying organized atrial activity for this setup to work. With respect to landmarking, the right atrium is located roughly 35 cm down from the oral cavity. We suggest taping the pacemaker lead to an OG tube and inserting it to around 35-40 cm from the oral cavity, until atrial activity is consistently sensed on the external pacemaker console. The electrophysiology team should be consulted for the long-term management of these patients, and the patient's surgeon should also be informed that this maneuver is being performed.
Funding
There was no funding for this case report.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Declarations
Ethics approval and consent to participate
This is a case report that does not require ethics approval.
Consent for publication
We have acquired the patient's signed consent. | 2022-05-03T13:45:06.669Z | 2022-05-03T00:00:00.000 | {
"year": 2022,
"sha1": "117c0628f88064749df6c96ce7364a88d64c4738",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "117c0628f88064749df6c96ce7364a88d64c4738",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257725447 | pes2o/s2orc | v3-fos-license | Ethical principles for infodemiology and infoveillance studies concerning infodemic management on social media
Big data originating from user interactions on social media play an essential role in infodemiology and infoveillance outcomes, supporting the planning and implementation of public health actions. Notably, the extrapolation of these data requires an awareness of different ethical elements. Previous studies have investigated and discussed the adoption of conventional ethical approaches in the contemporary public health digital surveillance space. However, there is a lack of specific ethical guidelines to orient infodemiology and infoveillance studies concerning infodemic on social media, making it challenging to design digital strategies to combat this phenomenon. Hence, it is necessary to explore if traditional ethical pillars can support digital purposes or whether new ones must be proposed since we are confronted with a complex online misinformation scenario. Therefore, this perspective provides an overview of the current scenario of ethics-related issues of infodemiology and infoveillance on social media for infodemic studies.
Introduction
Social media are web-based interactive communication channels that enable the creation, sharing, and discussion of content by people and online communities (1). There are ~4.59 billion users on these platforms worldwide who interact on everyday topics such as health (2,3). In this context, social media was a primary source of information on COVID-19 in China at the outset of the pandemic, while four-in-ten Americans considered them essential to follow vaccine-related news (4,5). Additionally, 81.4% of Saudi users believed that health-related information acquired from social media increased their healthcare awareness, with 73.3% perceiving positive impacts on their health status (6). The literature also shows that many people use these platforms to connect with their peers and exchange their experiences about health conditions (7). Indeed, the big data originating from these types of user interactions play an essential role in developing infodemiology and infoveillance studies (8,9). According to Eysenbach (10), "infodemiology is the science of distribution and determinants of information in an electronic medium, specifically the Internet, with the ultimate aim of informing public health and public policy," while "infoveillance refers to using infodemiology data for digital surveillance purposes." Although both sciences are essential to support the planning and implementation of public health actions, researchers must make ethical considerations when collecting, analyzing, and presenting digital data derived from people's activity on social media.
However, the differences in social media data create challenges for experts to adhere to the principles set out by the Declaration of Helsinki (11). For example, acquiring informed consent from each user is unfeasible for large-scale social network datasets, which may contain hundreds of thousands of metadata units (12). As a result, notable aspects concerning informed consent are intangible in social media research, such as the right to withdraw from a study (13). Specifically, it is necessary to propose techniques to mitigate the discrepancies arising from this absence of informed consent, since the "participants" are rarely informed that their data were collected, stored, and analyzed for research purposes. In this sense, researchers can list current studies regarding social network platforms in open data repositories to inform communities how their data are being used for public health studies. Additionally, the exponential evolution of social media functionalities exacerbates the difficulties associated with defining ethical research guidelines, which can often be function-specific. Although these concerns motivated several studies to investigate and discuss the adaptation of conventional ethical approaches to contemporary public health digital surveillance perspectives (14)(15)(16), there is a lack of specific ethical guidelines to orient infodemiology and infoveillance studies concerning the infodemic on social media, making it challenging to design digital strategies to combat this phenomenon. As a result, it is necessary to clarify whether the scope and type of data of infodemic-related studies justify revisions of well-known ethical guidelines regarding digital surveillance, or whether their extrapolation is enough to orient investigations in this field. Notably, mitigating false or misleading content on social media requires differentiated data treatment, since such content spreads faster than trustworthy content (17).
Infodemic can be defined as an overabundance of information, including false or misleading information, circulating in digital and physical environments during a disease outbreak, such as the COVID-19 pandemic (18). It was the first time that diverse actors employed different communication technologies and social media to inform and connect with people about a common worldwide disease, which generated a massive spreading of content online (19,20). Although the initial goal of people was to be better informed about COVID-19 toward better health decision-making, the content overload on social media ecosystems hampered users' selection of trustworthy information (21). Then, misinformation negatively impacted the acceptability of the various COVID-19 vaccines in different countries, contributing to an increased prevalence and severity of cases in some countries (22).
In this context, misinformation is often used as an umbrella term that embraces different types of information disorders: misinformation in the narrow sense, disinformation, and mal-information (17,(23)(24)(25)(26)(27). In the narrow sense, misinformation is false content shared without the intent to cause harm (25)(26)(27), whereas disinformation is intentionally false content created to purposively harm a person, social group, organization, or country, motivated by specific interests, such as social, financial, psychological, and political ones (25)(26)(27). Furthermore, mal-information is content based on reality that is used willfully and intentionally to inflict harm on a person, social group, organization, or country (26). It is noteworthy that, despite the divergence in definitions concerning the author's intentionality, all of these can result in adverse consequences for health consumers, e.g., developing and reinforcing damaging beliefs (28).
Therefore, this perspective aimed to provide an overview of the current scenario on ethics-related issues regarding infodemiology and infoveillance, proposing directions for infodemic management studies.
Ethical aspects concerning digital health studies
The most challenging ethical issue concerning public health is suitably balancing possible risks and harms to people and communities while protecting and promoting their health (29). This challenge also impacts infodemiology and infoveillance social media studies since their ultimate aim is supporting public health outcomes. In fact, principles-based ethics is internationally recognized as a coherent and justified set of moral issues for the field of biomedicine (30,31). More recently, high-impact systematic reviews used ethical principles to describe the best moral practices involving public health studies on social media, and, thus, supported the present perspective (14,32). Accordingly, we have presented the five principles of beneficence, nonmaleficence, autonomy, equity, and efficiency, highlighting their respective relevance to infodemic studies below.
Beneficence
Beneficence is the obligation of health providers to act for the benefit of people based on moral rules, such as charity, mercy, and kindness (33,34). Therefore, beneficence supports the prevention and control of conditions that cause harm to people (33). Regarding this pillar, infodemiology and infoveillance projects should be designed to promote population-level health improvements regarding specific conditions. In this sense, social media interventions must support the healthcare needs of the target population, helping to overcome the limitations of traditional epidemiological methods, such as by extrapolating data generated outside the public health systems, i.e., data that were not originated primarily for epidemiology goals (35). For instance, screening out misinformation promotes beneficence to communities because it facilitates the selection of trustworthy information and, thus, better decision-making concerning current health, social-political, and economic conditions. Furthermore, these strategies may provide advantages to people in different ways, including real-time monitoring of people's digital activity on specific issues and the endorsement of educational health policies grounded in users' behaviors (33,(36)(37)(38).
Nonmaleficence
Nonmaleficence is the responsibility attributed to professionals who do not inflict harm on individuals, resulting in the need to weigh the benefits against the burdens of public health outcomes (33). Indeed, carefully planned and people-centered health outcomes are essential to achieving community trust and developing significant health actions for everyone (39). In this way, the use of non-health data, the stigmatization of risk factors, and the violation of privacy may lead to the mistrust of public health intentions, thus undermining nonmaleficence principles (14). Hence, researchers of misinformation studies should clearly define actions to reduce the potential harms of data collection and analysis, such as adopting a data management plan and restricting their studies to only using publicly available social media data. Notably, previous infodemic-related investigations have presented significant findings using public social networking content (8,40).
Autonomy
Autonomy is the state or condition of individuals leading their life according to authentically personal reasons, values, and desires (41). As a result, this principle recognizes people's right to self-determination and represents the determinants proposed to minimize possible violations (14). Individuals should be allowed to exercise their capacity for self-determination; all people have an intrinsic and unconditional worth that influences their personal and moral choices (42). People often do not expect their data to be employed in public health surveillance, since there is no specification for health data reporting in user agreements, even though such agreements cover consent in legal terms. Nevertheless, some governments are actively implementing initiatives to give users autonomy over their data, such as the European Union General Data Protection Regulation (GDPR), which would allow public health authorities to directly request that people share their social media data when needed (35).
Consequently, anonymizing or aggregating data is fundamental for applying social media data in infodemic-related studies while respecting the users' privacy (14,43). Although the anonymization process sometimes may not be sufficient to protect the users' privacy in a social media environment (demonstrating the importance of using public data), it is imperative that researchers remove data and metadata that allow the identification of individuals (16). In parallel, it is possible to aggregate similar social networking publications and present them together, mitigating the identification of the original content. Moreover, users' authenticity is also essential to ensure a genuine narrative of findings, respecting their autonomy (16). On the other hand, the high prevalence of fake and bot profiles on social networks hampers the verification of users' identities. Specifically, health misinformation is frequently spread by these profiles, underscoring the importance of users' authenticity for infodemic studies. For example, 66% of known bots disclosed COVID-19 information and misinformation on Twitter during the pandemic (44). Interestingly, new user-authentication tools have emerged as an option to detect automated bots on social networks (45).
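As a concrete illustration of the anonymization and aggregation steps described above, the following Python sketch salts and hashes user identifiers and releases only aggregated topic-level counts; the field names and example records are invented for the example, not drawn from any real dataset.

```python
# Hypothetical sketch: anonymize user IDs with a salted hash and release only
# aggregated counts per topic, so no individual post or profile is exposed.
import hashlib
import secrets
from collections import Counter

SALT = secrets.token_hex(16)  # kept secret; hinders re-identification by rehashing

def anonymize(user_id: str) -> str:
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

posts = [  # invented example records
    {"user": "alice01", "topic": "vaccines", "flagged_misinfo": True},
    {"user": "bob-xyz", "topic": "vaccines", "flagged_misinfo": False},
    {"user": "alice01", "topic": "masking", "flagged_misinfo": True},
]

anonymized = [{**p, "user": anonymize(p["user"])} for p in posts]
by_topic = Counter(p["topic"] for p in anonymized if p["flagged_misinfo"])
print(dict(by_topic))  # aggregate output only, e.g. {'vaccines': 1, 'masking': 1}
```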
Equity
Equity is the absence of systematic disparities between groups with different underlying social advantage/disadvantage levels (46). In this regard, most definitions of health equity are based on ethical judgments and a commitment to social justice, requiring that a target population be afforded fair, equitable, and appropriate opportunities regarding public health interventions (33,47). Thus, it is essential to determine whether the short- and long-term benefits and burdens are fairly distributed between different socio-demographic groups (48). Although social media-grounded studies allow researchers and managers to access a huge volume of data and, thus, strategies that involve many users, it is noteworthy that a portion of the worldwide population still does not have access to home and mobile Internet. In this way, the extrapolation of infodemiology and infoveillance data concerning the infodemic to offline measures (e.g., developing health promotion policies and disclosing educational campaigns) is desirable for reaching communities without distinction, including those without access to the Internet. Further, planning social media studies that tackle health equity involves identifying and acting on the root causes of structural forms of oppression and also investigating health misinformation topics that impact the diverse layers of society differently (48).
Efficiency
Efficiency is fundamentally based on the ability to measure and assess the improvement of resources, i.e., this pillar is directly related to the cost-effectiveness of digital health systems (14). Certainly, grounding these measures in scientific evidence is necessary since researchers and public health agencies often have limited resources (49). Hence, implementing cost-effectiveness-oriented infodemic control systems requires (a) applying automated applications to manage and analyze data (with regular software maintenance to prevent the algorithms from becoming obsolete) and (b) designing and implementing misinformation tracking and feasibility studies. Additionally, public health managers should be aware of the continuous updating of these data and propose partnerships with social media companies to avoid the discontinuation of access (50). More importantly, this principle is particularly interesting when extrapolating these digital approaches to developing countries while promoting the democratization of healthcare. Table 1 summarizes the above-described ethical principles in the infodemic scenario.
Discussion
The five principles of beneficence, nonmaleficence, autonomy, equity, and efficiency can aid the decision-making process of public health authorities and researchers concerning ethics issues on social media in infodemic contexts. Notwithstanding, they should be harmoniously weighted and balanced to achieve effective digital strategies for different communities. Accordingly, the principles must be fulfilled as a prima facie obligation unless they conflict with each other in a specific instance (33). Although some of these principles share similar action points (e.g., preserving users' privacy in the nonmaleficence and autonomy dimensions), the misinformation scenario makes it difficult to employ these ethical points in the same way as for other infodemiology and infoveillance purposes. For instance, the promotion of equity vis-à-vis the viral spreading of misinformation is still a challenge for public health managers; however, their active engagement with the most prominent false or misleading content is necessary to formulate public policies and strategies for disadvantaged communities. To address this dilemma, the World Health Organization recently proposed a deliberation of the issues among a panel of experts, i.e., to discuss the ethical framework and tools for infodemic management (51). Meanwhile, the extrapolation of the previously described ethical issues can suffice as a complementary solution to the WHO's current agenda. The control of the negative impacts of online misinformation depends on platforms' cooperative actions with public health authorities, such as screening and removing false or misleading information based on the best scientific evidence (40,52). Then, companies need to be more transparent about how their algorithms are developed from users' activities, concomitantly demonstrating their efforts to prevent the spread of health misinformation. In parallel, health managers and policymakers need to discuss in depth the ethical and legal implications for potential propagators (users who spread misinformation) and facilitators (social media companies) to formulate regulatory principles that can address this phenomenon more effectively (53).
Simultaneously, it is necessary to clarify the limits of data privacy and freedom of speech in infodemic contexts that have the potential to generate a high humanitarian cost. Personal independence and freedom of speech are highly valued in Western societies and viewed as essential values of free and democratic nations. However, an unlimited interpretation of freedom of speech can cause harm to individuals, communities, and nations, e.g., by promoting drugs or herbal products known to be ineffective in treating a specific disease purely for profit (54,55). Moreover, users are typically concerned about sharing their private information for digital health purposes due to perceived implications for insurance coverage, medical care, and data security (56). However, strategies to counter the negative infodemic primarily use information publicly available on social media and only disclose information anonymously, still safeguarding autonomy.
People must be aware of the importance of sharing their social media information to support the development of strategies to control health misinformation. Thus, data literacy is an essential skill that could be developed during primary and secondary school education in both developing and developed countries. Likewise, other literacies are also necessary to support individuals in consuming trustworthy information on social media and to ensure equity between communities through the smoothing out of disparities, such as digital literacy, media literacy, and scientific literacy (57)(58)(59). Conversely, low and middle-income countries tend to suffer more prominently from the impacts of online misinformation, since the levels of these constructs are usually greater in high-income countries. As a result, the actions involving infodemic management demand more global initiatives. Regrettably, the lack of unified communication about health data between countries and international organizations amplifies the health inequities associated with the infodemic scenario. Notably, a significant role of global health governance (GHG) is to help countries achieve health equity through managing external threats, stronger international solidarity, and more inclusive guidelines and policies (60). GHG is defined as "the use of formal and informal institutions, rules, and processes by states, intergovernmental organizations, and non-state actors to deal with challenges to health that require cross-border collective action to address effectively" (61). Hence, effective responses to the infodemic require cooperation between states, social media companies, and global health governance to share data and regulate information. Specifically, although many countries have the autonomy and capacity to manage their own health data, ethical guidelines via GHG should orient and support the control of misinformation globally.

Table 1. Ethical principles and corresponding actions for infodemic-related studies on social media.

Beneficence. Definition: the obligation of health providers to act for the benefit of people based on moral rules, such as charity, mercy, and kindness. Actions: (2) Screening out of misinformation on social media, promoting better individual decision-making concerning current health, social-political, and economic conditions.

Nonmaleficence. Definition: responsibility attributed to professionals who do not inflict harm on individuals, resulting in the need to weigh the benefits against the burdens of public health outcomes. Actions: (1) Collecting only publicly available social media data. (2) Adopting a data management plan to orient the collection and analysis of data.

Autonomy. Definition: the state or condition of individuals leading their life according to authentically personal reasons, values, and desires. Actions: (1) Anonymizing social media data and metadata to preserve users' privacy. (2) Aggregating similar social media data to avoid the users' identification.

Equity. Definition: the absence of systematic health disparities between groups with different underlying social advantage/disadvantage levels. Equity requires that a target population be afforded equal opportunities regarding public health interventions, including fair distribution of the benefits. Actions: (1) Extrapolating infodemiology and infoveillance data for offline measures, such as proposing health promotion campaigns and disclosing educational campaigns. (2) Implementing digital health systems accessible for developing and developed countries, propitiating the democratization of misinformation control approaches.

Efficiency. Definition: measures and assesses the improvement of resources, relating directly to the cost-effectiveness of digital health systems. Actions: (1) Applying automated algorithms to manage and analyze social media data. (2) Developing digital systems and solutions based on misinformation-related characterization, tracking, and feasibility studies.
Conclusion
Considering the current lack of ethical guidelines for infodemiology and infoveillance research concerning the infodemic, the principles presented in this perspective considered the specificities of data acquisition, storage, analysis, and application to contribute to the design and development of health misinformation studies on social media. In light of this perspective, public health authorities, researchers, policymakers, and society should seriously discuss and consider a new ethical framework that covers the particularities of infodemic-related studies.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. | 2023-03-25T15:19:54.849Z | 2023-03-23T00:00:00.000 | {
"year": 2023,
"sha1": "b6fcfdf3cf8737419841d9ef9f803f77335dd391",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a018891b3046130e6d7ab69af3dc237d8e94d08a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254809923 | pes2o/s2orc | v3-fos-license | Effects of structure and volcanic stratigraphy on groundwater and surface water flow: Hat Creek basin, California, USA
Hydrogeologic systems in the southern Cascade Range in California (USA) develop in volcanic rocks where morphology, stratigraphy, extensional structures, and attendant basin geometry play a central role in groundwater flow paths, groundwater/surface-water interactions, and spring discharge locations. High-volume springs (greater than 3 m3/s) flow from basin-filling (<800 ka) volcanic rocks in the Hat Creek and Fall River tributaries and contribute approximately half of the average annual flow of the Pit River, the largest tributary to Shasta Lake. A hydrogeologic conceptual framework is constructed for the Hat Creek basin combining new geologic mapping, water-well lithologic logs, a database of active faults, LiDAR mapping of faults and volcanic landforms, streamflow measurements and airborne thermal infrared remote sensing of stream temperature. These data are used to integrate the geologic structure and the volcanic and volcaniclastic stratigraphy to create a three-dimensional interpretation of the hydrogeology in the basin. Two large streamflow gains from focused groundwater discharge near Big Spring and north of Sugarloaf Peak result from geologic barriers that restrict lateral groundwater flow and force water into Hat Creek. The inferred groundwater-flow barriers divide the aquifer system into at least three leaky compartments. The two downstream compartments lose streamflow in the upstream reaches (immediately downstream of the groundwater-flow barriers) and gain in downstream reaches with the greatest inflows immediately upstream of the barriers.
Introduction
The Cascade Range is a primary control of water resources in the Pacific Northwest, USA, dividing the wet western parts of Oregon, Washington, and northern California from the semi-arid eastern parts. Precipitation in the Cascade Range is one of the main sources of water for hydrologic systems on both sides of the mountain range (PRISM Climate Group 2015; Thornton et al. 2016). Groundwater basins within the permeable High Cascades, a relatively young volcanic province with respect to the Western Cascades, are important components of the hydrologic system from both ecological and water-resources perspectives (Gannett et al. 2001; Jefferson et al. 2006, 2010). Groundwater flow paths can take years to decades to connect recharge areas to discharge areas. The volcanic landscape of the northwestern USA hosts more than half of the high-volume springs (>3 m3/s) of the conterminous USA (Meinzer 1927). Much of the discharge of these springs originates from the volcanoes of the Cascade Range between Lassen Peak, California, and Mount Rainier, Washington (Meinzer 1927), where annual precipitation can exceed 250 cm/year (Thornton et al. 2016). Springs between Medicine Lake volcano and Lassen Peak (Fig. 1) contribute approximately half of the average annual flow (total flow of ~140 m3/s) to the Pit River, the largest tributary to Shasta Lake, California's largest surface-water reservoir (Meinzer 1927; Burns et al. 2017b). Whereas the Pit River originates to the east of the Shasta-Lassen Peak-Medicine Lake volcano study area (SLMSA, Fig. 1), most of the flow delivered to Shasta Lake accumulates from streams within the SLMSA (Meinzer 1927). As Fall River headwater springs have been the focus of previous hydrogeologic studies (Manga and Kirchner 2004; Davisson and Rose 2014; Burns et al. 2017b), the emphasis of this study is on the hydrogeology of the understudied Hat Creek basin. Two low-volume springs on Lassen Peak's northern flank serve as the Hat Creek headwaters, below which Hat Creek flows about 78 km northward. Additionally, three high-volume spring complexes (Big Spring, Rising River, and Crystal Lake) contribute to Hat Creek before it joins the Pit River (Figs. 1, 2 and 3).
In the SLMSA, groundwater flow paths connect volcanic uplands to streams and rivers through laterally extensive volcanic rocks (Rose et al. 1996), including the numerous springs feeding Hat Creek from young volcanic rocks (Rose et al. 1996;Davisson and Rose 1997;Fig. 2). In the northwestern USA, high-volume springs flow through and from laterally extensive volcanic units (Gannett et al. 2001). High-volume springs in these regions can serve as a stable year-round and drought-resistant water source, as perturbations from changing climate in the groundwater system lag surface-water changes due to long groundwater residence times (Gannett et al. 2001;Burns et al. 2017b). In northern California, average annual flow from these springs typically fluctuates less than 15% (Meinzer 1927).
Purpose and scope
The Northwest Volcanic Aquifer Study Area (NVASA, Curtis et al. 2020) project is a US Geological Survey (USGS) effort to understand and quantify regional water resources in the Pacific Northwest. This manuscript analyzes geologic controls on groundwater flow in the SLMSA (Figs. 1 and 2), the southwesternmost part of the NVASA. To date, an in-depth analysis of the relationship between structure, stratigraphy, and groundwater flow between Medicine Lake volcano and Lassen Peak has not been conducted. The study narrows in on the Hat Creek focus area (Fig. 1), one of the principal contributors to Shasta Lake, the largest reservoir in California (Figs. 1 and 2). The relationship between structural geology, volcanology, groundwater recharge, and stream hydrology exerts primary control on regional groundwater resources in aquifers hosted in primarily volcanic geologic settings. New geologic maps, topographic analysis, airborne thermal infrared (TIR) remote sensing data on stream temperature, and streamflow data are combined to gain insight into the surface-water and groundwater systems of the understudied Hat Creek basin, which sits within the Hat Creek focus area (Fig. 2). Together, the structure and stratigraphy, the three-dimensional (3D) conceptual model, and the locations of surface-water/groundwater interactions all help identify the potential structural or stratigraphic factors that influence streamflow gains and losses.
The Hat Creek basin is a major year-round, groundwater-fed contributor to the Pit River and Shasta Lake. However, the heterogeneity of volcanic deposits, sampled at very few wells in only a small part of the basin, makes study of this system challenging, and many of the common hydrogeology tools (such as potentiometric surface maps) are unavailable. Instead, the conceptual model of the groundwater basin is developed to be consistent with volcanic aquifer studies elsewhere (Lindholm 1996; Gingerich 1999; Gannett et al. 2001, 2007; Burns et al. 2011). The conceptual model is then constrained with available water-well data, geologic maps, and comparatively high-resolution streamflow and temperature surveys.
Background: translating volcanic geology to hydrogeology
The relationship between volcanogenic landforms and hydrogeology, coupled with the plate tectonic setting, precipitation patterns, and previous research, shapes the understanding of the relationship between the geology and hydrology of the SLMSA. Grouping volcanic units by lava-flow geochemistry, cooling history, geomorphic form, and age and (or) alteration (sections 'Lava-flow geochemistry, cooling, and geomorphic form' and 'Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form') allows for a broad classification of water-bearing units (section 'Hydrogeologic units'). This section, 'Background: translating volcanic geology to hydrogeology', and its subsections lay the conceptual groundwork for volcanic hydrogeology in the Cascade Range of northern California, leading into sections 'Geology and hydrology of the Hat Creek basin' and 'Methods of hydrogeologic analysis'.
Geographic setting
The SLMSA lies between Medicine Lake volcano to the north, Lassen Peak to the south, the Big Valley Mountains to the east, and Mount Shasta to the northwest (Fig. 2). The Pit River bisects the SLMSA and gains water from the Fall River to the north and Hat Creek and Burney Creek to the south before draining into Shasta Lake to the west (Fig. 2).

Fig. 1 a Location map with two white triangles representing Mount Rainier (north) and Lassen Peak (south), and b elevation map of the Shasta-Lassen Peak-Medicine Lake volcano study area (SLMSA) and the Hat Creek focus area (red box). Elevation mapped on slope-shaded relief includes labels (HCFZ: Hat Creek fault zone, RLFZ: Rocky Ledge fault zone) for topographic features, east-dipping and west-dipping faults, and streams and rivers. Legend footnote: a Modified from Wells and McCaffrey (2013)
Plate tectonic setting
The SLMSA spans the northwestern section of the Basin and Range Province, the southern end of the Cascadia subduction zone and related Cascades Volcanic Arc, and the northwestern edge of the Walker Lane Fault Zone (Fig. 3a; Blakely et al. 1997; Langenheim et al. 2016). Reorganization of the Pacific-North American plate boundary at about 30 Ma preceded Basin and Range extension (Atwater 1970; McQuarrie and Wernicke 2005; Wesnousky 2005a; Colgan et al. 2006). The Walker Lane Fault Zone, a NW-SE trending ~120-km-long belt of right-lateral shear, accommodates up to 20-25% of the movement across the Pacific-North America plate boundary (Blakely et al. 1997; McQuarrie and Wernicke 2005; Wesnousky 2005b; Lee et al. 2009). Cascades arc volcanism and Basin and Range low-volume extensional volcanism both use N-S and NW-SE striking faults as magma conduits between melt sources and the surface (Muffler et al. 2011; Muffler and Clynne 2015). Normal and dextral-oblique faults are responsible for regional crustal deformation; these normal fault-related basins and ranges reflect the dominant structural expression in the regional topography (Blakely et al. 1997; Unruh and Humphrey 2017).

Fig. 2 Average annual precipitation (1981, PRISM Climate Group 2015) mapped on a slope-shaded relief map of the Shasta-Lassen Peak-Medicine Lake volcano study area (SLMSA) with rivers and high-volume springs. The highest amounts of precipitation fall on the high elevations (Fig. 1b) near the volcanic-arc axis, including Lassen Peak to the south and Mount Shasta. East of the axis of the Cascade Range, higher precipitation occurs at the relatively higher elevation peaks such as Medicine Lake volcano. The SLMSA polygon is restricted to the region that provides groundwater to the high-volume springs. The green box identifies precipitation pixels at higher elevations than Big Spring. The yellow box identifies the precipitation pixels around Magee Volcano
Precipitation
The Cascade Range orographic effect causes a rapid decline in average annual precipitation with distance east from the active axis of the Cascade Range (Fig. 2). However, Medicine Lake Volcano, and some other high-elevation areas east of the active volcanic arc axis (Fig. 2), receive relatively high average-annual precipitation (>1,000 mm/year; Figs. 1 and 2; PRISM Climate Group 2015).
Previous work within the SLMSA
Most past hydrogeologic studies in the SLMSA focused on geochemically identifying potential groundwater-recharge locations (Rose et al. 1996; Davisson and Rose 1997, 2014), sources of groundwater discharge into streams (Davisson and Rose 2014), and heat and fluid flow in the Medicine Lake Volcano-Fall River system (Burns et al. 2017b). Isotopic data indicate that water sourced from Lassen Peak and some adjacent volcanic peaks contributes to the Hat Creek springs (Rose et al. 1996; Fig. 1). Burns et al. (2017b) built upon the foundational work of Rose et al. (1996), Davisson and Rose (1997, 2014), and Manga and Kirchner (2004) to study the relationship between climate, spring discharge, and temperature at Fall River springs (Fig. 3). Burns et al. (2017b) found that precipitation (in excess of 700 mm/year), recharge temperatures, and decadal-scale changing atmospheric temperatures affect spring temperatures, but that characteristics of the groundwater system, such as the vadose zone's ability to thermally insulate the aquifer, buffer spring temperature changes. Building on previous work (Rose et al. 1996; Davisson and Rose 1997; Manga and Kirchner 2004; Davisson and Rose 2014), Burns et al. (2017b) concluded that changes in spring temperature will lag changes in climate by tens to hundreds of years.
Previous work quantifies the general nature of groundwater/surface-water exchange in the SLMSA. Generally, both groundwater and surface-water flow from the topographically elevated, high-precipitation uplands toward the Pit River (Figs. 1 and 2). In the morphologically similar upper Deschutes River basin (an eastside Cascade Range drainage basin north of the SLMSA), highly permeable younger volcanic rocks form productive aquifers that often discharge at high-volume springs where young volcanic units onlap older less-permeable rocks (Gannett et al. 2001). In the Hat Creek basin, large volume springs are observed to discharge from similarly young volcanic rock.
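To illustrate the kind of groundwater/surface-water exchange calculation that such streamflow measurements support, the following sketch computes the net gain of a stream reach by simple mass balance; the discharge values are invented for illustration, and this is a generic method rather than the study's actual data or workflow.

```python
# Generic seepage-run mass balance: the net groundwater gain (+) or loss (-)
# over a stream reach is the downstream discharge minus the upstream discharge
# minus any measured tributary inflows. All numbers below are invented.
def reach_gain(q_upstream, q_downstream, tributaries=()):
    """Net groundwater exchange for a reach, in the same units as the inputs (m3/s)."""
    return q_downstream - q_upstream - sum(tributaries)

# Hypothetical reach bracketing a spring complex:
q_up, q_down = 4.2, 8.1          # m3/s, measured at the reach endpoints
tribs = [0.3]                    # m3/s, one gauged tributary
print(f"net gain: {reach_gain(q_up, q_down, tribs):.1f} m3/s")  # net gain: 3.6 m3/s
```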
Lava-flow geochemistry, cooling, and geomorphic form
In the SLMSA, a primarily volcanic region with occasional sedimentary interbeds, volcanic units can be differentiated by geochemistry and landform (Fig. 3c). Geochemical groups are (1) calc-alkaline rocks or (2) low-potassium olivine tholeiite rocks. Calc-alkaline magmas, linked to arc volcanism, have high viscosity (Manga 1997; Lyle 2000; Manga 2001; Harris 2013; Muffler and Clynne 2015; Clynne and Muffler 2017; Fig. 3) and form scoria cones, steep-sided lava cones and broad shield volcanos, all with high aspect ratios and limited lateral extents (Clynne and Muffler 2010; Muffler and Clynne 2015; Clynne and Muffler 2017). Low-potassium olivine tholeiite magmas, associated with crustal extension, have low viscosity and produce widespread low-relief, valley-filling sheet-flows from low, commonly inconspicuous vents.

Cooling forms many of the primary geomorphic features and textures found in calc-alkaline and low-potassium olivine tholeiite rocks. Textures found in lava flow tops and bottoms include interconnected vesicles and cracks and are generated by a range of processes, including cooled rubble pushed forward and overridden as the lava flows, boiling water in soil, or degassing of the lava flow (Manga 2001). Lava-flow interiors also display a range of cooling textures but are often dense, possibly with cooling joints (Grossenbacher and McDuffie 1995; Lyle 2000). Shields, scoria cones, and lava cones tend to have pervasive, randomly oriented fractures produced during and after emplacement (Pollard and Aydin 1988; Grossenbacher and McDuffie 1995; Kattenhorn and Schaefer 2008).
Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form
Morphological features, like lava-flow tops (Manga 2001), and age-related features, such as secondary mineralization (Jefferson et al. 2010; Burns et al. 2017a), influence groundwater flow in volcanic systems. Porous and permeable lava-flow tops, bottoms, and interflow zones (where one lava-flow top meets the overlying lava-flow bottom) often serve as the primary horizontal fluid-flow media in volcanic aquifers (Gannett et al. 2001; Manga 2001; Burns et al. 2012, 2016; Clynne and Muffler 2017). Extensive jointing can serve as a mechanism to explain vertical connectivity between volcanic aquifers (Gingerich 1999; Davisson and Rose 2014), although closed fractures (such as cooling joints under confining pressure), unconnected fractures, or fractures filled with alteration minerals often prevent vertical fluid flow (Burns et al. 2016). The amount of alteration minerals plugging groundwater flow paths increases over time, and temperatures >30-45 °C accelerate the rate of alteration (Burns et al. 2012, 2015, 2016, 2017a). As a result, permeability generally decreases with geologic age (Gannett et al. 2001; Jefferson et al. 2010; Burns et al. 2017a). The relationship between permeability and age of volcanic terranes determines river and stream drainage density (Jefferson et al. 2010) and spring density (Burns et al. 2017a). Young, high-elevation volcanic features may be permeable and allow rapid groundwater recharge from rain and snowmelt (Jefferson et al. 2010; Burns et al. 2017a).
The hydrologic implications of age and morphology of volcanic units serve to differentiate and characterize the basin-filling units in the Hat Creek focus area (Figs. 3 and 4). Geochemistry can determine the extent of a volcanic unit as either extensive or limited. The morphology, controlled by the geochemistry of the units, can determine where water can flow within them; that is, horizontally at flow tops, bottoms, and interflow zones and/or vertically through fractures. Alteration of volcanic glass to pore-filling clays increases with age and may limit fluid flow through volcanic media. However, the 124-ka age difference between the basin-forming unit and the oldest basin-filling unit (Table 1; Fig. 3) might not substantially affect the permeability of the rocks. Thus, rocks in the study area might not be old enough for extensive alteration to have taken place. Instead, basin-bounding structures, like the Hat Creek fault zone (HCFZ, Fig. 1), separate and differentiate younger basin-filling rocks from the adjacent older volcanic uplands.
Conceptual model of groundwater flow
In the conceptual model of groundwater flow for the Hat Creek basin, young, permeable, laterally connected, basin-filling basalt flows in the valley bottom accumulate water from the adjacent and southern uplands. These young basalt flows efficiently transmit water from Lassen Peak in the south to the Pit River in the north. Geologic structure in the valley controls both the space that the young basin-filling basalts occupy and the degree of tilting they undergo. The volcanic history is important because age and deposition control the way volcanic units interact with the groundwater system (sections 'Lava-flow geochemistry, cooling, and geomorphic form' and 'Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form'). Oxygen isotopes enable the identification of apparent recharge elevation. Together, this information enables the construction of a 3D hydrogeologic model of the Hat Creek basin consistent with the broader understanding of the hydrogeology of volcanogenic terranes.
Geologic structure
Faulting plays two key roles in the geologic evolution of the Hat Creek basin (Fig. 3) by (1) determining the shape of the basin and (2) controlling the source and extent of volcanic units deposited within the basin (Leeder and Gawthorpe 1987; Gawthorpe and Leeder 2000). From south to north, fault dips change from predominantly down-to-the-west (Anderson 1940; Muffler et al. 1994; Langenheim et al. 2016) to east dipping on the west side of the basin and west dipping on the east side (Austin 2013; Figs. 1 and 3). The HCFZ runs for 47 km along the eastern margin of the Hat Creek basin, has a maximum vertical displacement of 370 m (Anderson 1940; Muffler et al. 1994; Blakeslee and Kattenhorn 2013; Kattenhorn et al. 2016) and controls the basin's geometry (Langenheim et al. 2016).

Previous geologic work suggests two structural models for the Hat Creek basin: a full graben and a half graben. In the first model, the basin is envisioned as a full graben bounded by the HCFZ to the east and the Rocky Ledge fault zone (RLFZ) and associated faults to the west (Austin 2013; Fig. 1). In the second model, a half graben with most displacement on the HCFZ is proposed. Principal evidence for the second model includes the asymmetric geometry of the basin (Muffler et al. 1994), east-tilting volcanic and volcaniclastic units (Anderson 1940; Kattenhorn et al. 2016), and the eastward thickening of valley fill (Langenheim et al. 2016).
The structural model for extension changes from south to north in the Hat Creek basin. The HCFZ (maximum age of 924 ka; Clynne and Muffler 2010; Kattenhorn et al. 2016) dips to the west, provides the most relief in the focus area, and creates accommodation space by separating the valley bottom from adjacent uplifted fault blocks to the east (Fig. 3; Anderson 1940; Muffler et al. 1994; Blakeslee and Kattenhorn 2013; Kattenhorn et al. 2016). On the southwestern side of the Hat Creek focus area, an unnamed fault collocated with Big Spring also dips to the west (Fig. 3). To the northwest of Brown Butte, the east-dipping RLFZ accommodates strain on the western margin of the basin, while the HCFZ maintains strain accommodation on the east (Anderson 1940; Austin 2013; Fig. 1).
Volcanic history
Volcanic units in the Hat Creek focus area are basin-filling rock units (<800 ka) and are younger than the basin-forming, faulted older volcanic (>924 ka) rocks that underlie the valley bottom and bound the basin to the east and west (Table 1). Rocks in the Hat Creek basin are volcanic unless stated otherwise. The best-constrained basalt flow in the Hat Creek basin is the Hat Creek Basalt (HCB), which is 30 m thick at its eastern margin and thins to zero at its western margin.
Isotope data and apparent recharge elevation
Recharge elevations for springs feeding Hat Creek were estimated by Rose et al. (1996) and Davisson and Rose (1997) using δ¹⁸O isotopes and were used to identify volcanic peaks that serve as possible water sources. Relatively low δ¹⁸O values at Big Spring indicate a flow-weighted average elevation matching Crater Peak on Magee Volcano (Rose et al. 1996), although this elevation could also represent the average of a range of elevations from Lassen Peak to Badger Mountain (Fig. 1). Rising River springs and Crystal Lake springs discharge water matching a flow-weighted average elevation from the high-elevation, high-precipitation region near Lassen Peak (Figs. 1 and 2; Rose et al. 1996; Davisson and Rose 1997).
Methods of hydrogeologic analysis
The volcanic and structural histories, together with geologic mapping, were used to group geologic units into hydrogeologic units based on concepts from section 'Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form'. Basalt-flow-unit tops and bases were interpolated for a limited area south of Cinder Butte using interpreted well-log geologic contacts to estimate the location and dip of lava interflow zones (potential aquifers) in the subsurface (available well logs shown on Fig. 3). Late-summer streamflow measurements, paired with stream-temperature estimates from an airborne thermal infrared remote-sensing (TIR) survey, were used to estimate the locations and magnitudes of stream gains and losses from stream confluences and to and from the groundwater system. Overlaying gains and losses onto the hydrogeology allows identification of geologic controls on groundwater flow.
Thematic geologic maps
Geologic maps were constructed to separate volcanic units into basin-forming (>924 ka) and basin-filling (<800 ka) basalt flows and volcanic edifices using geologic mapping and age data (Gay and Aune 1958; Lyden et al. 1960; Muffler et al. 1994; Clynne and Muffler 2010; Downs et al. 2020), 1/3-arc-second LiDAR digital elevation models (DEM; US Geological Survey 2019b), and the US Geological Survey active faults database (US Geological Survey and California Geological Survey 2018). Once the geologic maps and the US Geological Survey Quaternary fault and fold database were integrated with the LiDAR data, a few new faults were identified based on aspect and slope angle: using the DEM (US Geological Survey 2019b), topographic scarps with slopes >25° were flagged as faults.
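The paper does not publish the scarp-screening code, but the slope test it describes is straightforward to reproduce. The sketch below (Python with NumPy; the function names, grid spacing, and synthetic DEM are illustrative assumptions, not taken from the study) computes cell slopes from a DEM and flags cells steeper than the 25° threshold as candidate fault scarps, which would then be checked against mapped geology.

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size_m: float) -> np.ndarray:
    """Slope of each DEM cell in degrees, from finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)   # elevation change per meter
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def candidate_fault_scarps(dem: np.ndarray, cell_size_m: float,
                           threshold_deg: float = 25.0) -> np.ndarray:
    """Boolean mask of cells whose slope exceeds the scarp threshold."""
    return slope_degrees(dem, cell_size_m) > threshold_deg

# Illustrative use on a synthetic 10-m-resolution DEM tile
rng = np.random.default_rng(0)
dem = np.cumsum(rng.normal(0.0, 0.5, (200, 200)), axis=0)  # rough random surface
mask = candidate_fault_scarps(dem, cell_size_m=10.0)
print(f"{mask.mean():.1%} of cells flagged as possible scarps")
```

In practice, the flagged cells would be intersected with the fault and fold database and the mapped unit contacts before a scarp is accepted as a new fault.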
Hydrogeologic units
The depositional volcanic-rock units were categorized into hydrogeologic units based on composition, morphology, and age/degree of faulting. The subdivisions are basin-filling units (<800 ka), consisting of both volcanoes and lava flows, and the faulted, basin-forming (>924 ka) volcanic unit (Older Volcanic units, OV). These age distinctions were designed to differentiate units younger than the basalt of Twin Bridges (BTB; Tables 1 and 2) from those older than the initiation of faulting on the HCFZ. Young volcanic rocks will henceforth be referred to as 'basin-filling' and are further subdivided into edifice aquifers and basalt-flow aquifers (Fig. 4). The conceptualized basalt-flow aquifers have the potential to form laterally extensive aquifers (Fig. 5). The morphology of lava flows in the Hat Creek basin is highly variable, but can be conceptualized as having a dense (low porosity and permeability) flow interior and porous and permeable flow tops and bottoms that cooled rapidly (see sections 'Lava-flow geochemistry, cooling, and geomorphic form' and 'Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form'). Permeability within interflow zones (a lava-flow top, possibly overlain with a sedimentary interbed, and the overlying lava-flow bottom) can result from a wide range of processes during lava flow deposition (see the same sections). For Columbia River basalts, permeable thickness is estimated to be 1/10 of total thickness on average (Burns et al. 2011), but insufficient data exist to give a typical ratio for Hat Creek area valley-filling basalts. Faults can act as barriers to lateral flow when permeable interflow zones become juxtaposed with low-permeability flow interiors, and fault permeability (dominated by previously unaltered flow interiors) decreases over time as fault gouge alters to clays.
In the Hat Creek basin, the maximum exposed thickness of the basin-filling BTB unit is around 30 m (at the fault at Big Spring, Fig. 3), though most of the thin flows are tens of centimeters to a few meters thick (Clynne and Muffler 2010); the maximum exposed thickness of the Hat Creek Basalt is at least 30 m on the east side of the valley, though it could be thicker. The greatest amount of vertical displacement on the Hat Creek fault zone is 370 m, though this represents an underestimate of vertical movement on the fault, as younger lava flows partially cover older scarps (Anderson 1940; Muffler et al. 1994; Blakeslee and Kattenhorn 2013; Kattenhorn et al. 2016). Because of this large displacement, even though the OV unit can transmit water (e.g., Lost Creek headwater spring, Fig. 3), it is treated as a separate hydrogeologic unit, particularly because Lost Creek headwater spring discharges above the Hat Creek valley bottom, implying a separate, higher-elevation aquifer with markedly different hydraulic head.
Well log analysis
Water-well logs (Fig. 3; Marcelli and Peterson 2022) are used to estimate the dip and thickness of basin-filling basalts. Identifying points at potential stratigraphic contacts allows construction of contact-trend surfaces that estimate the locations of the top and bottom of each unit (i.e., potential aquifers). The elevation of each stratigraphic contact is estimated from water-well logs (Marcelli and Peterson 2022), mapped surficial geology, and the DEM (US Geological Survey 2019b). Only the general dip directions and dip angles are used herein to develop the conceptual hydrogeologic framework. To achieve a smooth surface, each contact-trend surface was estimated using LOESS (Cleveland 1979); a minimal sketch of such a local fit is given below. Additionally, the mapped extent of the exposed parts of the HCB and the BTB was used in conjunction with stratigraphic contacts found in well logs to generate estimates for HCB top and BTB top.

Seepage-run streamflow measurements along Hat Creek are available from 1912, 1921, 1922, 1928, 1988, 2002, 2015, and 2016; such measurements are made in late summer, when precipitation and surface runoff are lowest. Though late-summer evapotranspiration is at its highest, for the analyses herein, evapotranspiration and precipitation are assumed to be negligible compared to other inflows and outflows. This enables the assumption that all measured gains and losses from the surface-water system come from interaction with groundwater, tributaries, or dam operations (including diversions and return flow).
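The study cites Cleveland's (1979) LOESS for smoothing the contact-trend surfaces but does not give its fitting parameters. A minimal two-dimensional version, assuming a local first-degree (planar) fit with tricube weights and a neighborhood fraction chosen here for illustration, could look like the following Python sketch; the well coordinates, contact elevations, and `frac` value are all placeholders.

```python
import numpy as np

def loess_surface(xy, z, grid_xy, frac=0.5):
    """Locally weighted planar regression (LOESS; Cleveland 1979) of contact
    elevations z at well locations xy, evaluated at each point in grid_xy."""
    xy, z, grid_xy = map(np.asarray, (xy, z, grid_xy))
    k = max(int(frac * len(xy)), 3)               # neighbors in each local fit
    z_hat = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        d = np.linalg.norm(xy - g, axis=1)
        idx = np.argsort(d)[:k]
        w = (1 - (d[idx] / (d[idx].max() + 1e-9)) ** 3) ** 3  # tricube weights
        A = np.column_stack([np.ones(k), xy[idx]])            # local plane: 1, x, y
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], z[idx] * sw, rcond=None)
        z_hat[i] = np.array([1.0, *g]) @ beta
    return z_hat
```

The fitted coefficients of each local plane also yield the local gradient of the surface, which is the quantity behind the dip and dip-direction estimates of Table 3.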
Airborne thermal infrared remote sensing of stream temperature
Spatial variation of water temperature in rivers is influenced by atmospheric heat exchange, topography and geographic setting (e.g., shade and albedo), streamflow (thermal inertia), hyporheic exchange, tributary inflow, and exchange of water with the aquifer system. Determining the relative influences of these variables is challenging because more than one process may be occurring along any given stream reach (Caissie 2006). TIR remote sensing can detect the effects of groundwater discharge on water temperature along the stream length. Inputs of relatively cool water create both discontinuities (e.g., cooling over a short reach) and anomalous departures from the expected longitudinal warming pattern in the downstream direction (e.g., cooling due to diffuse groundwater discharge; Fullerton et al. 2015). When thermal anomalies cannot be explained by surficial atmospheric, topographic, or advective drivers (e.g., tributary inflows), the longitudinal profile of stream temperature provides an indirect method to identify the locations and relative magnitude of groundwater/surface-water exchange (Dugdale et al. 2015). Several studies have used TIR data to identify hydromorphologic and landscape features, such as channel curvature and confinement, slope, and valley morphology, associated with cold-water areas. Geologic setting and associated structure and lithology can also be important drivers of longitudinal variation in stream temperature. Detailed high-resolution geologic maps of rivers with TIR data are needed to link the vertical dimension of hydrogeology with thermal heterogeneity in riverscapes (Fullerton et al. 2015; O'Sullivan et al. 2020).
In 2018, airborne TIR surveys (using the methodology of Torgersen et al. 2001) were conducted in the afternoon (4:37-7:00 pm) and morning (7:05-9:22 am) of September 24 and 25 to measure the water-surface temperature of the lower 65 km of Hat Creek. In rivers with no groundwater interaction, radiant temperatures should rise steadily through the day and with distance from the source (Fullerton et al. 2015). Comparing radiant temperatures from the morning and afternoon allows spatial characterization of similarly anomalous low or high temperatures along reaches of a stream, identifying locations of groundwater/surface-water interaction. Detailed descriptions of TIR methods, imagery, and data are provided by Curtis et al. (2021). The morning and afternoon surveys were conducted using a helicopter-mounted forward-looking infrared (FLIR) SC6000 LWIR sensor (FLIR Systems, Inc.) flown 300-400 m above Hat Creek. The sensor measured wavelengths of 8-9.2 μm. Twelve digital temperature loggers (HOBO Water Temperature Pro v2 Data Logger, U22-001; Onset, Inc.) distributed within the flowing stream along Hat Creek were used to calibrate radiant temperatures in the thermal imagery. TIR temperatures were within 0.5 °C of in-stream temperatures (Curtis et al. 2021). Typically, channel width exceeds 4 m in the surveyed reaches, so the channel was adequately sampled by the georeferenced TIR image mosaics, which have a spatial resolution of 0.5 m × 0.5 m. The thalweg path was digitized manually and did not vary at the channel-unit scale (e.g., riffles, ~10 m). To remove numerical artifacts in estimated stream temperature, the median temperature along each 10-m length was used. Where the median temperature picks did not correspond with the stream channel, they were removed. A gap of 400 m exists upstream of the confluence with Rising River, where low-flow conditions and a narrow channel made choosing a thalweg from the TIR data difficult (Fig. 3). Median temperatures were plotted longitudinally with respect to the distance upstream ('river km') from the mouth of Hat Creek where it enters the Pit River.
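The 10-m median-filtering step is simple to reproduce. The sketch below (Python/NumPy; the variable names and synthetic inputs are illustrative, not the study's actual processing code) bins pixel temperatures sampled along the digitized thalweg into consecutive 10-m segments and takes the median of each, which suppresses single-pixel artifacts in the mosaics.

```python
import numpy as np

def median_by_segment(dist_m, temp_c, seg_m=10.0):
    """Median radiant temperature in consecutive seg_m-long thalweg segments."""
    dist_m, temp_c = np.asarray(dist_m), np.asarray(temp_c)
    edges = np.arange(dist_m.min(), dist_m.max() + seg_m, seg_m)
    bins = np.digitize(dist_m, edges)
    used = np.unique(bins)
    centers = edges[used - 1] + seg_m / 2.0
    medians = np.array([np.median(temp_c[bins == b]) for b in used])
    return centers, medians

# Illustrative thalweg samples: 0.5-m pixels over a 100-m reach
d = np.arange(0, 100, 0.5)
t = 14.0 + 0.01 * d + np.random.default_rng(1).normal(0, 0.3, d.size)
centers, meds = median_by_segment(d, t)
print(np.round(meds, 2))
```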
Results of hydrogeologic analysis
The mapped depositional units, which form the basis for the hydrogeologic units, and the stratigraphic trends calculated from geologic contacts found in well logs are used to create a 3D conceptual representation of the focus-area groundwater system (Fig. 5). Measured streamflow, stream temperature, and flow from springs can be compared to average creek conditions to analyze surface-water/groundwater interactions.
Structure and stratigraphy
Depositional horizons identified in well logs generally dip, and units thicken, to the north or northeast (Fig. 5). These trends reflect both the geometry of the basin in the south (an east-dipping half graben where the basin is bounded by west-dipping faults; Fig. 3, section 'Geologic structure') and the general topographic trend (elevation decreases with distance from Lassen Peak, Fig. 1). The maximum exposed thickness of the HCB on the eastern side of the valley is 30 m, and it thins to zero at the western margin of the valley; but because water-well logs are sparse (Fig. 3), much of the subsurface geometry of the basalt is unconstrained. The HCB forms much of the valley floor of the Hat Creek basin. Surfaces created by LOESS over the water-well-log contacts (Fig. 3; section 'Well log analysis') vary in space, allowing estimation of variations in dip direction and dip (Table 3). These surfaces are used to conceptualize the geometry of the Hat Creek basin for the conceptual model (section 'Hydrogeologic units'). The Hat Creek basin has components of both a strong topographic dip (north) and a structural dip (east/northeast), with buried basalt flows thickening, and contacts generally dipping, in these directions.
Both HCB top and HCB bottom have north to north-northeast dip directions and similar dip angles (Table 3), and the unit thickens slightly to the northeast. The upper trend surface for the basalt of Twin Bridges has dip directions clustered between northeast and east and dips centered on 2.2°, but ranging between 1.4° and 12° (Table 3). Both HCB top and HCB bottom dip to the north rather than to the east-northeast, as BTB top does. HCB bottom's maximum dip (1.48°) and BTB top's minimum dip (1.4°) differ by only 0.08°, implying that they could stack sequentially atop one another. However, the difference between BTB top's maximum dip (12°) and maximum dip direction (80°) and HCB bottom's maximum dip (1.48°) and maximum dip direction (10°) implies the existence of a unit between them. One reason maximum dips and dip directions may differ between BTB top and the two HCB contacts is a series of discontinuous sedimentary interbeds; however, there is no definitive evidence for or against the presence of a unit between HCB bottom and BTB top. The west-striking, north-dipping slopes of HCB top and HCB bottom both likely reflect the topographic dip driving the HCB's northward migration and emplacement toward the Pit River (Table 3). The BTB trend surface dips to the northeast because it is an older unit that has likely been tilted relatively more than the younger HCB; its surface reflects both the topographic trend (north dipping) and the structural trend (east dipping).
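The dip angles and dip directions in Table 3 follow directly from the gradient of each fitted trend surface. As a hedged illustration (Python; the function and the example numbers are ours, not the study's), dip is the arctangent of the gradient magnitude and dip direction is the downhill azimuth:

```python
import numpy as np

def dip_and_direction(dzdx_east, dzdy_north):
    """Dip angle (degrees) and dip direction (degrees clockwise from north)
    of a planar surface with gradients dz/dx (east) and dz/dy (north)."""
    dip = np.degrees(np.arctan(np.hypot(dzdx_east, dzdy_north)))
    # The gradient points uphill; the dip direction is the downhill azimuth.
    direction = np.degrees(np.arctan2(-dzdx_east, -dzdy_north)) % 360.0
    return dip, direction

# A surface dropping 2.2 degrees toward the northeast (azimuth 45 degrees)
g = np.tan(np.radians(2.2)) / np.sqrt(2.0)
print(dip_and_direction(-g, -g))   # -> (approximately 2.2, 45.0)
```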
3D conceptual model for the Hat Creek basin
Combining the thematic geologic maps (Fig. 3) with the conceptual understanding of the relationship between edifice and basalt-flow aquifers (Fig. 4) and the well-log (Fig. 3) trend surfaces (Table 3) forms the basis of a 3D conceptual diagram of the hydrogeology of the focus area (Fig. 5). The conceptual hydrogeologic framework is approximate, being based on geologic maps, limited water-well-log data (section 'Well log analysis'), knowledge of regional structural geology (section 'Geologic structure') and volcanology (section 'Volcanic history'), and the relationships between volcanism, faulting, age, and groundwater flow (sections 'Lava-flow geochemistry, cooling, and geomorphic form', 'Hydrogeologic implications of lava-flow geochemistry, cooling, and geomorphic form' and 'Hydrogeologic units'). Basalt-flow aquifers, at the contacts between individual basalt flows (Fig. 5), form laterally extensive aquifers that dip to the east-northeast (Table 3; Langenheim et al. 2016). Up-dip margins on the west create potential connections between the aquifers and Hat Creek (Fig. 5). Vertical hydraulic connection potentially occurs at depositional margins, at contacts with edifice-aquifer units, and at locations where stream incisions cut through confining units. At least two basalts fill the valley (Table 3), but the four hypothetical units depicted in Fig. 5 conceptually represent a stack of basalt layers that interact hydrologically with each other. Figure 6b depicts a conceptual cross section from the inset Lost Creek canyon (incised by Lost Creek, flowing from a spring within the OV) on the east to the western margin of the Hat Creek basin. On the western margin of the basin just north of SL, Hat Creek flows near the basalt-flow aquifer-OV contact, creating a potential pathway for surface-water/groundwater exchange with multiple basalt-flow aquifers where lava flows are thinnest and closest to exposure at the surface (Fig. 6b). Another conceptual cross section (Fig. 6a) depicts the flow of the creek along the western margin of the HCB near the contact with the Sugarloaf edifice aquifer. The exact architecture of the edifice/basalt-flow contacts is unknown, but Fig. 6 conceptually demonstrates interfingering between the edifice and basalt-flow units. In addition to hydraulic connection at lava-flow margins, incision by streams through thin parts of the basalts can provide connection with the aquifer system (Fig. 6a). High-volume springs discharge at both structural and depositional boundaries (Fig. 5), including from the upthrown side of faults (e.g., Lost Creek headwater spring), at potential fault barriers (e.g., Big Spring), and at the termini of volcanic features (e.g., Rising River spring at the distal end of the HCB and margin of the Cinder Butte edifice).
Streamflow and stream temperature: locations of groundwater/surface-water interactions
Combining streamflow and temperature measurements allows identification of an alternating pattern of stream gains from springs and losses from stream leakage along Hat Creek (Fig. 7). The late-summer 2019 seepage-run measurement locations were selected after using the 2018 TIR survey data to identify gaining stream reaches. Synoptic seepage-run data from 2019 and previous years illustrate the general pattern of alternating gains and losses (dashed line on Fig. 7). TIR data allow identification of the short reaches where most stream gains occur (gray bands on Fig. 7).
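A seepage-run analysis amounts to differencing discharge between successive measurement sites ordered downstream; positive differences are reach gains and negative differences are losses. The sketch below (Python; the discharge values are illustrative numbers chosen only to echo the Big Spring pattern described in the text, not measured data) shows the computation.

```python
import numpy as np

def reach_gains(river_km, discharge_m3s):
    """Streamflow gain (+) or loss (-) between successive seepage-run sites,
    returned as (upstream km, downstream km, change in m^3/s) tuples."""
    order = np.argsort(river_km)[::-1]            # sort upstream -> downstream
    km = np.asarray(river_km, float)[order]
    q = np.asarray(discharge_m3s, float)[order]
    return list(zip(km[:-1], km[1:], np.round(np.diff(q), 2)))

# Illustrative: ~0.7 m3/s above Big Spring jumping to ~4.9 m3/s below it,
# then slowly losing downstream.
print(reach_gains([64, 62, 61, 55, 48], [0.7, 0.7, 4.9, 3.8, 3.0]))
```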
Prior seepage-run and streamflow measurements generally match the 2019 data, indicating that the pattern is robust, with modest variation over decades (Fig. 7). The range of late-season streamflow measured at the Hat Creek streamgage (Fig. 7) shows that the 2019 flow at that location is less than the median of the 78 years of data, but well within the decadal variation captured by the 10th and 90th percentiles. Near river km 14, low streamflow and narrow stream width precluded estimating temperature from the TIR imagery, resulting in the gap in median temperature data in Fig. 7; this data gap extends downstream to the confluence with the spring-fed Rising River.

Figure 6 caption (cross sections located in Fig. 5): Each cross section depicts a different interpretation of the relationship between the geology and the hydrogeology of the region. The red star, LS, identifies Lost Creek headwater spring; orange lines depict conceptual groundwater-flow directions. (a) Cross section a-a′, depicting conceptual groundwater flow in the south of the study area, crossing Sugarloaf Peak; the potentiometric surface (dashed line with triangle) lies above Hat Creek, driving groundwater flow into the creek. (b) Hat Creek lies above the aquifer and loses to it through the exposed contact; the aquifer behaves in an unconfined manner near the western margin of the valley and a confined manner to the east, by Lost Creek, which neither gains from nor loses to the groundwater system. (c) Hat Creek flows in the middle of the valley, and though the potentiometric surface of the uppermost aquifer lies below the creek, no pathway connects the creek to the aquifer, so Hat Creek neither gains nor loses; the relatively low potentiometric surface beneath Lost Creek drives water from the creek down pathways provided by joints, cracks and fractures, so here Lost Creek loses to the groundwater system.
Discussion
Conceptual cross sections of geology and creek elevation (Fig. 8b), combined with streamflow and stream-temperature measurements (Fig. 8a), provide a refined understanding of the hydrogeology of the Hat Creek basin and spring-flow estimates at geologic features (Fig. 8a). Within the Hat Creek basin study area, the basin-filling basalt-flow aquifers are likely the dominant aquifers exchanging water with Hat Creek, but some spring sources may be adjacent OV aquifers. As with Hat Creek at the surface, the dominant groundwater flow direction is from Lassen Peak in the south toward the Pit River in the north; but locally, groundwater likely flows through highly heterogeneous flow paths from the adjacent horsts (part of unit OV) into the Hat Creek basin-bottom aquifers.
Geologic evidence for groundwater-flow barriers
Changes in the permeability or continuity of an aquifer that impede groundwater flow and result in groundwater discharge are called groundwater-flow barriers. Faults and permeability juxtapositions between depositional units are two examples of groundwater-flow barriers, but under this general definition, depositional pinch-outs where basalts onlap high-elevation, older, less-permeable rocks can also form barriers to groundwater flow. Whereas springs do collocate with faults (Keegan-Treloar et al. 2022), the exact mechanisms of fault controls on springs are poorly understood; they can relate to fault offsets, bends, and volcanism potentially associated with feeder dikes within faults. The fault that coincides with Big Spring (Figs. 3 and 8b) is an example of a structural groundwater-flow barrier that likely juxtaposes the high- and low-permeability sections of lava flows. Controls on the groundwater-flow barrier on the north side of Sugarloaf Peak (inset box in Fig. 8b), in the lowest reach of the Sugarloaf compartment, are less obvious. Two alternative models for the north-side springs are proposed (Fig. 9). In one explanation, the basalt-flow aquifer onlaps the OV unit and the basalt-flow aquifer on the northeast of the volcano, disrupting the lateral continuity of the basalt-flow aquifers and allowing groundwater to leak from the aquifer margin (Fig. 9a). Alternatively (Fig. 9b), because the basalt-flow aquifer may be continuous to the east (Fig. 3), a fault (as evidenced by the linear chain of cinder cones to the south and north of Sugarloaf Peak; Fig. 3) may still act as a barrier, making the collocation of the springs with OV part of the same story of fault offset.

Figure caption (longitudinal profile of Hat Creek, panels a and b): (a) Discharge and temperature measurements along the longitudinal profile of Hat Creek are used to refine the streamflow profile (solid black line); Hat Creek flow accumulates from the watershed above, starting with low-volume springs near Lassen Peak (Fig. 2). Values from 2019 with no visible red whiskers have errors smaller than the width of the plotted point; spring complexes are labeled; streamflow is measured on the right axis and temperature on the left. Legend notes: error categories are 5% (fair) or 8% (poor) of the measured value; the streamgage record spans 1928-1930, 1960-1993, and 2016-2019, with whiskers denoting the 10th and 90th percentiles; smoothing follows Cleveland (1979). (b) Hat Creek longitudinal profile depicting land-surface elevation, hydrogeologic units, and groundwater-flow paths. Two conceptual hydrogeologic units are present: the OV unit (older volcanics, gray) and the basalt-flow aquifer unit (green). Although the basalt-flow hydrogeologic unit is conceptualized as multiple thin aquifers at interflow zones separated by dense, impermeable lava-flow interiors, for simplicity only one basalt-flow aquifer is depicted. The potentiometric surface of the uppermost aquifer is shown conceptually at land surface for gaining reaches and below it for losing reaches. Groundwater-flow barriers result in step changes in potentiometric surfaces, consistent with gaining and losing reaches. The elevations demarcated on the left apply only to the longitudinal elevation profile, not to the subsurface, which is vertically exaggerated to demonstrate surface-water/groundwater interactions. Potential groundwater-flow barriers are marked as either a vertical red line (the fault at Big Spring) or dashed green. Recharge-elevation note: Davisson and Rose (1997).
Compartmentalization of the aquifer system
In the Hat Creek basin, groundwater moves primarily from south to north, much as Hat Creek does; however, the groundwater-flow paths in the Hat Creek basin are compartmentalized. The groundwater-flow barriers described in section 'Geologic evidence for groundwater-flow barriers' divide the focus area into leaky compartments. Within each compartment, stream gains and losses can be attributed to the existence of pathways to the groundwater system and to the potential created by the relative relationship between the potentiometric surface and stream stage. Figure 8b illustrates how barriers can create conditions for high potentiometric surfaces upstream of barriers (potential for springs), followed by low potentiometric surfaces downstream (potential for stream infiltration). Groundwater discharge zones define the downstream boundary of each compartment, whereas losing stream reaches define the upstream boundary of the next compartment downstream. In the upper two compartments, this gain/loss pattern coincides with where Hat Creek runs across (W-E/E-W) known or inferred faults and the general structural grain (N-S/NW-SE) of the Hat Creek basin (Fig. 3). Pathways for stream loss generally coincide with where Hat Creek flows on the western margin of the basin and where lava flows are thinnest and closest to exposure at the surface (Figs. 6b and 8). Although fault damage zones are sometimes postulated to serve as vertical flow conduits (Caine et al. 1996), if the faults here were conduits rather than barriers, head would not drop downstream of the fault and the steadily losing reaches observed downstream of the springs would not occur; a head drop from above land surface to below land surface at the fault is required to explain large springs immediately upstream of long losing reaches. Transitions from gaining to losing reaches at known or inferred groundwater barriers are used to subdivide the Hat Creek basin into three leaky hydrogeologic compartments (Fig. 8b). More compartments and barriers may exist within the valley bottom to the east of Hat Creek, and compartment shape is poorly understood, but this manuscript focuses on the three identified compartments. Hereafter, the Big Spring compartment lies upstream of river km 61, the Sugarloaf compartment lies between river kms 61 and 42, and the Rising River compartment occurs downstream of river km 42.
Big Spring compartment
The focus area and all associated plots consider only the lower ~64 km of Hat Creek from the mouth, where TIR data are available, even though the headwaters of Hat Creek lie ~78 km south of the Pit River. Springs and the resulting Hat Creek flow are comparatively low in volume upstream of Big Spring (Fig. 8a; Rose et al. 1996): Hat Creek flow is <1 m³/s at river km 64, near where the TIR data begin. The Big Spring fault is a groundwater-flow barrier and forces ~4.2 m³/s of groundwater into Hat Creek where the creek travels northeast across the NW-SE-trending fault (Figs. 2 and 8a). Uniform and relatively low streamflow (~0.7 m³/s) upstream of the Big Spring complex results in strong atmospheric heating and cooling and less buffering of diurnal temperature fluctuations (Fig. 8a), as lower volumes of water heat and cool faster than higher volumes (Caissie 2006). Between river kms 62 and 61, just upstream of the fault at Big Spring, streamflow increases by over 500%, resulting in cooling of both morning and afternoon stream temperatures over a ~1-km reach (Fig. 8).
Spring chemistry at Big Spring suggests multiple possible recharge areas (Rose et al. 1996), matching high-elevation sources including Table Mountain, Badger Mountain, Crater Peak on Magee Volcano, and Lassen Peak (Fig. 1; Rose et al. 1996). Cell sizes for the recharge data (PRISM Climate Group 2015) are around 4 km. Summing average annual precipitation over these candidate recharge areas (Fig. 2) gives 11.3 m³/s of water. This 11.3 m³/s supplies both evapotranspiration (ET) and recharge to the aquifers that may feed Big Spring (PRISM Climate Group 2015; Fig. 2), and it exceeds the measured 4.2 m³/s spring flow, leaving 7.1 m³/s for ET and groundwater flow past the spring. Recharge in the volcanic upper Deschutes River watershed is estimated to be up to 75% of precipitation (i.e., 25% ET), indicating that the proposed source of spring flow at Big Spring (Rose et al. 1996) is viable. Average annual precipitation falling on Magee Volcano (yellow boxes, Fig. 2) would add only a further 2.7 m³/s; Magee Volcano therefore cannot be the sole source of the water at Big Spring, and because the other candidate areas alone exceed the measured spring flow, Magee Volcano water is not strictly necessary to meet it (though it may be a contributor). The Big Spring compartment is likely leaky, and there might be alternate groundwater-flow paths from Lassen Peak around the compartment. This manuscript postulates that the apparent recharge elevation inferred from spring-water geochemistry likely reflects a range of elevations (Lassen Peak to Turner Mountain and its surroundings, Fig. 2) rather than one edifice (Crater Peak atop Magee Volcano, Fig. 2).
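The water-balance arithmetic above is easy to verify. The sketch below (Python) converts mean annual precipitation over a recharge area into an equivalent steady discharge; the area and precipitation rate are assumed values chosen only to reproduce a figure near the 11.3 m³/s quoted in the text, since the paper does not tabulate them.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600.0

def precip_discharge_m3s(area_km2, precip_m_per_yr, recharge_fraction=1.0):
    """Mean annual precipitation over a recharge area expressed as a steady
    discharge (m^3/s), optionally reduced by an evapotranspiration share."""
    volume_m3_per_yr = area_km2 * 1.0e6 * precip_m_per_yr * recharge_fraction
    return volume_m3_per_yr / SECONDS_PER_YEAR

# Assumed area and precipitation chosen only to illustrate the balance:
q_in = precip_discharge_m3s(area_km2=280.0, precip_m_per_yr=1.27)
print(f"precipitation inflow  ~ {q_in:.1f} m3/s")        # ~11.3
print(f"left after Big Spring ~ {q_in - 4.2:.1f} m3/s")  # ~7.1 for ET + underflow
```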
Sugarloaf compartment
The Sugarloaf compartment extends from the fault at Big Spring (ca. river km 61) to the groundwater-flow barrier north of Sugarloaf Peak (ca. river km 42; Fig. 8). Within this compartment, Hat Creek loses streamflow over the upstream two-thirds and gains streamflow over the downstream third (Fig. 8a). Again, morning and afternoon stream temperatures change markedly over short reaches upstream of river km 48, suggesting localized groundwater discharge (Fig. 8). Between river kms 59 and 48, Hat Creek flows near or over a basalt-flow-edifice aquifer contact, and this unit pinch-out could result in surface-water loss (Figs. 5 and 8). Modest downstream temperature gains between river kms 61 and 48 in the afternoon likely represent stream heating due to atmospheric exchange (Fig. 8a). Groundwater discharge could be from two different sources in the Sugarloaf compartment (Fig. 8): one warmer, potentially deeper flow path from Lassen Peak, heating both morning and afternoon stream temperatures near river km 48, and a second, colder source at river km 43 (Fig. 8a).
In the Sugarloaf compartment, streamflow losses upstream approximately balance downstream gains (Fig. 8). Possibly, stream losses upstream are regained downstream with little groundwater required from outside the compartment. Excess groundwater flow from nearby groundwater-recharge sources, including Crater Peak atop Magee Volcano, Table Mountain, and Badger Mountain (Rose et al. 1996; Fig. 2), might leak through or past the downstream groundwater-flow barrier into the next compartment.
Rising River compartment
The Rising River compartment begins at the groundwater-flow barrier associated with Sugarloaf Peak (river km 42) and ends at the Pit River (river km 0). Groundwater mostly discharges near the northern terminus of Cinder Butte (Fig. 3; Tables 1 and 2) and the HCB, where topography drops as Hat Creek approaches the Pit River (Figs. 3 and 8b). The Rising River compartment loses streamflow in its upstream sections and gains in its downstream sections. North of Cassel, two sequential run-of-the-river dams divert water through turbines or allow the water to bypass. The local gage (US Geological Survey station 11358700) monitors only one diversion, so the gage rarely, if ever, measures total streamflow (i.e., the total water flowing past the dam through the diversion and bypass). From September 8-19, 2019, water diverted from Hat Creek for the upper dam's (southernmost dam, Fig. 3) operation averaged 0.28 m³/s (US Geological Survey 2022). The low amounts of water diverted might indicate that most of this water is immediately used to generate electricity, flowing through the turbines and back to Hat Creek below the powerhouse. The increase in Hat Creek streamflow at river kms 10 and 6 might be due to lower-dam operations (northernmost dam, Fig. 3), but because run-of-the-river dams are defined by low storage capacity, both dams are assumed to affect streamflow only to a small degree compared to the ~6 m³/s gains measured near river km 9 (Fig. 8). Instead, streamflow gains are likely driven by spring flow from deeper groundwater-flow paths that discharge below the water surface of Hat Creek.
Hat Creek loses close to 4.5 m³/s of streamflow between the groundwater-flow barrier (river km 42) and the confluence with the spring-fed Rising River (river km 14). Much of the 4.5 m³/s of streamflow lost above river km 14 may return to Hat Creek via the Rising River headwater springs, which add at least 6.9 m³/s of streamflow at the toe of Cinder Butte (Fig. 8). Rising River springs could also receive additional groundwater from the uplands to the east.
In addition to the thermal effect of the lake associated with Crystal Lake springs, stream temperatures are potentially influenced by the dams (Fig. 8). Most of the streamflow (~14.3 m³/s) measured near river km 3 likely enters Hat Creek upstream of river km 7, as evidenced by the TIR data (Fig. 8), and originates from the Crystal Lake springs complex as distributed seepage.
Isotopic compositions found in the Rising River compartment at Rising River and Crystal Lake springs indicate a Lassen Peak source for at least some of the springs (Rose et al. 1996). These data suggest a deeper groundwater-flow path via leakage through or around the upstream compartments. This is unsurprising, as the geology suggests that all the basalt-flow aquifers pinch out as Hat Creek flows into the Pit River (Fig. 8b).
Implications for other volcanic terranes
Groundwater systems in volcanic regions with vertical displacement on faults could be compartmentalized. Comparing the proposed relationship between fault throw and compartmentalization to the well-studied upper Deschutes River and upper Klamath River basins (Gannett et al. 2001, 2007) might give more insight. Though previous work analyzes these basins at large scales, the relationship between groundwater-flow barriers and structure and stratigraphy has not been examined there in the detail applied herein. In the upper Deschutes River drainage basin, Tertiary to Quaternary volcanic rocks interact with the Brothers, Sisters, and Green Ridge section of the Metolius fault zones (Gannett et al. 2001), whereas the upper Klamath River basin hosts faulted volcanic rocks as old as pre-Tertiary that can locally be compartmentalized by faults (Gannett et al. 2007). Both drainage basins lie in regions where faulting and arc volcanism intersect (Blakely et al. 1997; Gannett et al. 2007; Waldien et al. 2019). Moreover, both basins contain rocks that are markedly older and have lower permeability, indicating possible permeability reduction with increased alteration to clay (Jefferson et al. 2010; Burns et al. 2015, 2017a), which would enable a more comprehensive study of the relationship between compartmentalization, faulting, and age/alteration.
Summary and future work
An investigation based on detailed geologic maps, well logs, streamflow data, and TIR imagery collected in the morning and late afternoon results in a detailed conceptual model of the hydrogeology and groundwater/surface-water exchanges of the lower Hat Creek basin. These investigations reveal aspects of specific geologic features associated with both warm- and cold-water anomalies in Hat Creek's longitudinal stream-temperature profile. In rivers without TIR imagery, additional research would be needed to assess whether LiDAR topographic data can be used to characterize the geology (e.g., faults and stratigraphic contacts) associated with longitudinal thermal heterogeneity and the cold-water refuges important for cold-water species (Fullerton et al. 2018). Because of the lack of data, this paper does not attempt to pinpoint exact groundwater-flow paths, but rather to constrain them through the Hat Creek basin. Measurements of hydraulic head and hydraulic conductivity are available for only a small part of the Hat Creek basin; additional measurements would greatly increase understanding of the Hat Creek basin groundwater system.
Conclusions
Hat Creek flows over a leaky, compartmentalized aquifer system with at least three distinct segments separated by geologic structures. The two downstream compartments are characterized by losing stream reaches upstream and gaining reaches downstream. This manuscript hypothesizes that the pattern of streamflow gain followed by streamflow loss occurs across structural boundaries created by faults and at unconformities between volcanic units with contrasting transmissivity. The upstream-most Big Spring compartment gains 4.2 m³/s at its downstream boundary near the fault at Big Spring, increasing streamflow by a factor of 5. The Sugarloaf compartment likely regains streamflow lost in its upstream reaches at a groundwater-flow barrier between river kms 48 and 42. In the Rising River compartment, near river km 15, Hat Creek almost goes dry but gains around 14 m³/s between river kms ~14 and 8. Similar patterns of streamflow gain and loss at groundwater-flow barriers can be found at other locations in the Shasta-Lassen Peak-Medicine Lake volcano study area (SLMSA) south of the Pit River, and likely in other regions with similar geologic features. The northern SLMSA shows a pattern of drainage and spring discharge controlled by the depositional extent of volcanic units. In the case of the Hat Creek basin, the use of the spatially extensive airborne thermal infrared (TIR) remote-sensing dataset reveals the relationship between structure, stratigraphy, and groundwater/surface-water interactions. Extrapolating the methods used in this study to other regions characterized by coeval faulting and volcanism, such as the Klamath and Deschutes River drainage basins, might lead to a similar depth of understanding of their hydrogeologic systems.
Funding Funding for this project was provided by the US Geological Survey Water Availability and Use Science Program. Additional funding for Erick Burns was provided by the US Dept. of Energy - Geothermal Technologies Program (EERE award number DE-EE0007169) and the US Geological Survey Energy Resources Program.
Declarations
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-12-18T16:15:28.506Z | 2022-12-16T00:00:00.000 | {
"year": 2022,
"sha1": "9ad09ef5e34f716e3e374bbec2276ecef9b0d1f5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10040-022-02545-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "3e02266dc292307d6b9c058ee33fe6ecbc1ea26d",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": []
} |
54994344 | pes2o/s2orc | v3-fos-license
Is ‘Hanging Drop’ a Useful Method to Form Spheroids of JIMT, MCF-7, T-47D, BT-474 That Are Breast Cancer Cell Lines
Breast cancer is a type of cancer with an extremely complicated structure. Recent studies have shown that breast cancer is a common invasive cancer and, unfortunately, its prevalence has been rising in women. Therefore, scientists use established cell lines in the laboratory for modelling and finding therapies. Spheroids, known as microtumors, are well-characterized models that mimic the natural tumor environment. Many devices have been designed for forming spheroids. In this study, the 96-well hanging drop culture plate was chosen to form spheroids of the breast cancer cell lines JIMT, MCF-7, T-47D and BT-474 at densities of 2.5 × 10⁴, 5 × 10⁴, 7.5 × 10⁴ and 10⁵ cells/well. Cells were imaged daily to check for aggregation and cell proliferation. Spheroid formation occurred within 72 hours. Fluorescence microscope examination revealed that the morphological appearance of the 3D spheroids was cell-line dependent. In this study, more cells were used compared with the protocol given by the manufacturer. The importance of our work is that spheroids were formed for the first time at high density.
Introduction
Breast cancer is the most common invasive cancer in women. According to the World Health Organization (WHO), it adversely affects millions of women all over the world and is the second main cause of cancer death in women, after lung cancer [1-16]. Because breast cancer is a complex and heterogeneous disease, it is important to understand its mechanism; thus, breast cancer is often modelled using established cell lines in the laboratory [6,7]. BT-20 was the first breast cancer cell line to be established. Subsequently, the MD Anderson series and MCF-7 were established; these cell lines are commonly used for modelling in the laboratory. MDA-MB-435 was also characterised as a basal cell line [8]. The cancer cell lines JIMT, MCF-7, T-47D and BT-474 are likewise widely used for modelling. Owing to their distinct features, breast cancer cell lines have become ever more widespread; for example, MCF-7 is a model cell line that can be used for researching hormone response [6].
While all cells usually have direct access to glucose, amino acids, and other growth factors in 2D cultures, in 3D cultures the availability of these nutrients depends on diffusion rates and local environments within the scaffold [8]. Accordingly, 3D and 2D cell culture conditions differentially affect the expression of genes involved in signal transduction, such as human epidermal growth factor receptor 2 (HER2) signaling, cellular movement, cell-to-cell signaling, cellular growth, and morphology [7,12-14].
There are four general methods of spheroid formation: suspension culture, nonadherent surface methods, hanging drop methods, and microfluidic methods. The hanging drop technique is one of the simplest and cheapest among them [5,10-15]. Although hanging drop methods have disadvantages, such as producing spheroids of variable size, low throughput, difficult handling, and unsuitability for long-term culture, they provide an efficient way to obtain biological insights that are often lost in 2D platforms [15].
For this reason, the 96-well hanging drop plate was chosen to form spheroids of the breast cancer cell lines (JIMT, MCF-7, T-47D, BT-474). The manufacturer recommends seeding 5 × 10³ cells/well into the Perfecta3D™ 96-well hanging drop plate, and published studies report that seeding densities from as few as 50 to as many as 1.5 × 10⁴ cells allow production of varying spheroid sizes. One limitation of 3D systems is that they are difficult to use with cells in excess. For this purpose, we aimed to test whether hanging drop is a useful method to form spheroids at densities of 2.5 × 10⁴, 5 × 10⁴, 7.5 × 10⁴ and 10⁵ cells/well, forming spheroids with excess cells for the first time.
Cell culture
JIMT and MCF-7 were grown in DMEM and MEM, respectively, supplemented with 10% FBS. T-47D was grown in RPMI-1640 supplemented with 20% FBS, and BT-474 was grown in RPMI-1640 supplemented with 20% FBS, 2 mM L-glutamine and 0.01 mg/mL human recombinant insulin. Penicillin-streptomycin antibiotic (1%; 10,000 IU/mL and 10,000 μg/mL) was also added to the media. The media were stored in a 4°C refrigerator and warmed in a 37°C water bath prior to use. The cancer cell lines were recovered slowly from cryopreservation. The medium was changed every 2 days and cells were passaged weekly.
Hanging drop plate method
Briefly, when cells reached a confluent monolayer in a T-75 flask, they were washed twice with PBS (pH 7.4), treated with 0.25% trypsin-EDTA, and resuspended in fresh medium. They were centrifuged (400 rpm for 10 minutes) and counted to calculate the volume needed. Cell density was estimated using a hemocytometer. Following the manufacturer's instructions, the device was prepared for the assay; it consists of three major parts: the lid, the hanging drop plate itself, and a tray on the bottom. According to the calculations, 40 μL cell suspensions (containing 2.5 × 10⁴, 5 × 10⁴, 7.5 × 10⁴ or 10⁵ cells) were pipetted into each well of the central hanging drop plate. Four mL of distilled water was added into the peripheral water reservoir to keep the cells hydrated. The plate was sandwiched with the well-plate lid, labeled, and maintained at 37°C in a humidified incubator with 5% CO₂ for five days to allow spheroids to form. Cells were imaged daily to check for aggregation and cell proliferation. The growth medium was exchanged every other day by taking 10 μL of medium from each drop and adding 14 μL of fresh medium, to provide enough nutrients for the cells and to prevent an osmolality shift of the medium. Fluorescence microscope examination and ImageJ software were used for analysis.
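The volume calculation mentioned above is a simple dilution. As a hedged illustration (Python; the function name, the 10% pipetting excess, and the example stock density are assumptions for the sketch, not values from the paper), the code below computes how much counted stock suspension and fresh medium are needed to seed a 96-well hanging drop plate at a chosen cells-per-drop density:

```python
def seeding_plan(stock_cells_per_ml, cells_per_drop, drop_ul=40.0, n_wells=96):
    """Volumes of cell stock and medium needed to fill every drop of a
    hanging drop plate at the requested cells-per-drop density."""
    target_cells_per_ml = cells_per_drop / (drop_ul / 1000.0)  # density in drops
    total_ml = n_wells * drop_ul / 1000.0 * 1.1                # 10% pipetting excess
    stock_ml = total_ml * target_cells_per_ml / stock_cells_per_ml
    return {"total_suspension_mL": round(total_ml, 2),
            "stock_mL": round(stock_ml, 2),
            "medium_mL": round(total_ml - stock_ml, 2)}

# e.g., a counted stock of 1.0e7 cells/mL, seeding 1e5 cells per 40 uL drop
print(seeding_plan(1.0e7, 1.0e5))
# -> {'total_suspension_mL': 4.22, 'stock_mL': 1.06, 'medium_mL': 3.17}
```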
Results
After experimenting with the different concentrations, most spheroids appeared somewhat scattered, with limited compactness, and showed a rounded shape. Spheroid formation was observed within 72 hours, after which the spheroids became slightly darker in color, indicating greater compactness. The morphological appearance of the 3D spheroids was cell-line dependent (Figures 1 and 2). Fluorescence microscope examination revealed that the 96-well hanging drop plate is useful at densities of 2.5 × 10⁴, 5 × 10⁴, 7.5 × 10⁴ and 10⁵ cells/well, all higher than in the protocol given by the manufacturer. Although spheroids remained proliferative for five days, images were captured only after the spheroids had fully formed.
Discussion
Cancer cell spheroid formation is one of the best-characterized 3D models, known as the multicellular tumor spheroid [4,11]. 3D spheroids can be used for studies of cell function in an avascular tumor microenvironment, drug therapies, tumor angiogenesis, and tumor-immune cell interactions [15-17]. Of the four general methods of spheroid formation, hanging drop was used in this study. The device consists of three major parts: the lid, the hanging drop plate itself, and a tray on the bottom. We pipetted cell suspensions into each well of the central hanging drop plate. Cell concentration is critical when plating hanging drops, because a spheroid that grows too heavy can fall from the plate. PBS, water, or another buffer solution can be placed in the reservoir to keep the cells hydrated once the plate is placed in the tissue culture incubator; in this system, spheroids remain hydrated to keep cells viable. We removed 10 μL of medium and replaced it with 14 μL of fresh medium to provide enough nutrients for the cells and to prevent an osmolality shift of the medium.
In this study, we used excess cells, and spheroid formation still occurred within 72 hours. We observed that most spheroids were somewhat scattered in appearance after plating. The spheroids became slightly darker in color, indicating greater compactness and the building of cell layers creating the 3D spheroid structure. Fluorescence microscope examination revealed that the morphological appearance of the 3D spheroids was cell-line dependent.
Conclusion
In this study, more cells were used compared with the protocol provided by the manufacturer. The importance of our work is that spheroids were formed for the first time at high density.
"year": 2018,
"sha1": "d039287da489e78e386f5f5a96789e85972bfc1d",
"oa_license": null,
"oa_url": "https://doi.org/10.4172/2168-9431.1000170",
"oa_status": "GOLD",
"pdf_src": "HumanGeneratedMetadata",
"pdf_hash": "d039287da489e78e386f5f5a96789e85972bfc1d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
237455550 | pes2o/s2orc | v3-fos-license | Vecabrutinib inhibits B-cell receptor signal transduction in chronic lymphocytic leukemia cell types with wild-type or mutant Bruton tyrosine kinase
Not available.
Ibrutinib as monotherapy or in combination has transformed the treatment landscape of chronic lymphocytic leukemia (CLL). 1,2 This drug covalently tethers to the cysteine-481 residue in Bruton tyrosine kinase (BTK), which is a pivotal enzyme in the B-cell receptor (BCR) pathway. 3 Ibrutinib treatment results in long-term overall survival in patients with CLL; however, disease can relapse, particularly in previously treated patients. At a median of 3.4 years of follow-up, the cumulative incidence of progression was 19%, and 85% of these patients had acquired mutations of BTK or PLCG2. 4,5 The predominant BTK mutation is cysteine to serine (BTK C481S), with the second most frequent alteration being cysteine to arginine (BTK C481R); both preclude covalent bond formation by ibrutinib, resulting in resistance to this drug. 4 Reversible BTK inhibitors, such as vecabrutinib, have been developed that bind to BTK and maintain inhibitory activity against both wild-type (WT) and mutant BTK. In this study, we characterized the activity of vecabrutinib on the BCR pathway using a CLL cell line model system engineered to overexpress BTK C481 WT or two mutant variants, BTK C481S and BTK C481R. 6 We also used primary CLL cells resistant to ibrutinib to further extend our investigations of vecabrutinib. A phase 1b clinical trial was recently completed in B-cell malignancies, in which vecabrutinib was well tolerated with some evidence of activity, including in CLL patients with the C481S mutation. 7 Vecabrutinib is a highly selective reversible BTK inhibitor (half maximal inhibitory concentration [IC50] = 3 nM). In a panel of 234 kinases and kinase variants, vecabrutinib demonstrated a biochemical IC50 of <100 nM for seven kinases (Figure 1A; Online Supplementary Figure S1A, B). The IC50 of vecabrutinib against WT BTK was similar to that of ibrutinib, 3 while vecabrutinib was more potent than ibrutinib against ITK and TEC kinases. In a direct kinase assay, vecabrutinib inhibited the WT and mutant C481S variants with similar potency. Data from healthy donors' whole blood (n=145) further established the potency of vecabrutinib in inhibiting BTK, although with a high degree of variability (mean ± standard deviation, 50 ± 39 nM; range, 2.8-216 nM) (Figure 1B). Vecabrutinib inhibited phosphorylation of the BTK downstream target PLCγ2 in Ramos Burkitt lymphoma cells with an IC50 of 13 ± 6 nM (Figure 1C). Collectively, these data suggest that vecabrutinib inhibits phosphorylation of BTK and of PLCγ2 at nanomolar concentrations.
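IC50 values like those above are typically obtained by fitting a sigmoidal dose-response curve to inhibition data. As a hedged illustration (Python with SciPy; the response values are invented for the example and are not the study's data), a four-parameter logistic fit recovers the IC50 as the curve's midpoint:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response: signal falls from top to bottom."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Invented example: fraction of phospho-BTK signal remaining vs. inhibitor (nM)
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
resp = np.array([0.97, 0.90, 0.62, 0.30, 0.12, 0.05, 0.03])

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 1.0, 20.0, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {popt[2]:.1f} nM")
```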
In MEC-1, a CLL cell line, treatment with both vecabrutinib and ibrutinib did not alter cell cycle profiles and resulted in 15-20% cell death at 1 μM (Online Supplementary Figure S1C-E). Vecabrutinib decreased BTK phosphorylation at a dose of 0.1 μM. Consistent with the observed decline in BTK phosphorylation, decreases in the phosphorylation of PLCγ2 and ERK were also observed. Phospho-S6 levels did not change in the MEC-1 cell line after treatment with vecabrutinib (Online Supplementary Figure S1F).
To mimic ibrutinib resistance, we transduced MEC-1 to generate cell lines that stably express green fluorescent protein and overexpress either WT BTK (BTK WT) or the mutated variants C481S (BTK C481S) or C481R (BTK C481R). 6 Cell death induced by ibrutinib or acalabrutinib has been limited when these drugs were tested in vitro in B-cell lines or primary CLL cells, 8,9 and similarly, vecabrutinib did not affect cell viability or cell cycle profile (Online Supplementary Figure S2A-D). Several proteins within the BCR signal transduction pathway, including BTK, ERK and S6, can be used as biomarkers to monitor ibrutinib response or biological activity. 8,10 We evaluated the phosphorylated forms of these proteins by immunoblotting to assess the response to vecabrutinib and ibrutinib in cell lines overexpressing WT or mutant BTK. Vecabrutinib at a dose of 1 μM decreased phospho-ERK more effectively than ibrutinib did in ibrutinib-resistant MEC-1 cells that overexpress mutant BTK (Figure 1D-F). This was observed for both the BTK C481S and BTK C481R variants. The changes in phospho-ERK were consistent with those in our previous study, in which we showed that phospho-ERK is a superior biomarker for determining ibrutinib response upon overexpression of mutant BTK in MEC-1 6 (Figure 1E, F).
It is important to note that ibrutinib also decreased phospho-proteins in cells with mutated BTK. There are two explanations for this finding: first, the cell lines express endogenous WT BTK, and second, ibrutinib can bind reversibly to BTK, albeit with reduced potency. In the clinic, ibrutinib's pharmacokinetic properties (its peak plasma level together with its initial and terminal elimination half-lives) preclude activity as a noncovalent inhibitor. Vecabrutinib treatment at a dose of 1 μM decreased Bcl-2 levels in cells harboring BTK C481S and Mcl-1 levels in cells overexpressing BTK C481R (Online Supplementary Figure S2E-G).
To evaluate protein changes more extensively, as well as to compare the effects of vecabrutinib with those of ibrutinib, we compared protein profiles using the reverse-phase protein array (RPPA). Since the largest effect of vecabrutinib was observed at 1 μM, we compared this concentration with samples treated with dimethylsulfoxide (DMSO) vehicle in all cell lines. The top ten canonical pathways were identified by Ingenuity Pathway Analysis (Online Supplementary Figure S3A-C). Several pathways were commonly affected in all three cell types, although the extent and significance of the changes differed (Figure 2A). The top three canonical pathways with maximal change after vecabrutinib in cells with BTK WT overexpression were FLT3 signaling in hematopoietic progenitor cells, EGF signaling, and HGF signaling (Online Supplementary Figure S3A); in cells with BTK C481S overexpression they were HGF signaling, regulation of the epithelial-mesenchymal transition by growth factors, and B-cell receptor signaling (Online Supplementary Figure S3B); and in cells with BTK C481R overexpression they were epithelial-mesenchymal transition, ERK/MAPK signaling, and a senescence pathway (Online Supplementary Figure S3C). We also classified the types of target proteins using pie charts (Online Supplementary Figure S3D-F). Kinase and transcription regulator protein groups constituted >60% of the proteins affected by vecabrutinib in all BTK subtypes.
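To make the treated-versus-vehicle comparison concrete, a minimal sketch of the per-protein contrast underlying such an analysis is shown below; the table, column names, and values are hypothetical, and the study itself applied Ingenuity Pathway Analysis to the full RPPA panel:

import numpy as np
import pandas as pd

# Hypothetical normalized RPPA signals (rows = proteins)
rppa = pd.DataFrame(
    {"dmso": [1.00, 0.95, 1.10], "vecabrutinib": [0.40, 0.90, 0.30]},
    index=["p-ERK", "PD1", "p-S6K"],
)

# Log2 fold change of drug-treated versus vehicle, per protein
log2_fc = np.log2(rppa["vecabrutinib"] / rppa["dmso"])
print(log2_fc.sort_values())  # most strongly decreased proteins first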
BCR pathway inhibition generally affects signal transduction (measured as phospho-proteins), transcription factors, cell proliferation, B-cell proteins, and apoptosis. Among the 258 proteins evaluated by RPPA, eight phospho-proteins and five other proteins were affected by vecabrutinib and ibrutinib (Figure 2B). SHP-2 (PTPN11), a phosphatase that plays a critical role at several junctures in the BCR pathway, has been shown to interact with many proteins. 11 The tyrosyl phosphorylation of this protein has been shown to be
essential for the activation of ERK, a downstream molecule in BCR signaling. 12 Compared to DMSO-treated cells, BTK inhibition substantially decreased phospho-SHP2(Y542) in WT cells, whereas only vecabrutinib specifically inhibited SHP-2 phosphorylation in both mutant cell lines ( Figure 2B).
Of the two drugs, vecabrutinib produced responses in WT cells that were deeper than or similar to those to ibrutinib for all proteins (Figure 2B). Consistent with a prior report, 12 treatment with ibrutinib and vecabrutinib caused a 2-log decrease in phospho-ERK in BTK WT cells. Furthermore, vecabrutinib treatment resulted in a 1-log decline in cells harboring mutant BTK, whereas ibrutinib had no effect (Figure 2B). A hallmark protein affected by ibrutinib is p70S6K, or S6 kinase; the target of this kinase is the S6 ribosomal protein, which initiates protein synthesis upon phosphorylation. Impressively, compared to ibrutinib, vecabrutinib profoundly decreased phospho-S6K in all three cell types. Among all proteins, phosphorylated ERK and S6K were consistently and substantially decreased in all three cell types, and these two proteins may serve as biomarkers for the effect of vecabrutinib (Figure 2B).
Ibrutinib produced a larger decrease in PD1 levels than vecabrutinib did. Parallel to PD1 protein levels, serine(727) and tyrosine(705) phosphorylation of STAT3 has been shown to be reduced in CLL cells after ibrutinib treatment. 13 STAT3 and STAT5 activation through phosphorylation has been associated with inflammation and carcinogenesis. 14 STAT3 is constitutively active in CLL cells, but this activation is mitigated by ibrutinib treatment. 13,15 Vecabrutinib treatment decreased Y705 and S727 phosphorylation on STAT3 in WT and mutant cell lines. Interestingly, in mutant cells, a decline in S727 phosphorylation occurred only with vecabrutinib ( Figure 2B). Although extensive cell death was not seen with either of these drugs, RPPA data revealed that vecabrutinib treatment increased cleaved caspase 7 in all transduced cell lines.
Finally, we tested vecabrutinib in primary CLL cells from five patients with either WT or mutant BTK (Figure 3A). Cell death ranged from 0 up to 21% after 24 hours of vecabrutinib and was higher in BTK WT than in BTK-mutant samples (Figure 3B). Cells with the C481S and C481R alterations had the lowest apoptosis. In concert, immunoblot results showed that vecabrutinib inhibited BCR pathway signaling (phosphorylation of BTK, ERK, and S6) in BTK WT and in BTK T474F (gatekeeper mutation) samples (Figure 3C, D). Consistent with the RPPA data, cells expressing the BTK C481S and BTK C481R variants showed only minor changes (patient 3). In patients 4 and 5, phospho-ERK either remained the same or increased; these two patients had C481S (catalytic domain) and T474I (gatekeeper) double mutations (Figure 3E). Super-resistance to irreversible BTK inhibitors, or variable sensitivity to reversible noncovalent BTK inhibitors, has been reported for cells harboring T474 variants along with the cysteine 481 substitution. 16 In summary, our study provides a nonclinical characterization of vecabrutinib.
Author contributions: [...] prepared Figure 1 and reviewed the manuscript; WGW identified ibrutinib-resistant patients and provided samples; SMK supervised SEH and analyzed the RPPA data; VG conceptualized and supervised the research, obtained funding, analyzed the data, and wrote and revised the manuscript. | 2021-09-10T06:18:11.213Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "bea138b9ee4f09da66b93f580cfe9f9e3a51d266",
"oa_license": "CCBYNC",
"oa_url": "https://haematologica.org/article/download/haematol.2021.279158/73677",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b7835de81239a7125022060d2b84564d9b82fbb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263761554 | pes2o/s2orc | v3-fos-license | Caregiver burden and inflammation in parents of children with special healthcare needs
Children with special healthcare needs (CSHCN) are a vulnerable population that requires specialized services and is often cared for by parents. These parents experience psychological and physiological strain and potential inflammatory dysfunction related to an amplified caregiving burden, which may increase with the complexity of the child's condition. Because of this potential for inflammatory dysregulation, we aimed to compare the caregiver burden and inflammation of parents of CSHCN, stratified by the severity of the child's condition, with those of parents of typically developing children. A cross-sectional design was used that included parents of typically developing children (n = 60), of children with non-complex chronic disease (n = 28; one chronic condition that does not progress), and of children with complex chronic disease (n = 32). Parents completed the Caregiver Burden Inventory and blood serum was collected to measure inflammation. Multivariate analyses of variance with post-hoc testing were used to determine between-group differences. Parents of children with complex disease experienced greater caregiver burden than parents of typically developing children (p < 0.001) and of children with non-complex chronic disease (p = 0.044). Parents of children with non-complex chronic disease reported greater caregiver burden than parents of typically developing children (p = 0.02). Parents of children with complex chronic disease had lower pro-inflammatory (p = 0.042) and anti-inflammatory (p = 0.002) composite scores than parents of typically developing children. Parents of children with greater medical complexity experienced more caregiver burden and potential inflammatory dysregulation. Future research should explore inflammatory processes in this specific population and self-care measures to improve psychological and physical well-being.
Introduction
Parents of children with special healthcare needs (CSHCN) are informal caregivers and provide most of their children's in-home care [1-3]. These parents are more likely to experience caregiver burden, which creates an environment of chronic stress [4,5]. Consequently, this chronic stress has the potential to increase inflammation [6-8]. In addition, headaches, exhaustion, decreased physical function, and negative affect are common among parent caregivers [4,5,9]. Moreover, the severity of the child's condition may increase caregiver burden and stress [10,11]. Hence, the purpose of this study was to examine the relationships between caregiver burden and inflammation in parents of CSHCN and to compare these outcomes, by severity of the child's condition, with those of parents of typically developing children.
Caregiver burden
Caregiver burden is the perception of complex stress experienced by persons resulting from caring for someone with whom they have a significant personal relationship [27]. Caregiver burden has deleterious effects stemming from financial and time constraints, social isolation, and emotional strain, which lead to decreased quality of life, illness, poor well-being, and unmet medical and social needs [27-29].
Severity of the child's illness or condition increases caregiver burden and decreases psychological well-being in parent caregivers [30,31]. Autism spectrum disorder (ASD) symptom severity has been found to increase parent depression and caregiver burden [11,32]. Similar findings have been reported in parents caring for children with cerebral palsy, leukemia, and cystic fibrosis [30,31]. In addition, the severity of the child's chronic condition has been linked to greater ongoing stress, depression, and anxiety [33,34].
Since repeated daily stressors and caregiver burden are common in parental caregiving for CSHCN, these parents often experience increased rates of depression, stress, and anxiety, putting them at risk for inflammatory dysregulation. Parent caregivers of children with autism and attention deficit hyperactivity disorder had greater levels of CRP than those of typically developing children, indicating inflammation [35,36]. Moreover, parents of children with cancer had increased IL-6 and TNF-α in serum blood samples [23,37].
In the United States alone, the CSHCN population has grown by 6% since 2001, owing to improvements in medical technology that have increased survival after prematurity, the diagnosis of atypical neurodevelopment, and chronic illness [38]. This growth inherently increases the number of parents caring for a child with ongoing and substantial health-related needs. These parents are at significant risk for caregiver burden and chronic stress, which may lead to chronic low-grade inflammation [5,7]. Subsequently, chronic low-grade inflammation may facilitate the development of cardiovascular disease, insulin resistance, and cancer in caregivers [7,8,39]. Since this caregiver population is growing and may be at increased risk for health problems related to the stress of caregiving, it is essential to better understand the relationships between caregiving and inflammation.
Study aims
Despite these findings, little research has explored the relationship between caregiver burden and inflammation in parents of CSHCN. Moreover, few studies have compared caregiver burden and inflammation between parent caregivers of CSHCN and those of typically developing children, and fewer still have examined such differences in relation to the severity of the child's condition. Our first aim, therefore, was to explore relationships between caregiver burden and inflammatory biomarkers in parent caregivers of CSHCN. We hypothesized that inflammation would be positively associated with caregiver burden in parent caregivers of CSHCN.
Our second aim was to compare parent caregivers of CSHCN with those of typically developing children on caregiver burden and inflammation, based on the severity of the child's condition. We hypothesized that parents of children with more severe conditions would experience greater caregiver burden and increased inflammation compared with parents of children with non-complex chronic disorders and parents of typically developing children. To accomplish these aims, we asked parents to complete a series of questionnaires and provide a serum sample during a single one-on-one visit in this cross-sectional study, guided by the Caregiving Process and Caregiver Burden Among Pediatric Population framework. This framework posits that caregiving demands (i.e., caregiver burden), together with the severity of the child's condition and worse child behavior, negatively influence the physical health of the parent caregiver, which we assessed using inflammatory markers [40].
Study design and setting
To compare the effects of the child's condition severity on parent caregivers' burden and inflammatory biomarkers, a cross-sectional design was used, with all data collected during a single visit.
Inclusion criteria
Participants met inclusion criteria if they were over 18 years of age and the legal guardian or parent of a child under 18 years. Participants were not excluded based on their relationship (mother, father, grandparent, foster parent, etc.) with the child. Parents of typically developing children and those of CSHCN were included in the study. Participants were excluded if they were not a legal guardian of the child or if the child was 18 years of age or older.
Participant recruitment
Participants were recruited via flyers on community boards in libraries, gyms, and businesses, and through local social media parenting groups. Flyers were also distributed through day cares, schools, and physician offices. Additional recruitment was aided by parent support organizations via their listservs and social media pages. Initial recruitment resulted in 175 individuals expressing interest in participating in the study. Fifty-five either did not meet inclusion criteria, withdrew interest, were unreachable, or failed to show, resulting in 120 parents participating in this study.
Sixty participants were in the control group and 60 in the comparison group. The comparison group was further stratified based on the severity of the child's condition using the pediatric medical complexity algorithm, with the child's medical diagnosis reported by the participant through a structured interview [41]. The severity scoring was 0 for parents of typically developing children (i.e., the control group), 1 for parents of children with non-complex chronic disease, and 2 for parents of children with complex chronic disease. Non-complex chronic disease is defined as one chronic condition (a physical, developmental, or mental health diagnosis) that does not progress but persists into adulthood with episodes of good health; examples are well-controlled type 1 diabetes, attention deficit hyperactivity disorder, and asthma. Complex chronic disease was defined as considerable physical, developmental, or mental health diagnoses in two or more body systems for more than a year, which can be progressive, technology dependent, or impact life function [41]; examples are malignancy, cystic fibrosis, rare congenital disorders, and paraplegia.
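For illustration, a toy version of this stratification step might look like the sketch below; the diagnosis lists and the two-condition proxy for multi-system involvement are our assumptions and are far simpler than the published pediatric medical complexity algorithm:

NON_COMPLEX = {"asthma", "adhd", "type 1 diabetes"}
COMPLEX = {"cystic fibrosis", "malignancy", "paraplegia"}

def severity_score(diagnoses):
    # Returns 0 (typically developing), 1 (non-complex), or 2 (complex)
    dx = {d.lower() for d in diagnoses}
    if dx & COMPLEX or len(dx & NON_COMPLEX) >= 2:  # crude multi-system proxy
        return 2
    if dx & NON_COMPLEX:
        return 1
    return 0

assert severity_score([]) == 0
assert severity_score(["Asthma"]) == 1
assert severity_score(["Cystic fibrosis"]) == 2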
Measures
First, demographic data were collected on all participants, including gender, age, education level, marital status, ethnicity, and income. Next, background data were collected, including caregiver self-reports of diagnosed chronic and psychiatric conditions, medications, sleep health and routines, exercise, and the number and ages of any children in the household. Finally, height and weight were measured and recorded, caregiver burden was assessed, and blood was drawn to examine biomarkers of inflammation.
Caregiver burden
Caregiver burden was measured using the Caregiver Burden Inventory, a 24-item measure containing five subscales: time dependence, developmental, physical, social, and emotional burden. Each item is rated on a five-point Likert scale ranging from 0 (not at all disruptive) to 4 (very disruptive) [42]. Items were summed to create the subscale scores, and the subscale scores were summed to create an overall caregiver burden score, with higher scores indicating greater caregiver burden. Good content, concurrent, predictive, and external validity, and good internal and test-retest reliability, have been reported in caregivers of CSHCN [43]. The overall internal consistency for the scale was good (α = 0.91), as was that of the five subscales: time (α = 0.85), developmental (α = 0.85), physical health (α = 0.85), emotional health (α = 0.86), and social relationships (α = 0.85).
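Scoring the inventory is a simple matter of summation, as the sketch below illustrates; the assignment of the 24 items to the five subscales shown here is an assumption for illustration and may not match the published item order:

SUBSCALE_ITEMS = {
    "time_dependence": range(0, 5),
    "developmental": range(5, 10),
    "physical": range(10, 14),
    "social": range(14, 19),
    "emotional": range(19, 24),
}

def score_cbi(responses):
    # responses: 24 Likert ratings, each 0 (not at all) to 4 (very disruptive)
    assert len(responses) == 24 and all(0 <= r <= 4 for r in responses)
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALE_ITEMS.items()}
    scores["total"] = sum(scores[k] for k in SUBSCALE_ITEMS)
    return scores

print(score_cbi([2] * 24))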
Inflammation
Approximately 7 mL of whole blood was collected in CPT tubes via antecubital venous puncture for biomarkers of inflammation. Serum samples were prepared and stored at -80 °C within 12 h of collection, following all research protocols. All collection and transportation methods were in accordance with bloodborne disease and pathogen requirements and protocols.
Serum levels of inflammatory markers were measured using the Human Essential Immune Response panel LEGENDplex kit following the manufacturer's instructions. The kit consists of a multiplex analysis that quantifies 13 pro- and anti-inflammatory cytokines in the same sample: IL-4, IL-2, CXCL10, IL-1β, TNF-α, MCP-1, IL-17A, IL-6, IL-10, IFN-γ, IL-12p70, CXCL8, and TGF-β1. All molecules lay in the assay detection range of 2.4 to 10,000 pg/mL. At least 300 events were acquired per analyte. Intra-assay coefficients of variation ranged from 0 to 3.99% and inter-assay coefficients ranged from 1 to 1.9%. Analysis was completed using flow cytometry. Gating strategies included stability gating and forward and side scatter gating. Stability gating was used to ensure that there were no instrument issues during the analysis process. The forward and side scatter gating grouped cytokine beads by size, with A being the smaller beads and B the larger beads, allowing concurrent recognition of all 13 cytokines. Analyte and bead classifications were entered in sequential order based on the bead identifications listed in the panels' manuals.
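As a small illustration of the precision summaries quoted above, a coefficient of variation is computed from replicate readings as follows; the replicate values are hypothetical:

import numpy as np

def coefficient_of_variation(values):
    # CV (%) = 100 * sd / mean, the usual assay-precision summary
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical replicate readings (pg/mL) of one analyte on one plate
print(f"intra-assay CV: {coefficient_of_variation([12.1, 12.4, 11.9]):.2f}%")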
Data collection
Approval from the institutional review board at Florida State University was obtained in May 2019 and participant recruitment began immediately thereafter. Recruitment and data collection continued through February 2020. Potential participants were contacted and the study was explained. Upon agreeing to participate, participants were scheduled and given the option of meeting at a laboratory on the university's campus or at a setting of their choice (i.e., home or work). Most participants chose to have the researcher meet them in their home. During the scheduled meeting, written consent was reviewed and signed, and the researcher then proceeded with data collection.
Statistical methods
First, descriptive statistics were used to summarize participant characteristics, which were then assessed for skewness. Because of skewness, bivariate analyses (chi-square tests, ANOVA, and independent-samples Kruskal-Wallis tests) were used to compare between-group differences on demographic data and to examine the influence of potential confounding variables. All significance levels were set at α = 0.05.
Inflammatory cytokine values below the limit of detection were set to the threshold detection levels. The cytokine distributions were then assessed for skewness and found to be positively skewed; these measures were therefore transformed using a log-base-10 transformation [42-44]. The log-transformed biomarkers were evaluated for extreme outliers, which were recoded using 90% winsorization because of the small sample size. Exploratory factor analysis (EFA) was used to create composite scores of inflammatory cytokines given their high levels of inter-correlation.
To calculate composite scores, z-transformations were applied to the log-transformed values and the mean of the z-scores was calculated. The pro-inflammatory composite score included IL-6, TNF-α, and IL-1β, and the anti-inflammatory composite score included IL-10 and IL-4.
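The study's analyses were conducted in SAS (see below), but for illustration the composite-score construction (log transform, winsorization, z-scoring, and averaging) can be sketched in Python; the input matrix here is hypothetical:

import numpy as np
from scipy.stats import mstats, zscore

def composite(cytokines_pg_ml):
    # Columns = analytes (e.g., IL-6, TNF-alpha, IL-1beta), rows = participants
    logged = np.log10(cytokines_pg_ml)                       # tame positive skew
    trimmed = mstats.winsorize(logged, limits=0.05, axis=0)  # 90% winsorization
    return zscore(np.asarray(trimmed), axis=0).mean(axis=1)  # mean of per-analyte z-scores

pro = composite(np.array([[3.1, 8.0, 0.9],
                          [2.4, 6.5, 1.2],
                          [9.8, 20.1, 4.4]]))
print(pro)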
Correlations between caregiver burden and the pro- and anti-inflammatory factor scores, and between caregiver group and these measures, were assessed using Pearson and point-biserial correlation coefficients. Multivariate analysis of variance (MANOVA) was used to examine differences between severity groups on the burden measure scores and both the pro- and anti-inflammatory composite scores. Post-hoc tests were selected depending on whether homogeneity-of-variance assumptions were violated (Tukey-Kramer). All analyses were completed in SAS 9.4.
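For readers who wish to reproduce the omnibus test outside SAS, a minimal MANOVA sketch in Python using statsmodels is shown below; the data frame is hypothetical, and Tukey-Kramer post-hoc comparisons would follow separately:

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one row per parent, with severity group and outcomes
df = pd.DataFrame({
    "group": ["control"] * 3 + ["non_complex"] * 3 + ["complex"] * 3,
    "burden": [10, 14, 12, 22, 25, 20, 30, 34, 28],
    "pro": [0.4, 0.2, 0.3, 0.1, 0.0, 0.2, -0.3, -0.4, -0.2],
    "anti": [0.3, 0.1, 0.2, 0.0, -0.1, 0.1, -0.4, -0.5, -0.3],
})

fit = MANOVA.from_formula("burden + pro + anti ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.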
Demographics
All participants were screened, proved eligible, and were included in the analysis. A total of 120 parent caregivers were included in this study: 60 cared for healthy, typically developing children, 28 were parents of children with non-complex chronic conditions, and 32 had children with complex chronic conditions. No data were missing for the psychological measure, and a single participant's data were missing for the inflammatory markers because the blood sample could not be obtained. The mean age of participants was 38 years (range 26-57). Most participants (91.7%) were women and 87.5% were mothers of the child or children. The median household income was $70,000 and 86.7% of the participants were employed full time. Participants were more likely to be white (89.2%), married (89.2%), and to have at least a bachelor's degree (70.8%). Households averaged 2.2 children with a mean age of 7.29 years.
Groups did not differ significantly across participant characteristics except for education level. Controls were more likely than parent caregivers of children with non-complex chronic disease to have at least a bachelor's degree (83.3% vs 53.6%; p = 0.007), but there were no differences between parents of children with non-complex chronic disease and those whose children had complex chronic disease, or between controls and the complex chronic disease group. Across all other demographic characteristics, no differences existed between the parents of children with non-complex and complex chronic disease. All demographics are presented in Table 1 and the children's medical disorders are presented in Table 2.
Exploratory results
To explore relationships among caregiver burden and inflammation, we examined correlations between caregiver group, caregiver burden, and the pro- and anti-inflammatory factor scores. Groups based on child severity were not significantly correlated with pro-inflammatory (r = -0.14, p = 0.12) or anti-inflammatory factor scores (r = -0.11, p = 0.22). Caregiver group was positively associated with caregiver burden (r = 0.47, p < 0.001). Anti-inflammatory factor scores (r = -0.05, p = 0.57) were not associated with caregiver burden or any caregiver burden subscale. Pro-inflammatory composite scores were not associated with total caregiver burden (r = -0.11, p = 0.21). A significant negative relationship existed between pro-inflammatory composite scores and the developmental subscale of caregiver burden (r = -0.20, p = 0.03). No other relationships between pro-inflammatory composite scores and the subscales of caregiver burden existed. All results are presented in Table 3.
Inflammatory response
Caregiver groups differed significantly across pro-inflammatory cytokine factor scores (F[2, 116] = 3.24, p < 0.05, η² = 0.053); caregiver group explained 5.3% of the variance in the pro-inflammatory biomarkers. Post-hoc comparisons indicated that caregivers of children with complex chronic disease had significantly lower pro-inflammatory factor scores than parent caregivers of typically developing children (p < 0.05), whereas parent caregivers of children with non-complex chronic disease did not differ significantly from either group.
Caregiver groups also differed significantly on the anti-inflammatory factor scores (F[2, 116] = 6.08, p < 0.01, η² = 0.095); caregiver group accounted for approximately 9.5% of the variance in the anti-inflammatory composite scores. Post-hoc comparisons revealed significant differences between parents of children with complex chronic conditions and parents of typically developing children on the anti-inflammatory composite scores (p < 0.01). Parents of children with complex chronic conditions also had lower anti-inflammatory composite scores than parents of children with non-complex chronic disease.
Discussion
We explored relationships between caregiver burden and the inflammatory composite scores in parent caregivers of CSHCN, hypothesizing that caregiver burden would be correlated with the inflammatory biomarkers; we found no such relationships. Significant between-group differences in caregiver burden were present between parents of typically developing children and parents of children with complex and non-complex chronic disease. Parents of children with complex chronic disease experienced greater time burden than both other groups. Significant between-group differences also existed in the inflammatory factor scores: parents of children with complex chronic disease had significantly lower levels than parents of typically developing children.
Interpretation
The results of our study are promising and add novel contributions to the current body of research through the use of serum inflammatory biomarkers as a measure of inflammation from the chronic stress of caregiving, together with parents' perceptions of caregiving burden. Our results indicated that parent caregivers of CSHCN reported greater caregiver burden than those of typically developing children. These results were expected, as substantial research supports that caregiving burden is higher in parents of CSHCN [30,31]. Uniquely, we explored group differences based on the medical complexity of the child being cared for. We found significant differences between parents of typically developing children and parents of children with complex chronic disease on the time, developmental, physical health, and social relationships subscales and on total burden. Additional differences existed between parents of typically developing children and parents of children with non-complex chronic disease on the developmental, physical health, and social relationships subscales and on total burden. Only the emotional health subscale of caregiver burden did not differ significantly between the groups, which is most likely due to our small sample size or unequal groups. An additional possible explanation for the absence of a group difference in emotional health burden is that, regardless of the severity of the child's condition or lack thereof, parents consistently report not progressing emotionally and harboring feelings of guilt, regret, and embarrassment related to their child's behavior [45,46]. Parents of children with complex chronic disease scored higher on time burden than parents of children with non-complex chronic disease and of typically developing children, while no differences were present between the latter two groups. Parents of children with complex chronic disease must often learn medical procedures, and these intricate procedures require more time than basic caregiving tasks [47]. Approximately 22.5% of all parents of children with complex chronic disease spend 11 or more hours per week caring for their child, compared to 3.9% of parents of children with non-complex chronic disease [3], which may explain the differences between these groups.
We found that parents of CSHCN, with both complex and non-complex conditions, experienced greater caregiver burden on their personal development and social relationships compared to their peers. Parents of CSHCN must often delay or sacrifice personal goals and social relationships because of caregiving demands, triggering feelings of failure in not meeting their own expectations as well as social isolation [46,48]. Parents of CSHCN have reported wishing their lives were different or that their child would be able to live independently as they aged [48]. They have also reported sacrificing parts of their lives, such as employment, advancing their education, and time spent with friends and family, and relocating to be closer to needed services for their child [49,50]. The sacrifices parents of CSHCN make to care for their child likely explain the differences in developmental burden and social relationships between parents of typically developing children and those of CSHCN.
Lastly, parent caregivers of children with complex and non-complex chronic disease reported poorer physical health than parents of children without chronic disease. One potential reason for these findings is that social isolation, greater time spent caregiving, caregiver sacrifices, and poor mental health are implicated in the worsening physical health of caregivers [9,27,51,52]. Since parents of CSHCN in this study reported greater caregiving time commitment, poorer social relationships, and more developmental burden than parents of typically developing children, our finding of worse reported physical health is, based on prior research, expected.
Previous research indicates that caregiving results in chronic low-grade inflammation [15,53]. In contrast, we found that parents of children with complex disease had significantly lower pro- and anti-inflammatory composite scores than parents of typically developing children. Previous research using similar pro-inflammatory cytokine composites found that these were increased in adolescents experiencing chronic stress; our result may therefore point to differences between caregiving stress and other types of chronic stress [54-56]. Interestingly, our results support the theory of habituation rather than chronic low-grade inflammation, in which persistent, repeating daily stressors build an inflammatory tolerance that decreases the inflammatory response [24,26]. Moreover, our results may be due to elevated glucocorticoid levels, which suppress pro-inflammatory cytokine production; however, we did not examine these [23].
Long-term stress has been purported to result in decreased production of the anti-inflammatory cytokines IL-10 and IL-4, indicating dysregulation of the inflammatory processes [15,57]. We found that parents of children with complex chronic conditions also had significantly lower anti-inflammatory cytokine composite scores than parents of typically developing children. Our results align with previous research indicating that parents of children with ongoing, chronic conditions experience decreased production of anti-inflammatory cytokines [23,24].
We found that caregiver group had a significant positive relationship with caregiver burden and all associated subscales. This result was expected, in that parents of CSHCN consistently report greater caregiver burden than parents of typically developing children. Caregiver group was not associated with the pro- or anti-inflammatory composite scores. Anti-inflammatory cytokine composite scores were not significantly correlated with caregiver burden or any of its subscales. Developmental caregiver burden had a small yet significant inverse association with the pro-inflammatory cytokine composite scores, meaning that higher developmental burden was associated with lower inflammatory cytokines. Developmental caregiver burden refers to caregivers' beliefs that they are not developing at the same rate as peers who do not have the same caregiving responsibilities [58]. Pro-inflammatory cytokines are heavily involved in inducing inflammation, and long-term exposure to low levels has been implicated in chronic inflammation [59,60]. A habituation-like occurrence may therefore account for the negative association between developmental caregiver burden and pro-inflammatory cytokines, since parents of chronically ill children experience the same stressors every day.
Future directions
Little evidence exists on the effects of the complexity of the child's condition on parent caregivers' inflammatory processes. While our findings lend some credence to the "habituation-like phenomenon", future research should further explore the long-term effects of caregiving for chronically ill children on caregiver inflammation. Moreover, the severity of the child's condition should be further explored to determine whether greater complexity of required care increases the incidence of the "habituation-like phenomenon" in parent caregivers.
Additionally, these parents may not have been experiencing the effects of inflammation because of their relatively young age, high overall socioeconomic status, education level, and support systems, since these act as protective factors against inflammation [21,55,61]. Moreover, research has indicated that resilience, coping, and self-efficacy result in decreased levels of IL-6 and TNF-α and positively impact physical and psychological health [62,63]. While we did not assess these measures, it is possible that parents of children with complex chronic disease have developed protective factors against inflammation. It would be prudent for future research to explore the effects of potential protective factors on long-term inflammation and mental well-being in parents of children with medically complex conditions. Lastly, it would be pragmatic for future researchers to examine gender and racial differences regarding caregiver burden and inflammation.
Limitations
There are limitations of note in this study. First, the sample was recruited from a single community in the southeast and the sample size was modest. Moreover, our sample was likely affected by gender and racial bias: participants were predominantly middle-class white women, which may limit the generalizability of our findings. Like other parent caregiver studies, our sample was largely mothers, since most parent caregivers are women. Prior research indicates that significant gender differences exist in caregiving: women are more likely to report greater stress, depression, and burden than their male peers and are more likely to spend a greater amount of time caring for their loved one [64]. Our sample was also highly educated, married, and well above the federal poverty line, which may act as protective factors.
Conclusion
Caregiver burden research is well established for caregivers of the elderly, and research on the effects of caregiving on inflammation in that population is rapidly growing. Yet little research has explored caregiver burden and inflammation in parents of CSHCN, and even less has examined the effects of medical complexity on parent caregivers' burden and inflammatory processes. We found significant between-group differences in caregiver burden and inflammatory composite scores, suggesting that parents of CSHCN experience greater burden and potential inflammatory dysregulation related to caregiving. While promising, additional research is needed to determine the processes behind inflammation in parents of CSHCN and whether it differs from that in caregivers of the elderly. Moreover, gender effects on these outcomes should be explored, and additional research is needed to identify protective factors (coping, resilience, and self-efficacy) against inflammatory dysregulation and caregiver burden. Finally, as the number of parents of CSHCN continues to rise, it is critical to explore interventions to alleviate the negative consequences of caregiving. | 2023-10-09T13:34:37.212Z | 2023-10-09T00:00:00.000 | {
"year": 2023,
"sha1": "e47fc107747e0e7d70130e5eae757d82419060bc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s44202-023-00089-z.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "12dc0347e70fc0eaa09b689403d335c76bf117b7",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
247939554 is not this record's id; retained as given: 247939377 | pes2o/s2orc | v3-fos-license | Scattering for Schrödinger operators with potentials concentrated near a subspace
We study the scattering properties of Schrödinger operators with bounded potentials concentrated near a subspace of $\mathbb{R}^d$. For such operators, we show the existence of scattering states and characterize their orthogonal complement as a set of surface states, which consists of states that are confined to the subspace (such as pure point states) and states that escape it at a sublinear rate, in a suitable sense. We provide examples of surface states for different systems, including those that propagate along the subspace and those that escape the subspace arbitrarily slowly. Our proof uses a novel interpretation of the Enss method in order to obtain a dynamical characterisation of the orthogonal complement of the scattering states.
1. Introduction

In this paper, we study the scattering properties of Schrödinger operators with potentials concentrated near a subspace of $\mathbb{R}^d$. This is one of many models of a quantum particle interacting with a surface. For such operators, we show the existence of scattering states and characterize their orthogonal complement as a set of surface states, which consists of states that are confined to the subspace (such as pure point states) and states that escape it at a sublinear rate, in a suitable sense. We provide examples of surface states for different systems, including those that propagate along the subspace and those that escape the subspace arbitrarily slowly.
1.1. Motivation and prior work. Our work is motivated by the vast literature studying the scattering theory of Schrödinger operators with potentials that decay at infinity. Typically, these are self-adjoint operators on $\mathcal{H} = L^2(\mathbb{R}^d)$ of the form
\[ H = H_0 + V \qquad (1.1) \]
where $H_0 = -\Delta$ and $V$, the potential, is a real-valued multiplication operator. For short range potentials, that is, those with sufficiently fast decay, one is interested in showing that the wave operators
\[ \Omega_\pm = \operatorname*{s-lim}_{t \to \mp\infty} e^{itH} e^{-itH_0} \]
exist on all of $\mathcal{H}$ and are asymptotically complete in the sense that their range is equal to the continuous subspace of $H$. Intuitively, states in the range of $\Omega_\pm$ behave like free waves as $t \to \mp\infty$, in the following sense: if $\Omega_-\psi = \varphi$, then
\[ \lim_{t \to \infty} \| e^{-itH_0}\psi - e^{-itH}\varphi \| = 0. \]
Asymptotic completeness then means that all states in the continuous subspace of $H$ scatter to free waves.
We make no attempt to comprehensively review the multitude of results concerning which assumptions on $V$ yield asymptotic completeness. However, we mention the seminal work of Agmon [1] (and the references therein), in which asymptotic completeness is shown for $V$ satisfying, for instance, $V(x) = O(|x|^{-(1+\epsilon)})$ as $x \to \infty$. Our paper is based on the work of Enss [12] showing asymptotic completeness for potentials satisfying a short range condition, which for a bounded potential can be written as
\[ \| V \chi_{B_r^c} \| \in L^1(\mathrm{d}r) \qquad (1.2) \]
where $\chi$ denotes an indicator function and $B_r^c$ is the complement of the ball of radius $r$ in $\mathbb{R}^d$. In a related direction, many authors have investigated the scattering theory of Schrödinger operators with anisotropic potentials that have different behavior in different coordinate directions (see, for example, [3,6,8,9,25]). Building on one-dimensional results of Carmona [3], Davies and Simon [8] investigated potentials $V$ that are periodic in the coordinate directions $\{x_1, \ldots, x_{d-1}\}$ but with different spatial asymptotics as $x_d$ goes to plus or minus infinity. They showed that in this setting, the absolutely continuous subspace of $H$ decomposes into pieces that, under the evolution of $H$, move to $\pm\infty$ in the $x_d$ coordinate, and surface states that are localized near the hypersurface $\{x_d = 0\}$ for all time. We review this result more thoroughly in Section 6, but for now we note that even if $V$ goes to 0 rapidly as $x_d \to \pm\infty$, there may still exist surface states in the ac subspace of $H$. Furthermore, a state in the range of $\Omega_\pm$ cannot be localized near a hypersurface for all time (see Section 5.3), so the presence of ac surface states may be thought of as an obstruction to asymptotic completeness. Such states may also be seen if $V$ decays sufficiently slowly in some directions. In this case, originally studied by Yafaev [29], one may observe states which disperse away from the support of $V$ more slowly than a free wave (see Section 6 for more details). Finally, we remark that asymptotic completeness may also fail in the sense that $\operatorname{Ran}(\Omega_-)$ may no longer be equal to $\operatorname{Ran}(\Omega_+)$. For $d = 3$, one may observe such behavior in settings similar to those considered below [6].
In view of the circle of ideas recalled above, one may naturally ask what can be said about the scattering theory of potentials that decay at infinity but only in some coordinate directions. By this, we mean a potential $V$ that is concentrated (in a sense to be specified later) near the surface $\{x \in \mathbb{R}^d : x_{k+1} = \cdots = x_d = 0\}$ for some $1 \le k < d$. The aforementioned class of examples shows that one cannot expect asymptotic completeness in this setting because some states may undergo transport along the surface. However, one has the following very plausible physical picture: a state which moves away from the surface as time evolves should feel the influence of the potential less and less, so it should behave asymptotically like a free particle and therefore be in the range of the wave operator. This suggests that there is a dichotomy between states that remain near the surface and those that are asymptotically free, irrespective of the precise nature of $V$. So, one should really ask: for $V$ as above, is the orthogonal complement of $\operatorname{Ran}\Omega_\pm$ given by the space of surface states? The present paper is an affirmative answer to this question.
Before stating our results, let us mention that many authors have studied the spectral and scattering theory of surface potentials due in part to their physical importance. We refer the reader to [6, 10,14,15,16,18,19,20] for some idea of the questions that have been investigated for surface models. In these papers and others, the authors are usually interested in surface potentials with some additional structure. For instance, among other examples, Davies and Simon [8] consider a partially periodic potential so that they may leverage symmetry. Other authors investigate random surface potentials [10,15] or a (possibly discrete) half-space model with some boundary condition (such as [14,18,19,20]). In many of these cases, additional structure allows for a better description of the surface subspace than one might hope for in full generality, either by showing it is trivial [18] or by giving a more restrictive definition [8]. In this paper, we make significantly weaker assumptions on V -only that it is bounded and has the right decay away from the surface -at the price of a more inclusive description of the surface states. Therefore, many of these prior models fall within the purview of our theorem.
1.2. Model and results.

We consider a self-adjoint operator $H$ on $\mathcal{H} = L^2(\mathbb{R}^d)$ of the form (1.1), where $V$ is a real-valued bounded potential such that
\[ \operatorname{supp} V \subseteq S_{r_0} := \mathbb{R}^k \times B_{r_0} \]
for some $r_0 > 0$ and $1 \le k < d$. Here and throughout, $\|\cdot\|$ refers to either the Euclidean norm or the norm of $\mathcal{H}$. Since $k$ is fixed throughout the paper, we will suppress it in the notation. We define the space of surface states to be
\[ \mathcal{H}_{\mathrm{sur}} := \{ \psi \in \mathcal{H} : \lim_{t \to \infty} \| \chi_{S_{vt}^c} e^{-itH}\psi \| = 0 \text{ for all } v > 0 \}. \]
Our main theorem is that

Theorem 1.1. (i) The wave operators $\Omega_\pm$ exist and $\sigma(H_0) \subseteq \sigma_{ac}(H)$. (ii) $\mathcal{H} = \operatorname{Ran}\Omega_\pm \oplus \mathcal{H}_{\mathrm{sur}}$.

The existence result may essentially be found in [16], though we supply our own proof. See also Chapter 2, Section 10 of [21] for a related existence theorem.

Remark 1.2. The above theorem may be easily generalized to allow $V$ satisfying
\[ \| V \chi_{S_r^c} \| \in L^1(\mathrm{d}r), \]
that is, potentials $V$ which decay perpendicular to the surface in a short range way. Broadly speaking, the $L^1$ condition enters in a similar way as in [12]. For simplicity of presentation we have restricted to the case where $\chi_{S_R^c} V$ is in fact 0 for $R$ large enough, but we have explained how to adapt our proof to this generalization in Appendix B.

Remark 1.3. The definition of $\mathcal{H}_{\mathrm{sur}}$ is closely related to the notion of a minimal velocity estimate as exhibited in [17].
Remark 1.3. The definition of H sur is closely related to the notion of a minimal velocity estimate as exhibited in [17]. A typical estimate of this type for a state ψ might be of the form for some ℓ > 0 and all v less than some v 0 . Such an estimate usually results from a Mourre estimate on some energy interval, in the presence of which one already expects asymptotic completeness (for a self-contained exposition of these ideas, see Chapter 4 of [11]). An easy corollary of our Theorem 1.1 is that a state ψ is a scattering state if it satisfies a minimal velocity estimate relative to the region S vt , i.e., if for some v > 0 lim inf t→∞ χ Svt e −itH ψ = 0 Thus, our theorem provides a dynamical criterion for asymptotic completeness, which may be verified via commutator methods.
1.3. Methodology: the Enss method of scattering.

We rely on the Enss method of scattering originally developed in [12], whose geometric flavor is well-suited to our problem. The Enss method realizes the physical intuition developed above: if $V$ satisfies (1.2), a state which moves away from the origin under the $H$ evolution is asymptotically free. In Enss' original argument, one fixes a state $\psi$ in the absolutely continuous subspace of $H$ and finds a sequence of times $t_n \to \infty$ for which $\psi_n = e^{-it_nH}\psi$ satisfies $\|\chi_{B_n}\psi_n\| \to 0$, so that $\psi_n$ is moving away from the origin. This is possible for $V$ a relatively bounded perturbation of $H_0$ with relative bound less than 1 by the celebrated RAGE theorem [2,26], which says that a state $\psi$ in the continuous subspace escapes every compact set $K$ in a time mean sense:
\[ \lim_{T \to \infty} \frac{1}{T} \int_0^T \| \chi_K e^{-itH}\psi \| \, \mathrm{d}t = 0. \]
Along the sequence $\{t_n\}_{n=1}^\infty$, one then performs a phase space decomposition of $\psi_n$ into incoming and outgoing pieces:
\[ \psi_n = \psi_{n,\mathrm{in}} + \psi_{n,\mathrm{out}} + o(1). \]
Both $\psi_{n,\mathrm{in}}$ and $\psi_{n,\mathrm{out}}$ are spatially localized far from the origin with momenta that point roughly toward or away from the origin, respectively. These phase space properties of $\psi_{n,\mathrm{in/out}}$ guarantee that
\[ \lim_{n \to \infty} \|(\Omega_- - \mathrm{id})\psi_{n,\mathrm{out}}\| = 0, \qquad \lim_{n \to \infty} \|(\Omega_+ - \mathrm{id})\psi_{n,\mathrm{in}}\| = 0, \]
from which asymptotic completeness is an easy consequence.
In trying to apply the above outline to our setting verbatim, one encounters the problem that one cannot use the RAGE theorem to see that a continuous state moves away from the surface, as the surface is not compact. To proceed, we provide a novel interpretation of Enss' original argument that does not rely on any a priori properties of the continuous subspace. Working in the original Enss setting, we fix a state $\psi$ orthogonal to $\operatorname{Ran}(\Omega_-)$ and perform a phase space decomposition along an arbitrary time sequence increasing to infinity, now keeping the piece of $\psi$ close to the origin (in the above, this piece was $o(1)$ by the RAGE theorem):
\[ \psi_n = \psi_{n,\mathrm{bounded}} + \psi_{n,\mathrm{in}} + \psi_{n,\mathrm{out}} + o(1). \]
Here, $\psi_n = e^{-it_nH}\psi$ as before and $\psi_{n,\mathrm{bounded}}$ is essentially $\chi_{B_n}\psi_n$. One can argue that $\psi_{n,\mathrm{in}}$ goes to 0 as $n \to \infty$, and the fact that $\psi \perp \operatorname{Ran}(\Omega_-)$ implies the same for $\psi_{n,\mathrm{out}}$. Thus, $\psi_n$ is asymptotically equal to $\psi_{n,\mathrm{bounded}}$, and by varying over all time sequences one may show that
\[ \lim_{R \to \infty} \sup_{t \ge 0} \| \chi_{B_R^c} e^{-itH}\psi \| = 0. \qquad (1.5) \]
In other words, Enss' argument provides a geometrical characterization of the orthogonal complement of $\operatorname{Ran}(\Omega_-)$ as the set of bound states. Indeed, it is a consequence of the RAGE theorem that the states satisfying (1.5) are precisely the pure point states of $H$, but one need not know this to obtain this interesting theorem.
Our adaptation of this argument to surface scattering will require that the operators implementing the phase space decomposition have better monotonicity properties than those originally used by Enss. To this end, we adopt Davies' [7] point of view on the Enss method by defining families of phase space observables. This formulation allows us to define the decomposition in a natural way, via operators which are almost projections onto subsets of phase space. Choosing these operators in the correct way allows us to study the evolution in a lower dimensional space, i.e. only in the directions perpendicular to the surface. For the reader's convenience, we have collected various results about these observables in Appendix A. This is particularly important because throughout the proof we will use a phase space characterisation of the surface states. The precise definition of this characterisation will be given in Section 2.2, but for now it can be described as consisting of states that either evolve close to the surface or propagate away from the surface with momenta roughly parallel to the surface.

Remark 1.4. A natural question that arises from these two characterisations of $\mathcal{H}_{\mathrm{sur}}$ is: can there truly be surface states that propagate away from the subspace? If so, these states would have to do so at a sublinear rate and with highly restricted momenta. Indeed, following [8], one may define
\[ \mathcal{H}'_{\mathrm{sur}} := \{ \psi \in \mathcal{H} : \lim_{R \to \infty} \sup_{t} \| \chi_{S_R^c} e^{-itH}\psi \| = 0 \}, \]
which contains all states that evolve close to the subspace. This definition will be convenient to work with in Section 6. As shown in Proposition 6.1, $\mathcal{H}'_{\mathrm{sur}} \subseteq \mathcal{H}_{\mathrm{sur}}$, so we may reformulate our question as: is there some choice of potential $V$ so that $\mathcal{H}_{\mathrm{sur}} \setminus \mathcal{H}'_{\mathrm{sur}}(H)$ is non-empty? Indeed, such potentials do exist: following Yafaev [29], in Section 6 we show that $V$ decaying like a long range potential in the longitudinal direction may produce such states. However, we will show in Section 6 that, at least for $V$ partially periodic or $V$ that decays to a limit at $\infty$ quickly enough, $\mathcal{H}_{\mathrm{sur}} = \mathcal{H}'_{\mathrm{sur}}$.

1.4. Outline of paper.

In Section 2 we provide some notation as well as define $\tilde{\mathcal{H}}_{\mathrm{sur}}$, the auxiliary surface subspace that will be used extensively in the proof of Theorem 1.1. In Section 3, we prove (i) of Theorem 1.1, in other words the existence of scattering states. In Section 4, we develop the Enss decomposition (Theorem 4.1) for our setting, stated using the phase space observables of Davies.
The decomposition is proved, as in the original Enss paper [12], by combining Cook's method with several applications of non-stationary phase. This decomposition is the main ingredient used to show, in Section 5.1, that $\tilde{\mathcal{H}}_{\mathrm{sur}}$ and $\operatorname{Ran}(\Omega_\pm)$ span all of $\mathcal{H}$, as described in the sketch above. In Section 5.2 we show that the intersection of these two subspaces is trivial, yielding our first completeness result (Lemma 5.2). For this, we show that the intersection is unitarily equivalent to $\mathcal{H}_{\mathrm{sur}}(H_0)$, the surface states of the free evolution, which we show to be trivial by a direct computation. Then, we again use the method of non-stationary phase to give a better characterization of the surface states, namely to show that $\tilde{\mathcal{H}}_{\mathrm{sur}}$ is in fact equal to $\mathcal{H}_{\mathrm{sur}}$. In Section 6 we consider some special classes of potentials and discuss their surface states, relating them to known results where relevant. Finally, in Appendix B, we explain how to accommodate short range decay of the potential away from the surface.
Acknowledgment. We are grateful to our advisor, Wilhelm Schlag, for leading us towards this problem, and for his guidance and encouragement during this work. We also thank Michael Weinstein and Amir Sagiv for discussions that improved the definition of H sur .
Notation and Conventions.
For any $\ell > 0$ we use the following:
• We let $\mathcal{H}$ denote $L^2(\mathbb{R}^d)$ with norm $\|\cdot\|$ and use the convention that its inner product $\langle\cdot,\cdot\rangle$ is anti-linear in the first argument and linear in the second.
• The symbols $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ will also be used for the norm and inner product on $\mathbb{R}^\ell$.
• $d(\cdot,\cdot)$ is used for the distance between points or subsets of $\mathbb{R}^\ell$.
• $B_r$ will mean the ball of radius $r$ centered at the origin in either $\mathbb{R}^\ell$ or $\mathcal{H}$ depending on context.
• We use the following convention for the Fourier transform of $f \in \mathcal{H}$, sketched below. We will refer to the $\mathbb{R}^k$ components as longitudinal and the $\mathbb{R}^{d-k}$ components as transverse.
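The displayed conventions in this list did not survive extraction; a plausible reconstruction, in which the splitting of coordinates follows the usage later in the paper and the exact Fourier normalization is our assumption, reads:

% Presumed form of the elided notation displays; the Fourier
% normalization below is an assumption.
\[
  x = (x_\parallel, x_\perp) \in \mathbb{R}^k \times \mathbb{R}^{d-k},
  \qquad
  S_r := \mathbb{R}^k \times B_r,
\]
\[
  \hat{f}(p) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} e^{-i\langle x, p\rangle} f(x)\,\mathrm{d}x .
\]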
2.2. Definition of the auxiliary surface subspace. As mentioned above, for the proof of part (ii) of Theorem 1.1, asymptotic completeness, it will be more convenient to work with a different subspace, denoted $\tilde{\mathcal{H}}_{\mathrm{sur}}$. We will show in Section 5.3 that it is in fact equal to $\mathcal{H}_{\mathrm{sur}}$. The definition of this subspace and the arguments that follow depend crucially on the ability to localize a state into a subset of phase space. For this, we will follow the formulation of phase space observables developed in [5]. To this end, choose $\eta \in \mathcal{S}(\mathbb{R}^d)$ such that $\|\eta\| = 1$ and $\operatorname{supp}\hat\eta \subset B_1$. Let $\eta_\delta$ be such that $\hat\eta_\delta(p) = \delta^{-d/2}\hat\eta(p/\delta)$, a rescaling of $\eta$, so that $\operatorname{supp}\hat\eta_\delta \subset B_\delta$ and $\|\eta_\delta\| = 1$. Now define a family of coherent states $\eta_{x,p;\delta}$ by translating $\eta_\delta$ in phase space, and use it to define a family, depending on $\delta > 0$, of positive-operator-valued measures as in [7], which serve as phase space observables (see the sketch below). For any Borel $E \subset \mathbb{R}^{2d}$ and $\psi \in \mathcal{H}$, $P_\delta(E)\psi$ is given by a weakly convergent integral. These operators are closely related to the Fourier-Bargmann transform $F_{\eta_\delta} : L^2(\mathbb{R}^d) \to L^2(\mathbb{R}^{2d})$ defined, for instance, in [4], Section 1.3.3, and using this one sees that $P_\delta(E)$ is self-adjoint and non-negative by construction. See [5] for more details about the basic properties of these positive-operator-valued measures. In this paper, we will choose $\eta$ that factors into functions of $x_\parallel$ and $x_\perp$. From now on, we will label the coordinates of $\mathbb{R}^{2d}$ as $(x, p) = (x_\parallel, x_\perp, p_\parallel, p_\perp)$.
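The coherent-state and measure displays are elided in this copy; the following is a sketch of the standard Davies-style construction they presumably follow, with the precise constants taken as assumptions:

% Assumed displays for the coherent states and the phase space
% observables; the normalizing constant is our assumption.
\[
  \eta_{x,p;\delta}(y) := e^{i\langle p, y\rangle}\, \eta_\delta(y - x), \qquad (x,p) \in \mathbb{R}^{2d},
\]
\[
  \langle \psi, P_\delta(E)\psi \rangle
  = (2\pi)^{-d} \int_E \left| \langle \eta_{x,p;\delta}, \psi \rangle \right|^2 \mathrm{d}x\, \mathrm{d}p,
  \qquad P_\delta(\mathbb{R}^{2d}) = \mathrm{id}.
\]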
In this paper, we will choose η that factors into functions of x and x ⊥ : . From now on, we will label the coordinates of R 2d as For n > 0 and m > 0, we define the far set in phase space to have space coordinates in S c n (that is, x ⊥ ∈ B c n ) and momentum in S c m (that is, p ⊥ ∈ B c m ), as well as its complement, the surface set: In words, W n,m;far consists of states that have transverse position and transverse momentum bounded away from 0 and W n,m;sur is its complement. Here and elsewhere, the dimension of B n is understood from context.
This allows us to define the set of surface states $\tilde{\mathcal{H}}_{\mathrm{sur}}$ (see the presumed display below), which is manifestly a closed subspace. The expression $\tilde{\mathcal{H}}_{\mathrm{sur}}$ without an operator will be used throughout to denote $\tilde{\mathcal{H}}_{\mathrm{sur}}(H)$.
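The defining display is elided here; one reading consistent with the decomposition of phase space above and with the sup-versus-limsup remark before Lemma 5.6 is:

% Assumed form of the elided definition.
\[
  \widetilde{\mathcal{H}}_{\mathrm{sur}}
  := \left\{ \psi \in \mathcal{H} \,:\,
     \lim_{n \to \infty} \sup_{t \ge 0}
     \left\| P_\delta(W_{n,m;\mathrm{far}})\, e^{-itH}\psi \right\| = 0
     \text{ for all } m > 0 \text{ and } \delta \in (0, \delta_0(m)) \right\}.
\]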
Existence of the Wave Operators
To begin, we use the following direct application of the Corollary to Theorem XI.14 from [24] (Lemma 3.1). This is already enough to prove the existence of the wave operators. By linearity, it suffices to show the existence of $\Omega_\pm$ for simple tensors in $D_\alpha$. We estimate via Lemma 3.1: its hypotheses apply, so for any $\ell > 0$ we obtain decay of the integrand, where $C$ denotes a constant which may change from line to line but is always independent of $x$ and $t$. It follows immediately that $\int_0^\infty \| V e^{-itH_0}\psi \| \, \mathrm{d}t < \infty$, so that by Cook's method $\Omega_-\psi$ exists. Since $\bigcup_{\alpha>0} D_\alpha$ is dense in $\mathcal{H}$, we conclude that $\Omega_-\psi$ exists for all $\psi \in \mathcal{H}$, and the claim for $\Omega_+$ follows from a similar argument.
The inclusion $\sigma(H_0) \subseteq \sigma_{ac}(H)$ is a result of the intertwining property of $\Omega_\pm$: $\Omega_\pm$ defines a unitary equivalence between $H_0$ and $H|_{\operatorname{Ran}\Omega_\pm}$, and $\sigma(H_0)$ is purely absolutely continuous.
Enss Decomposition
We fix m > 0 in order to prove the following decomposition lemma. Since m is fixed in this lemma and its proof, we will often suppress it in the notation. However, it should be noted that the decomposition does depend on m.
Theorem 4.1. Let $\{\varphi_n\}_{n=0}^\infty \subset \mathcal{H}$ be a sequence of unit vectors. Then for any $m > 0$, there exists some $\delta_0 = \delta_0(m)$ so that for all $\delta \in (0, \delta_0)$ we may write
\[ \varphi_n = \varphi_{n;\mathrm{out}} + \varphi_{n;\mathrm{in}} + \varphi_{n;\mathrm{sur}}, \]
where these summands satisfy properties (a) and (b).

Proof of Theorem 4.1. We now define subsets of $\mathbb{R}^{2d}$ that decompose $W_{n,m;\mathrm{far}}$ into subsets of phase space with momenta pointing towards and away from $\operatorname{supp} V$. For a point $(x,p)$ in phase space, this means that its transverse position and transverse momenta are either aligned or unaligned, respectively:
\[
  W_{n,m;\mathrm{out}} := \{ (x,p) \in W_{n,m;\mathrm{far}} : \langle x_\perp, p_\perp \rangle \ge 0 \},
  \qquad
  W_{n,m;\mathrm{in}} := \{ (x,p) \in W_{n,m;\mathrm{far}} : \langle x_\perp, p_\perp \rangle < 0 \},
\]
so that naturally $W_{n,m;\mathrm{far}} = W_{n,m;\mathrm{out}} \sqcup W_{n,m;\mathrm{in}}$, and let
\[
  \varphi_{n;\mathrm{out}} = P_\delta(W_{n,m;\mathrm{out}})\varphi_n, \qquad
  \varphi_{n;\mathrm{in}} = P_\delta(W_{n,m;\mathrm{in}})\varphi_n, \qquad
  \varphi_{n;\mathrm{sur}} = P_\delta(W_{n,m;\mathrm{sur}})\varphi_n,
\]
so that (b) holds. It will be convenient to label the projections of $W_{n,m;\mathrm{in/out}}$ to the transverse coordinates as $W^\perp_{n;\mathrm{in/out}}$.
From Proposition A.5 we have that x,p;δ 2 dx dp which we will estimate via the following lemma: In order to apply Lemma 4.3 we need the following geometric claims: for all (x, p) ∈ W ⊥ n;out , t ≥ 0, y ∈ B r 0 , and ξ ∈ O := supp η ⊥ x,p;δ + B δ .
proof of claim. Since (x, p) ∈ W ⊥ n;out we have that x > n, p > m, and x, p ≥ 0 and we may write ξ = p + p ′ where p ′ ∈ B 2δ . It follows that Furthermore, since x > n, we may write Finally, because y ≤ r 0 ≤ 1 8 n, By letting C = 1 16 , we obtain the desired inequality for all ξ ∈ O.
Let $C(x,t)$ be the classically allowed region (see Lemma 4.3) corresponding to $O$. For $y$, $\xi$, and $(x,p)$ as above, we have that $y \in C(x,t)$, so we may apply Lemma 4.3 to see that for any $\ell > 0$ there is some $\mu > 0$ such that the resulting decay estimate holds uniformly for $(x,p) \in W^{\perp}_{n;out}$ and $y \in B_{r_0}$. We note that the relevant majorant is independent of $x$ and $p$ (but depends on $\delta$) and is finite since $\eta^{\perp}_{\delta} \in \mathcal{S}(\mathbb{R}^{d-k})$. Therefore, for $(x,p) \in W^{\perp}_{n;out}$, using the above, for any $\ell$ large enough relative to $d-k$, we may integrate over $W^{\perp}_{n;out}$. Thus, we may conclude that (4.4) holds. To see (4.4), we first note that for all $t$ and $n$,
$$\|\chi_{B_{r_0}} e^{-itH_0^{\perp}} P^{\perp}_{\delta}(W^{\perp}_{n;out})\|_{op} \le 1,$$
so that by combining the two bounds and choosing $\ell$ sufficiently large we obtain an integrable, rapidly decaying majorant for the integrand in (4.3), which proves (4.1). The limit (4.2) may be deduced from exactly the same argument by first writing
$$\|(\Omega^{+} - \mathrm{id})\,\varphi_{n;in}\| \le M \int_{-\infty}^{0} \|\chi_{S_{r_0}} e^{-itH_0}\varphi_{n;in}\|\, dt$$
and noting that for $t \le 0$, $e^{-itH_0}\varphi_{n;in}$ behaves like $e^{-itH_0}\varphi_{n;out}$ for $t \ge 0$, because $W^{\perp}_{n;out}$ and $W^{\perp}_{n;in}$ are related by $(x,p) \mapsto (x,-p)$.

Proof. This proof is based on an argument of Enss recorded in [28]. We can write
$$\|\varphi_{n;in}\| = \|P_\delta(W_{n;in})\, e^{-it_n H}\varphi\| \le \|P_\delta(W_{n;in})(e^{-it_n H} - e^{-it_n H_0})\varphi\| + \|P_\delta(W_{n;in})\, e^{-it_n H_0}\varphi\|,$$
so it suffices to prove the two limits (4.6) and (4.7). To prove (4.6), we write out the corresponding operator norm. By using (4.5) and the symmetry between $W^{\perp}_{n;out}$ and $W^{\perp}_{n;in}$ under the map $(x,p) \mapsto (x,-p)$, we see that for any $\ell > 0$,
$$\|\chi_{S_{r_0}}\, e^{i\tau H_0}\, P_\delta(W_{n;in})\|_{op} \le C\, \tau^{\frac12} (n + m\tau)^{-\ell}$$
as long as $\tau > 0$, so we conclude, similarly to the above, that (4.6) holds. For (4.7), we fix $\psi \in \mathcal{H}$ compactly supported and choose $R$ so that $\operatorname{supp}\psi \subset S_R$. Then (4.7) holds for such $\psi$ because the computation of the above operator norm applies just as well to $S_R$ for arbitrary $R > 0$ instead of $S_{r_0}$. Density establishes (4.7), which concludes the proof of the lemma.
These lemmas establish Theorem 4.1 in full.

Asymptotic Completeness

The proof is accomplished in three steps: the first is to prove that $\operatorname{Ran}\Omega^{-}$ and $\tilde{\mathcal{H}}_{sur}$ span all of $\mathcal{H}$, the second is to show that their intersection is $0$, and the third is to prove that $\tilde{\mathcal{H}}_{sur} = \mathcal{H}_{sur}$.
By using Proposition A.3 we see that (since $S_n = \mathbb{R}^k \times B_n$) the surface part of the free evolution is controlled by a position cutoff. Note that for $x^{\perp} \in B_{2n}^{c}$ and $\ell > 0$ large enough, the corresponding tail is $O(n^{-\ell})$, so we can write, for any $\ell > 0$,
$$\|P_\delta(W_{n,m;sur})\, e^{-itH_0}\varphi\| \le \|\chi_{S_n}\, e^{-itH_0}\varphi\| + C\,\|\varphi\|\, n^{-\ell}.$$
Thus, we conclude that it suffices to estimate the right hand side. To do so, we note that $x \in S_n$ implies that for $t > \frac{n}{2\alpha}$ the state has left the classically allowed region for states in $D_\alpha$. Therefore, we may proceed as in the proof of (3.2) in the proof of part (i) of Theorem 1.1 to see that for all $\ell > 0$ we have the analogous rapid decay. It may be of interest to note that we have in fact proven that it is equivalent to define $\tilde{\mathcal{H}}_{sur}$ with a $\limsup$ in time instead of a $\sup$; in other words, the same subspace results from either choice. The proof of the desired equality will lean on a non-stationary phase argument:

Lemma 5.6. Fix $v > 0$. For any $m < \frac{v}{16}$ and $\delta < \frac{m}{2}$, and for any $\psi \in \mathcal{H}$, we have the limits (5.2) and (5.3) below. As before, both claims will follow from an estimate on the free propagation $e^{-itH_0}$.
Claim 5.7. With all parameters as above, for any $R > 0$ and $\ell > 0$, there exists $C > 0$, independent of $t$, $m$ and $v$, such that the corresponding bound holds for all $|t| > \frac{8R}{v}$.

Proof. We will first prove the claim for $t > \frac{8R}{v}$. Note that along the way we use that $|x| > vt > 8R > |y|$. As in the proof of Lemma 4.2, we may apply Lemma 4.3 to see that for any $\ell > 0$ there is some $C$ such that
$$|e^{-itH_0^{\perp}}\,\eta^{\perp}_{x,p;\delta}(y)| \le C(|x| + vt)^{-\ell}$$
uniformly in $x$, $p$ and $y$ as above and $t \ge 0$. Since $R$ is fixed, uniformly in $x$ and $p$ we may integrate (5.5) to find the claimed bound. By the symmetry $(x,p) \mapsto (x,-p)$, the claim holds for $t < -\frac{8R}{v}$ as well.
The limit (5.2), as in Lemma 4.5, follows from the bound and the above claim. The limit (5.3) may be established by noting that for $\psi$ such that $\operatorname{supp}\psi \subset S_R$ we can write the required bound by the above claim. Since such $\psi$ are dense, the lemma is proven.
Now fix $\psi$ and $\varepsilon > 0$; there is $T_0$ such that the first term is at most $\varepsilon\|\psi\|$ for all $t > T_0$. By Proposition A.3, we may estimate the second term by $\|\chi_{S_{2vt}}\, e^{-itH}\psi\|$ up to rapidly decaying errors. Using (5.1), we see that for some $\ell > 0$
$$\|\psi\| \le \varepsilon + \|\chi_{S_{2vt}}\, e^{-itH}\psi\| + C (vt)^{-\ell} + \|P_\delta(\mathbb{R}^{2k} \times B_{vt}^{c} \times B_m)\, e^{-itH}\psi\|.$$
By Lemma 5.6, taking the limit as $t \to \infty$ implies that the last term vanishes. Since $\varepsilon$ was arbitrary and $\|\chi_{S_{2vt}}\, e^{-itH}\psi\| \le \|\psi\|$, we may conclude that
$$\lim_{t\to\infty} \|\chi_{S_{2vt}}\, e^{-itH}\psi\| = \|\psi\|.$$
To complete the proof, we will show that $\mathcal{H}_{sur} \perp \operatorname{Ran}(\Omega^{-})$, which implies that $\mathcal{H}_{sur} \subset \tilde{\mathcal{H}}_{sur}$. In fact, we will show that $\mathcal{H}_{sur} \perp \Omega^{-}(D_\alpha)$ for any $\alpha > 0$ and conclude by density.
Let $\psi \in \mathcal{H}_{sur}$ and $\varphi \in \Omega^{-}(D_\alpha)$. Note that the definition of $\mathcal{H}_{sur}$ implies
$$\lim_{t\to\infty} \|\chi_{S_{vt}^{c}}\, e^{-itH}\psi\| = 0. \quad (5.6)$$
For any $v > 0$ and any $t > 0$, by writing
$$|\langle \psi, \varphi\rangle| \le \|\chi_{S_{vt}^{c}}\, e^{-itH}\psi\|\, \|\varphi\| + \|\psi\|\, \|\chi_{S_{vt}}\, e^{-itH}\varphi\| \quad (5.7)$$
and then taking a limit as $t \to \infty$, we see that it remains to control the second term. We may apply non-stationary phase as in the proof of Claim 5.4 to get that for any $\ell > 0$ the corresponding bound holds, where $C$ does not depend on $t$. In particular, for any $\ell$ large enough the bound is summable, so we can conclude that
$$\lim_{t\to\infty} \|\chi_{S_{vt}}\, e^{-itH}\varphi\| = 0.$$
Applying this to inequality (5.7), combined with equation (5.6), we conclude that $\langle\psi, \varphi\rangle = 0$, which completes the proof.
This proposition together with Lemma 5.2 proves part (ii) of Theorem 1.1, in other words asymptotic completeness.
Examples
Having established our main theorem, we analyze a few special cases to see some of the variety of surface states that may occur. For this purpose, it will be convenient to work with the sufficient condition for being a surface state given in the following proposition.
Proposition 6.1. In the notation of Section 1, $\mathcal{H}'_{sur} \subset \mathcal{H}_{sur}$.

Proof. Recall the definition of $\mathcal{H}'_{sur}$. We note that for any $v > 0$, $\psi \in \mathcal{H}$, and $t > 0$, the quantity $\|\chi_{S_{vt}^{c}}\, e^{-itH}\psi\|$ is controlled by the transverse localization entering the definition of $\mathcal{H}'_{sur}$. Since this is true for any $t > 0$, we can take $\lim_{t\to\infty}$ on both sides to get the corresponding limiting inequality. So if $\psi \in \mathcal{H}'_{sur}$, the last term is $0$, and therefore $\psi \in \mathcal{H}_{sur}$, as needed.
6.1. Surface States in $\sigma_c(H)$. While it is clear that eigenfunctions of $H$ are in $\mathcal{H}'_{sur}$, and so by the above proposition are surface states, it is natural to ask whether there may also be surface states in the continuous subspace. We answer this in the affirmative via a simple example.
Let $d = 2$ and consider a potential which depends on the $x$ coordinate only: $V(x,y) = V_0(x)$. Then we may write $H = H_x \otimes \mathrm{id} + \mathrm{id} \otimes H_y$, where $H_x$ and $H_y$ are the one-dimensional operators
$$H_x = -\frac{d^2}{dx^2} + V_0(x), \qquad H_y = -\frac{d^2}{dy^2}.$$
Assume that $H_x$ has an eigenvalue $E_0$ with corresponding eigenfunction $\psi_0$. For any $\psi_1(y) \in L^2(\mathbb{R})$, we claim that
$$\varphi(x,y) := \psi_0(x)\,\psi_1(y) \quad (6.1)$$
is a surface state. To see this, note that since $H = H_x \otimes \mathrm{id} + \mathrm{id} \otimes H_y$, the evolution factorizes, as shown in the computation below. Therefore, by Proposition 6.1 we conclude that $\varphi \in \mathcal{H}_{sur}$. Furthermore, if $\psi_1 \in \mathcal{H}_{ac}(H_y)$, as $H_y$ is purely ac, we can guarantee that $\varphi \in \mathcal{H}_{ac}(H)$. This is because for self-adjoint operators of the form $D = A \otimes \mathrm{id} + \mathrm{id} \otimes B$, the spectral measure of $f(x)g(y)$ with respect to $D$ is given by the convolution of the spectral measure of $f$ with respect to $A$ with the spectral measure of $g$ with respect to $B$ (see [13] for more details).
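Concretely, the factorization alluded to above is (a one-line computation using only the product structure of $H$):
$$e^{-itH}\varphi \;=\; \big(e^{-itH_x}\psi_0\big) \otimes \big(e^{-itH_y}\psi_1\big) \;=\; e^{-itE_0}\,\psi_0 \otimes e^{-itH_y}\psi_1,$$
so that for every $R > 0$,
$$\big\|\chi_{\{|x|>R\}}\, e^{-itH}\varphi\big\| \;=\; \big\|\chi_{\{|x|>R\}}\,\psi_0\big\|\; \|\psi_1\|,$$
which is independent of $t$ and tends to $0$ as $R \to \infty$ because $\psi_0 \in L^2(\mathbb{R})$; thus $\varphi$ remains localized near the line $x = 0$ uniformly in time.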
Remark 6.2. In [25], Richard generalized this example by introducing a class of "Cartesian potentials" that, roughly speaking, attain different limits in different coordinate directions. For instance, we may consider potentials of the form $V(x,y) = V_0(x)V_1(y)$, where $V_0(x)$ is as above and $V_1(y)$ decays to a limit in a short-range way: there exists some $c \in \mathbb{R}$ such that $V_1(y) - c$ is short-range. One may infer from Theorem 1.2 in [25] that the scattering-theoretic decomposition persists for this class. By an argument similar to the one given for the above example, it is easy to see that $\operatorname{Ran}(\Omega^{-})^{\perp} \subset \mathcal{H}'_{sur}$, so that $\mathcal{H}'_{sur} = \mathcal{H}_{sur}$.
6.2. Potentials Periodic in All But One Direction. Now suppose that $k = d - 1$ and that $V$ is periodic in all but one direction, in that there are linearly independent vectors $a_1, \ldots, a_{d-1} \in \mathbb{R}^{d}$ such that $V(x + a_i) = V(x)$ for all $i$ and $x \in \mathbb{R}^d$. The additional structure of such potentials allows us to give a simpler characterization of the surface states. The proof below can be gleaned from the analysis of such systems in [8], but we include a proof for the sake of completeness. A similar proof for a different system may be found in [27].
Let $T = [0, 2\pi)^{d-1}$ and let $D$ be the cylinder over the basic cell of the periods. For each $\theta \in T$, we let $H_0(\theta)$ be $-\Delta$ on $\mathcal{H}(\theta)$, with core given by $\psi \in L^2(D)$ with smooth extensions to $\mathbb{R}^d$ satisfying $\psi(x + a_j) = e^{i\theta_j}\psi(x)$ for all $j$ and $x \in \mathbb{R}^d$. Letting $H(\theta) = H_0(\theta) + V$, we have the unitary equivalence recalled below. These properties of the direct integral decomposition for periodic operators are enough to prove Theorem 6.3. We refer the interested reader to [23] for more details about this decomposition.
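The unitary equivalence being invoked is the standard Floquet-Bloch direct-integral decomposition; in its textbook form (cf. [23], rather than a formula specific to this paper) it reads
$$U\, H\, U^{*} \;=\; \int_{T}^{\oplus} H(\theta)\, d\theta, \qquad T = [0, 2\pi)^{d-1},$$
where $U$ is the Gelfand transform implementing the decomposition, and the analogous formula holds for $H_0$.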
Following the proof of Theorem 1.8 in [14], Theorem XII.85 of [23] implies that the fiber wave operators $\Omega^{\pm}(\theta)$ exist for almost every $\theta$. Thus, for any $\psi \in L^2(\mathbb{R}^d)$, the inner integral goes to $0$ as $t \to \pm\infty$ since $\Omega^{\pm}(\theta)$ exists, so that by the dominated convergence theorem we see that
$$\Omega^{\pm} = U \left(\int_T^{\oplus} \Omega^{\pm}(\theta)\, d\theta\right) U^{*}.$$
It follows that $\mathcal{H} = \operatorname{Ran}\Omega^{-} \oplus \mathcal{H}_s$.

6.3. Transient surface states. In this section, we exhibit a potential that induces states in $\mathcal{H}_{sur} \setminus \mathcal{H}'_{sur}$. Furthermore, we show that one can build a potential with states that propagate in the transverse direction arbitrarily slowly, in a sense specified below. Potentials of this class were originally considered by Yafaev [29].
Remark 6.4. One may also construct examples of potentials supported inside a strip for which $\mathcal{H}_{sur} \setminus \mathcal{H}'_{sur} \neq \emptyset$. However, we consider the example below for the sake of computational simplicity.

Let $h(y)$ be the operator on $L^2_x(\mathbb{R})$ given by
$$h(y) = -\frac{d^2}{dx^2} + V(x,y).$$
Solving directly, we find that for some $E < 0$, there is a normalized $\varphi_0(x)$ such that $h(0)\varphi_0 = E\varphi_0$ and $\varphi_0(x) = C e^{-c|x|}$ for $|x| \ge 1$. By rescaling, we see that for all $y \in \mathbb{R}$,
$$h(y)\,\psi(x,y) = \langle y\rangle^{-2\alpha} E\, \psi(x,y), \qquad \text{where } \psi(x,y) = \langle y\rangle^{-\alpha/2}\,\varphi_0(\langle y\rangle^{-\alpha} x).$$
Define $J : L^2_y(\mathbb{R}) \to L^2(\mathbb{R}^2)$ by $(Jf)(x,y) = \psi(x,y) f(y)$. By Theorem 15.1 in [30], since $\alpha < \frac12$ and because $\varphi_0(x)$ clearly satisfies the required derivative bounds for all $k \le 2$, there exists a phase function $\Xi(y,t) : \mathbb{R}^2 \to \mathbb{R}$ such that the modified wave operator
$$\tilde{\Omega} f = \lim_{t\to\infty} e^{itH} J\, U_0(t) f$$
exists for all $f \in L^2_y(\mathbb{R})$, where $U_0(t)$ is the modified free evolution built from $\Xi$. Moreover, $\operatorname{Ran}\tilde{\Omega}$ is orthogonal to $\operatorname{Ran}\Omega^{-}$ and therefore lies in $\mathcal{H}_{sur}$.
To specify the spatial distribution of states in $\operatorname{Ran}(\tilde{\Omega})$, for $\beta > 0$ we let
$$\mathcal{H}_{sur,\beta} = \{\varphi \in \mathcal{H} : \lim_{t\to\infty} \|\chi_{S_{t^\beta}^{c}}\, e^{-itH}\varphi\| = 0\}.$$
Intuitively, if $\varphi \in \mathcal{H}_{sur,\beta}$ then at time $t$ it is localized within a strip of width $t^\beta$.
Proposition 6.5. Suppose that $\varphi \in \operatorname{Ran}(\tilde{\Omega})$ with $\varphi \neq 0$. Then $\varphi \in \mathcal{H}_{sur,\beta}$ for all $\beta > \alpha$ but not for $\beta \le \alpha$. Moreover, $\varphi \in \mathcal{H}_{sur} \setminus \mathcal{H}'_{sur}$.

Remark 6.6. The above proposition says that states in $\operatorname{Ran}(\tilde{\Omega})$ are localized at time $t$ in a strip of width $t^{\alpha+\varepsilon}$ for any $\varepsilon > 0$, but not in a strip of width $t^{\alpha}$. In other words, such states propagate in the transverse direction at a rate proportional to $t^{\alpha}$. Thus, by modulating the decay of $V$ in the longitudinal direction, i.e., by choosing $\alpha$, one can create states that propagate in the transverse direction arbitrarily slowly.
Proof. For $\varphi \in \operatorname{Ran}(\tilde{\Omega})$, there exists some $f \in L^2_y(\mathbb{R})$ such that $\lim_{t\to\infty} \|e^{-itH}\varphi - J U_0(t) f\| = 0$, so it suffices to show that $\lim_{t\to\infty} \|\chi_{S_{t^\beta}^{c}} J U_0(t) f\| = 0$ for $\beta > \alpha$, and that the corresponding limit is positive for $\beta \le \alpha$. To see this, note that the transverse profile enters only through the function
$$g(r) = \int_{|x| > r} |\varphi_0(x)|^2\, dx.$$
Clearly $g(0) = 1$, $g(\infty) = 0$, and $g \ge 0$. By taking $r = t^\beta$ for some $\beta > 0$, we see that $\|\chi_{S_{t^\beta}^{c}} J U_0(t) f\|^2$ can be written as an integral of $g$ against $|f(y)|^2$. Given this identity, by the dominated convergence theorem we need only take the limit as $t \to \infty$ under the integral for different values of $\beta$. For $\beta > \alpha$, this integrand goes to $0$ pointwise as $t \to \infty$, so the limit vanishes. Conversely, for $\beta < \alpha$ the integrand goes pointwise to $g(0)|f(y)|^2$, and for $\alpha = \beta$ to $g(|2y|^{-\alpha})|f(y)|^2$, both of which integrate to a positive quantity, i.e.,
$$\lim_{t\to\infty} \|\chi_{S_{t^\beta}^{c}} J U_0 f\| > 0.$$
Finally, by choosing $0 < \beta < \alpha$, we see that $\varphi \notin \mathcal{H}'_{sur}$, which completes the proof.

6.4. Potentials without surface states. Suppose that $V$ satisfies smallness conditions (1) and (2): roughly, that there are constants $C, C' > 0$ controlling $V$ in terms of the generators $x_j \frac{\partial}{\partial x_j}$, where $H^2$ is the Sobolev space of order two. Then the wave operators $\Omega^{\pm}$ exist and define a unitary equivalence between $H$ and $H_0$.
The condition (1) implies that outside of a compact neighborhood of the origin, $V(x)$ must be bounded by some dimensional constant. Therefore, the above conditions may be regarded as imposing some sort of smallness on $V$.

6.5. Random surface potentials. In this section, we summarize some results from [10] which show that almost surely $\mathcal{H}_{sur}$ is infinite dimensional for certain classes of random surface potentials. To this end, let $H(\omega) = H_0 + V_\omega$ be the random operator on $\mathbb{R}^d$ given by a potential of the form sketched after the assumptions below, where $f$, the single site potential, satisfies
(1) $f \ge 0$ and $f > \sigma > 0$ on some non-empty open set.
(2) $f \in L^p(\mathbb{R}^d)$ for $p \ge 2$ if $d \le 3$ and $p > \frac{d}{2}$ if $d > 3$;
and the random coefficients $q_k$ satisfy
(1) The $q_k(\omega)$ are i.i.d. random variables with distribution given by a measure $\mu$ such that $\operatorname{supp}\mu = [q_{min}, 0]$ for some $q_{min} < 0$.
(2) $\mu$ is Hölder continuous.
(3) There exist $C, \tau > 0$ such that for all $\varepsilon > 0$, $\mu([q_{min}, q_{min} + \varepsilon]) \le C\varepsilon^{\tau}$.
One can show that almost surely $\sigma(H(\omega))$ has a deterministic infimum, which is negative. Under these assumptions, [10] shows that $H(\omega)$ almost surely has an infinite family of eigenvalues below $0$. Because eigenfunctions are clearly surface states, for instance by Proposition 6.1, this demonstrates that random models can induce an infinite dimensional space of surface states.
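For orientation, a standard way to realize such a random surface potential (an illustrative form consistent with the assumptions above, not necessarily the verbatim formula of [10]) is
$$V_\omega(x) \;=\; \sum_{j \in \mathbb{Z}^{k}} q_j(\omega)\, f\big(x^{\|} - j,\; x^{\perp}\big),$$
with the randomness attached to lattice sites along the surface $\mathbb{R}^{k} \times \{0\}$ only; the negative couplings $q_j$ are what produce spectrum, and hence eigenfunctions, below $\inf\sigma(H_0) = 0$.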
Appendix A. Properties of Phase Space Observables
In this appendix we prove several properties of the phase space observables $P_\delta(E)$ that we use above. We recall that we choose $\eta \in \mathcal{S}(\mathbb{R}^d)$ such that $\|\eta\| = 1$ and $\operatorname{supp}\hat{\eta} \subset B_1$, and $\eta = \eta^{\|} \otimes \eta^{\perp}$. Let $\eta_\delta$ be such that $\hat{\eta}_\delta(p) = \delta^{-d/2}\hat{\eta}(p/\delta)$, a rescaling of $\eta$, so that $\operatorname{supp}\hat{\eta}_\delta \subset B_\delta$ and $\|\eta_\delta\| = 1$. Now define the family of coherent states $\eta_{x,p;\delta}$ by translating $\eta_\delta$ in phase space, as in Section 2.2. We use this to define a family, depending on $\delta > 0$, of positive-operator-valued measures as in [7], which serve as phase space observables. For any Borel $E \subset \mathbb{R}^{2d}$ and $\psi \in \mathcal{H}$ let
$$P_\delta(E)\psi = (2\pi)^{-d} \int_E \langle \eta_{x,p;\delta}, \psi\rangle\, \eta_{x,p;\delta}\; dx\, dp.$$

Proposition A.1. We have the following equality:
$$(2\pi)^{-d} \int_{\mathbb{R}^{2d}} |\langle \eta_{x,p;\delta}, \psi\rangle|^{2}\, dx\, dp = \|\psi\|^{2}.$$

Proof. If we denote by $\mathcal{F}(\cdot)$ the Fourier transform, then $\langle \eta_{x,p;\delta}, \psi\rangle$ may be computed on the Fourier side. So, the proposition follows directly from Plancherel, as needed.
Next we want to be able to bound the operator norm of $A P_\delta(E)$ for another operator $A$:

Proposition A.5. For any $\delta > 0$, any operator $A$, and any Borel set $E \subset \mathbb{R}^{2d}$:
$$\|A P_\delta(E)\|_{op}^{2} \le (2\pi)^{-d} \int_E \|A\, \eta_{x,p;\delta}\|^{2}\; dx\, dp.$$
Proposition A.6. Let $\psi \in \mathcal{H}$ be such that $\operatorname{supp}\hat{\psi} \subset D^{\|} \times D^{\perp} = D$.

Proof. The first equality follows directly from the fact that
$$\langle \eta_{x,p;\delta}, \psi\rangle = \int_{\mathbb{R}^d} e^{ix\xi}\, \hat{\eta}_\delta(\xi - p)\, \hat{\psi}(\xi)\, d\xi = 0$$
for $p$ outside $D + B_\delta$, since $\operatorname{supp}\hat{\eta}_{x,p;\delta} \subset B_\delta + p$. Similarly, the second equality comes from the fact that for any $\varphi \in \mathcal{H}$,
$$\operatorname{supp} \mathcal{F}\big(P_{\delta_2}(F)\varphi\big) \subset D + B_{\delta_2},$$
and an application of the first equality.
Proposition A.7. For any $\delta > 0$ and any Borel set $D \subset \mathbb{R}^d$, suppose that $E \subset D^{\|} \times \mathbb{R}^{k} \times D^{\perp} \times \mathbb{R}^{d-k}$ is a Borel set, and denote $D = D^{\|} \times D^{\perp}$. Then for any $\varphi \in \mathcal{H}$,
$$\|P_\delta(E)\varphi\|^{2} \le \langle (|\eta_\delta|^{2} * \chi_D)\,\varphi,\; \varphi\rangle. \quad (A.1)$$

Proof. The inequality (A.1) is a result of the fact that $P_\delta(E)^2 \le P_\delta(E)$ (which is easy to establish since $0 \le P_\delta(E) \le \mathrm{id}$). For the following claims, suppose that $\eta = \eta^{\|} \otimes \eta^{\perp}$, where $\eta^{\|}$ and $\eta^{\perp}$ are functions in $\mathcal{S}(\mathbb{R}^k)$ and $\mathcal{S}(\mathbb{R}^{d-k})$, respectively, of $L^2$ norm $1$. It is easy to see that in this case
$$\eta_{x,p;\delta}(y) = \eta^{\|}_{x^{\|},p^{\|};\delta}(y^{\|})\; \eta^{\perp}_{x^{\perp},p^{\perp};\delta}(y^{\perp}),$$
where the shifted functions $\eta^{\|}_{x^{\|},p^{\|};\delta}(y^{\|})$ and $\eta^{\perp}_{x^{\perp},p^{\perp};\delta}(y^{\perp})$ are defined analogously to before. Furthermore, $P^{\|}_\delta$ and $P^{\perp}_\delta$ are defined as operators on $L^2(\mathbb{R}^k)$ and $L^2(\mathbb{R}^{d-k})$, respectively, in the obvious way.
Proposition A.8. Under the above choice of $\eta$, if $E = E^{\|} \times E^{\perp} \subset \mathbb{R}^{2k} \times \mathbb{R}^{2(d-k)}$ then we have
$$P_\delta(E) = P^{\|}_\delta(E^{\|}) \otimes P^{\perp}_\delta(E^{\perp}).$$

Proof. For $\psi^{\|} \in L^2(\mathbb{R}^k)$ and $\psi^{\perp} \in L^2(\mathbb{R}^{d-k})$, the defining integral factors:
$$P_\delta(E)(\psi^{\|} \otimes \psi^{\perp}) = P^{\|}_\delta(E^{\|})\psi^{\|} \otimes P^{\perp}_\delta(E^{\perp})\psi^{\perp}.$$
Since $P_\delta(E)$ acts as claimed on elementary tensors, the claim is established by the definition of the tensor product of two operators.
Corollary A.9. For any $\delta > 0$, let $A = B \otimes C$ where $B$ is an operator acting on $L^2(\mathbb{R}^k)$ and $C$ acts on $L^2(\mathbb{R}^{d-k})$. Then for $E$ of the above form,
$$\|A P_\delta(E)\|_{op}^{2} \le (2\pi)^{-d} \int_{E^{\|}} \|B\,\eta^{\|}_{x,p;\delta}\|^{2}\; dx^{\|}\, dp^{\|} \cdot \int_{E^{\perp}} \|C\,\eta^{\perp}_{x,p;\delta}\|^{2}\; dx^{\perp}\, dp^{\perp}$$
and
$$\|P_\delta(E)\|_{op} = \|P^{\|}_\delta(E^{\|})\|_{op}\; \|P^{\perp}_\delta(E^{\perp})\|_{op}.$$

Proof. This is immediate from Proposition A.8 and Proposition A.5.
Appendix B. Potentials that Decay in $x^{\perp}$
In this appendix, we explain how our proofs may be adjusted to accommodate potentials satisfying the decay condition (B.1) in $x^{\perp}$. To see the existence, or part (i) of Theorem 1.1, for such potentials, we fix $\varepsilon \in (0, 2\alpha)$ and change inequality (3.1) so that it reads
$$\|V e^{-itH_0}\psi\| \le M \|\psi^{\|}\|\; \|\chi_{B_{\varepsilon t}}\, e^{-itH_0^{\perp}}\psi^{\perp}\| + \|V \chi_{S_{\varepsilon t}^{c}}\|_{op}\, \|\psi\|.$$
The condition on $\varepsilon$ guarantees that $2\alpha > \varepsilon$, which allows us to bound the first summand in the above by $C(1+t)^{-\ell+d}$ for any $\ell > 0$ (compare to inequality (3.2)). This, combined with the condition (B.1), lets us conclude the existence of the wave operators.
For part (ii) of Theorem 1.1, the proof of Lemma 4.2 must be modified by fixing $\varepsilon < \frac{1}{8}$ and replacing (4.3) by
$$\|(\Omega^{-} - \mathrm{id})\,\varphi_{n;out}\| \le M \int_0^{\infty} \|\chi_{S_{\varepsilon(n+mt)}}\, e^{-itH_0}\varphi_{n;out}\|\, dt + \int_0^{\infty} \|V \chi_{S_{\varepsilon(n+mt)}^{c}}\|_{op}\, \|\varphi_{n;out}\|\, dt.$$
Again, the second summand decays as per condition (B.1). For the first summand, we must only change Claim 4.4 to allow $y \in S_{\varepsilon(n+mt)}$, which is achieved via the restriction on $\varepsilon$. A similar adjustment will give the result for Lemma 5.6. After this, the proof works as written. | 2022-04-05T01:16:06.433Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "a48e531933b26a336bc14ae06aad6609397b8bec",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a48e531933b26a336bc14ae06aad6609397b8bec",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
234531618 | pes2o/s2orc | v3-fos-license | Management Information System of IMBI Puskesmas , Jayapura Utara District
Puskesmas Imbi Jayapura, in carrying out its duties and functions, is supported by patient registration management, management of each polyclinic, pharmacy management, laboratory management, and administrative management. The recording process in all of these units is still manual, and patient registration requires prospective patients to come directly to the Puskesmas; this manual registration process means that providing health services at the Puskesmas takes a long time. A system is therefore needed that can support all management activities, so that patients do not accumulate at registration, services are delivered faster, and patient handling and reporting are accurate. This study enables Puskesmas Imbi to serve prospective patients who wish to register online using the website, to manage medical records at each clinic, the pharmacy and the laboratory quickly and precisely, and to produce patient reports at Puskesmas Imbi quickly and accurately. This research produced the Puskesmas Imbi Jayapura Management Information System, which is used by Puskesmas officers to manage all Puskesmas activities.
Introduction
The Puskesmas is a government agency engaged in public health services at the village level. The role of the Puskesmas is significant in supporting the performance of the health agencies above it, such as hospitals, in efforts to prevent and control public health problems. To improve the quality of health services at the Puskesmas level in particular, a good concept or system is needed so that quality, effective and efficient health services can be realized and the performance of the Puskesmas can improve [1].
Management at the Imbi Health Center still has deficiencies: records for each polyclinic are kept in books, so reporting is neither quick nor precise because it takes a long time; patient registration can only be done by prospective patients coming directly to the Imbi Puskesmas; and the search for and preparation of outpatient cards and medical cards still uses handwriting. Because patient data is written in books, processing patient registration at the Puskesmas Imbi registration counter takes quite a long time [2].
Literature review 2.1 Previous Research
The first previous study, "Application of Management Information Systems at Putri Hijau Hospital Medan", notes that health data collection at the Putri Hijau Hospital in Medan is still very difficult, so fast, precise and accurate health information is still scarce; by implementing a computerized information system, management can produce fast, detailed and precise information. That study aims to determine the factors that influence the application of management information systems at the Putri Hijau Hospital in Medan. The second study, "Analysis of the implementation of the hospital management information system (SIMRS) at Kardinah Tegal Hospital", discusses the hospital information system, which has an essential role in clinical and administrative services. Hospitals need a Management Information System (SIM) to improve the quality of medical services [3]. The Hospital Management Information System (SIMRS) is designed to integrate the hospital's main functions into one unified system stored in a central database. Human resources need to be improved by training staff in the use of SIMRS so that it is used optimally for clinical functions and supports comprehensive patient services at the Kardinah Tegal Regional General Hospital. The third study is "Management Information System of Kencong Health Center (SIMPUS) in Jember Regency Using the End-User Computing (EUC) Satisfaction Method".
This third study discusses user perceptions of the Puskesmas Management Information System (SIMPUS): the dental clinic needs a clinical odontogram added, and the emergency room module needs fields for the time the patient arrives at the health service facility, the patient's introductory identity, and the patient's health summary before leaving the emergency room. Users made errors such as in writing the date of birth, address, drug name and drug type, which caused the data to be inaccurate, and the ER could not divide its time between filling out SIMPUS and serving patients [4]. All respondents said that the Kencong Health Center SIMPUS format was simple, good and suitable for its users, and that by using SIMPUS the work was punctual [5].
There needs to be training and user support, end-user involvement, consideration of special units, the involvement of doctors, implementation of SOPs, and socialization to help Puskesmas management optimize usage. Based on this background, the problem is formulated as follows: the patient registration process takes quite a long time because it must be recorded in a book, so patients accumulate at the registration counter; prospective patients must come directly to the Imbi Health Center if they want to register; each polyclinic, the laboratory, the administration office and the pharmacy still keep records by hand, so health services and reports take a long time to produce; and there is no management information system at the Imbi Health Center for processing data, registering patients and producing reports quickly and accurately [6].
The purpose of this research is to deliver patient registration services quickly so that patients do not accumulate at the Imbi Health Center counters, to serve prospective patients who wish to register online using the website, to manage medical records at each clinic, the pharmacy and the laboratory quickly and accurately, and to produce patient reports at Puskesmas Imbi quickly and accurately [7]. The benefits of this research are that the patient registration counters can work effectively and efficiently using the system created; prospective patients can register online without having to come directly to the Imbi Health Center counters; doctors, polyclinic officers, pharmacists and laboratory officers can record patient medical records more effectively and efficiently; and the management of Puskesmas Imbi can immediately view all periodic reports.
Management Information System
A SIM (management information system) is a set of interconnected subsystems that gather together to form one unit, interacting and collaborating with one another in certain ways to perform data-processing functions: receiving input in the form of data, processing it, and producing output in the form of information that serves as a basis for decision making, with real value that can be felt both at the time and in the future. It supports the operational, managerial and strategic activities of the organization by utilizing the various resources available for these functions in order to achieve its goals [8].
Public Health Center
The Puskesmas is a functional organizational unit that organizes health efforts that are comprehensive, integrated, evenly distributed and affordable for the community, with active community participation and using the results of the development of appropriate science and technology, at a cost that can be borne by the government and the wider community, in order to achieve optimal health status without neglecting the quality of services to individuals, in accordance with geographic conditions, area size, transportation facilities and population density in the working area of the Puskesmas. So that the reach of Puskesmas services is more even and wider, Puskesmas need to be supported by auxiliary Puskesmas, the placement of midwives in villages that are not yet covered by existing services, and mobile Puskesmas. In addition, community participation is mobilized to manage Posyandu [9].
Methodology 3.1 Data Collection
The data collection methods used in this study were interviews, conducted as questions and answers with related sources; observation, by making direct observations at the research site; and literature study, by reviewing previous references that support this research [10].
Current System Analysis
The system flow currently running can be seen in the following figure:
Figure 1. Current System Flow Map
The figure above shows the flow map of the system currently running in the general polyclinic: the patient submits an outpatient card to the general polyclinic officer, the officer records the patient's data, and the outpatient card is given to the doctor, who examines the patient and writes down the diagnosis and a drug prescription. The prescription is then given to the patient, who submits it to the pharmacist, and the pharmacist finds the drug to be given to the patient.
PIECES analysis
Based on the results of interviews, observations and literature studies, the results of the analysis of the current system requirements and the proposed system using the PIECES framework for six aspects (Performance, Information, Economy, Control, Efficiency, Service) were obtained. This analysis is used to get more specific root causes and symptoms of the problem because it uses performance, information, economy, system security, efficiency and service aspects. The aspects that have been mentioned above will be analyzed one by one so that the main problems that exist can be identified [11]. This is important because usually what appears on the surface are only symptoms of the main problem.
3.4. System planning
The following is a use case diagram for the Management Information System at the Imbi Health Center. The use case diagram for patient registration management consists of one actor, the counter clerk, and 21 use cases; the counter clerk can enter the counter clerk's page after logging in successfully [12]. After logging in, the counter clerk can perform patient registration management activities consisting of adding, editing, deleting, viewing and searching for JKN, KPS and private patients; printing JKN, KPS and private patient treatment cards; and viewing, searching and printing reports. The use case diagram of patient registration management can be seen in the following figure:
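The paper does not publish source code or name its technology stack, so the following is only a hypothetical sketch of the registration flow captured by the use case diagram; the web framework (Flask), route names and field names are all illustrative assumptions rather than the system's actual implementation.

# Hypothetical sketch of the online patient-registration flow described above.
# Flask and all route/field names here are assumptions for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

patients = {}   # in-memory stand-in for the Puskesmas database table
next_id = 1

PATIENT_TYPES = {"JKN", "KPS", "private"}   # the three categories in the use case

@app.route("/register", methods=["POST"])
def register_online():
    """Online registration: a prospective patient submits the form from home,
    so no queue forms at the counter (the paper's stated goal)."""
    global next_id
    data = request.get_json(silent=True) or {}
    if data.get("type") not in PATIENT_TYPES:
        return jsonify(error="type must be JKN, KPS or private"), 400
    record = {"id": next_id, "name": data.get("name"), "type": data["type"], "visits": []}
    patients[next_id] = record
    next_id += 1
    return jsonify(record), 201

@app.route("/patients/<int:pid>", methods=["GET"])
def view_patient(pid):
    """Counter clerk looks up an existing patient instead of searching paper cards."""
    record = patients.get(pid)
    if record is None:
        return jsonify(error="not found"), 404
    return jsonify(record)

if __name__ == "__main__":
    app.run(debug=True)

In the described system, the remaining use cases (editing, deleting and card printing for each patient category, plus report viewing) would follow the same pattern, one route per use case, with the in-memory dictionary replaced by a database.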
Results and Discussion 4.1 Home Page Views
The home page is the main page of the Imbi Jayapura Health Center Management Information System which is displayed to users without having to log in first. The home page design can be seen in the following image:
Display of JKN Patient Online Registration Page
The JKN patient online registration page is the page used by JKN patients who are going for treatment for the first time to register online without having to come directly to the Puskesmas. The appearance of the JKN patient online registration page can be seen in the following image:
Display of the Online Treatment Registration Page for JKN patients
The online registration page for JKN patients is a page used by JKN patients who have already been treated to register online without having to come directly to the puskesmas. The appearance of the online registration page for JKN patients can be seen in the following image:
Login Page Views
The login page is a page where users or officers at the Imbi Health Center must log in first to access the Imbi Jayapura Health Center information system. The login page display can be seen in the following image:
Admin Main Page Display
The main page is a page that displays all menus in the Management Information System of the Imbi Jayapura Health Center which will be displayed to users who have access rights to login. The main page display can be seen in the following image:
Views of the New JKN Patient Registration Page
The new JKN patient registration page is the page used to register JKN patients for the first time being treated at the Puskesmas counter. The appearance of the new JKN patient registration page can be seen in the following image:
Display of Old JKN Patient Treatment Registration Page
The JKN patient treatment registration page is the page used to register JKN patients who have already been treated at the Puskesmas counter. The appearance of the old JKN patient registration page can be seen in the following image: The General Poly page is a page used by General Poly officers and general practitioners (GPs) who have access by logging in to the Imbi Health Center management information system to manage patient data in the General Poly. The General Poly page view can be seen in the following image: Figure 10. General Poly page view
Dental Clinic Page Views
The Dental Polyclinic page is a page used by Dental Polyclinic officers and dentists who have access by logging in to the Imbi Health Center management information system to manage patient data at the Dental Clinic. The appearance of the Dental Poly page can be seen in the following image: Figure 11. Display of Dental Clinic page
MTBS Poly Page Views
The MTBS Poly page is a page used by MTBS Poly officers and MTBS doctors who have access by logging in to the Imbi Health Center management information system to manage patient data in the MTBS (integrated management of childhood illness, IMCI) polyclinic. The MTBS Poly page view can be seen in the picture:
Nutrition Poly Page Views
The Poli Gizi (Nutrition Poly) page is a page used by Poli Gizi officers who have access by logging in to the Imbi Health Center management information system to manage patient data at the Nutrition Poly. The Nutrition Poly page display can be seen in the following image: Figure 13. Display of Nutrition Poly page
KIA Poly Page Views
The KIA Poly page is the page used by the KIA (maternal and child health, MCH) Poly officers who have access by logging in to the Imbi Health Center management information system to manage patient data at the MCH Poly. The appearance of the KIA Poly page can be seen in the following figure:
Pharmacy Page Views
The pharmacy page is a page used by pharmacists who have access by logging in to the Imbi Health Center management information system to manage drug data and prescriptions from each Poly at the pharmacy. The pharmacy page display can be seen in the following image: The laboratory page is a page used by laboratory personnel who have access by logging in to the Imbi Health Center management information system to manage patient data in the laboratory. The laboratory page display can be seen in the following image: Figure 16. Display of Pharmacy page
Incoming Mail Page Views
The incoming letter page is the page used by administrative officers who have access by logging in to the Imbi Health Center management information system to manage incoming mail data for administration. The page view of incoming mail can be seen in the following image:
Outgoing Mail Page Views
The outgoing letter page is the page used by administrative officers who have access by logging in to the Imbi Health Center management information system to manage outgoing mail data in administration. Outgoing mail page display can be seen in the following image:
Report Page Views
The report page is the page used by the Puskesmas management who has access by logging into the Imbi Health Center management information system to manage patient-related report data at the Imbi Health Center. The report page display can be seen in the following image: Figure 19. The report page view
User Data Page Views
The user data page is a page used by admins who have access by logging in to the Imbi Health Center management information system to manage user data in the Imbi Jayapura Health Center management information system. The user data page display can be seen in the following image:
Closing
Finally, the following conclusions can be drawn: a. After testing the system, patients can register online through the Imbi Health Center Management Information System. b. Counter clerks, general polyclinic officers, dental polyclinic officers, officers of the integrated management of childhood illness (MTBS) polyclinic, maternal and child health polyclinic officers, nutritionists, pharmacists, laboratory officers, administration officers, Puskesmas management and admins can use the Management Information System | 2021-05-16T00:04:16.571Z | 2020-11-19T00:00:00.000 |
"year": 2020,
"sha1": "575bc2bfcfd15d3ea719dc610651e76062bc21a2",
"oa_license": "CCBY",
"oa_url": "https://aptikom-journal.id/index.php/conferenceseries/article/download/399/150",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d065405a1cefb6751e4501123f05c0d3e494d83a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business"
]
} |
59407447 | pes2o/s2orc | v3-fos-license | Variable impact of Malaysia’s national language planning on non-Malay speakers in Sarawak
The study examined the impact of the national language policy on the language use of three main ethnic groups in the Malaysian state of Sarawak. The data analyzed was based on a sociolinguistic survey on language use in six domains that involved 937 Malay, Chinese and Iban adolescents from three major towns in Sarawak. The results showed that the use of Bahasa Malaysia exceeded English usage for all three ethnic groups, showing the success of compulsory education in the national language. However, the language planning has greater impact on the Iban than on the Chinese who are shifting away from the ethnic languages of the Chinese sub-groups to Mandarin Chinese. The availability of an alternative standard language with international standing which also functions as a symbol of cultural solidarity compromises the impact of the national language policy.
Introduction
In the context of status planning and language implementation, it is important to ensure the adoption and spread of the language form that has been selected and codified. "It is not enough to devise and implement strategies to modify a particular language situation; it is equally important to monitor and evaluate the success of the strategies and progress shown toward implementation" (KAPLAN; BALDAUF, 1997, p. 37). Studies on language implementation across settings can serve to monitor policy success and inform language planning theory. For example, in the Southeast Asian region, the promotion of Bahasa Indonesia as the national language of Indonesia has succeeded as more and more urban, middle-class, indigenous families in Java and elsewhere are adopting Indonesian as the home language (OETOMO, 1988, cited in OETOMO, 1991). Census information from 1971, 1980 and 1990 shows an increase in the knowledge of Indonesian and a concomitant decline in the knowledge of Javanese, Sundanese, Madurese, Batak, Buginese, Minangkabau and other languages among the people of Indonesia (STEINHAUER, 1994). Steinhauer attributed the success of Indonesian to the fact that it has never been the language of a specific dominant group and hence cannot be stigmatised as the language of a culturally or economically identifiable section of the population. In neighbouring Thailand, language planning has somewhat succeeded in shifting ethnic labelling, as some Thai people of Chinese descent describe their grandparents as Chinese Teochew but themselves as Thai Chinese. Studies by Morita (2003) revealed that the Chinese elite and the Thai-born Chinese identified with the Thai rather than with the Chinese. Many Chinese and people of mixed Thai and Chinese ancestry have experienced language shift to Thai and no longer learn Chinese to use at home (MORITA, n.d.). This language shift is a result of the decline of Chinese education, rejection by the China-born Chinese and the government's pro-Thai campaign (MORITA, 2004). Unlike Thailand, ethnic delineation is still obvious in Singapore despite the adoption of English or Mandarin Chinese as a language of daily communication. The Singapore government's definition of bilingualism means "being proficient in English and one's 'ethnic mother tongue' (Mandarin, Malay or Tamil) as a cultural language" (CHUA, 2004, p. 68). Research has shown that the Speak Mandarin campaign and the bilingual education policy introduced in 1966 have resulted in the young Chinese using Mandarin in place of the languages of the Chinese sub-groups such as Hokkien and Teochew (e.g., CHUA, 2009; KUO; JERNUDD, 2003; LI; SARAVANAN; NG, 1997; RINEY, 1998). Similarly, in the Philippines there is increasing use of Filipino, the national language, despite earlier resistance (see HILDAGO, 1998).
Thus far, the review of key studies in the Southeast Asian region indicates that status planning for the national language has succeeded to different levels in various settings. Without a common framework, comparison of detailed descriptions across disparate settings is not easy. A common framework allows "field researchers to collect and compare data to the extent such data can be comparable across countries" (LAITIN, 2000, p. 154). An important framework that has emerged is the strategic model of language choice based on game theory developed by Laitin (1992), a political scientist interested in language policy outcomes in multilingual settings (KAMWANGAMALU, 2011) and in using language as a proxy for ethnicity in order to study the link between ethnic heterogeneity and civil war (see FEARON; LAITIN, 1996; 2003). Game theory emphasises strategic choice based on the expected utility model of decision making and links it to the concept of equilibrium to generate predictions (MUNCK, 2001).
The game theory of language regimes applied to national language programmes conceptualises "economic pay-offs, local honour [cultural solidarity], and external acceptance [as] the three components of a language choice utility function" (LAITIN, 1993, p. 232). Laitin explained that in making a rational language choice, an individual seeks the highest returns possible through rational calculation, and the choices are seldom binary except in the case of the medium of education for children. Based on this analysis, Laitin concluded that the multilingual repertoire which includes a global and a national or regional language is an efficient equilibrium in the emerging world system of language. Laitin (1992) predicted that market forces would push multilingual countries to formulate policies geared towards a 3+1 outcome, with the 3-1 outcome for citizens whose mother tongue is the same as the national language and the 3+1 outcome for the others (KAMWANGAMALU, 2011). This paper adopts Laitin's (1993) game theory on language policy outcomes as its theoretical framework.
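Laitin's three-component utility function is easy to caricature computationally. The sketch below is a toy model offered for exposition only: the repertoires, component scores and weights are invented, not taken from Laitin (1993).

# Toy expected-utility model of language-repertoire choice in the spirit of
# Laitin (1993). All numbers are invented for illustration.
REPERTOIRES = {
    # repertoire: (economic_payoff, cultural_solidarity, external_acceptance)
    "ethnic only":                (0.2, 1.0, 0.1),
    "ethnic + national":          (0.6, 0.9, 0.5),
    "ethnic + national + global": (0.9, 0.8, 0.9),  # Laitin's 3+1 outcome
}

def utility(payoffs, weights=(0.4, 0.3, 0.3)):
    """Weighted sum of the three components of the language-choice utility."""
    return sum(w * p for w, p in zip(weights, payoffs))

best = max(REPERTOIRES, key=lambda r: utility(REPERTOIRES[r]))
print(best)  # -> 'ethnic + national + global' under these toy weights

Under any weighting that gives economic pay-offs and external acceptance non-trivial weight, the multilingual repertoire dominates, which is the intuition behind Laitin's predicted 3+1 equilibrium.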
Aim of the study
This study aimed to compare the impact of the implementation of the national language policy on the language use of the three main ethnic groups in the Malaysian state of Sarawak: the Malay, the Iban and the Chinese. As the national language of Malaysia (Bahasa Malaysia) is the standard language of the Malay speech community, their language use patterns are taken as the basis for comparison with the two non-Malay groups.
In this paper, the term "Malay languages" encompasses Bahasa Malaysia and other varieties of the Malay language, including the regional Malay variety spoken in Sarawak which the speakers refer to as "Sarawak Malay Dialect", "Local Malay" or "Bahasa Melayu Sarawak".The abbreviated version, Bahasa Sarawak, is used in this paper.The term "Chinese languages" refers to Mandarin Chinese, which is the standardised Chinese language and the languages of the Chinese sub-groups such as Foochow, Hakka and Hokkien.Although Foochow, Hakka and Hokkien are referred to as dialects by Chinese speakers, this paper keeps to the usage of "languages of the Chinese sub-groups" to avoid having to differentiate between languages and dialects.The term "Indigenous languages" is used to include the languages spoken by the Indigenous groups in Sarawak, for example, Iban, Bidayuh, Kayan and Kelabit.
Sociocultural background of Sarawak
Sarawak is a Malaysian state located on the island of Borneo, flanked by Kalimantan in the East and Sabah in the North. The other part of Malaysia is Peninsular Malaysia, located south of Thailand. Out of the population of 2 million in Sarawak, the Iban constitute 29.2%, the Chinese 25.5% and the Malays 22.7% (Department of Statistics Malaysia, Sarawak, 2011). The Iban is the largest indigenous group in Sarawak. The languages of the Iban and other indigenous groups in Sarawak are mutually unintelligible. The Chinese comprise several sub-groups such as the Foochow, Hakka, Hokkien, Teochew and Cantonese, each with their respective languages, which are also mutually unintelligible. The difference is that the Chinese share a common standardised language, Mandarin Chinese. Those who go to Chinese schools can read and write Mandarin Chinese, but others who learn the language informally may not have written competency in it. The Malays in Sarawak speak different regional varieties of the Malay language, and those with formal education also speak and write Bahasa Malaysia. However, as the regional Malay variety that was used as the basis for developing the standardised Malay language was the Johor variety, Bahasa Malaysia is seen as a Peninsular Malaysian language in contrast to Bahasa Sarawak (TING, 2001). Despite regional variation in the Malay varieties spoken in different parts of Sarawak, Malay speakers can understand one another.
The Malay languages have more institutional support than the other languages because the ruling government of Malaysia has greater Malay representation than other ethnic groups. The political power held by the Malays accords vitality to the regional Malay variety. This advantage is augmented by the fact that language-in-education planning propagates Bahasa Malaysia as the official language of Malaysia. Bahasa Malaysia was instituted as the national and official language when the then Federation of Malaya gained independence from the British in 1957. The status of Bahasa Malaysia as the official language of Malaysia means that official communication by and with the government is conducted in Bahasa Malaysia. Later, when Sarawak joined the Federation of Malaysia in 1963, Bahasa Malaysia was adopted as the national language. Then, in 1985, the Sarawak State legislature agreed to use Bahasa Malaysia as the official language after infrastructural inadequacy and resistance were addressed (see LEIGH, 1974; PORRITT, 1997). Bahasa Malaysia was only introduced as the medium of instruction in Sarawak schools in 1977, at Primary One [Year One] level. By 1987, Bahasa Malaysia was used as the language of instruction up to Form Five [Year 11] (see TING, 2010a for further details). Because of the prevailing linguistic milieu in Sarawak, many government officers who had an English educational background are inclined to speak English in an official capacity, particularly those who are not Malays and hold positions at higher hierarchical levels (TING, 2007).
Subsequent to this, there was a remission in status planning whereby English was allowed restricted status as the medium of instruction for Science and Mathematics in 2003. Ong (2009) sees the 2003 language-switch policy for the teaching of Science and Mathematics in English in secondary school as a gradual shift back to the ideology of the early post-independence era, when state language management was characterised by English-Malay bilingualism. However, the concern with the widening performance gap between urban and rural students, which was affecting mainly Malay students, and pressure from language nationalists escalated into a cabinet decision on 8 July 2009 to revert to Bahasa Malaysia in national schools and mother-tongue languages in national-type schools from 2012 onwards (CHAPMAN, 2009; "Maths and Science back to Bahasa, mother tongues", 2009). The reversal to Bahasa Malaysia as the medium of instruction for Science and Mathematics signaled a return to the "national unity / integration / identity" (ONG, 2009, p. 211) agenda that is anchored by the national language.
An apparent exception to this national language policy is the use of English in higher education. In the tertiary educational scene, reforms have brought about the reinstatement of English as the medium of instruction in public universities since the 1990s (ONG, 2009). Flexibility in implementation is deemed necessary because of the globalisation of higher education and the need to be relevant to the international student market. Studies have shown that English is used, with some code-switching to Bahasa Malaysia, in some public universities for lectures, particularly in the sciences (YEO; TING, 2010).
Socioculturally, the Malay, Iban and Chinese communities in Sarawak are distinct, with some blurring of ethnic boundaries in urban centres due to the social transformation that has accompanied modernisation. The Chinese who migrated from China were mainly involved in agriculture and trade. The Chinese sub-groups lived in their respective enclaves. The Ibans were mainly farmers and concentrated in the Rejang River basin. The Malays were known to be fishermen and rice planters who lived along river banks. With modernisation, urban migration for better employment brought about the mingling of ethnic groups as they began to share work places and neighbourhoods. However, culturally they remained distinct.
Interethnic contact in Sarawak in rural areas may take place in the language of the numerically dominant community, but this may not be the case in the cosmopolitan urban areas. In earlier years, the common language of communication was English because of the remnants of the colonial influence. However, in later years, Bahasa Malaysia emerged as a shared language due to the formal teaching of the language in schools. The current scenario is the use of Pasar Malay (a pidginised form of the Malay language) by older Chinese speakers and Bahasa Malaysia by younger Chinese speakers, usually in the transactional domain, in Sarawak (TING; CHONG, 2008; TING, 2010b) and also in Sabah (WONG, 2000) and Peninsular Malaysia (BURHANUDEEN, 2006). The Chinese have reservations about speaking Malay languages among themselves. Ting and Nelson's (2010) survey of 200 university students in Kuching, Sarawak showed that they view Bahasa Malaysia as a language of the Malays. The Iban and other indigenous groups of Sarawak are not as resistant towards the adoption of the Malay languages. Studies by Ting and Campbell (2007) on the Bidayuh show the use of Bahasa Sarawak in family communication when spouses are not Bidayuh (see also DEALWIS, 2009; 2010; DEALWIS; DAVID, 2009). These research findings point to some differences in the receptivity of the different ethnic groups towards the use of Bahasa Malaysia for intraethnic communication, although the same reservation is not evident in learning the language for utilitarian purposes. This paper offers sociolinguistic data on the language use of Malay, Iban and Chinese adolescents in Sarawak to obtain insight into the future linguistic milieu in Sarawak, particularly with regard to the place of Malay languages in relation to other languages.
Study method
A sociolinguistic survey was conducted in three major towns in Sarawak (Kuching, Sibu and Miri) from January to March 2011. The survey involved various indigenous groups in Sarawak, but only the data on the Iban are reported in this paper because the numbers from the other groups are too small. In the original study, the language-ethnicity link was also examined, but the results are not included in this paper and the items are also not included in the questionnaire attached (see APPENDIX A).
The respondents in this study were 937 adolescent students aged 13 to 18 (mean: 15.6) in six schools: one located in the urban area and another in the rural hinterland of each of the three towns. Using personal contacts, informal consent for the study was initially sought from the principals of the schools. Then the names and addresses of the schools were submitted to the Malaysian Ministry of Education and subsequently to the Sarawak State Education Department for approval to conduct the study. The official letter granting approval for the study was sent to the school principals, after which the details of the study were explained by the research assistants involved in the study. Arrangements were made for about 200 students from each school to fill in the questionnaire. Students were asked to stay back after school to fill in the questionnaires, which were collected immediately. A total of 1188 questionnaires were returned, but only the data from the 324 Iban, 348 Chinese and 265 Malay respondents were included in this particular study, in line with its aim to compare the non-Malay speakers' language use with that of the Malay speakers. Some of the respondents came from families in which one parent was Malay and another Iban but, following Phinney (1992), the ethnic identification for this study is based on their self-identification.
Within these ethnic groups, the gender distribution is balanced. TAB. 1 shows other demographic characteristics of the respondents which are relevant to language use. The frequencies in the table refer to the number of respondents, and the percentages were calculated out of the total for the respective ethnic groups. The majority of the Iban respondents had Bahasa Malaysia as the medium of education from pre-school to secondary school; this experience was similar to that of the Malay respondents. However, a large proportion of the Chinese respondents had attended Chinese pre-schools (78.74%) and continued with Mandarin Chinese as the language of instruction in primary school (90.23%) before attending public schools which used Bahasa Malaysia as the medium of instruction. To make the switch, the students go through a transition class after Primary Six [Year Six] before proceeding to Form One [Year Seven]. There were slightly more Malay and Iban respondents in the rural sites (about 60%) than the urban sites (about 40%), but the pattern is reversed for Chinese respondents. The socio-economic status of the respondents in this study was gauged by using the monthly income of the parents. Regardless of ethnicity, the respondents were in the lower income bracket of less than RM2000 per month. (Note to TAB. 1: the percentages do not add up to 100% for pre-school because some respondents did not attend pre-school, and for parental monthly income because one respondent was an orphan.)

A 37-item questionnaire was used to examine the language use of the adolescents (see APPENDIX A). A section of the questionnaire examined language use in six domains relevant to school-going adolescents: family, friendship, education, transaction, mass media and religion. The less relevant domains of government, employment and legal matters were omitted from the questionnaire for the purposes of this study. The categorisation of domains was based on Platt and Weber's (1980) classic study on language use in Malaysia. Nine items were allocated to the family domain as this is the bastion of ethnic language use (KHEMLANI-DAVID, 1998; LAWSON; SACHDEV, 2004). The section had four items on language use in the mass media, encompassing radio, television, movies and online communication. Language use in the other domains was examined with only one item each. Altogether, the respondents were asked to report their language use for 17 situations within these domains. For these items, respondents could put down more than one language, as the use of two or more languages is common in a multilingual setting.
The final section (20 items) elicited demographic information on their family, social network and educational background in order to describe the context for the language use patterns. Li (1994) found that the composition of an individual's social network, and especially the ethnic composition of a network, had a greater explanatory value for language choice than variables such as age and gender (cf. also LI; MILROY; PONG, 1992, cited in LANZA; SVENDSEN, 2007).
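Because respondents could report more than one language per situation, it is report counts rather than respondent counts that are tallied in Tables 2-4. A hypothetical sketch of such a tally is given below; the data layout, column names and language codes are invented for illustration, since the study's raw data is not published with the paper.

# Hypothetical tally of multi-response language-use reports per domain.
# Column names and codes are invented for illustration.
import pandas as pd

# One row per respondent; a domain cell holds all languages reported,
# separated by ";" (e.g. Bahasa Malaysia and Iban in the family domain).
df = pd.DataFrame({
    "ethnicity": ["Iban", "Iban", "Chinese"],
    "family":    ["Iban;BM", "Iban", "Mandarin;English"],
    "education": ["BM", "BM;Iban", "BM;Mandarin"],
})

def tally(frame, domain):
    """Count language reports in one domain; a respondent naming two
    languages contributes one report to each, so totals can exceed n."""
    rows = frame[["ethnicity", domain]].copy()
    rows[domain] = rows[domain].str.split(";")
    rows = rows.explode(domain)   # one row per (respondent, language) report
    return rows.groupby(["ethnicity", domain]).size()

print(tally(df, "family"))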
Results
The results in this section show that the Iban respondents are closer to the Malay respondents in their language use patterns than they are to the Chinese respondents.
(1) Language use of the Malay respondents. TAB. 2 shows the number of times the Malay respondents reported using a particular language for interactions in the six domains examined in this study. For the Malay, interactions in all the domains can take place in either Bahasa Malaysia or Bahasa Sarawak. The balanced use of the standard Malay language and the regional Malay variety is evident in online communication and the transactional and friendship domains, with Bahasa Malaysia used for interethnic communication and Bahasa Sarawak for intraethnic communication. In the mass media domain, two sub-domains where English is the preferred language for a substantial proportion of the Malay respondents are television programmes (103 reports) and movies (117 reports), but the majority still prefer Bahasa Malaysia (212 and 191 reports respectively). The education domain is the only domain where Bahasa Malaysia usage exceeds that of Bahasa Sarawak. The formality of the teacher-student relationship necessitates the use of the official language, Bahasa Malaysia (236 reports). Nevertheless, as 161 (or 60.75%) of the 265 Malay respondents also reported speaking Bahasa Sarawak with their teachers, this shows that the shared ethnic membership needs to be acknowledged through the use of the local Malay variety. Further evidence of Bahasa Sarawak being the language of the Malay community is found in the family and religious domains.
From Malay languages, we move on to examine the use of other languages by the Malay respondents. English has specific relevance when it comes to television programmes and movies, and is sometimes used for family communication (only 134 reports). Indigenous languages are used with friends and family members from other ethnic groups. Altogether 48 respondents were products of exogamous marriages, but they had identified themselves as Malays. Five were from Chinese-Malay marriages and 40 were from Indigenous-Malay marriages, but the remaining three respondents did not have Malay parents.
One respondent had Bidayuh-Iban parentage, the second had Melanau parentage and the third had Chinese-Melanau parentage. Their parents were probably Muslim converts. In Malaysia, marriages with Malays entail conversion to Islam. Article 160 of the Malaysian Federal Constitution states that a Malay is "a person who professes the religion of Islam, habitually speaks the Malay language, [and] conforms to Malay customs" (Legal Research Board, 1997, p. 198). When non-Malays take up Islam as their religion, they also tend to adopt the Malay identity. "Masuk Melayu" (Enter Malay) is the term used by the Muslim Malayalee respondents in Nambiar's (2010) study. A similar phenomenon is reported in Indonesia by Steinhauer (1994, p. 772), whereby "Dayaks who give up their tribal religion and convert to Islam appear to consciously abandon their own language and to shift to Banjarese as a sign of total conversion". Speaking the language is integral to the cultural identity of the Malay, reflective of Fishman's (1977) patrimonial dimension of ethnic identity. Although they have English in their linguistic repertoire and use it to some extent, the Malay languages will be the mainstay of communication for the Malay speech community.
(2) Language use of the Iban respondents. TAB. 3 shows the number of times the Iban respondents reported using a particular language for the domains specified in the questionnaire. Iban is the most frequently spoken language for the Iban respondents, particularly in the family and religious domains, as the interactions are mainly within the Iban community.
When there is need for a standard language, as in the case of interactions with teachers in school, the reading of religious texts and the mass media domain, the Iban respondents opt for Bahasa Malaysia rather than English. Although Iban is now a written language using the Roman alphabet, its written use is not popular, as can be seen from the 33 reports of Iban use for online communication. The domains with a balanced use of Iban and Bahasa Malaysia are the transaction and friendship domains, the former for interactions within the Iban community and the latter for interethnic communication. For these two domains, which involve ethnic diversity, the gravitation is towards Bahasa Malaysia, followed by Bahasa Sarawak and English. English movies are preferred by 166 Iban respondents, but slightly more (184) reported a preference for Malay movies. Bahasa Sarawak and Chinese languages do not feature as much in the daily language use of the Iban respondents.
There is no doubt that the private family domain is where the ethnic language reigns for the Iban respondents (2436 reports), supporting the adage that the home is the last bastion of ethnic language use. On the other hand, the presence of other languages for home communication cannot be ignored: Bahasa Malaysia (433 reports), English (315 reports) and Bahasa Sarawak (171 reports). A check on the ethnic descent of the parents showed that there were only eight Iban respondents who had one parent who was not indigenous. Of the eight respondents, only one had a Malay mother and the rest had Chinese mothers. This result shows that the ethnic language is giving way to other languages in the family domain for the Iban respondents under study.

(3) Language use of the Chinese respondents

TAB. 4 shows the number of times the Chinese respondents reported using a particular language for the domains covered in the language use questionnaire. For the Chinese respondents in this study, the use of Indigenous languages and Bahasa Sarawak is almost negligible, but English is the preferred language for movies and online communication. Bahasa Malaysia accounts for 11.44% of the language choices reported - the most in education, and slightly less in the transactional, friendship and family domains. Although Bahasa Malaysia is the main language used by the Chinese respondents with teachers in school, this domain is shared with Mandarin Chinese and English. The use of standard languages other than Bahasa Malaysia shows a compromise in adherence to the official language policy, possibly to take account of the ethnicity and language preferences of the teachers. In the ethnically-diverse transaction and friendship domains, Bahasa Malaysia is mainly used for interethnic communication. This is because shop attendants tend to be from Indonesia or Sarawak indigenous groups in the current Sarawak retailing scenario. The main language in these two domains is, in fact, Mandarin Chinese for communication with Chinese interlocutors.
The results point to the growing role of Mandarin Chinese relative to the languages of the Chinese sub-groups for communication within the Chinese community - in the mass media, family, religion, transaction and friendship domains. If there is need for a standard language, as in the case of movies, radio and television programmes and online communication, the use of Mandarin Chinese is understandable. Similarly, the need for a written language for the reading of Christian and Buddhist religious texts and religious liturgy makes Mandarin Chinese more relevant. However, for spoken communication, the infrequent use of Chinese sub-group languages with friends, family and shop retailers compared to Mandarin Chinese provides strong evidence of a shift towards the latter.
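To make the relative weight of these report counts concrete, the sketch below (in Python) converts one domain's raw frequencies into percentage shares, using the Iban family-domain counts quoted above as the worked example; the shares are computed from those four counts only, and the same approach applies to any other domain or group.

```python
# Family-domain report counts for the Iban respondents, as quoted above.
# These are frequencies of reported use, not counts of respondents.
family_domain = {
    "Iban": 2436,
    "Bahasa Malaysia": 433,
    "English": 315,
    "Bahasa Sarawak": 171,
}

total = sum(family_domain.values())
for language, reports in family_domain.items():
    print(f"{language}: {reports / total:.1%} of family-domain reports")
# Iban accounts for roughly 73% of reports; the other languages together
# account for over a quarter, consistent with the discussion above.
```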
Discussion
If the language use patterns of the adolescent respondents from the Malay, Iban and Chinese speech communities can be taken as an indication of the patterns in the larger community, the results show that the national language planning is more successful amongst the Iban than the Chinese. The language use patterns of the Iban are similar to those of the Malay in their preference for Bahasa Malaysia over English, apart from the use of their respective ethnic languages in the family and religious domains. On the other hand, the Chinese use Mandarin Chinese in most of the domains, and generally prefer English to Bahasa Malaysia. The question arising from the results is: why are the Iban closer to the Malay in their language use patterns? Culturally they are different. Religiously, a large proportion of the Ibans are Christians and the Malays are Muslims. Three possible explanations can be posited.
The first is educational background. The data for this study show that both the Iban and the Malay groups have Bahasa Malaysia as their medium of education from pre-school to secondary school. Through this, they develop familiarity with Bahasa Malaysia and use it for daily communication. The familiarity also makes them prone to using Bahasa Malaysia for formal and written communication, unlike the older generation which resorted to English (see TING, 2007).
The second explanation is anchored in the linguistic similarity of the Iban and Malay languages. At a basic level, there are similarities in the vocabulary. Both Iban and Malay belong to the Malayic subgroup. Hudson (1970) classified Iban, Salako, Kendayan and related languages into a single subgroup of relatives of Malay that have undergone separate development (cited in ADELAAR, 2006). The similarity of the Iban and Malay languages makes it easier for their speakers to adopt the Malay languages for daily use.
Thirdly, in the political scenario of Malaysia, the Ibans are categorised together with the Malays as Bumiputra (sons of the soil), a status accorded to the indigenous peoples of Malaysia. The official categorisation could facilitate a social aggregation of the two ethnic groups. There is support in the data in that a majority of the Iban respondents in this study reported having Malay friends and vice versa. In comparison, fewer of the Chinese respondents reported having Malay friends. Since the Iban and the Malay are drawn together on various counts, it is to be expected that their language use patterns are also similar. The only language feature that clearly distinguishes the two groups is the Iban's use of their own ethnic language.
In the context of Laitin's (1993) strategic model of language choice based on game theory, the rational choice of Malay languages for daily interactions by the Iban opens up means to enter the politically strong Malay community. There are also economic pay-offs in the form of business contracts and social contacts. As the Malay group has established itself in the bureaucracy, the school system and gradually in the commercial centre, "the speakers of other languages must typically learn their language in order to penetrate those arenas" (LAITIN, 1988, p. 289). Thus, it is important for the Iban to be competent in Malay languages for social mobility. This process is assisted by compulsory education in the national language, perhaps coupled with stigmatisation of their ethnic language, similar to the effects of the compulsory public school in France:

Thus arose a powerful mechanism of displacement of local by national languages, as the school language gradually became the language the parents would speak to their children, partly in order to prepare them for school, partly also in response to the growing intranational mobility generated by industrialisation and urbanisation, and facilitated by the very spread of the nation's official language (Van PARIJS, 2000, p. 218).
However, it seems that Chinese languages are not as easily displaced by Bahasa Malaysia. Although Bahasa Malaysia and, for that matter, English are used for utilitarian purposes, the Chinese hold on to their ethnic language. In the past, it was the languages of the Chinese sub-groups, but there is a shift towards Mandarin Chinese reflected in the language use of the adolescent respondents in this study (see also TING; HUNG, 2008; TING; MAHADHIR, 2009). The diminishing prestige of these Chinese languages can be attributed to the prevalence of a diglossic situation among the Chinese whereby Mandarin Chinese functions as the supra-ethnic, official language, whereas the languages of the Chinese sub-groups are used for intimate intra-ethnic communication and local cultural events (see SNEDDON, 2003; STEINHAUER, 1994 on Indonesian). This paves the way for the emergence of a supra-Chinese identity linked to the use of Mandarin Chinese (TING; CHANG, 2008).
The resistance of the Chinese to the national language agenda could stem from the perception of the Chinese language and culture as superior (see WU, 1991). The Chinese also treat their language as integral to the Chinese identity. Through the language, they establish cultural solidarity with the broader Chinese community worldwide and receive economic pay-offs in the form of jobs in companies with Chinese ownership. To facilitate access to these benefits, Chinese parents choose Chinese private schools over public schools, which teach Mandarin Chinese only as a subject. The use of Mandarin Chinese beyond the school system is facilitated by institutional support in the form of the mass media as well as the linguistic landscape. The rational choice of Mandarin Chinese, supplemented by English, offers the Chinese community better pay-offs than full-scale adoption of Bahasa Malaysia because intranational mobility for them is limited by affirmative action policies favouring the Malays (see CROUCH, 2001).
Conclusion
The study examined status planning in a setting where the national language is derived from the language of a majority ethnic group in the country. This is a case of rationalisation through the recognition of the language of a majority group and the imposition of a single language for education and administrative communication (see LAITIN, 2000). The study showed that in the Malaysian state of Sarawak, the Iban adolescents are closer in their language use to the Malays than are the Chinese. Like the Malays, the Ibans frequently speak Malay languages in the mass media, friendship and transactional domains and even in the family domain. On the other hand, the Chinese are inclined towards Mandarin Chinese, although the ethnic languages of Chinese sub-groups are still a feature of daily language use at this point in time. The religious and family domains show ethnocentric patterns of language use and are still the bastion of ethnic language use. In the education domain, the dominance of Bahasa Malaysia is a direct outcome of the status planning for the national language.

Laitin's (1993) 3+1 multilingual repertoire has explanatory power in a restricted domain of application. For all three ethnic groups under study, English is the global language that is used in domains such as higher education and as a gateway to the outside world, and Bahasa Malaysia is the regional or national language which is the medium of education and also the language of government. For the Malay, whose ethnic language is the national language, these two languages are needed to function in Malaysian society; hence the 3-1 outcome as predicted in Laitin's (1993) game theory of language regimes. The assumption is that the standard Malay language and regional varieties are considered to be one language. Using this assumption, Laitin's formula of language outcomes is not applicable to the Chinese and Iban, who need three languages to function in the Malaysian community. Besides English and Bahasa Malaysia, they need their respective ethnic languages for community membership and local honour, giving rise to the 3+0 outcome. The language outcome is not 3+1 because Bahasa Malaysia is both a language for national integration and a regional language.

Between the two non-Malay ethnic groups, the findings revealed a greater dominance of Bahasa Malaysia in the lives of Iban respondents than in the lives of Chinese respondents. The impact of the national language policy is compromised when the Chinese can resort to Mandarin Chinese, an alternative standard language with international standing, which also functions as a symbol of cultural solidarity for the speech community worldwide. In the absence of a standard language of this standing in the case of the Iban, they embrace their ethnic language, which provides access to community membership, and adopt Bahasa Malaysia for intranational mobility. The findings suggest that for the implementation of status planning for the national language to succeed with groups whose ethnic languages have a larger sphere of usage and influence than the national language, the socio-economic gains derived from mastery and use of the national language have to be unequivocal. Even then, the returns from using the national language may be less than the returns from using the global language. In the context of Laitin's (1993) game theory on language policy outcomes, language planning which seeks to elevate the national language and obliterate contesting languages may no longer be feasible, and it is more rational to seek an equilibrium of these languages.
TABLE 1
Demographic characteristics of respondents in terms of ethnicity, gender and educational background, locality and parental monthly income
TABLE 2
Frequency of languages used in six domains by Malay respondents
TABLE 3
Frequency of languages used in six domains by Iban respondents
TABLE 4
Frequency of languages used in six domains by Chinese respondents
"year": 2012,
"sha1": "a57f9d2f55393e95c42ca5b62266e859fffe2194",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rbla/a/8RK6xw8w4ynQXLVdYVP4wWM/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a57f9d2f55393e95c42ca5b62266e859fffe2194",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Geography"
]
} |
Compliance with the Zero Suicide Initiative by Mental Health Clinicians at a Regional Mental Health Service: Development and Testing of a Clinical Audit Tool
Aim: The aim of this study is to investigate the compliance of mental health clinicians in applying the Zero Suicide (ZS) approach to their clinical practice in a rural and regional health community setting. Methods: A retrospective clinical audit of six mental health teams was undertaken at a single site. A clinical audit tool was developed and validated using a six-step approach. The data was extracted and analysed via descriptive and inferential statistics and compared to a specialised mental health team, experienced with the ZS approach. Results: A total of 334 clinical records were extracted for January, April, August, November 2019 and June 2020. The clinical audit and analysis confirmed that the mental health teams are not consistently using the assessments from their training and are therefore not implementing all of these elements into their practice. This could have implications for the risk formulation and treatment for people at risk of suicide. Conclusions: The use of a validated clinical audit tool can be beneficial to establish compliance with the mental health clinicians and to determine any areas requiring further improvement. Further education and reinforcement may be required to ensure consistency with incorporating the elements of ZS into everyday clinical practice.
Introduction
The World Health Organisation (WHO) estimates that a person dies by suicide every 40 seconds, resulting in approximately 700,000 deaths a year worldwide [1]. Suicide is a global public health issue, affecting people across all life spans with its devastating consequences. Considerable efforts have been made to raise awareness of this preventable public health issue and to strengthen suicide prevention strategies [1]. Mental health clinicians play a key role in identifying, assessing, and implementing strategies to assist people at risk of suicide, with evidence revealing that 75% of individuals who died by suicide had visited a healthcare professional within the three months preceding their death [2].
Suicide prevention requires a systems-approach, involving the implementation of multiple strategies and collaborating across sectors within a community [3]. As with any treatment approach, comprehensive evaluation is necessary to enable quality control and improve pre-existing strategies. Three key evaluation recommendations are: continuous monitoring of the implementation process of suicide prevention plans, evaluation of suicide prevention plans to form lessons learned for future efforts and evaluation of surveillance systems [4].
Background
The Zero Suicide (ZS) initiative has potential to improve outcomes in people at risk of suicide, due to its structured holistic framework and organisational commitment [5]. The ZS model has seven elements for organisations to adopt, with the focus on improving patient safety outcomes, continuous quality improvement and the safety and support of clinical staff [5]. The ZS model was initially developed in 2011 by the Clinical Care and Intervention Task Force and the National Action Alliance for Suicide Prevention in the United States of America and has been implemented in over 200 healthcare organisations worldwide [6]. The elements of the ZS approach focus on system-wide culture change, workforce training, identifying individuals at risk of suicide, engaging and treating people at risk using evidence-based treatment, care transition, and improving policies and procedures through continuous quality improvement [5]. Studies have shown promising results with the implementation of the ZS approach. An analysis undertaken within a large Australian public mental health service reported a significant reduction in repeated suicide attempts (65%), as well as a longer time between attempts [7]. Similarly, the Centerstone in Tennessee reported a 65% reduction in the rates of suicide among patients after the implementation of the ZS framework [6]. Despite these promising statistics, further research is recommended to establish the efficacy of the implementation of the ZS framework, especially in an Australian setting [7][8][9].
The introduction of the ZS approach to a community mental health service at a large regional mental health unit in Victoria, Australia provided an opportunity to assess the efficacy of its implementation. In an Australian context, specialised mental health care can be accessed through emergency departments, residential mental health and community mental health services, which are funded by state and territory governments [10]. A Suicide Prevention Pathway (SPP) was introduced to a single community mental health team located two hours from the main regional mental health unit, in which each step of the ZS framework was embedded within the hospital's current processes. This community mental health team consists of a range of health professionals specialising in mental health, including mental health nurses, occupational therapists, social workers and clinical psychologists, with overarching support from the treating psychiatrist. All staff attended two days of SPP training in February 2019, with periodic refresher in-services provided to staff thereafter. The purpose of the SPP was to improve suicide prevention practices through improving the consistency and quality of assessments, formulation, safety planning and consumer education. The authors developed a ZS clinical audit tool to appraise patient clinical records to assess the compliance of mental health clinicians in adopting these practices. A clinical audit is defined as "a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change" [11]. Hospital records can provide a valuable and rich data source of information to compile data and to enhance our knowledge on suicide [4]. To the authors' knowledge, no other study has specifically investigated the compliance of mental health clinicians in the adoption of the ZS model within their clinical practices.
The aim of this study is to investigate the effectiveness of the ZS model through conducting a clinical audit to establish the compliance of the mental health clinicians at a regional mental health service. The authors intend to identify any potential areas of improvement that may assist in the delivery of these practices to people at risk of suicide. Given the significance of this public health issue, considerable efforts are necessary to contribute to the current evidence base regarding suicide prevention programs and to address any gaps in the literature.
Design
This study was a single-site retrospective clinical audit, using a quantitative descriptive design. A clinical audit tool was developed and used to extract data from client records to assess the implementation of the Zero Suicide prevention approach based on the Collaborative Assessment of Suicide Events (CASE)-Shawn Shea Model, in a single large regional hospital.
Audit Tool Development
The clinical audit tool (Appendix A) was developed using the six-step process as outlined by McConnell-Henry et al. [12]. After the audit aims were identified (step one), the 41-item data extraction tool was developed by the research team in consultation with the auditors (step two).
Step three involved establishing the content validity index (CVI), in which the data extraction tool was sent to five experts with extensive experience. These experts included a Professor of Mental Health, who was both a content and design expert, an Executive Director of a regional mental health service, and a mental health practitioner, together with two academic design experts. The 41-item audit tool was tested for clarity and relevance, in which each item was ranked highly relevant by all five content and design experts. When assessing the CVI of a focused data set, an overall scale is considered to be valid with an index of ≥0.90 [12]. In this study, the Scale Content Validity Index Average (S-CVI/Ave) for all items was 0.92, therefore no changes were required to the 41 items. Using expert feedback, minor amendments were made to the tool, such as the inclusion of the partial column and the low, medium and high rating scale, even though this is not considered to be part of the ZS approach.
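For reference, the following is a minimal sketch of how the item-level CVI (I-CVI) and the scale average (S-CVI/Ave) can be computed, assuming the usual 4-point relevance scale on which ratings of 3 or 4 count as relevant; the ratings below are hypothetical and only illustrate the calculation, not the experts' actual responses.

```python
# Hypothetical relevance ratings (1-4) from the five experts for a few of the
# 41 items; an item counts as "relevant" for an expert who rates it 3 or 4.
ratings = {
    "item_01": [4, 4, 3, 4, 4],
    "item_02": [3, 4, 4, 4, 3],
    "item_03": [4, 2, 4, 4, 4],  # one expert rated this item not relevant
}

def i_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in item_ratings if r >= 3) / len(item_ratings)

item_cvis = {item: i_cvi(r) for item, r in ratings.items()}
s_cvi_ave = sum(item_cvis.values()) / len(item_cvis)

print(item_cvis)            # {'item_01': 1.0, 'item_02': 1.0, 'item_03': 0.8}
print(round(s_cvi_ave, 2))  # a scale is usually considered valid at >= 0.90
```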
Step four and step five involved pilot testing and training using the audit tool, in which the two auditors each completed two individual clinical records and compared the findings. Interrater reliability was established for this tool, in which the two auditors underwent clinical audit training by author JP to ensure uniformity in the audit. The two auditors also have extensive experience in the mental health field and were the sole auditors.
Step six involved establishing the sample size.
Data Collection
Patient medical records were accessed by members of the project team affiliated with the mental health service for five different time periods following ethical approval. The data was extracted from medical files from the regional Community Mental Health Services. The sample size was calculated to be n = 292, based on a predicted population of 1202 case records with a 95% confidence interval. A cluster sampling strategy was used, in which the research team opted to extract data from the months of January 2019 (pre SPP staff training), April 2019, August 2019, November 2019 and June 2020. These months were specifically selected by the research team to evaluate the long-term compliance of mental health clinicians in implementing the SPP. Data from the Hospital Outreach Postsuicidal Engagement (HOPE) program was used for comparison. The role of the HOPE clinicians is to provide support for people after discharge from emergency departments (ED) who are identified as at risk of suicide or who express suicidal ideation and/or repeated intentional self-harm. The HOPE team had previously embedded the ZS approach in their practice and are considered to be the practice benchmark for the ZS approach within this health service. After data extraction, each audit tool was deidentified and coded, before being electronically scanned and sent to the research team for analysis. Ethical approval was granted from the hospital ethics committee and the University human ethics committee prior to extraction of data using the clinical audit chart tool (Project No. 2020-20 HREA and A20-070).
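The reported sample size is reproducible with Cochran's formula plus a finite-population correction. The study states only the predicted population (1202) and the result (n = 292), so the remaining parameters below (z = 1.96 for 95% confidence, p = 0.5, 5% margin of error) are assumptions that happen to recover the published figure.

```python
import math

def finite_population_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample size with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # correct for population size N

print(finite_population_sample_size(1202))  # -> 292, matching the reported n
```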
Data Analysis
The data was collated and entered into SPSS (IBM SPSS Statistics for Windows, Version 26.0; IBM Corp, Armonk, NY, USA), then cleaned and checked for missing data. Basic demographic data such as age, gender, referral origin, mental health team and nature of presentation was analysed using descriptive statistics. After earlier consultation with an expert statistician, it was decided to use descriptive and inferential analysis through IBM SPSS. Pearson chi-squared statistics were used to establish statistical significance between the expected and observed frequencies of the data. Initial comparisons were made between the extraction points, followed by a combining of the data for analysis.
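As an illustration of the kind of test reported below, the sketch runs a Pearson chi-squared test of independence on a 2 x 2 table of team against assessment completion, filled with the indicated/completed counts quoted in the Results; it is a schematic reconstruction in Python rather than the study's actual SPSS output.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table: rows = team (HOPE, other teams), columns = assessments that were
# (completed, not completed) among clients flagged at screening. The counts
# are taken from the figures quoted in the Results section.
observed = np.array([
    [35, 1],    # HOPE: 35 of 36 indicated assessments completed
    [97, 57],   # other teams: 97 of 154 indicated assessments completed
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}")
print("expected frequencies:\n", expected.round(1))
```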
Demographics
A total of 336 charts were extracted from the selected time periods, of which 334 records constituted the final data set. The other two charts were excluded due to incomplete or missing data. Table 1 details the demographic data: the mean age of the clients was 39.8 years, with the majority of clients aged between 25 and 45 years, and ages ranging from 8 to 97 years. The most frequent group was the 25 to 35-year age group (n = 73) followed by the 36 to 45-year age group (n = 60). Gender-related data revealed that 52% (n = 171) of clients identified as male, 47% (n = 157) of clients identified as female and 1% (n = 4) of clients identified as intersex; two charts had missing data. The clinical audit identified six main mental health teams at the regional hospital: Acute Community Intervention Service (ACIS), Aged Persons Mental Health Service (APMHS), Child and Youth Mental Health (CYMHS), Recovery, Prevention/Recovery and HOPE. The ACIS team received the most referrals, followed by the APMHS and CYMH teams. The greatest number of referrals occurred during June 2020. The HOPE data is presented as a combined data set, with most of the referrals occurring in June 2020. It is possible that events such as the bushfires and the COVID-19 pandemic impacted the regional community, contributing to the increase in referrals to mental health services.
The ED was the most common point of referral of clients with 55% (n = 182) followed by ACIS (17%, n = 56) and a general practitioner (14%, n = 47). Data from the treatment plans showed that 53% (n = 177) of cases were referred to intake, indicating that clients were referred for further mental health services. A total of 25% (n = 85) of clients were referred for crisis assessment.
Specific data was collected for each age category, including official diagnoses, identified suicide drivers and the number of cases involving suicide attempts, suicidal ideation, non-suicidal self-injury and self-harm. Four clients under the age of ten years were identified in the data, with depression recognised as the key diagnosis in 50% of cases; 75% of cases did not specify suicide drivers. In the category 11-17 years of age (n = 34), suicidal ideation occurred in 50% of cases, with 23.5% of clients specifying non-suicidal self-injuries. Of the 34 clients aged 11-17 years, 44% had no formal diagnoses noted, with 27% of clients diagnosed with depression and 27% with behavioural disorders. For 53% of the clients in this age group, no suicide drivers were specified, with 20% identifying family and friendship stressors as their primary suicide driver.
In the age group 18-24 years (n = 46), 41% of cases had no formal diagnosis, with 22% having a diagnosis of depression, followed by 18% of cases with personality disorders. Suicidal ideation was reported in 74% of cases, followed by active suicide attempts in 28% of cases. In this age group, 15% of clients reported self-harm and 13% indicated non-suicidal self-injuries. A total of 37% of clients did not identify specific suicide drivers, with a further 19% of clients identifying family and friendship stressors as specific suicide drivers. In the age group 25-35 years (n = 73), 28% of cases had no formal diagnosis, 18% had a diagnosis of depression, and 14% had a diagnosis of adjustment disorder. Suicidal ideation was expressed in 52% of cases and 16% of cases reported a suicide attempt. A total of 49% of cases reported no specific suicide drivers, followed by 29% of clients in this age group reporting family and friendship breakdown as the main driver.
Depression was the predominant diagnosis for the 36-45-year age group (n = 60), at 26% of cases, followed by schizophrenia in 21% of clients. A total of 50% of cases in this age group indicated suicidal ideation, with 13% reporting suicide attempts and 10% of cases reporting non-suicidal self-injuries. A total of 45% of clients were identified as having no specified suicide drivers, and 30% of cases reported that their main suicide driver was family/friendship or relationship breakdown. Personality disorders were the predominant diagnosis in the age group of 46-55 years (n = 43), at 28% of cases. This was followed by depression in 25% of clients and schizophrenia at 21%. In 54% of cases, no suicide drivers were specified. Suicidal ideation occurred in 51% of cases in this age category, followed by 14% reporting a suicide attempt. In the age group 56-65 years (n = 27), 38% of clients did not have a formal diagnosis. For 44% of cases, no suicide drivers were specified, followed by family/friendship stressors, financial stressors, medical factors including auditory hallucinations, and self-harm, each at 15%. Suicidal ideation was high in this age group, at 52%, followed by 22% reporting suicide attempts. Non-suicidal self-injury was identified in 11% of cases, followed by self-harm at 7%.
Depression was the significant diagnosis for people in the age category of 66-75 years (n = 22), reported in 33% of clients. Suicide drivers were not specified in 72% of cases, and 27% of people in this age category expressed suicidal ideation. For the 76-85-year-old age group (n = 21), 33% had a diagnosis of dementia, followed by 28% with no formal diagnosis. In 85% of cases no suicide drivers were identified. A total of 19% in this age group reported suicidal ideation, with 14% reporting a suicide attempt. In the >86-year age group (n = 4), 50% of clients had expressed suicidal ideation. A total of 75% of cases had no formal diagnosis and 25% had a diagnosis of dementia.
Audit Chart Findings
The audit tool was divided into six distinct sections: screening, assessment, risk formulation, safety plan, preventing access to lethal means, and client and carer education. These sections were analysed using the HOPE data (n = 35) as the practice benchmark, against which the performance of the combined mental health teams (n = 299) was compared.
Screening and Assessment
The data pertaining to mental health practitioners screening clients for suicidality indicate that the HOPE team's performance exceeded expectations, with an observed frequency of 25 compared to the expected value of 7.4. This is in direct contrast to the other mental health teams, who were expected to score at least 63.6 on this variable but only achieved a score of 46 (χ² = 58.793, p = 0.001). The assessment section reflected the key components of the Collaborative Assessment of Suicide Events (CASE), based on the Shawn Shea Model taught in the SPP education (Table 2). The HOPE team performed better than expected for all these variables, demonstrating that they are conducting thorough assessments for people who are at risk of suicide. The statistic worth noting is that, of the observed 154 people who were identified as requiring an assessment at screening, only 97 people received a completed assessment, which indicates that an assessment was not completed for 57 people who were deemed to be at risk of suicide. The other mental health teams partially explored some of these variables, such as the context and details of suicide behaviour (method), plans and degree of preparation, recent, past and previous events, enquiring about other suicide methods and identifying drivers of suicidality. Table 3 demonstrates the differences at each extraction point between assessment indicated and assessment completed. The results from Table 3 demonstrate that the likelihood of a person at risk of suicide receiving a completed assessment depended on the specific mental health team. For example, the HOPE team, who specialise in caring for high-risk clients, completed a total of 35 assessments for which 36 assessments were indicated. In comparison, there were mixed results amongst the other mental health teams, in which the completion rate ranged from 46% to 77%. As an example, in August 2019 there were a total of 66 client records extracted across the combined mental health teams, in which 35 were identified as requiring a ZERO assessment. The clinical audit revealed that only 16 clients received an assessment in August 2019, at a compliance rate of 46%. The overall compliance rate of the mental health teams (excluding the HOPE team) was 63%.
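The compliance rates above follow directly from the quoted counts; a short sketch reproducing them (only figures stated in the text are used, since per-month counts other than August 2019 are not reported):

```python
# Assessments indicated vs. completed, as quoted in the text.
audits = {
    "HOPE (combined)":        {"indicated": 36,  "completed": 35},
    "Other teams, Aug 2019":  {"indicated": 35,  "completed": 16},
    "Other teams (combined)": {"indicated": 154, "completed": 97},
}

for team, c in audits.items():
    print(f"{team}: {c['completed'] / c['indicated']:.0%} compliance")
# HOPE (combined): 97% compliance
# Other teams, Aug 2019: 46% compliance
# Other teams (combined): 63% compliance
```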
A Pearson chi-squared test of independence was performed to examine the relationship between the assessment indicated, the assessment completed and the extraction points. This statistical test is appropriate for comparing three factors of nominal data. The relationship between the extraction points and assessment indicated was significant, χ²(5, N = 335) = 39.104, p < 0.001. A statistically significant relationship was also established between the assessment completed and the extraction points, χ²(5, N = 335) = 71.093, p < 0.001. However, there was no statistical significance between the two variables assessment indicated and assessment completed throughout the duration of the data collection period, χ²(5, N = 335) = 6.391, p = 0.270. Staff training commenced in February 2019, yet it does not appear to have influenced the rate of completed assessments. In comparison, the HOPE team maintained 100% compliance between indicated and completed assessments throughout the data collection period. Further research is required to explore the reasons for the decreased compliance with completed assessments by the other mental health teams.
Risk Formulation
The HOPE team demonstrated excellent compliance when assessing the risk state and risk status, as well as evaluating foreseeable changes, available resources and internal/external drivers. As shown in Table 4, the observed frequencies (n = 32, 30, 31, 21 and 19) far exceeded the expected frequencies (n = 4.4, 4.6, 3.9, 2.7 and 2.9) in the subsequent columns, with the p-value less than 0.001, indicating statistical association. Similar to the other sections, the other mental health teams did not complete the risk formulations as expected; however, there was partial compliance with the risk formulations for three of the variables, risk state, risk status and foreseeable changes (n = 23, 25 and 11), as compared to the expected partial frequencies (n = 22.4, 25.1 and 10.7). There was not a statistically significant association between the ratings of low, medium or high used with HOPE and other mental health teams (p = 0.30). This indicates that there is not a relationship between using this variable and belonging to either HOPE or the other mental health teams. Of concern is that the practice of using this variable as part of a suicide prevention assessment is outdated. The fact that HOPE and other mental health teams are still incorporating these ratings into their assessments, indicates that further education is needed to ensure that all mental health clinicians are using current evidence-based practice.
Safety Plan
The HOPE team demonstrated high compliance when it came to implementing the correct safety plan, as well as involving the client and family. Table 5 shows that there was a statistical association between these variables, along with the safety plan identifying suicide drivers, with the p-value less than 0.001. Most importantly, the accuracy of these safety plans was noted, with the observed frequency (n = 33) far exceeding the expected frequency (n = 4.6).
The observed frequencies for these variables (n = 11, 5 and 0) were significantly less than the expected frequencies (n = 39.4, 18.8 and 22.4). This correlates with the demographic data, which identified that specific suicide drivers were not thoroughly investigated. The HOPE team was highly compliant with completing safety plans with the client and family, as well as identifying drivers.
Preventing Access to Lethal Means
The HOPE team demonstrated exceptional performance in the two variables under preventing access to lethal means. The observed frequencies (n = 19) were greater than the expected frequencies (n = 2.1) for the HOPE team in identifying lethal means, and the subsequent safety plans for clients, including the lethal means interventions, were deemed to be appropriate. The other mental health teams demonstrated significant shortcomings with preventing access to lethal means.
Client and Carer Education
There was a statistically significant association between adequate client and carer education provided by the HOPE team, as compared to the other mental health teams. The observed frequencies (n = 5 and n = 1) were greater than the expected frequencies (n = 1.3 and n = 0.1) for the HOPE team in the two variables regarding the safety plan discussions with both client and family, as well as the provision of the Beyond Blue information books. The observed frequencies (n = 7 and n = 0) for the other mental health teams were less than the expected frequencies (n = 10.7 and n = 0.9) as demonstrated in Table 5.
Discussion
The objective of this study was to conduct a clinical audit to determine the compliance of mental health clinicians in implementing the ZS approach at a regional mental health service. The use of a validated clinical audit tool was an essential component of the clinical audit to determine the areas of the SPP steps that specifically required further improvement. This audit was necessary as it provided an opportunity to identify areas in which mental health clinicians would benefit from further education and training support. This aligns with evidence that recommends organisations create processes to assess compliance with the SPP and to evaluate outcomes on a systems, policy, and individual basis [5]. This commitment to continuous quality improvement is characteristic of the final element within the ZS framework [5].
The standout statistic in this clinical audit was the lack of specified suicide drivers identified for each of the age groups. This indicates that the specific suicide drivers were not thoroughly investigated by the mental health clinician at each assessment, potentially leading to gaps in risk formulation and treatment. A comprehensive suicide risk formulation should be completed when a person screens positive for suicidal risk, with all staff utilising the same risk formulation model [5]. Another possibility was that the specific suicide drivers were not documented accurately, which again indicates a potential deficit in following the SPP pathway. Interestingly, many of the clients from the clinical audits did not have a formal mental health diagnosis. This was evident in the 18-24 year age group, in which 41% of cases had no formal diagnosis. These clients may have accessed mental health services for the first time or may have previously accessed services without complying with the follow-up recommendations. The high number of cases without a formal diagnosis is concerning, especially since some diagnoses such as schizophrenia could be under-represented in our findings. Suicide is the largest contributor to decreased life expectancy in those with schizophrenia, and it is imperative that clinicians identify risk factors in this cohort and offer comprehensive assessments [13,14]. This is also important for those newly diagnosed with dementia, as there is an elevated suicide risk during the first twelve months after diagnosis, with the highest risk in patients aged 65 to 74 years [15]. The findings of Stapelberg et al. [7] emphasise the importance of identifying vulnerable individuals who may have presented for the first time and offering them suitable services. This may provide greater improvements for the first-time presenter and may help to avoid multiple presentations in the future [7].
The HOPE team were 100% compliant in completing assessments for all clients who were identified as requiring an assessment at screening. These findings were viewed favourably and indicate that the HOPE team is conducting thorough assessments and exceeding the expected frequencies. This is anticipated, as the HOPE team are known as the acute crisis management team within this health service. As evident from the audit, all six mental health teams within this health service receive referrals for at-risk patients and are expected to conduct comprehensive assessments. The difference is that the HOPE team receives these types of referrals more frequently than the other mental health teams. The qualitative analysis of this project identified that mental health clinicians from a variety of teams required regular, ongoing suicide prevention education that was tailored to their area of practice [16]. Even so, areas of improvement identified for the HOPE team include further exploration of the behavioural incident (events over the prior 48 h) and further exploring the context and details of suicidal behaviour, including location. In keeping with the ZS framework, further education and training are recommended to improve full compliance by the other mental health teams.
Although this clinical audit demonstrated the areas of improvement needed within mental health teams at a regional mental health service, this should not be viewed as a criticism of the mental health clinicians. The purpose of this clinical audit is not to vilify the community mental health clinicians, but to provide recommendations for improvement and areas requiring further education. This aligns with current literature which recommends implementing a Restorative Just Culture alongside the ZS framework in a hospital or health service [17]. The just culture is imperative to gaining the clinicians' trust and commitment to organisational changes [17]. Although there is need for further research into the feasibility and effectiveness of large-scale implementation of the ZS approach [9], this study contributes to the literature from a large single health service in a regional and rural setting.
Implications for Practice
The findings of this study reinforce the importance of regular evaluation and implementing continuous quality improvement processes to ensure that mental health clinicians are utilising the latest evidence-based research when working with people at risk of suicide. This detailed clinical audit form has the potential to be used by other mental health services that use the ZS approach, to assess clinician compliance. Recommendations for future research include longitudinal studies to investigate the long-term compliance of the ZS approach. Another possible avenue is to investigate the mental health clinicians' perspective of the ZS approach using a qualitative lens.
Limitations
A limitation to the findings of this study is the consistency and experience of mental health clinicians using the ZS approach. The HOPE team are a specialised unit, who regularly engage with people at risk of suicide. In comparison, the other mental health teams may not have regular contact with people at risk of suicide, and this inconsistency may have influenced the accuracy of the findings. Another potential limitation is that the data collection of this study occurred at specific time points at several sites within a regional mental health service. It is possible that the findings of this study may not be generalisable to metropolitan or other regional settings.
Conclusions
In all six sections of the clinical audit, which was based on the CASE-Shawn Shea Model, the HOPE team demonstrated a statistically significant association between these variables and their ability to implement these elements into their practice. The clinical audit and subsequent descriptive analysis confirmed that the other mental health teams are not consistently using the assessments from the SPP training and are therefore not implementing all of these elements into their practice. Further education and reinforcement of the Zero Suicide approach may be needed to facilitate consistency between mental health clinicians in assessing and managing people at risk of suicide. The development of the clinical audit tool proved to be an effective means to evaluate the compliance of mental health clinicians.

Funding: This research was funded by Latrobe Regional Hospital (LRH) as part of the evaluation of the Zero Suicide Prevention program service implementation. The CERG was responsible for the data analysis, working in partnership with the Mental Health Coordinator at LRH, who coordinated the extraction of the data and contributed to the writing for publication.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by Latrobe Regional Hospital Human Research Ethics Committee (Project No. 2020-20 HREA) and Federation University Human Research Ethics Committee (A20-070).
Informed Consent Statement: Informed patient consent was not required for this retrospective clinical audit. The audit did not contain identifiable patient details.
Data Availability Statement:
The data presented in this study are available on reasonable request from the corresponding author. The data are not publicly available due to privacy reasons.
Acknowledgments:
The authors would like to acknowledge the valuable contribution of the mental health practitioners who assisted with the Zero Suicide Project and Evaluation. The data informed the recommendations and future implementation of the Zero Suicide Prevention approach across its mental health services.
Conflicts of Interest:
The authors declare that they have no conflict of interest. The CERG was responsible for the data analysis as an independent evaluator. The hospital mental health practitioners were responsible for extracting the data using the chart audit tool. The CERG would also like to acknowledge the statistical support from Jo-ann Larkins at Federation University Australia.
"year": 2022,
"sha1": "73ccfac40530470054ee562881fe1d6a833959ea",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2039-4403/13/1/3/pdf?version=1672227033",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a91a6f32a3cfbb906c62da32d91f3d2fd0912c97",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A portalino to the dark sector
“Portal” models that connect the Standard Model to a Dark Sector allow for a wide variety of scenarios beyond the simplest WIMP models. Kinetic mixing of gauge fields in particular has allowed a broad range of new ideas. However, the models that evade CMB constraints are often non-generic, with new mass scales and operators to split states and suppress indirect detection signals. Models with a “portalino”, a neutral fermion that marries a linear combination of a standard model neutrino and dark sector fermion and carries a conserved quantum number, can be simpler. This is especially interesting for interacting dark sectors; then the unmarried linear combination which we identify as the standard model neutrino inherits these interactions too, and provides a new, effective interaction between the dark sector and the standard model. These interactions can be simple Z′ type interactions or lepton-flavor changing. Dark matter freezes out into neutrinos, thereby evading CMB constraints, and conventional direct detection signals are largely absent. The model offers different signals, however. The “portalino” mechanism itself predicts small corrections to the standard model neutrino couplings as well as the possibility of discovering the portalino particle in collider experiments. Possible cosmological and astroparticle signatures include monochromatic neutrino signals from annihilation, spectral features in high energy CR neutrinos as well as conventional signals of additional light species and dark matter interactions.
1 Searching for hidden sectors

A major question for particle physics is whether there is detectable physics beyond the standard model (BSM). We are well aware that there is physics beyond the standard model, as evidenced by dark matter, neutrino mass, gravity, and inflation. There are naturalness arguments in favor of additional BSM physics, such as the hierarchy problem and the strong CP problem. Since these connect to properties of known fields in the standard model, they often motivate interesting signals or new experiments. As the LHC energy has marched up and the luminosity increased, we have gained the ability to look for new particles at ever higher masses. Constraints on new particles with O(0.1) level couplings are strong, with tremendous limits over wide ranges of lifetimes and properties. Simultaneously, attention has increasingly turned toward searches for new hidden sectors.
Much of the attention has come on "dark sector" models, where there can be new particles and interactions present, but which are generally assumed to be SM singlets. The communication between sectors occurs via "portals," which are operators that connect the two sectors, i.e.,

L ⊃ O_SM O_DS / Λ^p,

where the dimension of the operator is 4 + p and Λ is the relevant scale of the operator. While non-renormalizable portals can be important (see, e.g., [1]), much effort has been focused on the renormalizable and super-renormalizable portals, namely the Higgs portal,
the kinetic mixing portal, and the neutrino portal. While the Higgs portal typically yields WIMP-like models [2] (although see [3] and related), the other two can yield scenarios with dramatically different mass ranges and properties. The kinetic mixing portal, in particular, has received tremendous attention. In the context of a dark sector with charged matter and a dark Higgs, such kinetic mixing with a dark photon can naturally yield thermal dark matter over a wide range of scales [4][5][6][7][8][9]. These models are simple and often yield interesting signals. Unfortunately, because of the coupling to charged particles the most straightforward of them also produce unobserved distortions of the CMB, absent new mass scales and operators to split the Dirac fermions, or to replace them with new scalar fields.
The neutrino portal has been well studied mostly in the context of sterile neutrino dark matter, e.g., [10][11][12]. However, it, too, can yield thermal models, such as via a new particle for dark matter to annihilate into [13][14][15][16][17][18]. The right-handed neutrino, given a Majorana mass, can also be integrated out, yielding non-renormalizable mass-mixing operators with charged dark sectors, e.g., [19]. However, in these cases the light neutrino mass is corrected by an amount δm_ν ≈ sin²θ m_heavy, meaning either the mixing must be very small, or the heavy state must be eV in scale.
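To see the scales behind this statement, the short check below evaluates sin θ ≲ (δm_ν / m_heavy)^(1/2); the 0.1 eV ceiling on the neutrino-mass correction and the benchmark heavy masses are illustrative assumptions, not values taken from the references.

```python
# delta_m ~ sin^2(theta) * m_heavy must stay below a neutrino-mass bound.
# The 0.1 eV ceiling and the benchmark masses are illustrative assumptions.
delta_m_max_eV = 0.1

for m_heavy_GeV in (1e-9, 1.0, 100.0, 1e3):   # 1 eV, 1 GeV, 100 GeV, 1 TeV
    m_heavy_eV = m_heavy_GeV * 1e9
    sin_theta_max = (delta_m_max_eV / m_heavy_eV) ** 0.5
    print(f"m_heavy = {m_heavy_GeV:g} GeV -> sin(theta) < {sin_theta_max:.1e}")
# m_heavy = 1e-09 GeV -> sin(theta) < 3.2e-01  (eV-scale state: large mixing OK)
# m_heavy = 100 GeV   -> sin(theta) < 1.0e-06  (weak-scale state: tiny mixing)
```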
In this note, we show that the neutrino portal can also yield scenarios that are structurally as simple as kinetic mixing models, but instead have a dark sector that dominantly interacts with neutrinos, rather than charge. The effect arises from the inclusion of a "portalino," a gauge-neutral fermion with an exact or nearly exact global quantum number. The scenarios we arrive at naturally have potentially sizable (g_ν ∼ 10⁻²) new interactions for neutrinos, and dark matter freezeout into neutrinos. The dark matter in these scenarios can be light (m_χ ≲ 10 MeV), without needing to turn off annihilation channels, because the only coupling to SM particles is to neutrinos, which do not strongly affect the CMB.
2 The portalino
The neutrino portal is typically thought of as a relatively benign interaction. It produces a Dirac mass with a SM singlet fermion. If the singlet fermion has a large Majorana mass, the physical mass is suppressed by the seesaw mechanism and the Dirac mass can be sizable. If there is a lepton number symmetry, the Dirac mass sets the scale for the neutrino mass and thus must be small enough to be consistent with terrestrial and cosmological measurements.
However, this latter case assumes that there are no other particles involved. It is this possibility that is our focus.
If one extends the model simply by adding a second singlet fermion ψ, the physical consequences are significant. If ψ has a Dirac mass m_n with the first singlet fermion, then there is a massless field in the spectrum. Specifically, there is a field we would identify as a neutrino, ν = c_θ ν_L + s_θ ψ, with a massive partner n = s_θ ν_L − c_θ ψ.
Here tan θ = m_d/m_n, and the heavy mass eigenstate has mass m = √(m_d² + m_n²). ν has its interactions suppressed compared to the standard model by c_θ, which leads to strong constraints on the mixing angle, s_θ ≲ 10⁻¹-10⁻³, from precision measurements of weak decays with neutrinos. However, if the neutrino in question is ν_τ, there is far less precision information on its couplings, and mixing angles as large as s_θ ∼ 0.3 are possible. Importantly, the Dirac mass m_d can naturally be as large as charged lepton masses, even if the heavy neutrino state is still weak scale.
Let us complicate the situation further: if we imagine the state ψ has interactions of its own, whether scalar or vector, the light mass eigenstate will inherit those interactions, with a coupling suppressed by powers of s_θ. As will be our principal focus, let us suppose a coupling of ψ to a new massive vector boson ω,

L ⊃ g_d ω_μ ψ̄ γ^μ ψ.

In terms of mass eigenstates, this interaction becomes

g_d ω_μ (s_θ ν̄ − c_θ n̄) γ^μ (s_θ ν − c_θ n) = g_d ω_μ [ s_θ² ν̄ γ^μ ν − s_θ c_θ (ν̄ γ^μ n + n̄ γ^μ ν) + c_θ² n̄ γ^μ n ].

Thus, the light neutrino mass eigenstate, which we identify as the physical neutrino, carries a residual vector interaction as well. This "effective Z′" process [20] is a simple way to add interactions to SM fermions. For other SM fermions, such effective Z′ UV completions require new, light states charged with SM quantum numbers. The neutrino, uniquely, does not, and thus becomes a singular portal into new interactions of hidden sectors. Of course, if ψ is charged under a new gauge group, it cannot mix with a gauge singlet. If the gauge symmetry is broken, however, then, just as the standard model neutrino does, ψ can marry the singlet, and the massless eigenstate will inherit its interactions. This singlet fermion which conveys these new interactions to the physical neutrino, and the resulting mass eigenstate, we refer to as the "portalino." The only theoretical requirement on this is that a suitably low-dimension operator exists which is a gauge singlet under the other gauge group and which creates a fermionic (fundamental or composite) single-particle state. Unlike kinetic mixing, which requires a U(1) for a renormalizable interaction, here, regardless of how complicated the new sector's particle content is and what the gauge sector looks like, so long as there is a gauge singlet operator (akin to Lh in the SM) one can develop the interaction in question.
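A quick numerical check of this structure, with illustrative mass values (the overall sign convention for ψ is unphysical; only s_θ² and c_θ² enter physical couplings), confirming that the rotation leaves one massless eigenstate with tan θ = m_d/m_n and a residual ν-ν coupling of g_d s_θ²:

```python
import numpy as np

# Illustrative Dirac masses (arbitrary units): mass term (m_d nu_L + m_n psi) n^c.
m_d, m_n = 0.5, 1.5
theta = np.arctan2(m_d, m_n)              # tan(theta) = m_d / m_n
c, s = np.cos(theta), np.sin(theta)

# Rotate (nu_L, psi) to (nu, n); the signs here are a convention.
R = np.array([[c, -s],                    # nu: the massless combination
              [s,  c]])                   # n:  the combination marrying n^c
print(R @ np.array([m_d, m_n]))           # ~[0, sqrt(m_d^2 + m_n^2)]

# omega couples only to psi: diag(0, g_d) in the (nu_L, psi) basis.
g_d = 1.0
G = R @ np.diag([0.0, g_d]) @ R.T
print(G[0, 0], g_d * s**2)                # nu-nu coupling equals g_d * s_theta^2
```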
While this particular scenario is new, it draws from a number of ingredients that have existed in the literature. Ref. [20] discussed the effective Z′ scenario, the idea of using "missing partners" as a means to generate effective interactions with massive gauge bosons. Ref. [21] discussed how, with a missing partner, neutrinos with large Dirac masses can still have massless eigenstates, allowing the massless eigenstate to inherit a large Yukawa coupling. Interactions that only neutrinos feel, sometimes called "secret" interactions, have been widely discussed in many contexts [22][23][24], often described as an effective theory. Direct mass mixings of the form ψφhl/M have been studied as a means to induce interactions for neutrinos [19,25,26], including in chiral models [27], but, absent tuning, these tend to require either small mixings or light sterile neutrinos. More closely related to dark matter, [28] studied an effective Z′ model, where charged fermions marry the neutrino by extending the SM with a second Higgs doublet charged under a new U(1). Refs. [13,18] showed how an inverse seesaw could yield a large DM-ν-φ Yukawa interaction, where φ is some new lepton-number-carrying force carrier. These last two are closest in content to the scenario described here.
2.1 A simple model
Taking the above discussion and translating it into a full Lagrangian is straightforward: one must simply cancel gauge anomalies (by adding a conjugate ψ^c) and ensure that there are no additional massless states (which are constrained by the CMB).
The simplest example is, schematically,

L ⊃ y l h n^c + λ_n φ n^+ n^c + λ_x φ† x^− x^c + h.c.,

where l and h are the usual Standard Model fields, n^c and x^c are gauge singlets, n^+ and x^− are two SM singlets which are charged under the dark U(1)_d, and φ is the U(1)_d-breaking scalar carrying the compensating charge. We use the x and n labels to distinguish the mass eigenstates after U(1)_d breaking. Connecting to the previous section, n^c is our portalino, n^+ ↔ ψ, and we extend the model with additional fields to cancel the gauge anomalies and provide masses for all new states. Mass terms n^c x^c and n^+ x^− would be allowed by the gauge symmetries, but are easily forbidden with global symmetries.
Assuming that the uneaten fields in the scalars h and φ are heavy enough to be ignored, we replace them by their VEVs and arrive at two (Dirac) mass terms. There are two massive Dirac particles: x^c pairs up with x^-, and n^c combines with a linear combination (n) of ν_L and n^+; the orthogonal linear combination remains massless and is identified as the "usual" neutrino ν. Note that we have included only one portalino field in this simple model. It couples to a linear combination of the SM lepton doublets, $y\,l \to \sum_i y_i l_i$ in eq. (2.5). Generalizations with multiple portalinos are straightforward, see section 2.2. The couplings of the fields to the gauge bosons in the mass-eigenstate basis follow from this mixing. In this model, n acts as an unstable heavy neutrino, the portalino, while x is stable. As we shall discuss in section 3, x provides a natural dark matter candidate. The details of the model's dark phenomenology depend on the ordering of the massive particles n, x and ω.
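For concreteness, here is a minimal sketch of the mass sector just described; the coupling labels y, λ and λ_x are illustrative placeholders rather than the paper's exact notation:

```latex
% Illustrative Dirac mass terms after <h> = v_h and <phi> = v_phi:
-\mathcal{L}_{\rm mass} \;=\; n^c\left( y\,v_h\,\nu_L + \lambda\,v_\phi\,n^+ \right)
                        \;+\; \lambda_x\,v_\phi\, x^c x^- \;+\; \text{h.c.}
```

With $M \equiv \sqrt{(y v_h)^2 + (\lambda v_\phi)^2}$ and $s_\theta \equiv y v_h/M$, the combination $n = (y v_h\,\nu_L + \lambda v_\phi\,n^+)/M$ marries $n^c$ with mass $M$, while the orthogonal state $\nu = (\lambda v_\phi\,\nu_L - y v_h\,n^+)/M$ stays massless; its $n^+$ admixture $s_\theta$ is what feeds the residual $g_D s_\theta^2\,\bar\nu\gamma^\mu\nu\,\omega_\mu$ coupling discussed above.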
All three obtain their masses from the symmetry-breaking VEV v_φ, proportional to their coupling constants. As a benchmark scenario we envision the ω as the heaviest particle, with a mass of order a GeV, the dark matter x with a mass in the 10 to 100 MeV range, and the portalino n somewhere in between. With this ordering of the spectrum, portalinos decay invisibly to dark matter particles, and dark matter freeze-out occurs via annihilation into neutrinos. Remarkably, all other orderings also produce viable models of thermal dark matter.
2.2 Non-Abelian models and generalizations
The U(1) model is a very simple example of the portalino mechanism with an automatic dark matter candidate. However, there are a number of variants which we might also consider.
2.2.1 Multiple portalinos
The simplest extension of the above scenario is to enlarge n P , the total number of gauge singlet portalinos. As we do so, we should simultaneously consider enlarging n D , the number of dark gauge-charged fermions ψ. In the most straightforward examples n P = n D and there are no new massless degrees of freedom.
However, even in this simple modification, there are important differences. The first critical difference is that the heavier portalinos will dominantly decay invisibly, via an off-shell ω, irrespective of the ordering of the mass spectrum of n, x and ω. This will be critical when we discuss the experimental constraints on this scenario in section 4.
The second difference is that the mixing angle of the lightest portalino is not necessarily the largest mixing angle. Thus, the massless "neutrinos" can have larger couplings to ω, only constrained by the properties of a heavier, invisibly decaying portalino.
We can also consider moving away from n_P = n_D. If n_P > n_D, we will require some state to have extremely small (< 10^-12) Yukawas, lest the SM neutrinos have too large Dirac masses. Since our premise is that all these couplings should be more comparable to ordinary Yukawas, this moves the scenario in a qualitatively different direction.
If we take n_D > n_P, we will have new, massless degrees of freedom. As we will discuss later, there are well-known and generic constraints on new light degrees of freedom. There are unavoidable production processes for these states as well. Just as the portalino can decay to SM neutrinos, it will be able to decay to these new states as well. Four-fermi operators will allow production of these states via νν → ψψ. If these states are charged under the ω gauge group, the rate of production will be large. If they are charged under some other gauge group, they could be produced by processes mediated by that gauge boson. However, even if those processes are suppressed, they will still be produced at a rate $\sim T^5 \sin^4\theta_\nu \sin^4\theta_\psi / m_\omega^4$, where θ_ν, θ_ψ are the mixings of ν and ψ with the portalino, respectively. Terminating this process by T ∼ GeV requires $m_\omega \gtrsim 10^5\,\mathrm{GeV} \times \sin\theta_\psi \sin\theta_\nu$. Precise constraints depend on the details of the new hidden sectors.
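As a quick consistency check of the quoted bound, a back-of-the-envelope estimate (assuming radiation domination and dropping order-one prefactors):

```latex
% Demand that the nu nu -> psi psi rate fall below Hubble by T ~ GeV:
\Gamma \sim \frac{T^5 \sin^4\theta_\nu \sin^4\theta_\psi}{m_\omega^4}
\;\lesssim\; H \sim \frac{T^2}{M_{\rm pl}}
\quad\Longrightarrow\quad
m_\omega \;\gtrsim\; \left(T^3 M_{\rm pl}\right)^{1/4}\sin\theta_\nu\sin\theta_\psi .
```

With T ∼ 1 GeV and M_pl ∼ 10^19 GeV, $(T^3 M_{\rm pl})^{1/4} \approx 10^5$ GeV, reproducing the quoted scaling.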
2.2.2 Non-Abelian models
While the simplest model from section 2.1 is based on a U(1) gauge group, the charged fields can be charged under any other group, so long as a gauge-singlet fermion operator is present.
For G = SU(2) → 0 (i.e., a fully broken SU(2)), we would naturally require two portalinos to give masses to both components of the doublet ψ. The off-diagonal SU(2) interactions would mediate potential transitions between SM neutrinos. However, since the ω interactions in the single-portalino case already needn't be flavor diagonal, we would not expect a significant change in neutrino properties. Dark matter, meanwhile, is naturally a doublet as well, and there are a variety of interesting consequences if those states are non-degenerate.
For G = SU(2)×U(1) → U(1) there is naturally a charged partner, akin to the electron in the SM. The existence of the massless photon means that the theory should decouple from the SM before T ∼ GeV. One would expect multiple components of DM in this theory, at least one with a residual U(1) interaction.
For G = SU(3) → SU(2), ψ is a triplet. The portalino marries ψ_3, while ψ_{1,2} remain charged under the unbroken SU(2). With two copies of ψ, one can write a non-vanishing $\epsilon^{ijk}\phi_i \psi_j \psi_k$, which would give masses to the ψ_{1,2} states. One could envision a component of the dark sector made of "quirky" dark matter. The variations of this scenario are numerous, and we defer them to later work.
3 Dark matter freezeout
Since the field ψ is charged under a hidden gauge symmetry, it is expected that there must be some additional field ψ^c to cancel the gauge anomalies. It is natural (although not required) that, due to gauge or global charges of ψ^c, no ψ^c ψ Dirac mass term is present. If ψ^c acquires a mass by marrying a different field, it forms a natural candidate for dark matter. This is simply illustrated within the context of the U(1) model of section 2.1, where x forms a natural dark matter candidate.
The precise freezeout process depends on the spectrum. Depending on masses, χχ → νν, χχ → nn, χχ → nν or χχ → ωω can all be the dominant annihilation channel. If m_χ > m_ω, then χχ → ωω typically dominates, with cross section $\sigma v = \pi\alpha^2/m_\chi^2$. For light WIMPs, this requires a small coupling $\alpha \sim 10^{-4}\,[m_\chi/\mathrm{GeV}]$. On the other hand, for m_χ < m_ω, s-channel annihilations are naturally mixing suppressed, and couplings comparable to those of the SM are more naturally allowed.
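A minimal numerical sketch of this estimate, assuming the standard thermal-relic target ⟨σv⟩ ≈ 3 × 10^-26 cm³/s and the quoted tree-level form σv = πα²/m_χ² (order-one prefactors and velocity dependence are ignored):

```python
import math

# Thermal-relic annihilation target (approximate; varies mildly with mass)
SIGMA_V_THERMAL_CM3_S = 3e-26      # cm^3/s
# Conversion factor: sigma*v in GeV^-2 -> cm^3/s, i.e. (hbar*c)^2 * c
GEV_INV2_TO_CM3_S = 1.17e-17

def alpha_for_thermal_relic(m_chi_gev: float) -> float:
    """Coupling alpha such that sigma*v = pi*alpha^2/m_chi^2 hits the target."""
    sigma_v_gev2 = SIGMA_V_THERMAL_CM3_S / GEV_INV2_TO_CM3_S  # in GeV^-2
    return math.sqrt(sigma_v_gev2 / math.pi) * m_chi_gev

for m in (0.05, 0.1, 1.0):  # GeV
    print(f"m_chi = {m:5.2f} GeV  ->  alpha ~ {alpha_for_thermal_relic(m):.1e}")
# Yields alpha of order 1e-6 to 1e-5 for 10-100 MeV masses: small, and within
# an order of magnitude of the alpha ~ 1e-4 [m_chi/GeV] scaling quoted above.
```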
4 Portalino phenomenology and neutrino constraints
As these scenarios can yield O(1) corrections to the SM neutrino couplings, it is clear that constraints arise from a number of sources. Some constraints are modified in the presence of multiple portalinos, and we shall review them here. Our bounds are adapted from the excellent review by de Gouvêa and Kobach [29], the SHiP white paper [30], and refs. [13,18].
1. Precision electroweak and lepton universality: in the Standard Model the muon width is proportional to the Fermi constant G_F squared, which can also be determined in other ways, for example by measuring the W and Z masses and α_em. Comparing independent determinations of G_F is a test of the SM. A portalino mixing with either the electron or muon neutrino reduces the W coupling of the neutrino by cos θ. This reduces the muon width by cos²θ, and comparing to other experimental determinations of G_F one obtains an upper bound on the angle θ, as shown in figure 1, labeled "lepton universality, PEW". Muon decay bounds become weaker if the portalino mass is lower than the muon mass, so that portalino final states are possible. In the limit m_n ≪ m_µ, the muon width including neutrino and portalino final states is proportional to cos²θ + sin²θ = 1, i.e. insensitive to θ. For non-negligible portalino masses, a phase-space analysis of Michel electrons (labeled µ → eνν in figure 1) can give a strong bound. Similarly, τ decays can be used to bound mixing of the portalino with τ neutrinos.
2. Meson decays: charged-current meson decays with leptons in the final state can go to portalinos if the portalino is lighter than the decaying meson. In the case of two-body decays of stopped mesons, such as π → eν or K → eν, the energy of the charged lepton is monochromatic and would be shifted to lower values for decays with portalinos. The absence of a second line in the spectrum of final-state charged-lepton energies provides a very strong constraint because of the large number of pion and kaon decays observed. In addition, the overall rates for leptonic K and π decays are sensitive to mixing with portalinos. In hadronic τ decays, the phase-space distribution of final-state hadrons is sensitive to the presence of a final-state portalino with non-negligible mass.
3. Neutrino oscillations: another bound on portalino-neutrino mixing which does not rely on details of the portalino decay can be obtained by considering neutrino oscillations. If neutrinos mix significantly with a sterile portalino, then the observed 3 × 3 neutrino mixing matrix is non-unitary. An analysis of atmospheric neutrino data gives the bound $|U_{\nu_\tau n}|^2 \lesssim 0.18$.

4. Portalino decay searches: portalinos produced, for instance, in meson or τ decays can be searched for via their decaying in a detector cavity. Experiments have a significant reach even in the case of very small mixing, but only when the portalino decays into visible final states. We indicate such bounds in figure 1 with dotted lines. Significant improvement of such bounds would be obtained with SHiP [30,31], LBNE [32], and FCC-ee [33].

5. Lepton flavor violation: the portalino can mix with more than one flavor of neutrino and therefore mediate lepton-flavor-changing transitions. In the SM, the corresponding neutrino-mediated transitions are negligible because of the smallness of the neutrino masses, but here the Dirac mass terms of the portalino are much larger. Currently, the only bounds which are competitive with flavor-preserving bounds are from µ → e transitions and bound the product $|U_{\nu_\mu n} U_{\nu_e n}| \sim \sin\theta_\mu \sin\theta_e$. The bounds shown in figure 1 (red dashed lines) assume that the portalino mixes equally strongly with ν_µ and ν_e, i.e. sin θ_µ = sin θ_e.

Figure 1. Bounds in the mass versus mixing parameter space for portalinos mixing with electron neutrinos (left), muon neutrinos (right), and tau neutrinos (bottom). In all three plots, portalino masses smaller than 10 MeV are ruled out by the CMB bound on extra relativistic degrees of freedom ∆N_eff (green shading). The blue shaded regions at large mixing angles are ruled out by the combination of a number of fairly model-independent constraints which do not rely on portalino decays; these include bounds on non-universal neutrino couplings, precision electroweak constraints, changes in the rates and kinematics of π, K, µ and τ decays, and neutrino oscillations. The dotted-line bounds (orange shading) are more model dependent, as they rely on portalino decays to visible SM particles. For details, see text and references [13,29,30,34].
5 Portalino cosmology
The portalino, by virtue of its relatively large interactions with the SM, is in thermal and chemical equilibrium in the early universe and does not fall out of equilibrium until temperatures below its mass, where its number density becomes exponentially suppressed. To see this, consider the rate for annihilation nn → nν at temperatures near the portalino mass, T ∼ m_n,

$$\Gamma_{\rm ann} \sim n_n\, \sigma v \sim m_n^3 \times \frac{m_n^2\, g_D^4 \sin^2\theta}{m_\omega^4}, \qquad (5.1)$$

which is much larger than the Hubble rate $H \sim m_n^2/M_{\rm pl} \sim 10^{-19}\,\mathrm{MeV}\,[m_n/10\,\mathrm{MeV}]^2$ even for portalinos as light as 10 MeV. At lower temperatures, it falls out of equilibrium, and we check that its lifetime is short compared with the time of BBN. The lifetime depends on whether it decays into dark-sector or SM states. If the portalino is the lightest dark-sector particle, it can still decay as n → 3ν via an off-shell ω, with a width scaling as

$$\Gamma(n \to 3\nu) \sim \frac{g_D^4 \sin^2\theta\, \sin^2\theta_{\rm max}\, m_n^5}{m_\omega^4}. \qquad (5.2)$$

Thus, even for portalinos which can only decay to SM particles, the decay is sufficiently rapid. When there is a dark-sector state that is lighter than the portalino under consideration (for example, if there is a lighter portalino or if the dark matter particle is lighter than the portalino), then the lifetime for three-body decays is much shorter, with sin²θ_max → 1. If the portalino is heavier than m_ω, then the two-body decay n → ων becomes dominant. Thus even portalinos as light as 10 MeV can decay promptly before BBN under a wide range of circumstances. In determining the 10 MeV lower bound for the portalino mass, the dominant constraint comes from CMB bounds on light species. The portalino generally stays in kinetic and chemical equilibrium until after the neutrinos decouple from the electron/photon bath. The total portalino entropy at neutrino decoupling is then ultimately deposited into the neutrino bath and increases N_eff. The temperatures of chemical decoupling for the different neutrinos are approximately $T^{\rm chem}_{\nu_e,\nu_\mu,\nu_\tau} \simeq 3.2,\ 5.3,\ 5.3$ MeV [35]. We take a limit N_eff < 3.37 arising from a combination of Planck and other data [36]. This results in a bound of m_n > 22 or 36 MeV, assuming the portalino can annihilate to e or only to µ/τ, respectively. Even with a small coupling to ν_e the lower bound typically applies (see eq. (5.2)). Both of these bounds assume that the portalinos cannot deposit their energy directly into SM particles other than the neutrinos, and, in the latter case, that the ν_{µ,τ} cannot rethermalize with ν_e via the new ω interactions (in which case the lower ν_e bound applies). Since this analysis assumes an instantaneous decoupling of neutrinos from the electron/photon bath, we conservatively plot a bound of m_n > 10 MeV in figure 1.
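For orientation, an illustrative evaluation of these rates (taking g_D ∼ 1 and sin θ ∼ 0.1, which are assumptions for this estimate only):

```latex
\Gamma_{\rm ann} \sim \frac{m_n^5\, g_D^4 \sin^2\theta}{m_\omega^4}
\approx \frac{(10\,\mathrm{MeV})^5 \times 10^{-2}}{(10^3\,\mathrm{MeV})^4}
\approx 10^{-9}\,\mathrm{MeV}
\;\gg\; H \sim 10^{-19}\,\mathrm{MeV},
```

so even a 10 MeV portalino with a GeV-scale ω comfortably remains in equilibrium down to T ∼ m_n.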
5.1 Light species
In this scenario, it is quite common that there are additional light species present. This can be because the gauge sector has a residual unbroken component, or because non-Abelian partners of ψ have small or zero masses. Thus, it is worth considering what constraints on N_eff imply for such portalino scenarios.
Assuming that the hidden sector decouples from the SM at a temperature T_dec, the effective number of neutrinos contributed by the particles in the hidden sector is

$$\Delta N_{\rm eff} = \frac{4}{7}\, g_{*D} \left( \frac{g_{*{\rm SM}}}{g^{\rm dec}_{*{\rm SM}}} \cdot \frac{g^{\rm dec}_{*D}}{g_{*D}} \right)^{4/3},$$

where g_{*D}, g_{*SM}, g^{dec}_{*D}, g^{dec}_{*SM} are the effective numbers of degrees of freedom in the dark sector and the SM at low energies, and in the dark sector and the SM at decoupling, respectively. Specifically, g_{*SM} = 10.75, since we are anchoring to the last point when neutrinos have their entropy increased. For simplicity, we can take g_{*D} = g^{dec}_{*D} and assume T_dec ≳ 1 GeV, at which time g^{dec}_{*SM} = 61.75, yielding

$$\Delta N_{\rm eff} = 0.056\, g_{*D}. \qquad (5.5)$$

Taking the Planck limit (95% confidence) of ∆N_eff < 0.33 [36], we find the relatively mild g_{*D} < 6. On the other hand, if T_dec ≲ Λ_QCD, at which time g^{dec}_{*SM} = 17.25, this becomes a more stringent requirement, g_{*D} < 1.1.
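The quoted numbers follow from this relation directly; a short numerical check (assuming instantaneous decoupling and constant g_{*D}, as in the text):

```python
# Delta N_eff from a decoupled hidden sector, using comoving entropy
# conservation in each sector separately (instantaneous decoupling).
G_STAR_SM_LOW = 10.75   # SM entropy d.o.f. once the neutrino bath is fixed
PLANCK_LIMIT = 0.33     # 95% CL bound on Delta N_eff used in the text

def delta_neff_coeff(g_dec_sm: float) -> float:
    """Coefficient c in Delta N_eff = c * g_star_D (taking g_star_D constant)."""
    return (4.0 / 7.0) * (G_STAR_SM_LOW / g_dec_sm) ** (4.0 / 3.0)

for label, g_dec in (("T_dec >~ 1 GeV", 61.75), ("T_dec <~ Lambda_QCD", 17.25)):
    c = delta_neff_coeff(g_dec)
    print(f"{label}: DeltaN_eff = {c:.3f} * g*_D  ->  g*_D < {PLANCK_LIMIT / c:.1f}")
# Reproduces DeltaN_eff ~ 0.056 g*_D (hence g*_D < 6) and the more stringent
# g*_D < 1.1 when decoupling happens below the QCD transition.
```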
6 Discussion
Dark matter that primarily interacts with neutrinos is a challenging scenario to test. Mixing of the portalino with SM neutrinos is a crucial component of the scenario and can be tested by precision measurements of the SM neutrino couplings. But there are also several more model-dependent possible signals which depend on the details of the hidden sector.
• In addition to the large interaction of the ω with neutrinos, it may have small interactions with other SM particles. This could arise, for instance, from a small kinetic mixing with the SM photon. Alternatively, we could identify it as the gauge boson of some other SM symmetry, such as baryon number or µ − τ, or some effective Z′, although this would require either small couplings or m_ω ≳ 100 GeV. In addition to the well-studied phenomena associated with those forces, this would also yield signals of enhanced ν-SM interactions, especially at energies comparable to m_ω.
• The portalino couplings may also violate lepton flavor. Then W loops, in association with flavor-violating portalino coupling insertions, can give rise to processes like µ → eγ and µ → e conversion in the field of a nucleus. The Mu2e experiment [38] at Fermilab will look for µ → e conversion in the field of an aluminum nucleus and is expected to improve the limits on $|U_{\nu_\mu n} U_{\nu_e n}|$ by two orders of magnitude.
• So far, we have assumed for simplicity that the Yukawa couplings in (2.5) are real. In general, such couplings are complex and give rise to CP violation. In particular, if there are at least two portalinos with weak-scale masses $m_{n_i}$ and mixing angles not too far from the precision electroweak bounds, $|U_{\nu_l n_i}|^2 \lesssim 10^{-3}$, then an electron EDM of experimentally interesting size can result. The leading contribution stems from a two-loop diagram [39], where $J_{De} \lesssim 3\times10^{-6}$ contains the complex phases and is suppressed by four powers of the portalino mixing angles $U_{\nu_l n_i}$, and $I_D \sim 1$ is a dimensionless loop integral. Evaluating this for mixing angles at the PEW bound and maximal CP violation, we find

$$d_e/e\;[\mathrm{cm}] \simeq 3\times10^{-30}\;\frac{m_{n_1} m_{n_2}}{\mathrm{TeV}^2}\;\frac{J_{De}}{3\times10^{-6}},$$

which is close to the 2018 experimental bound from ACME of $1.1\times10^{-29}$ cm [40].
• A clear signature of this scenario would be the detection of monochromatic neutrinos from dark matter annihilation. Current limits on this from the galactic center are roughly 100-1000 times the thermal cross section for $10\,\mathrm{GeV} \lesssim m_\chi \lesssim \mathrm{TeV}$ [41,42], making a straightforward detection of the scenarios described above challenging. However, it is quite simple to employ the portalino to yield models that could be detected.
In particular, one can envision a scenario with m_ω < 2m_n and dark matter a vector-like state with ∼ TeV mass, charged under the ω. This allows one to straightforwardly adopt the construction of [7] and consider χχ → ωω, with ω → νν. For m_ω ∼ GeV, a sizable Sommerfeld enhancement would boost the signal into the detectable regime. For m_ω > 2m_n, the signals could become partially visible at the level of the visible BR of the n, yielding other signatures, including CMB constraints.
Finally, superheavy dark matter could conceivably decay via ω emission. The boosted ω could then decay, producing ultra-high-energy neutrinos, but without any associated charged-particle signals, evading the basic constraints considered in [43].
• Another possible signal would be in the spectrum of UHE neutrinos observed at IceCube. The center-of-mass energy for a PeV cosmic-ray neutrino incident on a non-relativistic neutrino of the relic neutrino background with mass O(0.1 eV) is O(100 MeV). Thus, it is an intriguing point that the ongoing search at IceCube is for the first time giving us information on ν-ν interactions in the 10 MeV-1 GeV range.
Given this, it is conceivable to consider an ω-burst scenario akin to the Z-burst idea [44]. The average density of relic neutrinos is O(100 cm^-3); thus, we can consider the column density for a neutrino traversing the observable universe,

$$c\tau\, n_\nu \approx (3\times10^{10}\,\mathrm{cm/s})(5\times10^{17}\,\mathrm{s})(100\,\mathrm{cm}^{-3}) \approx 10^{30}\,\mathrm{cm}^{-2} \approx (20\,\mathrm{GeV})^2. \qquad (6.2)$$

Thus, for $\sigma \sim \sin^4\theta/m_\omega^2 \sim (10^{-3})^2/(100\,\mathrm{MeV})^2 \sim (100\,\mathrm{GeV})^{-2}$ (which is the approximate size of the cross section on resonance), one can reasonably have a universe that is somewhat opaque to neutrinos at the resonance energy. This would lead to distortions of the cosmic-ray neutrino spectrum which could be detectable at IceCube [24,45]; see the numerical sketch following this discussion.

• Self-interactions of the dark matter may thermalize the cores of dark matter halos, potentially resolving small-scale anomalies [46-49]. The interactions between the dark matter and neutrinos, or between dark matter and dark radiation, in our models can also leave a measurable imprint on the large-scale structure of the universe [50,51].

The portalino could naturally find itself embedded in other scenarios, such as solutions to the hierarchy problem. In a SUSY model, the mass scale could arise radiatively, analogously to kinetic mixing scenarios [52-54]. In a Twin Higgs scenario, portalinos could serve as a means to marry off just three of the six neutrinos, ameliorating the massless-degree-of-freedom problem in those models. We leave a detailed study of these possibilities for future work.
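A quick numerical check of the ω-burst opacity estimate above (order-of-magnitude only, using the on-resonance cross section quoted in the text):

```python
# Optical depth of the relic neutrino background to UHE neutrinos
# at the omega resonance (order-of-magnitude estimate).
C_CM_PER_S = 3e10           # speed of light
T_UNIVERSE_S = 5e17         # age of the universe, roughly a Hubble time
N_NU_PER_CM3 = 100.0        # relic neutrino density, as in the text
GEV_INV2_TO_CM2 = 3.89e-28  # (hbar*c)^2: converts GeV^-2 to cm^2

column = C_CM_PER_S * T_UNIVERSE_S * N_NU_PER_CM3    # ~1.5e30 cm^-2
sigma_res = (1.0 / 100.0) ** 2 * GEV_INV2_TO_CM2     # (100 GeV)^-2 in cm^2
tau = column * sigma_res
print(f"column density ~ {column:.1e} cm^-2, optical depth tau ~ {tau:.2f}")
# tau of order a few percent: enough to imprint an absorption feature
# ("somewhat opaque") on the UHE neutrino spectrum at the resonance energy.
```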
We should also note that while we have focused on the portalino coupling to the neutrino and carrying an effective lepton number, it is also possible to write the non-renormalizable operator $uddn/M^2$, replacing lepton number with neutron number. This would yield a small n-neutron mass mixing and, by analogy with the neutrino mixing scenario, would lead to neutron-specific effective interactions. Given the attendant questions of flavor and collider limits, a full discussion warrants further study.
The astute reader will have noticed that we have not said anything about neutrino masses. This is because the portalino is compatible with many different ideas for neutrino mass generation. One scenario that appears particularly intriguing is radiative neutrino mass generation via n-number violation. Alternatively, one could envision an inverse seesaw due to a small Majorana mass for ψ.
While we have illustrated a variety of interesting scenarios, we have only scratched the surface of possible models. In particular, chiral models, non-Abelian models, and a thorough exploration of dark matter scenarios are warranted. | 2023-01-21T14:42:17.931Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "fd807d1f150f43334f878c42581a46ab25576348",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP02(2019)105.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "fd807d1f150f43334f878c42581a46ab25576348",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
5890196 | pes2o/s2orc | v3-fos-license | Transcriptomics and Metabonomics Identify Essential Metabolic Signatures in Calorie Restriction (CR) Regulation across Multiple Mouse Strains
Calorie restriction (CR) has long been used to study lifespan effects and to oppose the development of a broad array of age-related biological and pathological changes (i.e., to increase healthspan). Yet, a comprehensive comparison of the metabolic phenotype across different genetic backgrounds to identify common metabolic markers affected by CR is still lacking. Using a systems biology approach comprising metabonomics and liver transcriptomics, we revealed the effect of CR across multiple mouse strains (129S1/SvlmJ, C57BL6/J, C3H/HeJ, CBA/J, DBA/2J, JC3F1/J). Oligonucleotide microarrays identified 76 genes as differentially expressed in all six strains. These genes were subjected to quantitative RT-PCR analysis in the C57BL/6J mouse strain, and a CR-induced change in expression was confirmed for 14 genes. To fully depict the metabolic pathways affected by CR and complement the changes observed through differential gene expression, the metabolome of C57BL6/J was further characterized in liver tissues, urine and plasma using a combination of targeted mass spectrometry and proton nuclear magnetic resonance spectroscopy. Overall, our integrated approach commonly confirms that energy metabolism, stress response, lipid regulators and the insulin/IGF-1 axis are key determinant factors involved in CR regulation.
Introduction
Managing quality of life and delaying the onset of aging-related chronic diseases is one of the quests to promote healthy aging. Within this aim, a significant proportion of research in this area is engaged in revealing the mechanisms underlying the aging and/or longevity processes and how these could be modulated. Aging is characterized by an increasing chronic, low-grade inflammatory status, termed inflamm-aging [1,2], responsible for the major inflammation-driven age-related diseases, such as cardiovascular disease (CVD), diabetes mellitus (DM), Alzheimer disease (AD), and cancer [3,4]. Dietary caloric restriction (CR) is a robust but severe nutritional intervention with plausible effects on healthspan, body function and longevity [5]. Unlike many interventions, life-long CR suggests alteration of fundamental biological processes that control aging [6], as observed in a diverse range of organisms [7,8], including nematodes, mice, rats, dogs [9,10] and humans [11]. Both the duration and the time of CR initiation have been found to be crucial for developing an anti-aging strategy. Long-term CR initiated before mid-life has been reported to slow down the aging process and to increase life span in rodents [12] and dogs [13], whereas CR initiated late in life increases mortality in several rodent models [14]. Humans are not likely to benefit from CR as much as these model organisms in terms of increased lifespan, since they have evolved to minimize the effect of food shortage [15]. However, CR may be a useful anti-aging (healthspan-increasing) strategy for humans to decrease or delay the onset of degenerative diseases, as suggested by studies on non-human primates and human beings [16,17]. Studies of Okinawan centenarians support the view that a low-calorie diet can increase prospects for good health and longevity in humans [17,18]. Findings from the small group of volunteers confined to Biosphere 2 confirmed that a 30% dietary restriction could be imposed for two years and would produce many of the expected physiological, hormonal, and morphological effects [19]. Therefore, identification of pathways and biomarkers activated by caloric restriction would greatly contribute to the determination of nutritional strategies aiming to mimic the health benefits of CR. Indeed, CR mimetics provide a more realistic anti-aging strategy by shifting energy metabolism towards that observed under CR, without requiring reduced food intake, thereby promoting healthspan effects. CR mimetics at present include glycolytic inhibitors, stress response enhancers, sirtuin controllers, manipulation of the insulin/IGF-1 axis, lipid and adipokine regulators, and autophagic enhancers [20].
Nowadays, the diet-restricted rodent model is widely used to understand the mechanisms of the aging process [21]. Yet the molecular mechanism underlying the lifespan extension by CR is still a matter of debate [22,23]. Moreover, despite CR resulting in similar benefits in various animal models, the impact on physiology and the metabolic adaptation to the new metabolic homeostasis vary greatly from one animal model/strain to another [24]. It is therefore vital to delineate key regulatory processes and common metabolic pathways across species, translating such knowledge into drug and/or nutritional targets for CR mimetics. Within this quest, metabolic phenotyping is a useful tool to establish gradual metabolic changes linked to dietary intervention and disease development. Recently, metabonomics has successfully been applied to study the modulation of aging processes following nutritional interventions, including caloric restriction-induced metabolic changes in mice [25], dogs [26,27], and non-human primates [28], as well as high-fat-induced weight gain and metabolic disorders [29]. Yet, beyond the insight provided by these studies, a comprehensive and comparative capture of common metabolic functions across multiple animal species is missing. In the present study, liver tissue transcriptomics and blood plasma metabonomics were employed to capture the hepatic and systemic metabolic processes under the influence of CR initiated early in life across six mouse models. We herein report tissue-specific candidate biomarkers that were commonly affected by CR. To fully depict the metabolic pathways affected by CR and confirm the transcriptomic changes, the metabotypes of C57BL6/J mice were further characterized using a proton nuclear magnetic resonance (1H-NMR) profiling approach in liver and urine. Among the selected strains, the C57BL6/J mouse is a particularly interesting model owing to its genetic susceptibility to weight gain under specific dietary conditions [30]. Indeed, C57BL6/J mice fed a high-fat diet develop obesity, mild to moderate hyperglycemia, hyperinsulinemia and related disorders [31]. Moreover, this strain of mice is a well-accepted model system to investigate the nutrition-metabolism paradigm and is, therefore, well suited to investigate the CR diet.
CR Led to a Significant Modulation of Body Weight Gain across the Animal Strains
Body weight gain was significantly lowered by CR in all the animal strains (Figure 1, Table A1). Differences in body weight gain between CR and control groups were statistically significant across the different mouse strains, except for the 129S1/SvlmJ (significant only at Weeks 20 and 22) and C57BL6/J strains (significant only from Week 12 to Week 22), as tested by ANOVA and Bonferroni post-tests. In particular, body weight was significantly lower in C57BL6/J starting at 20 weeks, which suggests a late response to CR that may be related to the specific genetic susceptibility of this animal strain to gain weight [31]. This observation highlights the existence of strong phenotypic variability among the different mouse models and suggests the existence of specific metabolic signatures associated with these CR phenotypes.
Figure 1.
Body weight gain over time for the different strains of mice under control diet or calorie restriction (CR). Data are reported as mean ± SE values for body weight gain calculated from Week 8 onwards. Control and CR groups are plotted with dot and square shapes, respectively.
Identification of Transcriptional Biomarkers of CR in Mouse Liver
Microarray analysis revealed that, of the ~20,000 transcripts represented, 76 genes were significantly changed in expression in all six mouse strains (Table A2). We selected 14 genes from this list for confirmation by reverse transcriptase quantitative PCR (qPCR) in the C57BL6/J strain; selection criteria for qPCR confirmation included a robust fold change in expression (>2-fold) and a large relative signal intensity in the microarray data. All 14 genes tested by qPCR were confirmed to be significantly modulated by CR (Figure 2).
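Relative qPCR fold changes of this kind are conventionally computed by the 2^(-ΔΔCt) method; a sketch of that calculation follows (hedged: the authors cite their own procedure [46], which may differ), using Tbp, the housekeeping gene named in the methods, as the normalizer:

```python
import numpy as np

def fold_change_ddct(ct_gene_cr, ct_tbp_cr, ct_gene_ctrl, ct_tbp_ctrl):
    """Relative expression (CR vs. control) by the 2^-ddCt method.

    Each argument is an array of Ct values across biological replicates;
    Tbp serves as the housekeeping normalizer (unchanged by CR here).
    """
    d_ct_cr = np.mean(ct_gene_cr) - np.mean(ct_tbp_cr)
    d_ct_ctrl = np.mean(ct_gene_ctrl) - np.mean(ct_tbp_ctrl)
    return 2.0 ** (-(d_ct_cr - d_ct_ctrl))
```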
Figure 2.
CR-induced change in expression from the microarray data for the 14 biomarkers of CR in mouse liver. The fold change in expression in response to CR is shown for each strain; significant changes are highlighted with yellow or blue fill.
Identification of Common Pathways Affected by CR in Liver of Multiple Mice Strains
Pathway analysis of the microarray data revealed 24 Gene Ontology terms (p < 0.01 and FDR < 0.15) that were significantly modulated in at least five of the six mouse strains (Figure 3).

Figure 3. Common pathways for liver CR in multiple mouse strains. Parametric analysis of gene set enrichment (PAGE) identified gene sets significantly modulated by CR (p < 0.01, FDR < 0.15) in liver for at least five of the six strains. Each row corresponds to the transcriptional alteration of each gene set with CR. Gene sets up-regulated with CR are shown in red, whereas gene sets down-regulated with CR are shown in blue. Labels indicate the GO ID and GO term of each pathway.
1H NMR Spectroscopy of Blood Plasma from the Six Strains Highlighted a Very Heterogeneous Response in Circulating Lipids Following CR
To characterize the differences in the metabolic profile between control and CR mice from each of the six strains (10 mice per group), pairwise multivariate models were calculated using OPLS-DA (see Table A3 for model descriptors). Differences in metabolites under CR (Table 1) were mainly driven by changes in high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), and poly-unsaturated/unsaturated lipids (PUFA/UFA). Commonly, CR induced lower concentrations of LDL and UFA in the 129S1/SvlmJ, C57BL6/J, C3H/HeJ, and JC3F1/J mouse strains. In addition, a lower concentration of HDL was noted in the 129S1/SvlmJ and C57BL6/J strains. On the contrary, C3H/HeJ and DBA/2J mice displayed higher concentrations of HDL and LDL, and both the C3H/HeJ and CBA/J strains showed a lower concentration of VLDL under CR. The complexity of the biofluid matrix, which captures the contribution from all the biological compartments, did not enable the identification of robust metabolic signatures of CR across strains, but highlighted alterations of lipid handling which are genotype-dependent. Key: Importance of metabolic changes is indicated by their correlation coefficients in the respective pairwise OPLS-DA multivariate analyses between controls and the CR group for each mouse strain. The reported values correspond to the p(corr) variables. The threshold for significant metabolites was calculated based on the two-tailed probability of a Pearson correlation coefficient (r = 0.68, p < 0.001, n = 20) (s = singlets, m = multiplets).
Metabolic Profiling of Urine and Liver Tissues in C57BL6/J Mice
Liver tissue extracts as well as urine samples from C57BL6/J mice were further analyzed using 1H-NMR spectroscopy, and the CR effect was assessed using chemometric approaches, including pairwise O-PLS-DA (see Table A4 for model descriptors). Initial analysis of the metabolic signatures (Table 2) ascribed to hydrophilic and lipophilic extracts from liver revealed a significant alteration of branched-chain amino acids, as noted by the accumulation of valine and isoleucine, and a decreased level of the aromatic amino acid phenylalanine. Moreover, increased levels of glutathione were observed under CR. These changes were correlated with increased levels of dimethylglycine, a key metabolic intermediate in the betaine/homocysteine pathway, as well as nicotinurate and carnosine. CR animals exhibited higher liver concentrations of free and esterified cholesterol and phospholipids, including phosphatidylcholines and sphingomyelins, and decreased triacylglycerides. These changes were associated with a remodeling of fatty acid composition, including increased n-3 and n-6 fatty acids C20:3, C20:4, C20:5, and C22:6 and retinyl conjugates, and a decrease in saturated fatty acids. The urine showed changes in several gut microbial metabolites, including an increase in several compounds related to bacterial protein fermentation (e.g., phenylacetylglycine). This signature of protein digestion was associated with a deep remodeling of host protein/amino acid metabolism, as marked by different excretions of intermediates in the tricarboxylic acid cycle and urea cycle (e.g., α-ketoglutarate and allantoin). A concomitant modulation of the NADP metabolic pathway was also noted, with an increased concentration of 1-methylnicotinamide, suggesting a differential contribution of the TCA and urea cycles to energy production. In addition, some metabolites putatively related to protein synthesis/breakdown were also found at higher urine concentrations in CR animals, including deoxycytidine, taurine, uracil, and pseudouridine.
Targeted MS Profiles of Blood Plasma from C57BL6/J Mice
We monitored changes in C57BL6/J mouse blood plasma in the 10 CR and 10 control animals. O-PLS-DA was performed on quantitative information for 163 metabolites, including amino acids, sugars, acylcarnitines, sphingolipids, and glycerophospholipids, to maximize the separation between CR and control animals. The model descriptors are characterized by high and positive R2X, R2Y and Q2 parameters (Table S3). Based on a variable importance in projection threshold (VIP > 1.5) from the seven-fold cross-validated OPLS-DA model, together with the V-plot (a plot of the VIP value vs. the p(corr) value of each metabolite), the 10 most influential metabolites responsible for class separation between CR and control were computed [32] (Table 3). Supporting the changes noted in the NMR metabolic profiles, CR induced a reduction in circulating levels of polyunsaturated phospholipids, including diacyl and acyl-ether phosphocholines. In addition, the concentrations of long-chain acylcarnitines (C14:1, C18:1) suggested a modification of long-chain fatty acid β-oxidation in mitochondria, whilst the decreased concentration of sugars (mainly hexose) further illustrates the depth of the modulation of gluconeogenesis.
Discussion
In this study, we report metabolic readouts which are common across mouse strains in response to early CR. Transcriptomic results from the six mouse strains commonly revealed a CR-induced enhancement of mitochondrial metabolic pathways (including fatty acid β-oxidation, modulation of the Krebs cycle and overall energy metabolism), decreased intracellular insulin signaling, and complementary resistance against oxidative damage. Using complementary metabolic profiling of biofluids and liver, we further describe the metabolic changes in C57BL6/J mice, which further support some of these metabolic adaptive processes to CR.
Complementary Transcriptomics and Metabonomics Reveal CR Effects on Fatty Acid Oxidation, Gluconeogenesis and Cholesterol Metabolism
Our study reveals changes in genes related to acetyl CoA metabolic and catabolic processes (GO: 0006084 and GO: 0046356) in the liver, which illustrates how the metabolic response to CR involves mobilization and metabolism of fatty acids. Although fatty acids are metabolized in the mitochondrial matrix, they cannot cross the mitochondrial membrane without appropriate biochemical processing. Accordingly, the first enzymes involved in processing fatty acids for transport into the mitochondria are a family of acyl-CoA synthetases, which ligate a molecule of CoA to each fatty acid [33].
Moreover, in the liver of mice subjected to CR, the expression of Acss2 (acyl-CoA synthetase short-chain family member 2) was increased 6.9-fold (p < 0.0001). The increased expression of this gene suggests an enhanced ability of hepatocytes to process fatty acids for mitochondrial β-oxidation. Interestingly, these physiological changes are mirrored by human studies: it has been hypothesized that inefficient muscle long-chain fatty acid (LCFA) β-oxidation is associated with insulin resistance, and increased blood plasma fatty acylcarnitines (AC) were reported as a feature specific to type 2 diabetes subjects [34]. AC species arise as a consequence of incompletely oxidized fatty acids when rates of substrate use exceed energy demand, with accumulated acyl-CoA converted to AC that then exits cells and tissues [35]. Our data might suggest reduced entry into, and flux through, the mitochondrial fatty acid oxidation (FAO) pathway, causing a reduced pool of acylcarnitines returning to the plasma compartment. Moreover, studies using microarrays have shown that short-term fasting [36] and long-term CR [37] induce a metabolic shift in liver favoring gluconeogenesis over glycolysis, and a very similar pattern was also observed in the current study. In addition, two significant metabolic features associated with the hepatic response to CR are representative of differences in mediating the influx of amino acids into the gluconeogenic pathway. The first gene, Got1, encodes a cytosolic protein, namely glutamate oxaloacetate transaminase 1 or cytosolic aspartate aminotransferase, which catalyzes the conversion of glutamate into α-ketoglutarate (α-KG). However, the shuttling of α-KG into the mitochondria requires a transport protein located in the mitochondrial inner membrane, namely the malate-α-ketoglutarate transporter. The gene coding for this protein (Slc25a11) was also highlighted as significantly up-regulated in the liver of mice under CR. Therefore, the net effect of these two genes would be an increase in the influx of carbon skeletons into the Krebs cycle, which can then be used for gluconeogenesis. The difference in gluconeogenic metabolic pathways was further illustrated by the observed changes in specific gluconeogenic amino acids and a decreased level of glucose in liver tissues, as evidenced by metabonomics. In response to a putative increase in mitochondrial α-KG (above), enzymes within the Krebs cycle downstream of α-KG entry would be predicted to be up-regulated as well. Our study indeed reveals an up-regulation of a set of genes related to TCA metabolic processes (GO: 0072350) and the TCA cycle (GO: 0006099). In detail, genes encoding succinate dehydrogenase D (Sdhd) and fumarate hydratase 1 (Fh1) were increased in expression in response to CR (1.6- and 4.2-fold; p = 0.0307 and 0.0148, respectively). The net result of increases in the activity of these two enzymes would be an increased production of malate, which would then be returned to the cytosol via the malate-α-ketoglutarate transporter Slc25a11 [38], which was identified as a transcriptional biomarker of CR. In response to CR, the pyruvate dehydrogenase complex is inhibited in multiple tissues by activation of the family of pyruvate dehydrogenase kinase (Pdk) enzymes [39]; the resulting phosphorylation inactivates the pyruvate dehydrogenase complex and inhibits glycolytic glucose metabolism.
The gene encoding the mitochondrial isoform of malate dehydrogenase (Mdh2) was increased in expression 1.7-fold in response to CR (p = 0.0189), and mitochondrial malate can be subsequently transported to the cytoplasm for gluconeogenesis via Slc25a11. If cytosolic concentrations of malate are indeed increased, as suggested by the regulation of biomarkers of CR, and if malate were converted to phosphoenolpyruvate (PEP) via two known biochemical conversions, this would drive glycolysis in reverse from PEP through several metabolic conversions to fructose-6-phosphate. Two enzymes in this pathway are phosphoglycerate mutase and aldolase, and the genes encoding these proteins were increased in expression in response to CR (Pgam1 = 1.6-fold, p = 0.0057; Aldoc = 2.7-fold, p = 0.0008). The final transcriptional marker of CR is Pfkfb1, which encodes a single bifunctional protein that responds to circulating hormone levels to shift glycolysis pathways towards glucose catabolism or gluconeogenesis [40]. Circulating glucagon predominates over insulin during fasting and CR, and glucagon initiates a signaling cascade that phosphorylates the protein encoded by Pfkfb1, resulting in both a depletion of fructose-2,6-bisphosphate (an allosteric activator of glycolysis) and an accumulation of fructose-6-phosphate, which together stimulate gluconeogenic pathways at the hepatic level. Thus, the increased expression of Pfkfb1 (1.7-fold, p = 0.0312) combined with the relative abundance of glucagon (over insulin) serves to promote hepatic gluconeogenesis during CR. In our study, CR also induced an 8.6-fold increase (p < 0.0001) in Pmvk expression, a gene encoding the enzyme phosphomevalonate kinase involved in cholesterol synthesis. Therefore, it is likely that the role of increased Pmvk in response to CR relates to an alternative function of the mevalonate pathway, including synthesis of other sterols and isoprenoids [41]. Transcriptomic data also revealed that CR has an impact on Hsp90aa1 across the six mouse strains. Hsp90aa1 encodes the cytosolic heat shock protein 90 alpha, a molecular chaperone that has specificity for proteins involved in signal transduction. Hsp90aa1 is particularly noted for binding and activating members of the steroid hormone receptor family. In CR mice, Hsp90aa1 expression was changed 1.6-fold (p = 0.0067), suggesting a decreased demand for activation of steroid hormone receptors. This possibility is reinforced by the observation of decreased expression of enzymes involved in steroid hormone synthesis in response to CR. Moreover, the set of genes related to cholesterol and sterol metabolism was up-regulated (GO: 0006695 and GO: 0016126). Indeed, we observed a 5.9-fold change (p = 0.0067) in the expression of Hsd3b5 in the liver of CR mice, pointing to its role in the response to decreased food intake. It has been suggested that decreased activity of this family of proteins maintains high circulating levels of dehydroepiandrosterone sulfate (DHEAS), which has been associated with rhesus monkeys subjected to CR and with longevity in humans. Although the relationship between DHEA and longevity remains controversial, an alternative consequence of decreased Hsd3b5 expression is a decrease in the levels of steroid hormones.
Furthermore, plasma metabotype analysis revealed an impact of CR on lipoprotein/lipid metabolism in C57BL6/J mice, as noted by reduced levels of high density lipoproteins (HDL), low density lipoproteins (LDL), and poly-unsaturated fatty acids (PUFA). Here, the CR intervention displayed the capability to modulate the typical atherogenic lipoprotein profile (e.g., high TG/HDL ratio and/or increased LDL concentration) common to metabolic syndrome and/or diabetes, as also observed in previous lifelong CR studies in dogs [42]. In addition, targeted MS analysis of blood plasma lipids further described a decrease in several polyunsaturated phospholipids, including the following lipid species: PC 34:3, PC 36:1, PC 36:5, PC 40:4. We previously observed similar changes in non-human primates in response to CR, whilst older humans generally tend to show higher PUFAs compared to young subjects, as do obese compared to lean males [43]. Thus, the observation in the current study may be explained in part by the lower fat mass of CR animals. Moreover, increased concentrations of plasma polyunsaturated fatty acids have been implicated in the pathogenesis of chronic diseases [44], with previous reports of higher unsaturated lipids in plasma from obese subjects compared to lean males [43]. Our results also showed an overall decrease in the levels of TAGs in liver from C57BL6/J mice under CR. The CR-induced decrease in hepatic triglycerides suggests decreased lipogenesis (reduced FAS expression) and reduced incorporation of triacylglycerols into nascent lipoprotein particles. Although no significant change in circulating levels of VLDL was noted by NMR spectroscopy, the profound remodeling of HDL and LDL levels with CR highlights different dynamics in lipid exchange between peripheral tissues and the hepatic compartment, which may be involved in greater variations in hepatic lipid metabolism and recycling.
Complementary Transcriptomics and Metabonomics Reveal Impact of CR on Insulin Signaling and Stress Response
There is increasing awareness that the insulin/insulin-like growth factor signaling (IIS) pathway is one of the most potent regulators of aging, modulating glycolytic processes, lipid metabolism, and the metabolic response to oxidative stress. Indeed, there is a large body of evidence in diverse species showing that longevity is associated with decreased insulin-like signaling [45]. Pathway analysis of the microarray data shows increased liver glucose metabolism, with up-regulation of a set of genes related to glucose catabolic processes (GO:0006007) as well as insulin-like growth factor binding protein 1 (Igfbp1). The latter was significantly increased in five of six strains of mice (not changed in the 129S1/SvImJ, p = 0.17), and qPCR analysis indicated that this gene was increased in expression 7.6-fold (p < 0.0001) by CR in the C57BL6/J strain (data not shown). An increase in the level of this protein would be predicted to sequester insulin-like growth factor (IGF), thereby decreasing IGF signaling. Interestingly, the expression of several IGF binding proteins was increased in the heart of mice subjected to a similar duration of CR as in this study [46] and in the liver of mice subjected to a 48-hour fast [36]. Thus, IGF binding proteins appear to be important transcriptional regulators in response to CR. These changes were further supported by an overall decrease in the concentration of blood and hepatic glucose in C57BL6/J mice under CR, suggesting increased insulin sensitivity and glucose uptake. Another specific signature associated with changes in the insulin signaling pathway was also noted via changes in hepatic branched-chain amino acids [47], which highlight a change in central energy metabolism and exchanges with peripheral tissues through the BCAA-dependent gluconeogenic pathway. Insulin resistance in muscle and fat cells reduces glucose uptake (and also local storage of glucose as glycogen and triglycerides, respectively), whereas insulin resistance in liver cells results in reduced glycogen synthesis and storage and a failure to suppress glucose production and release into the blood.
Following the CR intervention, our data also suggest a differential metabolic response to oxidative stress. Here, the set of genes related to glutathione metabolism is up-regulated (GO: 0004364). Specifically, Gstm6 encodes the protein glutathione transferase 6, a member of a family of proteins important in drug metabolism and in defense against oxidative damage. Gstm6 was found to be decreased in expression in the liver of diabetic mice [48], whereas it was increased in expression 2.6-fold (p < 0.0001) by CR. Moreover, increased levels of glutathione and of its precursor in the gamma-glutamyl cycle, glutamine, also suggest a different oxidative response induced by CR. In particular, GSH is important in the regulation of the redox state, and a decline in its tissue level has often been considered indicative of increased oxidative stress [49]. Furthermore, carnosine, found in virtually all tissues, is considered to be an anti-aging substance, capable of counteracting oxidative damage and protein glycation [50]. In addition, NMR profiling of the liver organic phase revealed increased production of retinyl conjugates, suggesting an enhanced protective role. The liver is the major vitamin A reservoir in mammals, storing up to 80% of total body vitamin A, and therefore plays a central role in the metabolism, storage and distribution of retinol to peripheral tissues [51]. The identification of redox homeostasis, with glutathione at its core, as an affected pathway in our liver metabonomic analysis suggests that reactive oxygen species play a role in the initiation or progression of the CR phenotype. In agreement with these findings, plasmalogens, which contain a vinyl ether bond linked at the sn-1 position of the glycerol backbone, are endogenous antioxidants. Plasmalogens have been implicated in the protection of cellular functions against oxidative damage [52], and recent studies showed that VLDL, LDL and HDL particles are characterized by class-specific ether lipid species compositions [53,54].
Animals and Dietary Manipulation
All procedures were approved by the Animal Care Committee at the William S. Middleton Memorial Veterans Hospital. Six strains (129S1/SvImJ, C57BL6/J, C3H/HeJ, CBA/J, DBA/2J and JC3F1/J) of male mice were purchased at six weeks of age from Jackson Laboratories (Bar Harbor, Maine) and were individually housed in a specific pathogen-free facility. Upon arrival, mice were provided with 12.7 kcal (53.1 kJ)·day−1 of a pelleted ration of AIN93M diet (Bio-Serv; Frenchtown, New Jersey). At eight weeks of age, half of the mice from each strain were randomly assigned to a control or 25% calorie-restricted (CR, 9.44 kcal/39.5 kJ·day−1) treatment group, such that there were 10 mice of each strain in the control or CR group. Details on the feeding regimen and diets used in this study are described elsewhere [55]. Body weight was measured every other week for all mice. For two strains of mice (129S1/SvImJ and C57BL6/J), food intake of the CR group was subsequently decreased to 7.34 kcal (30.7 kJ)·day−1 at 16 weeks of age.
Sample Collection
At 20 weeks of age, C57BL6/J mice were individually maintained in Tecniplast metabolic cages for two days to collect urine. At 22 weeks of age, mice of all strains were euthanized by cervical dislocation, and blood was collected from the body cavity into two heparinized tubes. Tissues were rapidly dissected, flash-frozen in liquid nitrogen and stored at −80 °C.
Transcriptomic Analyses
Affymetrix Mouse Genome 430 2.0 microarrays representing 20,341 known genes were used for gene expression profiling; detailed procedures for the microarray analysis are published elsewhere [56]. Briefly, the list of 45,101 probe sets represented on this array was filtered to a list of unique transcripts with an Entrez Gene ID (i.e., each gene was represented only once): probe sets representing more than one Entrez Gene ID were removed from the original dataset, and if a gene was represented by more than one probe set, we retained only the probe set having the largest signal intensity when averaged across all control and CR samples within a strain. For each strain, a gene was considered to be differentially expressed in response to CR when p < 0.01 (two-tailed t-test) and the false discovery rate was < 0.15 [57].
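A compact sketch of these filtering and testing steps (hypothetical column and variable names; Benjamini-Hochberg shown as a stand-in for the FDR procedure of [57]):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_expression(expr: pd.DataFrame, probe2gene: pd.Series) -> pd.DataFrame:
    """expr: probe-set x sample intensity matrix; probe2gene maps probe-set IDs
    to Entrez Gene IDs (probes hitting multiple genes assumed already removed).
    Sample columns are assumed to be named 'Control*' and 'CR*'."""
    df = expr.loc[probe2gene.dropna().index]
    # One probe set per gene: keep the probe with the largest mean intensity
    mean_int = df.mean(axis=1)
    keep = mean_int.groupby(probe2gene).idxmax()
    df = df.loc[keep]
    ctrl = [c for c in df.columns if c.startswith("Control")]
    cr = [c for c in df.columns if c.startswith("CR")]
    _, p = stats.ttest_ind(df[cr], df[ctrl], axis=1)   # two-tailed t-test
    fdr = multipletests(p, method="fdr_bh")[1]
    res = pd.DataFrame({"p": p, "fdr": fdr}, index=df.index)
    return res[(res["p"] < 0.01) & (res["fdr"] < 0.15)]  # thresholds from the text
```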
Using an Eppendorf realplex2 instrument, the same RNA samples from C57BL6/J mice used for microarray analysis were analyzed by qPCR as described previously [46]. qPCR primers for all genes were purchased from Applied Biosystems. The TATA box binding protein (Tbp) gene was found to be unchanged by CR in all strains according to the microarray data and was therefore used as a housekeeping gene. Of the 19 genes examined by qPCR, only two genes (Foxa3 and Ugt2b35) were not confirmed as being changed by CR. To identify functional classes of genes changed by treatment, we performed parametric analysis of gene set enrichment (PAGE) [58]. PAGE allows for the detection of gene classes that are modulated by an intervention even when there are modest (not statistically significant) but consistent changes in the expression of genes within that functional category (relative to all genes represented on the array). In addition, PAGE generates a z-score indicating whether a gene class was activated (z-score > 0) or repressed (z-score < 0) by treatment. We grouped genes into functional classes using the Gene Ontology (GO) hierarchy and only considered those GO terms annotated with at least 10 but not more than 1,000 genes per term. Gene functional classes were considered to be significantly altered by treatment at p < 0.01. Microarray data have been uploaded to NCBI-GEO [59].
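The PAGE z-score itself is simple to compute; a sketch following the published formula of [58]:

```python
import numpy as np

def page_z_score(fold_changes: np.ndarray, gene_set_idx: np.ndarray) -> float:
    """PAGE z-score for one gene set.

    fold_changes: per-gene (log) fold change for all genes on the array.
    gene_set_idx: indices of the genes annotated to the GO term.
    z > 0 means the gene class is activated by CR, z < 0 repressed.
    """
    mu, sigma = fold_changes.mean(), fold_changes.std(ddof=0)
    m = len(gene_set_idx)                    # gene-set size (10..1000 here)
    sm = fold_changes[gene_set_idx].mean()   # gene-set mean fold change
    return (sm - mu) * np.sqrt(m) / sigma
```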
Targeted LC-MS/MS Metabolite Profiling
A targeted LC-MS/MS metabonomic approach was applied to plasma samples using the Biocrates Life Sciences AbsoluteIDQ™ kit, as previously published [60].
Metabolite Profiling: Sample Preparation and 1H NMR Spectroscopic Analysis
Around 5-10 mg of freeze-dried and ground tissue was used to extract hydrophilic and lipophilic metabolites applying an adapted Folch procedure [61], as follows. Samples were extracted three times with 0.5 mL of a chloroform-methanol solution (2:1, v:v). Combined extracts were washed first with 0.5 mL of water and second with 0.5 mL of water-methanol (1:1, v:v). The upper hydrophilic phases were collected each time and combined. Lipophilic and hydrophilic fractions were then evaporated to dryness under nitrogen flow and freeze-dried, respectively. Hydrophilic fractions were dissolved in 60 μL of a deuterated phosphate buffer (pH 7.4) containing 1 mM of 3-trimethylsilyl-1-[2,2,3,3-2H4]-propionate (TSP) as a standard reference (δ = 0.0) and transferred into 1.7 mm NMR tubes. The lipophilic phases were reconstituted in 60 μL of a deuterated chloroform-methanol solution (2:1, v:v) and transferred into 1.7 mm NMR tubes using octamethylcyclotetrasiloxane (OMS) as a standard reference (δ = 0.092). All samples were analyzed at ambient temperature (300 K) by 1H NMR spectroscopy at 600.13 MHz using a Bruker Avance II NMR spectrometer. For statistical analysis, all NMR spectra were converted into 22 K data points over the range of δ 0.2-10.0 and imported into the MATLAB software (version 7.0; The MathWorks Inc., Natick, MA, USA). The spectra were normalized to a constant total sum of all intensities within the specified range and autoscaled.
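In code, the normalization and scaling described here amount to the following (a sketch; `spectra` is a samples × data-points intensity matrix):

```python
import numpy as np

def preprocess_nmr(spectra: np.ndarray) -> np.ndarray:
    """Total-sum normalization followed by autoscaling (unit variance).

    spectra: 2-D array, one row per sample, one column per chemical-shift
    data point (here 22k points over delta 0.2-10.0).
    """
    # Normalize each spectrum to a constant total intensity
    total = spectra.sum(axis=1, keepdims=True)
    normed = spectra / total
    # Autoscale: mean-center and divide each variable by its std deviation
    mu = normed.mean(axis=0)
    sd = normed.std(axis=0, ddof=0)
    sd[sd == 0] = 1.0  # guard against flat (all-zero) variables
    return (normed - mu) / sd
```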
Chemometrics
Statistical analysis was performed by multivariate data analysis (MVA) carried out with the Simca-P+ software (version 12.0; Umetrics AB, Umeå, Sweden) and the MATLAB software package (version 7.0; The MathWorks Inc., Natick, MA). Data import and pre-processing steps for both 1H NMR and targeted MS data from the C57BL6/J mouse groups were done using in-house routines written in MATLAB (version 7.11.0, The MathWorks Inc., Natick, MA, USA). Full-resolution 1H-NMR spectra incorporating data points within the δ 0.4-9.5 region were used for statistical multivariate analysis, excluding the water residue signal between δ 4.5-6.5 (urine, plasma, and tissue water extract datasets), ethanol signals at δ 1.18 and δ 3.66 (plasma; ethanol residues from antiseptic swabbing contaminated plasma samples during collection), and solvent signals (methanol δ = 3.24-3.27 and δ = 4.25-4.60; chloroform δ = 7.35-7.45 for the liver tissue organic phase). Supervised orthogonal PLS discriminant analysis (O-PLS-DA) was applied to the NMR profiles of the examined biofluids and tissues, and to the targeted LC-MS/MS metabolite profiles in plasma, to maximize the separation between control and CR animals. The validity of the model against overfitting was monitored by computing the cross-validation parameter Q2, which represents the predictability of the model and relates to its statistical significance. For the NMR data, differences between samples in the scores plot were extracted by using the variable coefficients according to a previously published method [60]. Variables correlating with the group separation in the MS data were identified by using the S-plot. It visualizes the variable importance in projection (VIP) score, representing the impact of a single metabolite on the group discrimination of the model [62]. Metabolomics data (C57BL6/J LC-MS/MS metabolite concentrations in plasma, and C57BL6/J area intensities (a.u.) of urine and tissue 1H-NMR data) for representative metabolite signals responsible for class separation are uploaded as a supplemental file (MS Excel).
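O-PLS-DA itself is implemented in the SIMCA-P software; an open-source approximation of the VIP computation with ordinary PLS-DA looks like this (a sketch, not the authors' exact implementation; y is coded 0 = control, 1 = CR):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(X: np.ndarray, y: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Variable importance in projection (VIP) per metabolite from a PLS-DA fit."""
    pls = PLSRegression(n_components=n_components).fit(X, y)
    T = pls.transform(X)                     # X scores (samples x components)
    W, Q = pls.x_weights_, pls.y_loadings_
    # Explained y-variance attributable to each component
    s = np.diag(T.T @ T @ Q.T @ Q)
    w2 = (W / np.linalg.norm(W, axis=0)) ** 2
    p = X.shape[1]
    return np.sqrt(p * (w2 @ s) / s.sum())

# Metabolites with VIP > 1.5 would then be flagged as most influential,
# mirroring the threshold used in the targeted-MS analysis above.
```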
Conclusions
The present study provided a comprehensive comparison of the metabolic phenotype at both the transcriptomic and metabolomic levels across mice with different genetic backgrounds, to identify common metabolic markers affected by CR. Using a systems biology approach comprising phenotyping of liver tissue and biofluids, we described the effect of CR across multiple mouse strains (129S1/SvlmJ, C57BL6/J, C3H/HeJ, CBA/J, DBA/2J, JC3F1/J). Overall, our integrated approach showed that lipid metabolism between the liver and peripheral tissues, the oxidative stress response and insulin-dependent metabolic pathways are conserved across strains and are determinant factors involved in CR regulation. Table A1. Body weight for the control and CR strains. Data are reported as mean ± SE values for the body weight gain calculated from Week 8 onwards. Control and CR groups are plotted with dot and square shapes, respectively. Differences in body weight gain between the CR and control groups were tested by ANOVA and Bonferroni post-tests (*** p < 0.001). | 2015-09-18T23:22:04.000Z | 2013-10-11T00:00:00.000 | {
"year": 2013,
"sha1": "29729dd4f414c8bc7d7a9a4a9caebe86e9aab430",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1989/3/4/881/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29729dd4f414c8bc7d7a9a4a9caebe86e9aab430",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
106026656 | pes2o/s2orc | v3-fos-license | Electrochemical Biosensor Based on Nano TiO 2 Loaded with Highly Dispersed Photoreduced Nano Platinum
Precious metal nanomaterials have been widely used in electrochemical sensors. Further improving the dispersion of nanomaterials is beneficial for improving sensor performance and reducing the usage of noble metals. In this work, platinum nanoparticles (NPs) were loaded onto titanium dioxide nanoparticles (TiO2) by the photoreduction method. The morphology, content, and distribution of the NPs were determined by high resolution transmission electron microscopy (HRTEM) and energy dispersive spectroscopy (EDS). The method is simple and has a short synthesis time (14 min). NPs with an average particle size of about 5 nm were uniformly dispersed on the surface of the TiO2 nanoparticles. The nano Pt loaded TiO2 (Pt/TiO2) nanocomposites were modified onto the surface of a glassy carbon electrode, and an enzyme-free H2O2 sensor was constructed; the composite was also used as a carrier of lactate oxidase to prepare an amperometric lactic acid biosensor. The enzyme-free H2O2 sensor has a wide detection range (0.002-15 mM), a low detection limit (0.92 μM), and a rapid response time (2 s) toward H2O2, with a relative standard deviation (RSD) of less than 3%. At the same time, the constructed lactic acid biosensor has a linear detection range of 0.003-0.7 mM and a detection limit of 3 μM toward lactate.
Hydrogen peroxide (H2O2) has important applications in many fields such as light industry, electronic technology, health care, and environmental engineering. [1][2][3][4] It is an intermediate product of many oxidase-catalyzed processes, so accurate and rapid detection methods for H2O2 are of great significance in scientific practice. Although many methods have been applied to the quantitative detection of H2O2, such as spectrophotometry, titration analysis, and fluorescence analysis, [5][6][7] electrochemical sensors have attracted widespread attention due to their rapid detection, cost-effectiveness, accuracy, and reliability for H2O2 quantitative analysis. On the other hand, lactic acid (LA) is an intermediate product of the anaerobic metabolism of body tissues and is of great significance in the diagnosis and treatment of diseases and in scientific sports management. 8,9 Therefore, it is important to construct a disposable LA sensor with a rapid detection rate and a low cost.
Nanoparticles have unique chemical and physical properties and have been widely used to construct high performance biosensors. [10][11][12][13] Precious metal nanoparticles (NPs) have large surface areas for enhanced biorecognition and receptor immobilization, good reaction catalysis, rapid electron transfer capability, and good biocompatibility. These capabilities not only improve the sensitivity of sensors but also expand the range of detectable analytes and enable sensor miniaturization. 14,15 Precious metal platinum (Pt) has stable chemical properties as well as excellent catalytic activity, and holds an indisputable position in the energy, materials, and chemical industries. 16,17 Titanium dioxide (TiO2) was the first generation of photocatalytic materials and has led to broad interest in semiconductor-based photocatalysis. 18 Although titanium dioxide has many crystal forms, only four crystal structures occur in nature: anatase, rutile, brookite, and TiO2(B). 19 It is well known that P25, a commercial nano TiO2, is composed of anatase and rutile crystallites with a typically reported ratio of 70:30 or 80:20. 20 Nanosized TiO2 exhibits high reactivity and chemical stability under ultraviolet (UV) light [wavelength (λ) < 387 nm]. [21][22][23][24] In this study, TiO2 (P25) was used as a photoreductant to reduce PtCl6 2− to fine nano platinum particles under ultraviolet irradiation (365 nm). 25 The average particle size of the base TiO2 (P25) is about 25 nm. Analysis by HRTEM showed that the reduced nano platinum was uniformly dispersed on the TiO2 nanoparticles (Pt/TiO2), with a particle size of only 5 nm. The atomic ratio of Pt in Pt/TiO2 is only 1.49%. The highly dispersed photoreduced nano platinum shows very high electrochemical activity, and the electrochemical sensors based on the as-prepared Pt/TiO2 nanocomposites feature a large linear range and a low limit of detection.
Materials and Methods
Reagents and instruments.-Nafion (5 wt%) and lactate oxidase (LOx) were purchased from J&K Scientific Ltd. The activity of the LOx was 37 units/mg. The LOx solution was prepared in water containing 7% bovine serum albumin (BSA). N-methyl-2-pyrrolidone was purchased from Xilong Chemical Reagent Factory. TiO2 (P25) was provided by Shenzhen Biaole Co. Glutaraldehyde (GA) (50 wt%) was diluted to 0.1 wt% before use. The serum sample was obtained from Zhenglong Biochemical Products Laboratory. Hydrogen peroxide (30 wt%) was purchased from Xilong Chemical Co., Ltd. The pH value of the phosphate buffer solution (PBS) was 7.0. All other chemicals were of analytical reagent grade, and the water used in this work was deionized water.
The SEM photographs were obtained using a Hitachi SU-70. The morphologies and microstructures of the samples were obtained using HRTEM (JEM-2100). All electrochemical measurements were performed using an electrochemical workstation from Shanghai Chenhua Instrument Company (CHI 660E). A bare or modified electrode was used as the working electrode, a platinum electrode as the auxiliary electrode, and a saturated calomel electrode as the reference electrode, forming a three-electrode system. Cyclic voltammetry (CV) and current-time (I-T) curves were measured and recorded.
Preparation of Pt/TiO2 nanocomposite.-17.4 mg of TiO2 were dispersed in 10 mL of water under constant stirring, and 25 mg of glucose were dissolved in the suspension. 3 mL of H2PtCl6·6H2O solution (0.01 g/mL) were added dropwise, and the suspension was magnetically stirred at room temperature for 30 min. After adding 145 μL of N-methyl-2-pyrrolidone as a polymerization inhibitor, the suspension was magnetically stirred at room temperature for a further 30 min. The suspension was then placed in a quartz glass bottle, which was placed in an ultraviolet synthesizer for 14 min under ultraviolet light (365 nm). After several cycles of centrifugation and washing, the resulting product was vacuum dried for use.
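As a quick plausibility check on these quantities, the sketch below compares the nominal Pt:Ti molar ratio offered in the synthesis batch with the 1.49 at.% measured by EDS; the molar masses are standard literature values, not taken from the paper.

```python
# Back-of-the-envelope check of the nominal Pt:Ti ratio in the synthesis batch.
M_H2PtCl6_6H2O = 517.9   # g/mol, assumed standard value
M_TiO2 = 79.87           # g/mol, assumed standard value

m_precursor = 3.0 * 0.01                 # g: 3 mL of 0.01 g/mL H2PtCl6.6H2O
n_Pt = m_precursor / M_H2PtCl6_6H2O      # mol Pt offered in solution
n_Ti = 17.4e-3 / M_TiO2                  # mol Ti in 17.4 mg TiO2

nominal_at_pct = 100 * n_Pt / (n_Pt + n_Ti)
print(f"nominal Pt fraction ~ {nominal_at_pct:.1f} at.%")  # ~21 at.%
# The EDS-measured value on the composite is only 1.49 at.%, which may suggest
# that only a small fraction of the offered Pt ends up anchored on the TiO2.
```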
Preparation of modified electrodes and lactic acid biosensor.-
The bare glassy carbon electrode (GCE) (Ø 3 mm) was polished to a mirror surface with 0.3 μm and 0.05 μm Al2O3 powder, and the GCE was washed with deionized water and anhydrous ethanol for 5 min. 10 mg of Pt/TiO2 nanocomposite were dispersed in 1 mL of a mixed solution (750 μL of deionized water, 210 μL of ethanol, and 40 μL of 5% Nafion solution) and sonicated for 60 min. 5 μL of this suspension were placed on the treated GCE surface to make the Pt/TiO2 electrode (Pt/TiO2/GCE) for the detection of H2O2. For comparison, a TiO2/GCE was also made using pure TiO2 nanopowder according to the same procedure.
An LA biosensor was prepared by dropping 5 μL of 1 U/μL lactate oxidase onto a Pt/TiO2/GCE, followed by crosslinking with 5 μL of 0.1% GA (LOx/Pt/TiO2/GCE). This process is shown in Scheme 1.
Results and Discussion
Characterization of the TiO2 and Pt/TiO2.-Under the excitation of ultraviolet light, electrons and holes appear on the surface of TiO2. Glucose was added to scavenge these holes, 26 and PtCl6 2− was reduced by the electrons to form nano Pt particles loaded onto the TiO2. The wavelength of the ultraviolet lamp used in this work was 365 nm. Fig. 1 shows the UV-visible absorption spectrum of TiO2. It can be seen that the TiO2 used in this work absorbs ultraviolet light over a wide wavelength range, including λ = 365 nm. Fig. 2 shows the HRTEM image and electron diffraction pattern of the Pt loaded onto the TiO2. It can be seen from the HRTEM images that the particle size of the Pt is about 5 nm, and the lattice spacing of the nano Pt is 0.2 nm. The Pt content of the sample and its distribution were analyzed by EDS mapping (Figs. 2d-2f), which showed that the Pt was evenly distributed on the TiO2. The atomic ratio of Pt is only 1.49%, showing very high dispersity.
Electrochemical properties of the modified electrode (Pt/TiO2/GCE).-In order to investigate the prospective applications of Pt/TiO2 nanomaterials in electrochemical detection, an enzyme-free H2O2 electrochemical sensor and a lactic acid biosensor based on the photoreduced Pt/TiO2 nanomaterials were constructed. Fig. 3a shows the CVs of the TiO2-modified glassy carbon electrode (TiO2/GCE), the Pt/TiO2-modified glassy carbon electrode (Pt/TiO2/GCE) and the LOx/Pt/TiO2-modified glassy carbon electrode (LOx/Pt/TiO2/GCE) over the potential range from −0.2 to 0.6 V in 5 mM potassium ferricyanide solution at a sweep rate of 50 mV/s. The CV of the bare GCE is also shown in Fig. 3a for comparison. The CV of the Pt/TiO2/GCE shows a smaller peak separation and a higher peak current than the CVs of the TiO2/GCE and the bare GCE; the closer oxidation-reduction peaks and the higher redox peak currents indicate that this sensor has excellent electrochemical performance. Since the two insulating films of LOx and GA hinder the transfer of electrons, the oxidation and reduction peak currents of the LOx/Pt/TiO2/GCE were greatly reduced. Fig. 3b shows the CVs of the Pt/TiO2/GCE at various scan rates. Both the anodic peak potential (Epa) and the cathodic peak potential (Epc) remained almost unchanged with increasing potential scan rate, indicating the good electrochemical reaction ability and fast electron transfer kinetics of the electrode. The inset of Fig. 3b shows the relationship between the scan rate and the anodic or cathodic peak current. The anodic and cathodic peak currents increase linearly with the square root of the scan rate in the range from 0.01 to 0.1 V/s, implying a diffusion-controlled process. In order to explore the electrocatalytic activity of the Pt/TiO2/GCE toward H2O2, CVs of the bare GCE, TiO2/GCE, and Pt/TiO2/GCE (Fig. 3c) were recorded with 10 mM H2O2. The bare GCE and TiO2/GCE have lower electrocatalytic activity toward H2O2 than the Pt/TiO2/GCE. The highly dispersed nano Pt on TiO2 significantly lowers the overpotential for the oxidation of H2O2, improves the oxidation current, and produces an obvious oxidation peak. In addition, as can be seen from Fig. 3d, the anodic peak current increases as the H2O2 concentration increases from 1 mM to 9 mM. These results strongly suggest that the Pt/TiO2 nanocomposites exhibit excellent electrocatalytic activity for the oxidation of H2O2. In order to achieve convenient quantitative detection of H2O2, an amperometric method was used to test the Pt/TiO2/GCE. With the increase of the H2O2 concentration, both response currents increase continuously; however, the response current of the Pt/TiO2/GCE is obviously higher than that of the TiO2/GCE, showing that the photoreduced nano Pt has excellent catalytic ability. In addition, the response signal of the Pt/TiO2/GCE changes rapidly with the change of H2O2, reaching a steady-state current within 2 s of each addition of H2O2. Fig. 3f shows the response current curve for the successive addition of different concentrations of H2O2 in 0.1 M PBS (pH = 7.0). The Pt/TiO2/GCE has a low detection limit (0.92 μM) and a wide linear detection range (0.002-15 mM) toward H2O2 and could be used as a high-performance enzyme-free sensor. These results are comparable or superior to recently reported results, as shown in Table I.
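As context for how such a detection limit is typically estimated, the sketch below applies the common 3σ/slope (S/N = 3) criterion to a mock calibration line; the concentrations, currents, and blank noise are illustrative numbers, not the paper's measured data.

```python
import numpy as np

# LOD from a calibration line as 3*sigma_blank / slope (S/N = 3 criterion).
conc = np.array([0.002, 0.05, 0.5, 2.0, 5.0, 10.0, 15.0])   # mM H2O2
current = 4.2 * conc + 0.03                                  # uA, mock response

slope, intercept = np.polyfit(conc, current, 1)              # uA per mM
sigma_blank = 0.0013                                         # uA, SD of blank (assumed)
lod_mM = 3 * sigma_blank / slope
print(f"slope = {slope:.2f} uA/mM, LOD = {lod_mM * 1000:.2f} uM")  # ~0.93 uM
```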
Good stability is one of the indispensable attributes of an excellent sensor. The enzyme-free H2O2 sensor (Pt/TiO2/GCE) was tested 5 times in 0 mM, 10 mM, and 20 mM H2O2 solutions, respectively. The relative average deviation (RAD), standard deviation (S), and relative standard deviation (RSD) were used to estimate the stability of the sensor; the results are shown in Table II. For the lactate biosensor, the detection limit is estimated to be 3.0 μM (S/N = 3) according to the calibration curve.
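The stability metrics named above can be computed as in the short sketch below; the five response currents are hypothetical values used only to show the calculation.

```python
import numpy as np

def stability_stats(readings):
    """RAD, standard deviation S and RSD (%) of repeated sensor readings."""
    x = np.asarray(readings, dtype=float)
    mean = x.mean()
    rad = np.mean(np.abs(x - mean)) / mean * 100   # relative average deviation
    s = x.std(ddof=1)                              # sample standard deviation
    rsd = s / mean * 100                           # relative standard deviation
    return rad, s, rsd

# Five hypothetical response currents (uA) for one H2O2 level
rad, s, rsd = stability_stats([41.8, 42.3, 41.5, 42.0, 41.9])
print(f"RAD = {rad:.2f}%, S = {s:.3f}, RSD = {rsd:.2f}%")
```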
Common interferents in serum such as ascorbic acid, uric acid, glycine, fructose, maltose, and glucose (0.1 mM) were tested in a 0.5 mM LA solution. None of the possible interferents substantially changed the response signal except 0.1 mM ascorbic acid, whose response signal is about 30% of that from 0.5 mM LA. However, in human serum the LA concentration is usually over 3 mM, while the concentration of ascorbic acid is usually about 0.02 mM. Thus, the interference of ascorbic acid with LA can be ignored in real sample tests.
In order to explore the detection performance of the biosensor in a physiological environment, the standard addition method was used to detect LA in serum samples. The serum was diluted 25 times with PBS. The added concentrations of LA were 0.1, 0.2, and 0.3 mM, and each sample was measured seven times. The relative standard deviations (RSD) of the assays range from 0.42% to 1.68%, and the recovery rates range from 92% to 97%, as shown in Table III, which demonstrates the reliability of this method.
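A minimal sketch of the recovery calculation behind such a standard-addition assay is given below; the baseline and measured totals are hypothetical values chosen to fall in the reported 92-97% range.

```python
import numpy as np

def recovery_pct(measured_total, baseline, added):
    """Recovery (%) for a standard-addition assay."""
    return (measured_total - baseline) / added * 100

# Hypothetical diluted-serum example with the spike levels from Table III
baseline = 0.120                              # mM, LA found in diluted serum
added = np.array([0.1, 0.2, 0.3])             # mM spikes
measured = np.array([0.215, 0.306, 0.404])    # mM, mock totals
print(np.round(recovery_pct(measured, baseline, added), 1))  # ~95.0, 93.0, 94.7 %
```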
Conclusions
Highly dispersed nano Pt was loaded onto nano TiO2 via the photoreduction method. Although the amount of loaded platinum is very low (1.49%), it has very high electrochemical activity because of the large surface area of the highly dispersed nano Pt. The as-prepared enzyme-free H2O2 sensor has a good linear response, a low detection limit (0.92 μM), a wide detection range (0.002-15 mM), and excellent repeatability. A lactic acid biosensor was prepared by immobilizing LOx on the Pt/TiO2/GCE; it can detect lactic acid in serum samples with a linear range from 0.003 to 0.7 mM and a detection limit of 3.0 μM. The results show that the enzyme-free sensor and the biosensor based on photoreduced Pt/TiO2 nanomaterials have great potential for the detection of H2O2 and LA in biological samples. | 2019-04-10T13:12:17.691Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "af2fa83a374b9696ddf56ee74cfa76a46feffa4c",
"oa_license": "CCBY",
"oa_url": "http://jes.ecsdl.org/content/165/13/B610.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "d32fb751501f48af7e250b4548b100bccb61c2a1",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
236670392 | pes2o/s2orc | v3-fos-license | A method for cleaning tanks from oil product residues based on biotechnology
The known methods of cleaning tanks from the remnants of petroleum products using existing means are quite time-consuming, energy-intensive, and insufficiently advanced. In addition, waste cleaning solutions are poorly regenerated and slowly oxidized in the biosphere, and their removal to landfills (or discharge into reservoirs) causes great harm to the environment. Therefore, the search for new methods of cleaning is a very urgent task. This work aims to develop a method for cleaning tanks from the residues of petroleum products based on biotechnology. This article proposes a technological scheme of an experimental installation that simulates a tank for storing petroleum products in agricultural conditions. Studies on the oxidation of petroleum product residues by selected active cultures of microbial strains have been carried out, and the modes of biological cleaning of the internal surfaces of petroleum product storage tanks are determined. Strains of oil-oxidizing microorganisms were used as the biological preparations. The biomass of the studied microorganisms was obtained under laboratory conditions by submerged cultivation in flasks on a mineral medium. The proposed method of tank cleaning is an environmentally friendly process, during which microorganisms decompose oil pollution at relatively low temperatures (20-40 °C) and use the hydrocarbons as a source for their growth. As a result of this process, many tons of oil deposits are converted into microbial cells, which in turn become a source of food for other organisms and the plant world.
Introduction
In Uzbekistan, large-scale measures are being taken to reduce the cost of fuel and lubricants and to save them when performing agricultural work. Research is being carried out to create energy-saving machines for tillage, sowing [22][23], harvesting [24], and processing [25] of crops. The main goal is to reduce the cost of fuel and lubricants for agricultural work. The productivity of agricultural production depends mainly on the timely provision of fuel and lubricants to energy resources [28][29][30][31][32][33][34][35][36][37][38][39][40][41].
During long-term storage and transportation of petroleum products in tanks, changes in their component composition occur, which leads to the accumulation of a large amount of oil residues that negatively affect the quality of petroleum products subsequently filled into these tanks. Contaminated fuel entering the fuel tanks of engines can cause serious damage, leading to fuel overconsumption and, in general, to a decrease in the reliability and durability of machine parts and components. The quality of petroleum products during storage and transportation is ensured by using clean containers, which is possible only with periodic cleaning of petroleum product residues and contaminants.
Tank cleaning is a very time-consuming process. The existing tank cleaning methods at the oil complexes of agricultural enterprises can be divided into two types: manual and mechanized. The manual cleaning method is currently almost never used.
The mechanized method of cleaning tanks from the residues of petroleum products at the oil complexes of agricultural enterprises is carried out with the help of special tank cleaning installations developed in Russia. An aqueous solution of preparations such as "MS", "ML", "Labomid" and others, heated to 80-90 °C, is used as the washing liquid.
This method significantly reduces the cleaning time, reduces the amount of manual labor and the cost of the process, but it has significant drawbacks.
The disadvantages of the mechanized method are the high energy consumption for heating cold water to a temperature of 80-90 °C and the need for manual labor when unloading "dead" sediments from the tank. Another significant disadvantage is the need to pump the spent cleaning solution, containing the residues of contaminated petroleum products, into the tanks of treatment plants. Sometimes the waste solutions are taken to a landfill or drained into reservoirs, which causes great harm to the environment.
The known methods of cleaning tanks from the remnants of petroleum products using existing means are quite time-consuming, energy-intensive, and insufficiently advanced. In addition, waste cleaning solutions are poorly regenerated and slowly oxidized in the biosphere, and their removal to landfills (or discharge into reservoirs) causes great harm to the environment. Therefore, the search for new methods of cleaning is a very urgent task.
This work aims to develop a method for cleaning tanks from the residues of petroleum products based on biotechnology. The article sets the following tasks: to justify the possibility of biological cleaning of objects from oil pollution; to develop a method for assessing the ability of microorganisms to disperse petroleum products; to determine the ability of microorganisms to absorb the residues of petroleum products in the experimental facility; and to develop a technology for the biological purification of tanks from the residues of petroleum products.
Methods
The objects of research were samples contaminated with residues of petroleum products, as well as tanks for their storage. Strains of oil-oxidizing microorganisms from the collection of the Sintez Belok Research Institute (Russia) were used as the microbial biomass preparations. The biomass of the studied microorganisms was obtained in the laboratory by submerged cultivation in flasks on mineral medium № 9. In the medium for growing yeast, the pH was set at 5.0-5.5; for bacteria, at 6.8-7.0. The amount of seed material was 0.1 units of optical density relative to the volume of the nutrient medium. Cultivation mode: the temperature optimal for the growth of each microorganism strain; duration, 48 hours. An experimental setup was developed for laboratory studies of the new cleaning method. The main part of the installation is the tank, a horizontal cylinder with a capacity of 10 liters made of organic glass. The following components are installed in the tank: a bubbler (air supply system); a heat exchanger connected by silicone hoses to a thermostat; electrodes for measuring the pH of the medium; a resistance thermometer; and a sampling valve. The research was carried out using modern equipment, devices, and research methods, with the obtained data processed by methods of mathematical statistics and verified by trial tests. The hydrocarbon composition of the oil-derived pollutants was characterized, and microorganisms that utilize the hydrocarbons of oil pollution were selected. The qualitative and quantitative characteristics of the selected microorganisms are given according to their ability to utilize various fractions of petroleum hydrocarbons. The biological method of cleaning tanks from oil product residues is proved experimentally.
Results and Discussion
The research results made it possible to develop a technology for the biological cleaning of tanks from the remnants of petroleum products. The technology includes the following operations: obtaining the necessary amount of biomass (seed material) of microorganisms for the process; implementation of the process of biological cleaning of the tank; separation of the biomass from the culture fluid.
To obtain (prepare) the required amount of biomass, a biological laboratory is organized at the district sanitary and epidemiological station. Biological cleaning of tanks is carried out according to the scheme shown in Figure 1 (Technological scheme of biological cleaning of tanks from oil product residues: 1, compressor; 2, rotameter; 3, bubbler; 4, tank; 5, separator; 6, container for biomass; 7, container for culture liquid; 8, pump; 9, hydromonitor; 10, pH meter; 11, register; 12, dosing pump; 13, vessel for titrating liquid; 14, electrodes for measuring the pH of the medium).
Tank 4, containing the residues of petroleum products, is filled with the seed material (the required amount of microbial biomass), bubbler 3 is lowered into it, and compressor 1 is turned on. The air passing through rotameter 2 enters the cavity of bubbler 3, which provides intensive aeration. This ensures the growth of the microorganisms and their uniform distribution over the entire volume of the tank, and the cleaning process is accordingly accelerated. Cleaning mode: the amount of seed material should give an optical density of the medium of at least 0.5 units when measured in a 5 mm cuvette; the pH of the medium is 5.0-5.5 for yeast and 6.8-7.0 for bacteria; the medium temperature is 26-40 °C, depending on the type of microorganism. The air consumption is 3 L/(L·min), that is, 3 liters of air per liter of pollution per minute; the air flow rate is regulated by the rotameter. The pH of the medium is monitored by a pH meter and maintained automatically.
Since the tanks are cleaned in the summer, the ambient temperature is sufficient to carry out the process. If necessary, an electric air heater can be installed on the air supply line, which makes it easy to maintain the temperature of the bubbled liquid at 26-40 °C during the process stages.
At the end of the biological treatment cycle, a separator is activated in the oil pollution tank, in which the cells are isolated from the culture liquid. The cells of the microorganisms enter container 6, and the clarified culture liquid enters container 7. The purified culture liquid is then fed through pump 8 and hydromonitor 9 back to tank 4 for jet removal of the cells remaining on the tank walls, and is used to charge the next tank containing oil pollution. Some of the microbial cells extracted at the separation stage can be reused, and some can be disposed of (for example, for the remediation of soil contaminated with oil products).
The process of extracting cells from the solution in the separation channel occurs in the field of action of centrifugal forces (Fig. 2). The speed of rotation of the separator required for the deposition of suspended cells can be determined from the Stokes equation for settling in a centrifugal field:

v_s = d^2 (γ_k − γ_r) (2πn)^2 R / (18 μ),

where μ is the coefficient of dynamic viscosity, kg·m^-1·s^-1; d is the average diameter of the cell, m; n is the rotation frequency, s^-1; R is the radius of the separating channel, m; and γ_k and γ_r are, respectively, the densities of the cells (biomass) and of the working solution (water), kg/m^3.
By choosing the rotation frequency n and the radius R of the separating channel, it is possible to ensure the optimal mode of extraction of even the smallest cells. This effect is used to separate the spent cells.
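A minimal sketch of this sizing calculation is given below, solving the centrifugal Stokes relation above for the rotation frequency n; all parameter values are illustrative assumptions, not values from the article.

```python
import math

# Required rotation frequency from v_s = d^2*(gk - gr)*(2*pi*n)^2*R / (18*mu)
mu = 1.0e-3        # Pa*s, dynamic viscosity of the clarified solution (~water)
d = 5.0e-6         # m, assumed average cell diameter
R = 0.10           # m, assumed radius of the separating channel
gamma_k = 1100.0   # kg/m^3, assumed cell (biomass) density
gamma_r = 1000.0   # kg/m^3, working-solution (water) density
v_s = 0.01         # m/s, target settling velocity toward the channel wall

omega = math.sqrt(18 * mu * v_s / (d**2 * (gamma_k - gamma_r) * R))
n = omega / (2 * math.pi)   # rev/s
print(f"n ~ {n:.0f} rev/s ({n * 60:.0f} rpm)")
```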
Due to the mechanization of the process of extracting biologically active cells from the waste solution and reusing the clarified solution, it is possible to switch to closed biotechnological cleaning of tanks from oil pollution in the agro-industrial complex.
Conclusions
Thus, the biotechnological (innovative) method of tank cleaning is an environmentally friendly process in which microorganisms decompose oil pollution at relatively low temperatures and use hydrocarbons as a source for their growth. As a result of this process, many tons of oil deposits are converted into microbial cells, which in turn become a source of food for other organisms and the plant world. | 2021-08-03T00:05:56.235Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "d11b11efcf3db61228fadf7072da43c4c0ddda9e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/40/e3sconf_conmechydro2021_04052.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "040e5666fd8f49a5a2d8d6c59ec0b07efe3e977c",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
269118173 | pes2o/s2orc | v3-fos-license | Study on Characteristics of Failure and Energy Evolution of Different Moisture-Containing Soft Rocks under Cyclic Disturbance Loading
During the coal mining process in soft rock mines with abundant water, the rock mass undergoes cyclic loading and unloading at low frequencies due to factors such as excavation. To investigate the mechanical characteristics and energy evolution laws of rock masses with different water contents under cyclic disturbance loading, a creep dynamic disturbance impact loading system was employed to conduct cyclic disturbance experiments on soft rocks with different water contents (0.00%, 1.74%, 3.48%, 5.21%, 6.95%, and 8.69%). A comparative analysis was conducted on the patterns of input energy density, elastic energy density, dissipated energy density, and damage variables of the different water-containing soft rocks during the disturbance process. The results indicate that under the influence of disturbance loading, the peak strength of the specimens, except for fully saturated samples, is generally increased to varying degrees. Weakening effects on the elastic modulus were observed in samples with 6.95% water content and in saturated samples, while strengthening effects were observed in the others. The input energy density of the samples is mostly stored in the form of elastic strain energy within the samples, and the different water-containing samples adapt to external loads within the first 100 cycles, after which the trends in the energy indicators are almost identical. Damage variables during the disturbance process were calculated using the maximum strain method, revealing the evolution of damage in the samples. From an energy evolution perspective, these experimental results elucidate the fatigue damage characteristics of water-containing rock masses under the influence of disturbance loading.
Introduction
With the prolonged and continuous exploitation of coal resources in China, accessible coal reserves are steadily diminishing, and coal mining activities are progressively shifting towards deeper deposits [1][2][3]. As coal resources are extracted, the dynamic stress adjustment in the roof and floor of goaf zones can be regarded as subject to perturbation loading. Additionally, deep mines are often subjected to mechanical vibrations and fatigue loads from activities such as blasting [4][5][6][7]. In coal mines with water-rich soft rocks, the presence of water can deteriorate the rock properties [8][9][10], thereby weakening the stability of the surrounding rock of mine roadways. Therefore, investigating the damage and energy evolution characteristics of rocks under the combined action of dynamic-static cyclic loading and water presents significant engineering implications and theoretical value.
Scholars have carried out compression [11][12][13], tension [14], and fracture [15,16] experiments that reveal the mechanical properties and fracture characteristics of various types of rocks under dynamic loading conditions. Jiang et al. [17] studied the impact of small-amplitude cyclic dynamic perturbation on soft rock-coal dual materials, detected the acoustic emission characteristic parameters of the samples during the loading process, and analyzed their main frequency characteristics. Dehghanipoodeh et al. [18] studied the mechanical properties and deformation characteristics of different grouted rocks under static and dynamic loads. Malik et al. [19] conducted dynamic compression experiments on basalt using a split Hopkinson bar and proposed an empirical correlation for the dynamic increase factor of compressive strength. Dong et al. [20] examined the mechanical properties of sandstone and found a nonlinear relationship between sample compressive strength and initial cyclic values. Arora et al. [21] used sine wave loading to conduct cyclic compression tests on different types of rocks, concluded that the degradation of the rock secant modulus is related to the number of cycles, and proposed a normalized relationship for rock modulus degradation.
A substantial body of experimental research has been carried out on the weakening effects of water on rocks, yielding a wealth of findings [22,23]. Perera et al. [24] investigated the strength and deformation characteristics of coal rocks in their natural and saturated states. Feng et al. [25] studied the damage behavior of sandstone under the joint influence of moisture content and intermediate principal stress and concluded that with increasing moisture content and intermediate principal stress, tensile cracks tend to increase, while shear cracks gradually decrease. Sun et al. [26] carried out creep experiments under different moisture conditions on sandstone from the roof strata of the Wanfu coal mine, providing a reliable theoretical basis for early warning of roadway creep failure. Zhang et al. [27] conducted uniaxial and cyclic loading-unloading experiments on rocks in dry, unsaturated, and saturated states, and obtained the variation laws of the mechanical characteristic parameters of sandstone in different water-containing states.
Simultaneously, rock deformation is accompanied by the absorption and release of energy, constituting a damage process driven by energy considerations [28]. Meng et al. [29] conducted cyclic loading-unloading experiments on rocks at different loading rates and obtained the energy evolution characteristics of rocks of different lithologies. Tarasov et al. [30] discussed the role of elastic energy in determining the dynamic energy balance and the fracture mechanism operating during spontaneous failure under loading of different brittle rocks, which provides a theoretical basis for understanding the dynamic process of rock bursts. Hu et al. [31] revealed the energy evolution law of weakly cemented sandstone under creep, which provides a theoretical basis for the long-term stability of mine tunnels in weakly cemented surrounding rock.
In summary, the current analysis of the failure modes and the physical and mechanical properties of water-bearing rocks under cyclic disturbance loads mainly focuses on the dry, natural, and saturated states; quantitative analysis of the damage and failure of water-bearing soft rocks under disturbance loads is lacking. From a mechanical point of view, the rock deformation and failure process proceeds from local damage to overall catastrophe, and this process is accompanied by energy input, accumulation, and dissipation. From a thermodynamic point of view, rock failure is an instability phenomenon driven by energy, so studying materials from an energy perspective can better reveal the essential characteristics of their failure. Therefore, studying loaded rocks from an energy perspective and analyzing in detail the energy evolution laws during the deformation and failure of water-rich soft rocks can provide new ideas for the analysis and prediction of deep soft rock engineering disasters caused by rock instability.
Materials
The specimens were sourced from the silty sandstone in the Shanghaimiao mining area of Ordos, Inner Mongolia. Following the standards set by the International Society for Rock Mechanics (ISRM), the samples underwent coring, cutting, and grinding to produce standard specimens with a diameter of 50 mm and a height of 100 mm.
The samples exhibited a yellowish color and a dense structure. A visual inspection was carried out, and specimens with no visible bedding, streaks, or cracks and with excellent overall integrity and uniformity were selected as the standard specimens. Figure 1 shows a few of the experimental samples.
The preparation steps for sandstone samples with varying moisture content are as follows: (1) All prepared samples were initially placed in a drying oven and dried at 106 °C for 24 h. After drying, the samples were weighed; when the mass remained essentially unchanged in two consecutive weighings, the drying process was considered complete, and the dried mass was recorded. (2) Samples, excluding those designated for drying, were immersed in distilled water. The samples were then removed at intervals, their surface moisture was wiped off, and their masses were recorded. The saturation point was considered to have been reached when the mass remained constant within adjacent 12 h periods. (3) Based on the magnitude of saturation, the moisture content for the intermediate four gradients was calculated, and at each level the corresponding sample masses were inferred. The samples were immersed in distilled water, and the measurement intervals were reduced as their masses approached the desired moisture content level. At that point, the preparation of unsaturated samples was completed.
Experimental Equipment and Protocol
The experimental equipment for this experiment is the creep impact dynamic disturbance loading system independently developed by Shandong University of Science and Technology, as shown in Figure 2. This system is capable of applying both axial static and dynamic loads. The static load unit has a maximum capacity of 800 kN, while the dynamic load unit can handle a maximum of 100 kN. The system can apply complex waveforms, including sinusoidal, rectangular, and custom waveforms; the latter require stress, displacement, and loading path waveforms to be designed independently. The perturbation waveform frequency ranges from 0.01 to 10.00 Hz. Data are sampled at intervals of 0.05 s during static loading and at intervals of 0.001 s during dynamic loading. Displacement is measured using Demec magnetic incremental displacement sensors with a range of up to 200 mm and an accuracy of 0.002 mm. Throughout the experiment, the system continuously collects data on axial load, axial strain, and time, and records the corresponding parameter curves in a specified directory. The experiment is divided into two parts. The first part involves determining the static mechanical parameters of samples with different moisture contents and the initial value of σm for cyclic loading using uniaxial compression tests. The second part entails cyclic loading experiments under certain static loading conditions; the experimental plan is detailed in Table 1. The samples are first loaded to a specific static load σm at a constant displacement rate of 0.1 mm/min using a closed-loop, constant-velocity displacement control testing machine. Subsequently, with σm as the average static load, periodic cyclic dynamic loads are applied to the samples. To simulate the elastic waves arising during water drainage and energy storage in mining, and seismic vibrations, a sinusoidal waveform was selected for the cyclic perturbation, with a frequency of 5 Hz and an amplitude of 10 kN. The number of perturbation cycles was set at 1000. Figure 2 provides a schematic representation of the sample loading path.
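As a concrete illustration of this loading path, the sketch below generates the sinusoidal disturbance signal with the stated frequency, amplitude, cycle count, and dynamic sampling interval; the mean static load used here is an assumed placeholder, since σm is sample-specific and set from the uniaxial tests.

```python
import numpy as np

f, amp_kN, n_cycles = 5.0, 10.0, 1000      # protocol values from the text
sigma_m_kN = 30.0                          # assumed mean static load (demo only)

t = np.arange(0, n_cycles / f, 0.001)      # 0.001 s sampling, as in the dynamic unit
load_kN = sigma_m_kN + amp_kN * np.sin(2 * np.pi * f * t)

area_m2 = np.pi * (0.050 / 2) ** 2         # 50 mm diameter specimen
stress_MPa = load_kN * 1e3 / area_m2 / 1e6
print(f"{len(t)} samples, stress range "
      f"{stress_MPa.min():.2f}-{stress_MPa.max():.2f} MPa")
```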
Mechanical Properties Analysis
Figure 3 shows the stress-strain curves of the samples under static loading conditions. The peak strengths of samples with different moisture contents decrease progressively as moisture content increases, indicating different softening tendencies. After reaching the peak, dry samples display a linear stress reduction and exhibit brittle fracture characteristics. By contrast, moist samples show a gradual slowing of the stress reduction, with curves displaying varying degrees of stress recovery, indicating ductile characteristics. With increasing moisture content, the peak strengths of the samples and the slope of the stress-strain curve gradually decrease. Figure 4 illustrates the trends in average peak strength and elastic modulus as a function of moisture content, demonstrating the significant influence of water on the mechanical properties of the rock samples. As moisture content increases, the average uniaxial compressive strengths of the samples at moisture levels of 0.00%, 1.74%, 3.48%, 5.21%, 6.95%, and 8.69% decrease to 32.66, 17.93, 15.93, 15.52, 13.91, and 12.98 MPa, respectively, representing a continuous decline in strength. The corresponding elastic moduli are 4.03, 2.92, 2.70, 2.64, 2.37, and 2.30 GPa, showing a similar reduction trend. The compressive strength decreases by 60.26% from its initial value of 32.66 MPa, giving a softening coefficient of 0.397, while the elastic modulus decreases by 42.93% from its initial value of 4.03 GPa, giving a reduction coefficient of 0.571. Both the uniaxial compressive strength and the elastic modulus exhibit an exponential decrease with moisture content; the fitted relationships are given in Figure 4.
In addition to axial stress, deep water-rich soft rock mines are also exposed to periodic perturbation loading such as mechanical vibrations and mine-induced seismic events. In this study, a series of cyclic perturbation experiments, as outlined in Table 1, was carried out on multiple sets of samples with varying levels of moisture content. Figure 5 shows the resulting stress-strain curves for the samples. The perturbation experiment curves follow a pattern similar to the uniaxial compression curves, with dry samples showing brittle characteristics and moist samples showing ductile characteristics after the peak. However, after the perturbation tests, the peak strength and elastic modulus show significant differences compared with uniaxial compression. Figure 6 illustrates the trends in average peak strength and elastic modulus as a function of moisture content, highlighting the substantial influence of moisture on the mechanical properties of the rock samples. With increasing moisture content, the perturbation strength gradually decreases from an initial 35.87 to 12.56 MPa, a decrease of 64.98%. Similarly, the elastic modulus decreases from 4.55 to 1.92 GPa, a reduction of 57.80%. In comparison with uniaxial compression, under dynamic perturbation loading the peak strength of the samples increases by 9.83%, 15.78%, 11.99%, 6.37%, and 2.73% at moisture levels of 0.00%, 1.74%, 3.48%, 5.21%, and 6.95%, respectively, and decreases by 2.54% at a moisture level of 8.69%. The elastic modulus increases by 12.90%, 10.96%, 4.40%, and 0.75% at moisture levels of 0.00%, 1.74%, 3.48%, and 5.21%, respectively, and decreases by 14.35% and 16.52% at moisture levels of 6.95% and 8.69%. This behavior can be attributed to the internal particles becoming less rigid when losing water in a dry environment, increasing the friction between particles; lower-amplitude perturbations recompact the internal voids, to a certain extent increasing the sample's elastic modulus. As the moisture content of a sample increases, the amount of water in the internal pores increases correspondingly. Under cyclic perturbation loading, many pores lose their load-bearing capacity, leading to instability and failure, which reduces the strengthening effect. In the fully saturated state, the pores within the sample are filled with water. On the one hand, water provides a degree of lubrication; on the other hand, it is an incompressible fluid. During cyclic perturbation, changes in pore water pressure lead to pore expansion and interconnection. Thus, the compressive strength and elastic modulus decrease.
Macro-Destructive Features
Figure 7 shows the typical failure modes of samples with different moisture contents under cyclic loading. For dry samples, cracks initiate from the top and extend towards the middle, representing a typical tensile failure mode. As the moisture content increases, the failure mode transitions from pure tensile failure to a tensile-shear composite failure, with shear cracks becoming more pronounced with increasing moisture content. When the moisture content reaches 5.21%, a clear "X"-shaped shear failure trend is observed; further increasing the moisture content to 6.95% makes the "X"-shaped shear failure mode more prominent. When the sample reaches a saturated state, the internal voids are filled with water, weakening the friction between particles and providing lubrication; at this point, the sample exhibits single inclined-plane shear failure. Thus, under the influence of cyclic perturbation loading, the macroscopic failure mode of the samples evolves from tensile failure to tensile-shear composite failure and, eventually, to single inclined-plane shear failure.
Characteristics of Energy Evolution
Studying the damage characteristics of the surrounding rock mass of water-rich soft rock mines under cyclic dynamic loads from an energy perspective is expected to better describe the essential characteristics of deep soft rock instability. It is also of positive significance for understanding the damage energy mechanisms and dynamic disasters of rock masses under cyclic dynamic loads.
The total input energy (U) generated by the external forces performing work is the sum of the elastic energy (U_e) and the dissipated energy (U_d), assuming that no heat exchange occurs with the surroundings during the physical process. In uniaxial cyclic loading-unloading experiments, the areas enclosed by the stress-strain curve of the sample and the coordinate axes represent the energy variation characteristics of the sample during loading. As shown in Figure 8, the area enclosed by the loading curve AB and the coordinate axes represents the total energy density (U) absorbed by the rock sample, that is, the work done by the testing machine on the rock sample during loading. The area enclosed by the unloading curve BC and the coordinate axes represents the accumulated elastic energy density (U_e) stored within the rock sample, which can be released during unloading. The dissipated energy (U_d) of the rock sample is the difference between these two areas, namely the area enclosed by ABC and the coordinate axes; this portion of energy is typically consumed in overcoming internal damping, friction, plastic deformation, and other effects. In the following formulas, σ_i^+ represents the loading curve for the i-th cycle, σ_i^- represents the unloading curve for the i-th cycle, and σ_{i+1}^+ represents the loading curve for the (i+1)-th cycle. The energy indicators are calculated as

U_i = ∫_{ε_A}^{ε_B} σ_i^+ dε, U_ei = ∫_{ε_C}^{ε_B} σ_i^- dε, U_di = U_i − U_ei,

where U_i, U_ei, and U_di denote the input energy density, elastic energy density, and dissipated energy density in the i-th cycle, respectively; σ_i^+ and σ_i^- denote the stresses on the loading and unloading curves in the i-th cycle, respectively; and ε_A, ε_B, and ε_C denote the strains at the corresponding points, as shown in Figure 8.
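A minimal numerical sketch of these area integrals is given below, using trapezoidal integration over the loading and unloading branches of one hysteresis loop; the mock loop data are illustrative, not measured values.

```python
import numpy as np

def _area(x, y):
    """Trapezoidal area under y(x); sign follows the direction of x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def cycle_energies(eps_load, sig_load, eps_unload, sig_unload):
    """Input, elastic, and dissipated energy densities for one cycle.

    With stress in MPa and dimensionless strain, the densities come out in
    MJ/m^3: U_i is the area under the loading branch, U_e the area under the
    unloading branch, and U_d = U_i - U_e.
    """
    U_i = _area(eps_load, sig_load)
    U_e = abs(_area(eps_unload, sig_unload))   # unloading runs to lower strain
    return U_i, U_e, U_i - U_e

# Mock hysteresis loop: unloading branch ends at a small residual strain
eps_l = np.linspace(0.0, 0.004, 200)
sig_l = 4000.0 * eps_l                         # loading branch, MPa
eps_u = np.linspace(0.004, 0.0005, 200)
sig_u = 4571.4 * (eps_u - 0.0005)              # unloading branch, MPa
U_i, U_e, U_d = cycle_energies(eps_l, sig_l, eps_u, sig_u)
print(f"U = {U_i:.4f}, Ue = {U_e:.4f}, Ud = {U_d:.4f} MJ/m^3")
```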
The fatigue damage variable D is an important parameter for describing the extent of damage to the sample under fatigue loading. Xiao et al. [32] suggested that the fatigue damage variable can be defined by the maximum strain method as

\[ D = \frac{\varepsilon_{\max}^{\,n} - \varepsilon_{\max}^{\,0}}{\varepsilon_{\max}^{\,f} - \varepsilon_{\max}^{\,0}}, \tag{5} \]

where ε^0_max is the initial cyclic maximum strain, ε^n_max is the instantaneous maximum strain after n cycles, and ε^f_max is the limiting strain during the disturbance. Using Equations (1)-(4), the fatigue energy parameters of water-containing rock samples under dynamic disturbance loading are determined; the parameters include the input energy density, elastic energy density, and dissipative energy density. Figure 9 shows representative curves of the energy density parameters for soft rocks with different water contents under cyclic disturbance loading, and the corresponding results are presented in Tables 2-4. From the graph, we observe that the input energy density of the water-containing samples decreases to varying degrees from the 1st to the 100th cycle, while that of saturated samples remains almost constant. For the same number of disturbance cycles, the input energy density of the samples gradually decreases as the water content increases. For example, in the first cycle, between adjacent water contents (0.00% and 1.74%, 1.74% and 3.48%, and so on), the input energy density of the samples decreases by 29.88%, 7.05%, 3.03%, 9.47%, and 5.98%, respectively. The reason is that the initial values of the disturbance cycles are selected in the linear elastic stage of the samples during the experiment. At this stage, the inherent cracks and micropores in the samples are compacted. During the initial phase of the disturbance cycles, however, a sudden load change with an amplitude of 10 kN occurs, causing the originally compacted microcracks and micropores inside the samples to reach their bearing limits and fail. As such, the samples show significant deformation during the first cycle, leading to increased input energy. As the water content increases further, the amount of water inside the samples increases, leading to greater softening and a lower elastic modulus. However, the presence of water has a certain buffering effect, which weakens the impact of load fluctuations on the samples, resulting in smaller deformations and lower input energy. As the disturbance cycles continue, the elastic energy density of the samples increases initially and then stabilizes, and it gradually decreases with increasing water content. For example, in the first cycle, between adjacent water contents (0.00% and 1.74%, 1.74% and 3.48%, and so on), the elastic energy densities of the samples decrease by 31.38%, 5.93%, 3.80%, 11.70%, and 5.37%, respectively. The reason is the load fluctuation during the disturbance stage, in which part of the energy is used to develop new microcracks and micropores. After 100 disturbances, the internal stresses in the samples readjust, recompacting the newly developed microcracks and micropores. The samples adapt to the external disturbance loads, leading to stable mechanical performance. The elastic energy density accounts for over 85% of the input energy density and gradually decreases with increasing water content, from 89.76% in the dry state to 86.59% in the saturated state. This result indicates that during dynamic disturbance, energy is mainly stored as elastic energy within the rock, with only a small fraction used to overcome material damping and plastic deformation. The evolution of the dissipative energy density follows the trend of the elastic energy density, initially decreasing and then stabilizing, and gradually decreasing with increasing water content. The reason is the load fluctuation effect, which causes previously closed microcracks and micropores to reach their bearing limits, producing higher dissipative energy from the 1st to the 100th disturbance. The damage variable D for soft rocks with different water contents under dynamic disturbance is calculated using Equation (5), as shown in the figure. A comparison of the graphs reveals that the disturbance-induced damage variable D for soft rocks with different water contents follows a nearly identical trend, characterized by two phases: an accelerating accumulation phase and a stable accumulation phase. Given that most of the damage to the samples occurs within the first 100 disturbance cycles, the damage variable D from the 1st to the 100th cycle exhibits accelerating accumulation as the sample gradually adapts to the external disturbance load. After 100 disturbances, the internal crack structure redistributes and stabilizes, and the damage enters a stable phase.
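As a companion sketch for Eq. (5) (again ours, with illustrative strain values), the damage variable can be computed from the per-cycle maximum strains; defaulting ε^0_max and ε^f_max to the first and last recorded values is an assumption made for this example only.

```python
def damage_variable(eps_max, eps0=None, epsf=None):
    """Fatigue damage variable D by the maximum-strain method, Eq. (5).

    eps_max : per-cycle instantaneous maximum strains
    eps0    : initial cyclic maximum strain (assumed: first entry)
    epsf    : limiting strain during the disturbance (assumed: last entry)
    """
    eps0 = eps_max[0] if eps0 is None else eps0
    epsf = eps_max[-1] if epsf is None else epsf
    return [(e - eps0) / (epsf - eps0) for e in eps_max]

# Accelerating accumulation over the first cycles, then a stable phase:
strains = [0.0100, 0.0105, 0.0112, 0.0120, 0.0121, 0.0121]
print(damage_variable(strains))  # D rises quickly, then levels off near 1
```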
Discussion
The surrounding rock mass of deep, water-rich soft rock mine tunnels is disturbed by excavation, mining, and related activities, and its stress state changes. Energy change is the essential characteristic of physical change in materials. This paper therefore uses energy as the basis for exploring the mechanical response behavior and internal damage rules of soft rocks with different water contents under cyclic disturbance, after first determining the saturated moisture content of the samples. Through further experiments, the mechanical parameters and macroscopic damage characteristics of the different samples were obtained, the changes in the mechanical parameters under disturbance were analyzed, and the evolution of the sample energy with the number of disturbances and with moisture content was obtained through calculation. The analysis shows that the disturbance load has a certain strengthening effect on samples with low moisture content, and this strengthening effect gradually weakens as the moisture content increases. During cyclic perturbation, all samples adapted to the external loads within 100 cycles. In subsequent cycles, the energy changes of the samples tended to a constant value, and most of the energy was stored inside the rock samples as elastic energy. This also means that the damage characteristics of the samples are not obvious during the disturbance stage. In this experiment, the initial value σ_m of the cyclic disturbance load did not exceed the upper-limit stress of the samples, and the samples were not damaged during the disturbance; as a result, the energy evolution law at failure under the disturbance load was not captured. The experiment also has the following limitations: (1) only the initial value of the cyclic disturbance load is considered, while the disturbance amplitude and frequency are ignored; (2) there is no long-term simulation of the surrounding rock under real underground disturbance loads, as this experiment only explored mechanical behavior under short-term disturbance. These shortcomings should be carefully addressed in future studies to obtain more comprehensive and accurate results.
Figure 2. Schematic diagram of the creep dynamic disturbance impact loading system.
Figure 6. Changing pattern of mechanical properties of the samples: (a) peak strength, (b) modulus of elasticity.
Figure 8. Schematic diagram of cyclic loading energy calculation.
Table 1. Loading scheme for perturbation experiments. Columns: rock sample number, average water content (%), σ_m (MPa), Δσ (kN), frequency (Hz), and number of disturbances N.
The uniaxial compressive strengths of samples with water contents of 0.00%, 1.74%, 3.48%, 5.21%, 6.95%, and 8.69% decrease to 32.66, 17.93, 15.93, 15.52, 13.91, and 12.98 MPa, respectively, representing a continuous decline in strength. The corresponding elastic moduli are 4.03, 2.92, 2.70, 2.64, 2.37, and 2.30 GPa, showing a similar reduction trend. The initial compressive strength of 32.66 MPa decreases by 60.26%, giving a softening coefficient of 0.397, while the initial elastic modulus of 4.03 GPa decreases by 42.93%, yielding a reduction coefficient of 0.571. Both the uniaxial compressive strength and the elastic modulus exhibit an exponential decrease with moisture content; the fitted relationships are shown in Figure 4.
Table 2. Input energy density (mJ/m³) of water-bearing samples for different numbers of perturbations.
Table 3. Dissipative energy density (mJ/m³) of water-bearing samples for different numbers of perturbations.
Table 4. Elastic energy density (mJ/m³) of water-bearing samples for different numbers of perturbations.
"year": 2024,
"sha1": "8b2504ca75d9c10e12fb909be6c761abda23f3dd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/17/8/1770/pdf?version=1712887173",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc7538b33a31666bab39a7f89e853a92b10ed525",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
On the weak solution u ∈ C_{1−α}(I, E) of a fractional-order weighted Cauchy type problem in reflexive Banach spaces
In this paper, we study the existence of a weak solution u ∈ C_{1−α}(I, E) of a nonlinear weighted Cauchy type problem of fractional order.
Introduction
In this paper, we study the existence of solutions, in the Banach space C_{1−α}(I, E), for the nonlinear weighted Cauchy-type problem

\[
\begin{cases}
D^{\alpha}u(t) = f(t, u(t)), & t > 0,\ \alpha \in (0, 1),\\
t^{1-\alpha}u(t)\big|_{t=0} = u_0 .
\end{cases}
\tag{1}
\]

This problem has been studied by many authors. For example, in [4] the author supposed that the function f(t, u) is continuous on R_+ × R and satisfies |f(t, u)| ≤ t^{μ} e^{−σt} ψ(t)|u|^{m}, with μ ≥ 0, m > 1, σ > 0, and ψ(t) a continuous function on R_+. Also, in [2] and [3] the author proved the existence of L^1 and L^p solutions of the same problem, respectively.
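As a small illustration we add here (not from the original paper), the weighting in (1) is natural because the homogeneous solution u(t) = u_0 t^{α−1} is unbounded at t = 0 while its weighted version is constant:

\[
D^{\alpha}\, t^{\alpha-1} = 0, \qquad t^{1-\alpha}\bigl(u_0\, t^{\alpha-1}\bigr) = u_0 ,
\]

so the weighted initial condition picks out exactly this singular behavior, and C_{1−α}(I, E) is the space in which such solutions live.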
Preliminaries
Let L^1(I) be the space of Lebesgue integrable functions on the interval I = [0, 1]. Unless otherwise stated, E is a reflexive Banach space with norm ‖·‖ and dual E*. We denote by E_w the space E endowed with the weak topology σ(E, E*), and by C(I, E) the space of continuous functions on I = [0, 1] with norm ‖u‖ = sup_{t∈[0,1]} ‖u(t)‖.
Also, define the space C_{1−α}(I, E) by C_{1−α}(I, E) = {u : t^{1−α}u(t) ∈ C(I, E)}, equipped with the norm ‖u‖_{C_{1−α}} = sup_{t∈I} ‖t^{1−α}u(t)‖. We recall that the fractional integral operator of order α > 0 with left-hand point a is defined by (see [9], [14], [15] and [20])

\[
I_a^{\alpha}u(t) = \frac{1}{\Gamma(\alpha)}\int_a^t (t-s)^{\alpha-1}u(s)\, ds .
\]

DEFINITIONS. Let E be a Banach space and let u : I → E. Then (1) u(·) is said to be weakly continuous (measurable) at t_0 ∈ I if for every ϕ ∈ E* the function ϕ(u(·)) is continuous (measurable) at t_0.
(2) A function h : E → E is said to be weakly sequentially continuous if h takes weakly convergent sequences in E to weakly convergent sequences in E .
Note that: (1) If u is weakly continuous on I , then u is strongly measurable (see [7]), hence weakly measurable.
(2) In reflexive Banach spaces, weakly measurable functions are Pettis integrable (see [1], [7] and [13] for the definition) if and only if ϕ(u(·)) is Lebesgue integrable on I for every ϕ ∈ E*. Now, we present some auxiliary results that will be needed in this paper. Firstly, we state O'Regan's fixed point theorem [12]. THEOREM 2.1. Let E be a Banach space and let Q be a nonempty, bounded, closed, convex, equicontinuous subset of C(I, E). Suppose T : Q → Q is weakly sequentially continuous. Then T has a fixed point in Q. The following theorems can be found in [5], [22] and [10], respectively. THEOREM 2.2. (Dominated convergence theorem for the Pettis integral) Let u : I → E. Suppose there is a sequence (u_n) of Pettis integrable functions from I into E such that lim_{n→∞} ϕ(u_n) = ϕ(u) a.e. for every ϕ ∈ E*. If there is a scalar function ψ ∈ L^1(I) with ‖u_n(·)‖ ≤ ψ(·) a.e. for all n, then u is Pettis integrable and ∫_J u_n(s) ds → ∫_J u(s) ds weakly for every measurable J ⊆ I. Finally, we state a result which is an immediate consequence of the Hahn-Banach theorem. THEOREM 2.5. Let E be a normed space with u_0 ≠ 0. Then there exists ϕ ∈ E* with ‖ϕ‖ = 1 and ϕ(u_0) = ‖u_0‖.
Now consider the fractional-order integral equation (2). In [12] the author studied a related integral equation, and in [11] the author studied the Volterra-Hammerstein integral equation under the assumptions that f : [0, T] × B → B is weakly-weakly continuous and h satisfies suitable conditions. Here we study the existence of a weak solution of the fractional-order integral equation (2) when the function f : I × B_r → E satisfies the following conditions: (1) for each t ∈ I, f_t = f(t, ·) is weakly sequentially continuous; (2) for each u ∈ C_{1−α}(I, E), f(·, u(·)) is weakly measurable on I;
(3) for any r > 0, the weak closure of the range of f(I × B_r) is weakly compact. Under these assumptions, f(·, u(·)) is weakly measurable, and for each φ ∈ L*_∞ we have φf ∈ L^1 (each φf is a function of bounded variation). Thus, according to Lemma 3.2, I^α f exists; also, the fractional-order Pettis integral of f exists, see [6, 16, 18].
Fractional-order integrals in reflexive Banach spaces
Here, we define the fractional-order integral operator in reflexive Banach spaces. The definition given below extends the corresponding notion for real-valued functions. DEFINITION 3.1. Let u : I → E be a weakly measurable function such that ϕ(u(·)) ∈ L^1(I), and let α > 0. Then the fractional (arbitrary) order Pettis integral (shortly FPI) I^α u(t) is defined by

\[
I^{\alpha}u(t) = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\, u(s)\, ds ,
\]

where the integral sign denotes the Pettis integral.
(2) lim_{α→1} I^α u(t) = I^1 u(t) weakly uniformly on I, provided these integrals exist on I.
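As a worked illustration of Definition 3.1 and of property (2) (added here; for scalar-valued functions the Pettis and Lebesgue integrals coincide), take u(t) = t^β with β > −1:

\[
I^{\alpha} t^{\beta}
= \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} s^{\beta}\, ds
= \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\, t^{\alpha+\beta} ,
\]

and letting α → 1 gives t^{β+1}/(β + 1) = I^1 t^β, in agreement with property (2).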
Main result
In this section we present our main result by proving the existence of a solution of equation (2) in C_{1−α}(I, E), where E is a reflexive Banach space. We consider a suitable set Q, and we are now in a position to formulate and prove our main result. Proof. Let us define the operator T as in equation (2). We will solve equation (2) by finding a fixed point of the operator T, and we will prove that T : Q → Q. First note that, from assumption (2), for each u ∈ C_{1−α}(I, E) the function f(·, u(·)) is weakly measurable on I. Since f has weakly compact range, ϕ(f(·, u(·))) is Lebesgue integrable on I for every ϕ ∈ E*, and thus the operator T is well defined.
Let t, τ ∈ [0, 1] with t > τ. Without loss of generality, assume t^{1−α}Tu(t) − τ^{1−α}Tu(τ) ≠ 0. Then there exists (as a consequence of Theorem 2.5) ϕ ∈ E* with ‖ϕ‖ = 1 such that the required estimate holds, which proves the equicontinuity claim. Note that Q is a nonempty, closed, bounded, convex and equicontinuous subset of C_{1−α}(I, E), and assumption (3) shows that TQ is norm continuous. Now, take u ∈ Q; without loss of generality, we may assume that t^{1−α}I^α f(t, u(t)) ≠ 0. Then, by Theorem 2.5, there exists ϕ ∈ E* with ‖ϕ‖ = 1 such that the corresponding norm estimate holds; therefore T : Q → Q. Finally, we show that T is weakly sequentially continuous. To see this, let {u_n}_{n=1}^∞ be a sequence in Q with u_n(t) → u(t) in E_w for each t ∈ [0, 1]. Recall [10] that a sequence {u_n}_{n=1}^∞ is weakly convergent in C(I, E) if and only if it is weakly pointwise convergent in E. Fix t ∈ I. From the weak sequential continuity of f(t, ·), the Lebesgue dominated convergence theorem for the Pettis integral [5] (see assumption (3)) implies, for each ϕ ∈ E*, that ϕ(Tu_n(t)) → ϕ(Tu(t)) a.e. on I, so Tu_n(t) → Tu(t) in E_w. Hence T : Q → Q is weakly sequentially continuous. The proof is complete. Now, we look for sufficient conditions ensuring the existence of a pseudo-solution to the nonlinear weighted Cauchy-type problem (1).
"year": 2019,
"sha1": "89cc80bcd0ff391aa2b989c028b9acff913e12f7",
"oa_license": null,
"oa_url": "http://files.ele-math.com/abstracts/fdc-09-04-abs.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aa433b2376d05b00540b9d87637c0b11653c15f1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A new form of general soliton solutions and multiple zeros solutions for a higher-order Kaup-Newell equation
Because the higher-order Kaup-Newell (KN) system has more complex and diverse solutions than the classical second-order flow KN system, research on it has attracted increasing attention. In this paper, we consider a higher-order KN equation with third-order dispersion and quintic nonlinearity. Based on the theory of inverse scattering, the matrix Riemann-Hilbert problem is established. Through the dressing method, the reflectionless solution matrix with simple zeros is constructed. In particular, a new form of solution is given, which is more direct and simpler than those obtained by previous methods. In addition, through the determinant solution matrix, vivid diagrams and dynamic analyses of the single-soliton and two-soliton solutions are given in detail. Finally, by using a limit technique, we construct the general solution matrix in the case of multiple zeros, and we show examples of solutions for the cases of double zeros, triple zeros, single-double zeros and double-double zeros.
First, when β = 0, x → ix, t → it and r = −q*, the system (1.5) becomes Eq. (1.6), which can be viewed as the higher-order DNLS I or higher-order KN equation. Eq. (1.6) can also be derived from the generalized KN hierarchy [15] with n = 3 and proper parameters. Second, when β = 1/4, x → ix, t → it and r = −q*, the system (1.5) becomes Eq. (1.7), which can be viewed as the higher-order DNLS II or higher-order CLL equation. Third, when β = 1/2, x → ix, t → it and r = −q*, the system (1.5) becomes Eq. (1.8), which can be regarded as the higher-order DNLS III or higher-order GI equation. It has been proved in [14] that equations (1.6)-(1.8) have multiple Hamiltonian structures and are Liouville integrable. The N-soliton solutions of Eq. (1.7) and Eq. (1.8) have been studied in [16, 17]. In this paper, we mainly consider the soliton solutions and higher-order soliton solutions of system (1.6). In fact, there are several classical methods for obtaining soliton solutions, such as the inverse scattering (IST) method, the Darboux/Bäcklund transform, the Hirota bilinear method, and the RH method [18-23]. Here we use the Riemann-Hilbert (RH) method to derive the soliton solutions of (1.6), since it is more convenient for studying exact long-time asymptotics and large-n asymptotics [29].
Higher-order soliton solutions of NLS type have received wide attention from many scholars in recent years. They can be used to describe weak bound states of solitons, which may appear in the study of soliton train transmission with a specific chirp and nearly equal velocities and amplitudes [24]. There are not many studies on DNLS-type higher-order soliton solutions. Recently, Chen's team studied the double and triple zeros of the GI equation [32], and the double zeros of the higher-order KN equation [31]. Here, we study more general cases and give the general form of the solutions with multiple zeros.
The main content of this paper is the construction of the general soliton solution matrix of the higher-order KN equation by the RH method. It is worth noting that we recover the potential q(x, t) as the spectral parameter ζ → 0; this effectively shortens the computation, avoids the interference of implicit functions, and yields a more direct matrix form of the soliton solution. Taking the single-soliton and two-soliton solutions as examples, the properties of the solitons are studied. Then, on the basis of the soliton solutions, the solution matrix for higher-order solitons with multiple zeros is obtained through a limit technique.
The organization of this letter is as follows. In Section 2, the inverse scattering theory is established for the 2 × 2 spectral problem and the corresponding matrix Riemann-Hilbert problem (RHP) is formulated. In Section 3, the N-soliton formula for the higher-order KN equation is derived by considering simple zeros of the RHP. In Section 4, we construct the higher-order soliton matrix and obtain the general expression of the higher-order solitons, corresponding to multiple zeros of the RHP. Section 5 is devoted to conclusions and discussion.
2. Inverse scattering theory of (1.6)

The main work of this part is to study the inverse scattering problem of Eq. (1.6) and construct the corresponding RHP.
Eq. (1.6) is Lax integrable with the linear spectral problem (2.1). It is easy to verify a symmetry relation that plays an important role in the symmetry analysis later, where the symbol † represents the conjugate transpose of a matrix. In the following analysis, we assume that the potential functions q, q* tend rapidly to zero as x → ±∞. In this case, the boundary form of the solution can be obtained explicitly. In the scattering problem, the Lax equation (2.8) in time t is temporarily ignored. By solving Eq. (2.1) with the variation-of-constants method and using the transformation (2.6), the solution of Eq. (2.7) can be obtained; it satisfies the integral equations (2.9)-(2.10), and these two Jost solutions satisfy the asymptotics J → I as |x| → ∞. (2.11) In order to analyze the analytical properties of the Jost solutions in the ζ plane, we divide the entire ζ plane into two regions. Dividing J into columns as J = (J^{(1)}, J^{(2)}), due to the structure (2.4) of the potential Q and the Volterra integral equations (2.9)-(2.10), we have Proposition 1. The above Volterra integral equations have unique solutions with the following properties: • the column vectors J_M^{(1)} and J_P^{(2)} are continuous for ζ ∈ C_13 ∪ R ∪ iR and analytic for ζ ∈ C_13; • the column vectors J_P^{(1)} and J_M^{(2)} are continuous for ζ ∈ C_24 ∪ R ∪ iR and analytic for ζ ∈ C_24. Through Eq. (2.6) we know that J_P E and J_M E are both solutions of the linear Eq. (2.1), so they are linearly related by a matrix S(ζ), as in (2.12), where E = e^{−iζ²xσ₃} and S(ζ) = (s_ij)_{2×2}. It should be noted that tr(−iζ²σ₃ + ζQ) = 0; using Abel's formula, we get (det Y)_x = 0. (2.13) Considering the transformation (2.6), we have det J = det Y · det(e^{iζ²xσ₃}) = det Y. Using Eq. (2.13) again gives (det J)_x = 0, which means that det J is independent of x; then, from the asymptotics (2.11), det J = 1. Taking the determinant on both sides of relation (2.12) gives det S(λ) = 1.
In order to construct the RHP, we consider the adjoint scattering equation of (2.7). It is easy to see that J^{−1} is a solution of the adjoint equation (2.14) and satisfies the boundary condition J^{−1} → I as x → ±∞, where the inverse matrix J^{−1} is viewed as a collection of rows. Due to the structure (2.4) of the potential Q, we also have Proposition 2. According to the properties of the Jost solutions, the inverse matrix J^{−1} has the following properties: • the row vectors (J_P^{−1})^{(1)} and (J_M^{−1})^{(2)} are continuous for ζ ∈ C_13 ∪ R ∪ iR and analytic for ζ ∈ C_13; • the rows (J_M^{−1})^{(1)} and (J_P^{−1})^{(2)} are continuous for ζ ∈ C_24 ∪ R ∪ iR and analytic for ζ ∈ C_24. Further, the analytical properties of the scattering data are as follows. Proposition 3. Suppose that q(x, t) ∈ L^1(R). Then s_11 is analytic on C_13 and s_22 is analytic on C_24; s_12 and s_21 are not analytic in C_13 or C_24, but are continuous up to the real axis R and the imaginary axis iR.
Proof. The scattering matrix can be rewritten as (2.16), and the elements corresponding to the matrices on both sides can then be written out explicitly. According to Proposition 1 and Proposition 2, it is easy to see that s_11 is analytic on C_13 and s_22 is analytic on C_24, while s_12 and s_21 are not analytic in C_13 or C_24 but are continuous up to the real axis R and the imaginary axis iR.
To find the boundary condition of P, we consider the asymptotic expansion (2.18) as ζ → 0. Substituting (2.18) into (2.7) and equating terms with like powers of ζ, we find P^{(0)}_x = 0. It can then be seen from (2.9) and (2.10) that (2.19) holds. The Riemann-Hilbert problem of the higher-order KN equation is then the following.
Riemann-Hilbert Problem 4. The matrix function P(ζ; x) has the following properties: • Analyticity: P(ζ; x, t) is an analytic function in ζ ∈ C_13 ∪ C_24; • Jump condition: (2.20). Next, we consider the symmetry properties of the Jost solutions and the scattering data, so that we can consider interesting reductions.
Proposition 5. There are two symmetry reductions of the Jost solutions and scattering data: the first involution (2.23) and the second involution (2.26). Proof. For the first symmetry, replace ζ with ζ* and take the conjugate transpose of Eq. (2.7); owing to Q† = −Q, the resulting equation coincides with the adjoint equation (2.14). Thus J^{−1}(x, ζ) and J†(x, ζ*) satisfy the same equation, and, according to the boundary conditions at x → ±∞, they must be equal. Notice that the P we construct is built from the Jost solutions, so the same involution must also hold for P. In addition, in view of the scattering relation (2.12) between J_M and J_P, we see that S satisfies the corresponding involution property. For the second symmetry, replace ζ with −ζ and multiply both sides of the equation by σ₃; due to σ₃Qσ₃ = −Q, the transformed equation shows that J(ζ) and J(−ζ) satisfy the same equation, which yields the second involution for J and S. From (2.24) and (2.27) we obtain the corresponding relations for the scattering data: s_11(λ) is an even function, and each zero ζ_k of s_11 is accompanied by the zero −ζ_k. Similarly, ŝ_11 has the two zeros ±ζ*_k.
Solvability of RHP problem.
In general, if det P(ζ) ≠ 0, the RHP is said to be regular; its solution is unique and can be given using the Plemelj formula [30]. More often than not, however, the RHP is non-regular: det P(ζ) = 0, i.e., s_11(±ζ_k) = 0 and ŝ_11(±ζ*_k) = 0 at certain discrete locations, and ±ζ_k and ±ζ*_k are called zeros. Here we first consider the case of simple zeros, where N is the number of these zeros. These zeros are determined by the relation (2.33) above. In this case, both kernels ker(P(±ζ_k)) are one-dimensional, spanned by a single column vector |v_k⟩ and a single row vector ⟨v_k|, respectively, as in (2.34); by the symmetry relation (2.23), the corresponding relations between these vectors follow. Regarding this non-regular RHP (2.20) under the canonical normalization condition, its solution is also unique. Next, we construct a matrix function Γ(x, t, ζ) that cancels all the zeros of P. From the relations (2.33) and (2.34), we should construct a matrix Γ_k whose determinant is given by (2.38). From the properties (2.23), (2.26) and (2.38), we can readily construct the explicit form of this matrix so that det(PΓ_k^{−1}) ≠ 0 at the points ±ζ_k and det(Γ_k^{−1}P) ≠ 0 at the points ±ζ*_k. Introducing the corresponding modified function, we arrive at the following problem. Riemann-Hilbert Problem 6. The matrix function P̂(ζ; x) has the following properties: • Analyticity: P̂(ζ; x, t) is an analytic function in ζ ∈ C_13 ∪ C_24; • Jump condition: (2.45); • Asymptotic behavior: P̂(ζ; x) = P̂_0 + O(ζ) as ζ → 0.
The form of G has been given by the jump condition (2.45).
3. N Soliton Solutions
In this part, we mainly obtain the potential q. Expanding P(ζ) as ζ → 0 and substituting the expansion into Eq. (2.7), the potential matrix function can be obtained by comparing coefficients; from this formula, we can get the potential q(x, t). It is well known that soliton solutions correspond to vanishing scattering coefficients, G = I and Ĝ = 0. Thus, we intend to solve the corresponding RHP (2.45). According to equations (2.44) and (2.46), we can consider the corresponding expansion form. Below, the main effort is to find an explicit expression for (Γ|_{ζ=0})^{−1} Γ_1(x, t).
In fact, the form of Γ from (2.42) and (2.43) can be written more compactly. To determine the form of the matrix B_k, we consider Γ(ζ)Γ^{−1}(ζ) = I; it is then easy to work out the entries, where |z_k⟩_l denotes the l-th element of |z_k⟩ and the matrix M is defined by (3.8). From these equations, and by Eq. (3.2), we obtain the potential function q(x, t) in (3.9), where M is given by Eq. (3.8). Notice that M^{−1} can be expressed as the transpose of the cofactor matrix of M divided by det M; hence the solution (3.9) can be rewritten in determinant form. To get the explicit N-soliton solutions, we may take v_{k0} = (a_k, b_k)^T. In what follows, we take the single-soliton and two-soliton solutions as examples to study the properties of the solitons in more detail. For convenience, let ζ_j = ζ_{jR} + iζ_{jI}, where ζ_{jR} and ζ_{jI} are the real and imaginary parts of ζ_j.
Single-soliton solution.
For N = 1, taking the discrete spectrum points ±ζ_1 and ±ζ*_1 and calculating directly with formula (3.9), the single-soliton solution (3.12) is obtained.
The velocity of the single soliton is v_1 = 8ζ²_{1R}ζ²_{1I} − 6(ζ²_{1R} − ζ²_{1I})². The center position of |q| lies on a straight line in the (x, t) plane determined by the spectral parameter, and the amplitude associated with |q|² is likewise fixed by ζ_1 and the constants a_1, b_1. In Fig. 2(a), we give the 3-D graph of the single-soliton solution.
Two-soliton solutions.
When N = 2, the solution (3.9) can also be written out explicitly. The two-soliton solution of the higher-order KN equation has the form q(x, t) = Δ_1/Δ_0, where the coefficients of the exponential terms in Δ_0 and Δ_1 are built from a_1, a*_1, a_2, a*_2, b_1, b*_1, b_2, b*_2 and ζ_1, ζ*_1, ζ_2, ζ*_2. It is tedious to write them all out, and they can be computed directly on a computer. Instead of presenting the complicated expression, we show the typical solution behavior in Fig. 2(b). It can be seen from Fig. 2(b) that as t → −∞, the solution consists of two well-separated single solitons traveling toward each other. They then collide and interact; as t → +∞, they separate again into two single solitons with unchanged shapes and speeds, and no energy is radiated to the far field. Therefore, the interaction of these solitons is elastic. It can be observed from the graph, however, that after the interaction each soliton acquires a phase shift and a position shift.
Next, we verify the above analysis through the expression of the soliton solution. In general, we assume ξ_iη_i > 0 and v_1 < v_2. This means that as t → −∞, soliton-2 is on the left of soliton-1 and moves faster; the two solitons are in moving frames with velocities v_i = 8ζ²_{iR}ζ²_{iI} − 6(ζ²_{iR} − ζ²_{iI})². Using the asymptotic analysis technique of [28], we investigate the collision dynamics of the two-soliton solution and obtain asymptotic expressions of q(x, t) for the different asymptotic states of θ_{1R} and θ_{2R}. The asymptotic solutions can be written as functions of solitary waves with respective velocities v_1 and v_2, which remain unchanged before and after the collision. This elastic interaction is a remarkable property, indicating that the DNLS Eq. (1.6) is integrable. From the asymptotic solutions, we can obtain the phase difference of the soliton-1 solution and, by similar calculations, the phase difference of the soliton-2 solution.
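Only the velocity formula survives here in closed form, so the following small sketch (ours; the sample spectral parameters are arbitrary) shows how each ζ_i fixes the soliton speed and hence the ordering v_1 < v_2 assumed in the asymptotic analysis.

```python
def soliton_velocity(zeta):
    """Soliton velocity from its spectral parameter zeta = zeta_R + i*zeta_I,
    using v = 8*zeta_R**2*zeta_I**2 - 6*(zeta_R**2 - zeta_I**2)**2 quoted above."""
    zr2, zi2 = zeta.real ** 2, zeta.imag ** 2
    return 8.0 * zr2 * zi2 - 6.0 * (zr2 - zi2) ** 2

# The two-soliton asymptotics above assume v1 < v2:
v1, v2 = soliton_velocity(0.8 + 0.5j), soliton_velocity(0.6 + 0.9j)
assert v1 < v2  # v1 ~ 0.367, v2 ~ 1.118 for these illustrative parameters
```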
4. Soliton matrix for multiple zeros
In this section, we further consider the case of multiple zeros, in which the multiplicity of {±ζ_i, ±ζ*_i} is greater than 1. The determinant of P can then be written in the corresponding factorized form, where ρ(ζ_i) ≠ 0 (i = 1, …, r) for all ζ ∈ C_13 and ρ̂(ζ_i) ≠ 0 (i = 1, …, r) for all ζ ∈ C_24.
Compared with the case of simple zeros, the number of kernel vectors at a multiple zero is related to its multiplicity. For example, for the discrete spectral points {ζ_1, ζ*_1}, the kernel vectors |v_j⟩ are linearly independent. For the case of multiple zeros, the corresponding Γ and Γ^{−1} can be given using the following lemma. Lemma 7 ([26], Lemma 3). Consider a pair of higher-order zeros of order n_j (j = 1, …, r): {ζ_j, −ζ_j} in C_13 and {ζ*_j, −ζ*_j} in C_24. Then the corresponding soliton matrix Γ_j(ζ) and its inverse can be cast in the form (4.2), where the matrices Ξ_j(ζ) and Ξ̄_j(ζ) are upper-triangular and lower-triangular Toeplitz matrices. In fact, the remaining vector parameters in (4.2) can be derived by computing the residues of each order in the identity Γ(ζ)Γ^{−1}(ζ) = I at ζ = ζ_j and ζ = −ζ_j. With this method, however, the process of solving for the soliton solution is very involved. Instead, following the method of [27], the corresponding Γ can be constructed and the dressing matrix for multiple zeros derived by a single-pole limit method. The results are given by the following theorem. Theorem 8. Suppose ζ = ζ_j is a zero of geometric multiplicity n_j (j = 1, …, r) and Σ_{j=1}^{r} n_j = N. Then the modified matrix can be expressed as (4.5). Hence, formula (4.5) gives the general expression for higher-order solitons with multiple zeros. Because the spectral parameters here cannot be purely real or purely imaginary, the expressions for higher-order solitons are relatively complicated, but for different n_j and appropriate parameters the graphs of mixed higher-order soliton solutions can be produced with mathematical software such as Maple or Mathematica. Here, we give several representative mixed solutions. In Fig. (3), setting n_1 = 2 and n_j = 0 (j = 2, …, r) in (4.5) gives the simple double-zero case, and n_1 = 3, n_j = 0 (j = 2, …, r) gives the simple triple-zero case. In Fig. (4), taking n_1 = 2, n_2 = 1, n_j = 0 (j = 3, …, r) gives a mixed solution of a double zero and a single zero, and n_1 = 2, n_2 = 2, n_j = 0 (j = 3, …, r) gives a mixed solution of two double zeros.
Conclusions and discussions
In summary, the inverse scattering method has been applied to the higher-order KN equation with vanishing boundary conditions, and the soliton matrix has been constructed by studying the corresponding RHP. Using the regularization of the RHP with finitely many simple zeros, the determinant form of the general reflectionless N-soliton solution of the higher-order KN equation is obtained, which differs from the soliton solution forms previously obtained for KN systems. In the inverse scattering process, the potential function is recovered as the spectral parameter tends to zero, which effectively avoids the appearance of implicit functions [31]. At the same time, the properties of the single-soliton solution and the collision dynamics and asymptotic behavior of the two-soliton solution have been investigated.
In addition, the multiple zeros of the RHP have been considered, and the higher-order soliton matrix of the higher-order KN equation has been obtained by a limit technique. Several typical graphs are given, including those of double-zero solutions, triple-zero solutions, single-double-zero solutions, and double-double-zero solutions. This provides a good basis for future experimental observation.
In this paper, we have only considered solutions with vanishing boundary conditions. How to adjust the analysis to find the Jost solutions of the spectral problem, so as to obtain solutions with non-vanishing boundary conditions, remains open. The global well-posedness, long-time behavior and asymptotic stability of the solitons also need further study.
"year": 2021,
"sha1": "9cdbedd856efe2bfffa33447e98d55dcd1716971",
"oa_license": null,
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0064411",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "9cdbedd856efe2bfffa33447e98d55dcd1716971",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A genomic perspective on the potential of Actinobacillus succinogenes for industrial succinate production
Background: Succinate is produced petrochemically from maleic anhydride to satisfy a small specialty chemical market. If succinate could be produced fermentatively at a price competitive with that of maleic anhydride, though, it could replace maleic anhydride as the precursor of many bulk chemicals, transforming a multi-billion dollar petrochemical market into one based on renewable resources. Actinobacillus succinogenes naturally converts sugars and CO2 into high concentrations of succinic acid as part of a mixed-acid fermentation. Efforts are ongoing to maximize carbon flux to succinate to achieve an industrial process. Results: Described here is the 2.3 Mb A. succinogenes genome sequence with emphasis on A. succinogenes's potential for genetic engineering, its metabolic attributes and capabilities, and its lack of pathogenicity. The genome sequence contains 1,690 DNA uptake signal sequence repeats and a nearly complete set of natural competence proteins, suggesting that A. succinogenes is capable of natural transformation. A. succinogenes lacks a complete tricarboxylic acid cycle as well as a glyoxylate pathway, and it appears to be able to transport and degrade about twenty different carbohydrates. The genomes of A. succinogenes and its closest known relative, Mannheimia succiniciproducens, were compared for the presence of known Pasteurellaceae virulence factors. Both species appear to lack the virulence traits of toxin production, sialic acid and choline incorporation into lipopolysaccharide, and utilization of hemoglobin and transferrin as iron sources. Perspectives are also given on the conservation of A. succinogenes genomic features in other sequenced Pasteurellaceae. Conclusions: Both A. succinogenes and M. succiniciproducens genome sequences lack many of the virulence genes used by their pathogenic Pasteurellaceae relatives. The lack of pathogenicity of these two succinogens is an exciting prospect, because comparisons with pathogenic Pasteurellaceae could lead to a better understanding of Pasteurellaceae virulence. The fact that the A. succinogenes genome encodes uptake and degradation pathways for a variety of carbohydrates reflects the variety of carbohydrate substrates available in the rumen, A. succinogenes's natural habitat. It also suggests that many different carbon sources can be used as feedstock for succinate production by A. succinogenes.
Background
Actinobacillus succinogenes is a Gram-negative capnophilic bacterium that was isolated from bovine rumen as part of a search for succinate-producing bacteria [1].
Succinate is an important metabolic intermediate in the rumen, where several bacteria obtain energy by decarboxylating succinate to propionate, which in turn serves as a nutrient for the ruminant [2,3]. Succinate is used as a specialty chemical in food, agriculture, and pharmaceutical industries, but it has a much greater potential value for augmenting or replacing a multi-billion dollar petrochemical-based bulk chemical market [4,5]. Succinate production by fermentation of renewable feedstocks is both economically and environmentally attractive. A further environmental benefit is that fermentative succinate production uses CO 2 , a greenhouse gas, as a substrate.
A. succinogenes is one of the best succinate producers ever described, but it also produces formate and acetate in high concentrations. Flux distribution between succinate and alternative fermentation products is affected by environmental conditions. For example, higher succinate yields can be obtained by increasing the available CO 2 and a reductant (e.g., by supplying H 2 or by using carbon sources that are more reduced than glucose) [6]. Optimizing the environmental conditions is not sufficient to achieve a homosuccinate fermentation, though. Engineering A. succinogenes's metabolism for homosuccinate production will be most effective if based on an understanding of the enzymes and mechanisms controlling flux distribution. Deciphering the A. succinogenes genome sequence is thus invaluable for defining, understanding, and engineering A. succinogenes metabolic pathways.
There is also much knowledge to be gained by comparing the A. succinogenes genome to its closest relatives. A. succinogenes is a member of the Pasteurellaceae family, which contains thirteen named genera as well as candidates for new taxa [7]. The best known genera are Actinobacillus, Haemophilus, and Pasteurella. At least thirty-two Pasteurellaceae genome sequences (complete and draft) are publicly available, fifteen of which are from different H. influenzae strains. While most Pasteurellaceae are studied for their pathogenic traits, A. succinogenes and its closest relative, Mannheimia succiniciproducens [8], collectively referred to as "succinogens" in this paper, are studied for their industrially attractive metabolic trait of succinate production. It will be important to confirm lack of pathogenicity in these succinogens before they are recommended for use on an industrial scale. Because A. succinogenes and M. succiniciproducens have never been reported in association with any disease, searching their genome sequences for Pasteurellaceae pathogenicity genes is a logical starting point to assess their potential for non-pathogenicity.
Here we present the first detailed analysis of the A. succinogenes genome sequence with a biotechnological perspective. The A. succinogenes and M. succiniciproducens genome sequences are also examined for known Pasteurellaceae virulence genes.
Methods
Chemicals, source strain, growth conditions, and genomic DNA purification

All chemicals were purchased from Sigma-Aldrich (St. Louis, MO). A. succinogenes type strain 130Z (ATCC 55618) was obtained from the American Type Culture Collection (Manassas, VA). To identify the A. succinogenes vitamin auxotrophies, A. succinogenes was grown in the defined medium, AM3 [9], and then transferred (1:100 dilutions) in parallel into ten tubes containing fresh AM3 medium, each tube lacking a single vitamin. A. succinogenes was considered prototrophic for a vitamin if growth was maintained for three consecutive transfers in its absence. To confirm the minimal vitamin requirements, A. succinogenes was grown through seven transfers in AM3 containing only the required vitamins. To determine A. succinogenes's ability to grow on various carbon sources, cells were grown anaerobically in Medium B (g/L: NaH2PO4·H2O, 8.5; K2HPO4, 15.5; bactotryptone, 10.0; yeast extract, 5.0; and NaHCO3, 2.1) supplemented with a single carbon source (1 g/L). The initial pH was adjusted to 7.0-7.2. A. succinogenes was considered able to grow on a carbon source when cell yields (absorbance at 660 nm) were higher in Medium B supplemented with that carbon source than in non-supplemented medium. Growth data were recorded after each of two serial transfers of three biological replicates. For genomic DNA extraction, A. succinogenes was grown in 100 mL of tryptic soy glucose broth (Becton Dickinson, Sparks, MD) with 25 mM NaHCO3 in a 160-mL anaerobic serum vial at 37°C. The culture was harvested in log phase (~7.7 × 10^10 cells) and washed twice in 45 mL of phosphate buffer (g/L: K2HPO4, 15.5; NaH2PO4·H2O, 8.5; NaCl, 1). Genomic DNA was purified using a Qiagen genomic tip protocol with a Qiagen maxiprep column (Valencia, CA) as described in the QIAGEN Genomic DNA Handbook.
Genome sequencing and assembly
Sequencing was performed by the Department of Energy's Joint Genome Institute (JGI). The genome of A. succinogenes was sequenced using a combination of three Sanger genomic libraries: 3 kb pUC18c, 8 kb pMCL200, and 40 kb fosmid libraries. All general aspects of library construction and sequencing performed at the JGI can be found at the JGI website [10]. 41,370 Sanger reads were assembled using PGA assembler (Paracel Genome Assembler 2.6.2, Paracel, Pasadena, CA). Possible mis-assemblies were corrected and gaps between contigs were closed by custom primer walks from sub-clones or PCR products. A total of 2,986 additional reactions were necessary to close gaps and to raise the quality of the finished sequence. The completed genome of A. succinogenes 130Z contains 43,200 reads. The error rate of the finished genome sequence is less than 1 in 100,000. Together all libraries provided 11× coverage of the genome. The genome sequence of A. succinogenes strain 130Z is available in GenBank under accession number CP000746.
Automated annotation
Automated annotation was performed by the Oak Ridge National Laboratory [11]. Open reading frames (ORFs) were identified using three gene caller programs: Critica, Generation, and Glimmer. Translated ORFs were subjected to an automated basic local alignment search tool (BLAST) for proteins [12] against GenBank's non-redundant database. The translated ORFs were also subjected to searches against KEGG, InterPro (incorporating Pfam, PROSITE, PRINTS, ProDom, SmartHMM, and TIGRFam), and Clusters of Orthologous Groups of proteins (COGs).
Manual annotation
The ORFs described in this paper have also been manually annotated. BLAST alignments were examined to assess the correctness of the start codon. DNA sequences upstream of each ORF were examined for a ribosomal binding site (at least 3 nt of the AAGGAGG sequence, 5-10 nt upstream of the start codon) using the web Artemis tool [11]. To assign product names to each ORF, results from BLAST, HMM (i.e., PFAM and TIGRFAM), and domain and motif searches were considered. Most importantly, efforts were made to find a citation of biological function for a homologous gene. If a translated ORF was at least 75% identical to a protein of known function over 75% of the length, or if it belonged to a TIGRFAM equivalog, it was given the associated product name. If a translated ORF was less than 75% identical to a protein of known function, the product name was modified as follows: 60-75% identity over 65% of the length, putative product; 40-64% identity over 40% of the length, probable product; 25-39% identity over 25% of the length, possible product. If a translated ORF was at least 60% identical to a protein of unknown function, it was named a conserved hypothetical protein. If there was no adequate alignment with any protein (less than 25% identity or aligned region is less than 25% of the product length), the translated ORF was named a hypothetical protein.
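To make the cascade of identity/coverage thresholds concrete, here is a minimal sketch of the naming logic (our illustration; it omits the TIGRFAM-equivalog shortcut, and all names and values are hypothetical):

```python
def product_name(identity, coverage, base_name, known_function=True):
    """Product naming from BLAST percent identity and alignment coverage
    (both 0-100), following the manual-annotation thresholds above."""
    if not known_function:
        # Homolog of a protein of unknown function
        return "conserved hypothetical protein" if identity >= 60 else "hypothetical protein"
    if identity >= 75 and coverage >= 75:
        return base_name
    if identity >= 60 and coverage >= 65:
        return f"putative {base_name}"
    if identity >= 40 and coverage >= 40:
        return f"probable {base_name}"
    if identity >= 25 and coverage >= 25:
        return f"possible {base_name}"
    return "hypothetical protein"

# Example with hypothetical values: a 62%-identity, 70%-coverage hit
print(product_name(62, 70, "maltoporin LamB"))  # -> "putative maltoporin LamB"
```

Ordering the checks from the strictest tier to the loosest resolves the overlapping identity ranges in favor of the stronger label.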
Other genome analyses
To compare their gene contents, the A. succinogenes and M. succiniciproducens genomes were re-annotated using the fully automated, prokaryotic genome annotation service, RAST (Rapid Annotation using Subsystem Technology) [13]. Pairwise BLAST comparisons of protein sets encoded by A. succinogenes and M. succiniciproducens genomes and predictions of the number of subsystems were performed using the sequence-based comparison tool available in RAST. Orthologous protein-coding genes in the two succinogens were manually compiled by comparing gene order, gene orientation (forward/reverse), features of intergenic regions, and protein similarity (minimum 25% identity at the protein level).
NUCmer and PROmer [14] whole-genome alignments were performed using an online Synteny plot tool [15]. Clustered regularly interspaced short palindromic repeats (CRISPR) and spacers were identified using the CRISPRs web service [16][17][18]. Spacer sequences were then aligned against the A. succinogenes genome sequence using BLAST. 16 S rRNA phylogeny was determined using the Michigan State University Ribosomal Database Project tools [19,20]. Hierarchical clustering of Pasteurellaceae genomes was done using tools at the JGI's Integrated Microbial Genomes (IMG) website [21,22].
Uptake signal sequence (USS) 9-mer cores [23] were counted, and their surrounding sequences were reported using our perl script, 200804USS.pl. The output was pasted into a Microsoft Excel spreadsheet to calculate the frequency of each nucleotide occurring at each position, upstream and downstream of the USS core. A search of A. succinogenes and M. succiniciproducens genomes for Pasteurellaceae virulence genes was performed by compiling a list of known Pasteurellaceae virulence genes based on the literature, then using a custom Python script to align their sequences against the two genomes using BLAST and report the data for the top hit.
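The perl script itself is not reproduced in the paper; the following Python sketch is our assumed equivalent of its counting step, locating USS cores on both strands and tallying the downstream nucleotide frequencies of the kind plotted in Figure 1.

```python
from collections import Counter

USS1 = "AAGTGCGGT"  # 9-mer core favored by most Pasteurellaceae

def revcomp(seq):
    """Reverse complement of an uppercase ACGT sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def count_uss(genome, core=USS1, flank=30):
    """Count core occurrences on both strands and tally the nucleotide
    observed at each position downstream of the core."""
    hits = 0
    downstream = [Counter() for _ in range(flank)]
    for strand in (genome, revcomp(genome)):
        start = strand.find(core)
        while start != -1:
            hits += 1
            tail = strand[start + len(core): start + len(core) + flank]
            for i, base in enumerate(tail):
                downstream[i][base] += 1
            start = strand.find(core, start + 1)
    return hits, downstream

# USS density in repeats per kb, of the kind reported in Table 2:
# hits, _ = count_uss(genome_sequence)
# density = hits / (len(genome_sequence) / 1000.0)
```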
General features
Even though it is one of the largest Pasteurellaceae genomes sequenced to date (Table 1), A. succinogenes's genome is relatively small (2,319,663 bp, GenBank accession number CP000746). A total of 2,199 genes have been annotated in the genome, of which only 2,079 are protein-coding, a desirable feature for metabolic engineering. General features of A. succinogenes's genome are compared to those of fifteen other Pasteurellaceae genomes in Table 1.
The A. succinogenes genome is most closely related to that of its succinogen relative, M. succiniciproducens [8] (additional file 1: Figures S1 and S2). M. succiniciproducens was also isolated from a bovine rumen, albeit on a different continent, and it shares many metabolic traits with A. succinogenes (see below). Based on genome reannotations performed with RAST (2,223 protein-encoding genes, total), the two succinogens' genomes have 1,735 ORFs (78%) in common, 442 ORFs are found only in M. succiniciproducens, and 488 ORFs are found only in A. succinogenes. Of 2,081 automated KEGG comparisons [24], 1,861 (89%) A. succinogenes genes were most similar to other Pasteurellaceae genes, with 1,252 (60%) being most similar to M. succiniciproducens genes. However, A. succinogenes and M.
succiniciproducens are among at least twenty-four misclassified Pasteurellaceae species that will likely be renamed, as they do not cluster with properly classified species in phylogenetic trees based on Pasteurellaceae 16 S rRNA, infB, rpoB, or atpD gene sequences [25]. The two succinogens are often clustered together using phylogenetic approaches, but not closely enough to suggest that they belong to the same genus [25]. Hierarchical clustering of gene function categories also places the two succinogens in a clade separate from other Pasteurellaceae (additional file 1: Figure S2). To better gauge how closely related the two organisms are, we performed whole genome NUCmer and PROmer alignments of the two succinogens with each other, as well as with eight other Pasteurellaceae (additional file 1: Figure S3). NUCmer plots show little to no conservation of genome structure at the nucleotide level between A. succinogenes and any other Pasteurellaceae. PROmer plots reveal that A. succinogenes and M. succiniciproducens are more related to each other than to other Pasteurellaceae. Overall, though, the PROmer plots show that drastic changes in genome structure have occurred as A. succinogenes and M. succiniciproducens evolved divergently from their last common ancestor, and that the two succinogens are more distantly related than their functional traits would suggest.
Prophage
A 39,489-bp prophage genome is encoded in the Asuc_1205-58 region. The presence of a prophage has biotechnological relevance for two reasons. First, it raises the possibility of using phage-based genetic engineering. Second, it suggests that A. succinogenes may be susceptible to phage lysis in an industrial bioreactor; if so, steps should be taken to eliminate this prophage from the host genome. This prophage has an organization similar to that of the Aggregatibacter actinomycetemcomitans phage, AaΦ23 [26], and it contains a DNA N-6-adenine-methyltransferase (Asuc_1221). The A.
succinogenes prophage differs from AaΦ23, though, in that the integrase gene (Asuc_1258) is located at the opposite end of the phage genome from its location in AaΦ23. Despite sharing a similar organization, many of the A. succinogenes phage proteins are not found in AaΦ23, and they are conserved in only a few Pasteurellaceae genomes. For example, the Asuc_1233-44 proteins are not well conserved among Pasteurellaceae, but they include such crucial proteins as both terminase subunits (Asuc_1235-6), a portal protein (Asuc_1238), a prohead protease (Asuc_1239), a major capsid protein (Asuc_1240), a protein with possible DNA-packing function (Asuc_1241), and a putative head-tail adaptor (Asuc_1242). Two sets of addiction module killer and antidote proteins are also encoded (Asuc_1211-4). Another interesting feature of the A. succinogenes phage is that Asuc_1219, encoding a homolog to replication protein O (which, along with the P protein [Asuc_1220], initiates lytic replication), has an internal frame shift. It is unclear at this point whether this prophage corresponds to a functional temperate phage. Growth experiments in the presence of mitomycin C showed that 0.1 μg/mL mitomycin C started inhibiting growth after 3 h, 1 μg/mL mitomycin C inhibited growth starting at 90 min, and no growth was observed at 10 μg/mL mitomycin C. These results are similar to those observed for phage induction in E. coli K12 [27]. More work (beyond the scope of this paper) would be needed, though, to demonstrate that the inhibitory effect of mitomycin C is associated with the release of phage particles in the culture broth.
A set of CRISPRs (located between Asuc_1293 and 1294) and a set of CRISPR-associated genes (Asuc_1284-93) are located thirty-four ORFs downstream from the prophage. Together, these CRISPRs and CRISPR-associated genes can provide resistance against phage [28]. Homologs in the 30% to 75% identity range to A. succinogenes CRISPR-associated genes are found in M. succiniciproducens and Mannheimia haemolytica, but they are not found in other Pasteurellaceae. Short fragments of five of the ten A. succinogenes CRISPR spacers matched short sequences in A. succinogenes prophage genes. One spacer sequence showed similarity to a gene encoding a phage integrase family protein (Asuc_0030). This gene is accompanied by only one other phage gene encoding a probable capsid portal protein Q with two internal frame shifts (Asuc_0029, 58% identical to an M. haemolytica phage ΦMHaA1 protein and 73% identical to a phage protein found in several H. influenzae genomes). These two genes may be the last remnants of an ancient phage integration-excision event.
A. succinogenes, M. succiniciproducens, and nine other Pasteurellaceae species contain a homolog of Escherichia coli LamB (Asuc_0322, 54% identical), the maltoporin that functions as the receptor for phage lambda. In both succinogens, the lamB homolog is part of the maltose transport operon (additional file 2: Table S1). This gene is likely functional in A. succinogenes, since A. succinogenes grows well on maltose [1]. This result suggests that A. succinogenes may be susceptible to infection by lambda-related bacteriophages.
Natural competence
All Pasteurellaceae genomes contain USS repeats that feature a conserved 9-nt sequence [29]. In most Pasteurellaceae the conserved 9-mer is AAGTGCGGT (i.e., USS1), whereas ACAAGCGGT (i.e., USS2) predominates in Actinobacillus pleuropneumoniae, H. ducreyi, H. parasuis and M. haemolytica [29]. Each USS is followed by a conserved AT-rich region, and USS2 is additionally followed by GCAAA(A/T) 20 nt downstream of the 9-mer [29] (additional file 1: Figure S4). The only function yet demonstrated for USS repeats is in natural competence [30]. Under certain conditions (e.g., starvation or the presence of elevated cAMP levels) many Pasteurellaceae preferentially internalize USS-containing DNA, perhaps being recognized as self-DNA [29,31]. Uptake of USS-containing DNA is facilitated by a number of competency proteins, resulting in transformation frequencies that can be as high as 10^-3 to 10^-2 transformants per CFU [31] and in homologous recombination with chromosomal DNA. This DNA uptake mechanism works best with linear DNA, making it well suited for strain engineering, since constructs can be easily generated by PCR and double recombination events are needed to integrate the linear DNA into the chromosome. Genetic tools for A. succinogenes are currently limited to expression vectors [32,33]. The ability to replace chromosomal DNA with engineered DNA is invaluable for making gene knockouts (e.g., to block unwanted fermentation pathways) and knock-ins (e.g., of a modified promoter to affect enzyme expression).
We examined the A. succinogenes genome sequence and all other complete Pasteurellaceae genome sequences for USS occurrences (as of September 2009). All Pasteurellaceae favored either USS1 or USS2, but not both. In all cases, USS sequences were roughly equally distributed between each DNA strand. A. succinogenes has a USS density of 0.73 USS/kb and contains 1,690 USS1 repeats; only Aggregatibacter aphrophilus and A. actinomycetemcomitans had more (Table 2). As in other Pasteurellaceae (additional file 1: Figure S4), the A. succinogenes 9-mer core is usually preceded by an A and followed by an AT-rich region (Figure 1). One outstanding difference is that the A. succinogenes 9-mer core is immediately followed by a C in 71% of the USS repeats. The frequency of C in this position ranges from 27% in Histophilus somni to 51% in A. aphrophilus for USS1, and from 36% in H. ducreyi to 57% in A. pleuropneumoniae for USS2 (Table 2).
A regulon of twenty-five competency genes is known in H. influenzae [31]. The A. succinogenes genome contains homologs of all of these genes except for two, HI0660 and HI1631, which encode hypothetical proteins in H. influenzae. Seven of the A. succinogenes competency proteins are less than 40% identical to their H. influenzae homologs (additional file 2: Table S2). In addition to twenty-three competency genes, the A. succinogenes genome also encodes the master regulator of the competence regulon (the cAMP receptor protein, Asuc_0008) and the essential competence regulatory factor, Sxy (Asuc_0283). It also encodes proteins that are not competence-induced, but that are known to participate in DNA uptake or recombination (i.e., RecA, TopA, AtpA, and DsbA, additional file 2: Table S2) [30]. The abundance of USS repeats in A. succinogenes and the possible presence of the necessary machinery for natural competence suggested that A. succinogenes could be naturally competent. Recent experiments in our laboratory demonstrated that A. succinogenes could uptake DNA by natural transformation. These transformations led to the construction of two gene knockouts (Joshi et al., manuscript in preparation).
Metabolic reconstruction
Central metabolism
A complete inventory of A. succinogenes's metabolic machinery is crucial for understanding and engineering the pathways that are involved in succinate production. A. succinogenes's metabolism has been studied using fermentation balances, enzyme assays, and 13C-metabolic flux analyses, primarily in glucose-grown cultures [6,9,34,35]. These studies indicated that glucose uptake takes place both through a permease (followed by glucose phosphorylation by hexokinase) and through the phosphoenolpyruvate (PEP)-dependent phosphotransferase system (PTS). Glucose-6-phosphate is then catabolized to PEP via glycolysis, with little involvement of the pentose phosphate pathway. PEP is then converted into fermentation products via the C3 pathway (leading to formate, acetate, and ethanol) and the C4 pathway (leading to succinate), with malic enzyme and oxaloacetate (OAA) decarboxylase forming reversible shunts between these pathways. These studies also showed the absence of glyoxylate and Entner-Doudoroff pathway fluxes. The enzymes of central metabolism encoded in the genome are summarized in Figure 2 and additional file 2: Table S3. While the A. succinogenes genome encodes the EI, Hpr, and EIIA components of the PTS (Asuc_0994-96), it does not encode a homolog of E. coli EIIBC (PtsG). PTS-dependent glucose uptake in A. succinogenes might take place, instead, using the mannose-specific PTS proteins ManXYZ (Asuc_0936-38). The PTS-independent glucose uptake mechanism is believed to be a major factor explaining A. succinogenes's ability to produce large amounts of succinate, but the genes involved are not characterized at this point. The genome encodes a sugar transport protein (Asuc_0496) that shows 40% similarity to the Zymomonas mobilis glucose facilitator protein, as well as possible sugar kinases (Asuc_1504, 0131, and 0084). In agreement with previous studies [6,34], genes encoding all of the glycolytic and pentose phosphate pathway enzymes are present, whereas those encoding glyoxylate pathway enzymes are absent. While the gene encoding the Entner-Doudoroff enzyme phosphogluconate dehydratase is not present in the genome, three possible genes encoding 2-keto-3-deoxy-6-phosphogluconate (KDPG) aldolases were identified (Asuc_0152, 0374, and 1471). These three genes are part of operons encoding possible glucuronate or galacturonate degradation pathways. Because A. succinogenes did not grow on these two substrates in the conditions tested (see Materials and Methods), the functions of Asuc_0152, 0374, and 1471 remain unknown. These aldolases likely break down KDPG originating from yet unknown growth substrates rather than from the Entner-Doudoroff pathway. M. succiniciproducens was reported to have a complete Entner-Doudoroff pathway [8], which would be a significant difference between the two succinogens. However, the purported M. succiniciproducens phosphogluconate dehydratase (MS2219) is more likely the dihydroxy-acid dehydratase (IlvD) involved in branched-chain amino acid synthesis. In fact, BLASTP searches using the E. coli IlvD or phosphogluconate dehydratase (accession number NP_416365.1) as the query sequence identify the same top hit in most Pasteurellaceae proteomes. E. coli IlvD shares at least 75% identity with the top hit in each Pasteurellaceae species, while phosphogluconate dehydratase shares at most 30% identity with the same top hits.
It is therefore unlikely that any Pasteurellaceae sequenced to date, including M. succiniciproducens, has a full Entner-Doudoroff pathway.
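The BLASTP comparison described above can be scripted against a local database. The sketch below is illustrative only: the database name and query FASTA paths are assumptions, and it relies on the standard NCBI BLAST+ command-line tool being installed.

```python
# Hedged sketch: BLAST the E. coli IlvD and Edd (phosphogluconate dehydratase)
# sequences against a Pasteurellaceae protein database and compare percent
# identity of the top hits. Paths and database name are assumed.
import subprocess

def top_hit(query_fasta: str, db: str):
    """Return (subject_id, percent_identity) of the best BLASTP hit."""
    out = subprocess.run(
        ["blastp", "-query", query_fasta, "-db", db,
         "-outfmt", "6 sseqid pident", "-max_target_seqs", "1"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()
    if not out:
        return None, 0.0
    sseqid, pident = out[0].split("\t")
    return sseqid, float(pident)

db = "pasteurellaceae_proteins"  # assumed pre-built BLAST database
for name, fasta in [("IlvD", "ecoli_ilvD.faa"), ("Edd", "ecoli_edd.faa")]:
    hit, ident = top_hit(fasta, db)
    print(f"{name}: top hit {hit} at {ident:.1f}% identity")

# If both queries retrieve the same top subject, with IlvD aligning at ~75%
# identity and Edd at <=30%, the subject is more plausibly IlvD than Edd.
```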
Most of the C3 pathway enzymes were identified, including pyruvate dehydrogenase and formate dehydrogenase (Asuc_1261-6) (Figure 2). Fluxes through these two dehydrogenases were shown to be important sources of reducing power for succinate production and for anabolism when coupled with transhydrogenase activity [34,35]. A single predicted membrane-bound transhydrogenase was also identified (Asuc_1021-22). The only uncertainty in the C3 pathway is the mechanism by which ethanol is produced from acetyl-CoA, but the genome contains a number of aldehyde and alcohol dehydrogenases. Among them, Asuc_0403, 0591, 1136, and 1955 belong to the iron-containing alcohol dehydrogenase protein family. Asuc_0591 is a good candidate for ethanol production. It is probably a multifunctional aldehyde/alcohol dehydrogenase [37], and it is 47% identical (66% similar) to E. coli AdhE (GenBank accession no. P0A9Q7), the enzyme responsible for ethanol production. Asuc_0067 encodes a class III alcohol dehydrogenase, which functions primarily as a formaldehyde dehydrogenase (E.C. 1.1.1.284) but can also produce ethanol [38]. Even though M. succiniciproducens is not known to produce ethanol, it has homologs of all the A. succinogenes alcohol dehydrogenases mentioned above. Thus, these proteins are either not functional for ethanol production in M. succiniciproducens, or they are not involved in ethanol production in A. succinogenes.
Metabolic flux distribution between the C3 and C4 pathways is known to be influenced by CO2 and H2 concentrations. Malic enzyme (Asuc_0669) and a sodium-pumping OAA decarboxylase (Asuc_0301-3), which are responsible for large reversible fluxes between the C3 and the C4 pathways [34], are encoded in the genome. Carbonic anhydrase (Asuc_1199), which interconverts CO2 and HCO3-, was identified, and it could be important for making CO2 available for succinate production in environments with different pH values. A single membrane-bound hydrogenase (Asuc_1277-83) was also identified.
In contrast to M. succiniciproducens, A. succinogenes does not produce lactate [6]. The A. succinogenes genome encodes a single lactate dehydrogenase, Asuc_0005, which is 60% identical to the E. coli enzyme (EC 1.1.1.28) that couples lactate oxidation to amino acid and sugar transport [39]. Asuc_0005 is therefore expected to oxidize, rather than generate, lactate. In contrast, the M. succiniciproducens genome does not encode an Asuc_0005 homolog. A. succinogenes was grown anaerobically in AM3 with 100 mM NaHCO3, 50 mM glucose, and 25 mM D,L-lactate, but no lactate consumption was observed (data not shown).
A. succinogenes is also capable of gluconeogenesis, since it can grow by anaerobic respiration using H 2 or electrically reduced neutral red as an electron donor and using fumarate or malate as the carbon source and electron acceptor [6,40,41]. The genome sequence encodes a putative type I (Asuc_1073) and a type II (Asuc_0394) fructose-1,6-bisphosphatase, but it does not encode a PEP synthase. As a result, A. succinogenes must rely on gluconeogenic flux through PEPCK to make PEP from malate and fumarate.
Auxotrophic features
Production of succinate in a chemically defined medium can decrease downstream costs in product purification. A. succinogenes is known to require glutamate, cysteine, and methionine to grow in a defined medium [9]. Glu auxotrophy is due to an inability to synthesize αKG from glucose [9], which is now explained by the absence of genes encoding isocitrate dehydrogenase in the genome sequence. αKG cannot be synthesized from succinate because of the unidirectional activity of αKG dehydrogenase from αKG to succinyl-CoA (ΔG°' = -30,000) and because the A. succinogenes genome does not encode the reductive-TCA cycle enzyme, αKG ferredoxin oxidoreductase. Surprisingly, A. succinogenes encodes all the enzymes required to synthesize Cys (additional file 2: Table S4). Since both Cys and Met are sulfur-containing amino acids, we wondered if these auxotrophies could be due to an inability to assimilate sulfate, the only mineral sulfur source in AM3. Indeed, the A. succinogenes genome does not encode adenylsulfate kinase (additional files 1 and 2: Figure S5 and Table S4), which is required for assimilatory sulfate reduction. A. succinogenes grew normally in AM3 once sodium sulfide or sodium thiosulfate was added in place of Cys, confirming that the Cys auxotrophy is due to an inability to reduce sulfate. Met, however, was still required for growth in the presence of reduced sulfur compounds. A. succinogenes is missing several genes necessary to synthesize Met through the L-homocysteine pathway, and it would require a source of methanethiol to produce Met from O-acetyl homoserine (additional files 1 and 2: Figure S5 and Table S4).
We determined that nicotinic acid, pantothenate, pyridoxine, and thiamine are the only four vitamins required by A. succinogenes. The A. succinogenes genome sequence is missing several genes involved in the biosynthesis of these vitamins (additional files 1 and 2: Figure S6 and Table S4). Even though it is missing several genes involved in biotin synthesis (e.g., bioA, bioF, and bioW), A. succinogenes grows repeatedly in the absence of biotin (five consecutive transfers in AM3). However, A. succinogenes grows more reliably in AM3 supplemented with biotin when inoculated from frozen stocks or from rich medium. Biotin can therefore be considered non-essential, but stimulatory, for the growth of A. succinogenes, a feature shared with M. succiniciproducens [42]. One mechanism that could explain the growth of A. succinogenes in the absence of biotin despite the absence of a full set of biotin biosynthetic genes is that A. succinogenes might be able to use thiamine as a precursor for biotin synthesis, as has been observed with the fungus, Humicola, strain 16-1 [43]. The A. succinogenes gene encoding a putative biotin synthase (Asuc_1132) seems to be co-transcribed with the thiamine ABC transporter genes (Asuc_1229-31).
Similar to A. succinogenes, M. succiniciproducens is auxotrophic for Cys, Met, nicotinic acid, pantothenate, pyridoxine, and thiamine, and the genetic bases underlying these auxotrophies are the same as those identified in A. succinogenes [42]. A. succinogenes and M. succiniciproducens differ in one respect, though. M. succiniciproducens is not auxotrophic for Glu since, unlike A. succinogenes, it has genes encoding a citrate synthase (MS2371) and an isocitrate dehydrogenase (MS2370). Although the two succinogens have an incomplete assimilatory sulfate reduction pathway, they, along with various A. pleuropneumoniae strains and Actinobacillus minor NM305, have the most complete assimilatory sulfate reduction pathway among all other sequenced Pasteurellaceae. The inability of A. succinogenes and M. succiniciproducens to carry out assimilatory sulfate reduction is likely an adaptation to their natural environment. The rumen flora produces hydrogen sulfide [44,45]. Both succinogens encode a serine acetyltransferase (Asuc_0384 and MS2212) and a cysteine synthetase (Asuc_2108 and MS1770) that may allow them to synthesize L-Cys from H 2 S produced in the rumen.
The insights into A. succinogenes's auxotrophies obtained by genome analyses have allowed us to modify our original defined medium. Whereas Glu and Met are two of the least expensive amino acids (~$1/kg), cysteine is more expensive (> $10/kg) [46]. Using inexpensive inorganic reduced sulfur compounds (such as thiosulfate in place of cysteine) and eliminating several nonessential vitamins are expected to significantly reduce the cost of defined growth medium.
Dicarboxylic acid transporters
Because A. succinogenes produces the dicarboxylate succinate, it is interesting to note that it encodes twelve possible anaerobic dicarboxylate transporters (additional file 2: Table S5). Nine of them are similar to the tripartite ATP-independent periplasmic transporters (T.C. 2.A.56) encoded by dctPQM [47]. These transporters have been characterized in Rhodobacter capsulatus and Wolinella succinogenes for their roles during fumarate respiration, where fumarate is transported by proton symport [47,48]. The other three anaerobic dicarboxylate transporters are related to DcuA, B, and C (T.C. 2.A.13). These transporters have been characterized during fumarate respiration by E. coli and W. succinogenes [48][49][50]. They operate by exchanging an intracellular dicarboxylate (e.g., succinate) for an extracellular dicarboxylate (e.g., fumarate, malate, or aspartate). DcuA and B may also transport Na+ in symport with the dicarboxylates to avoid dissipating the proton motive force [48]. DcuC may have preferential succinate efflux activity, since a dcuC-deficient E. coli strain has increased dicarboxylate exchange and fumarate uptake activities [51]. During E. coli mixed acid fermentation, glucose did not repress dcuC expression, suggesting that DcuC plays a role in succinate excretion [51]. A microarray study examining changes in H. influenzae gene expression during competency induction suggested that DcuA and B are important for Pasteurellaceae fermentation or fumarate respiration [31]. In that study, 151 genes showed a > 4-fold increase in transcript levels as H. influenzae became competent, including transcripts for the C4 pathway enzymes aspartate ammonia-lyase, malate dehydrogenase, fumarase, fumarate reductase, and a single dicarboxylate transporter. This H. influenzae transporter is 84% and 44% identical to A. succinogenes's putative DcuB-type transporters, Asuc_0142 and 1999, respectively. A. succinogenes dcuA, B, and C are therefore candidate genes to investigate the importance of dicarboxylate transport during fumarate respiration and succinate fermentation.
A. succinogenes was recently shown to grow on glycerol as its sole carbon source with dimethyl sulfoxide (DMSO) or nitrate as the terminal electron acceptor (Schindler and Vieille, manuscript in preparation). Accordingly, the A. succinogenes genome encodes a glycerol uptake facilitator (Asuc_1603), a glycerol kinase (Asuc_1604), and an anaerobic glycerol-3-phosphate dehydrogenase (Asuc_0205-3). It also encodes a DMSO reductase (Asuc_1524-1521), a periplasmic nitrate reductase (NapFDAGHBC, Asuc_2040-35) and a periplasmic nitrite reductase (NrfABCDEFG, Asuc_0704-11). In contrast, M. succiniciproducens contains only a truncated homolog of Asuc_1521 and lacks the other ORFs for the DMSO reductase complex.
The ORFs putatively encoding sugar transport and degradation pathways encompass all the sugars A. succinogenes is known to use, except arabitol [1,6]. Homologs of the E. coli arabitol transporter and arabitol dehydrogenase were not found in the A. succinogenes genome. The genome also encodes transporters and degradation pathways for carbon sources A. succinogenes was not known to metabolize (e.g., ascorbate, pectin, glucarate, and galactarate). In the conditions tested, though, we were able to detect growth on galactarate and ascorbate only. Growth on ascorbate was surprising because the ascorbate PTS transporter, encoded by Asuc_0235-40, appears to be missing a component. It is not clear whether this missing component is encoded elsewhere or whether another PTS system shares some specificity for ascorbate. The absence of growth on pectin was surprising, since A. succinogenes contains three separate operons encoding a full pectin degradation pathway (Asuc_0145-58, 0366-74, and 1467-75). Three of A. succinogenes's ten possible tripartite ATP-independent periplasmic transporters (Asuc_0146-48, 0156-58, and 0366-68) are found in these operons. Several other operons appear to have a role in sugar transport and degradation, but the identity of the sugars is unknown. For example, one of the four proteins that show similarity to an idonate, a gluconate, or a 5-ketogluconate transporter is in an operon that encodes enzymes to degrade an unidentified sugar (Asuc_0119-30). Also, Asuc_0585-8 encodes a fructose-like PTS system and a protein that might be a sugar kinase.
Explanation for lack of pathogenicity
Pasteurellaceae naturally live in association with a host [52]. The host is usually mammalian, with a few exceptions [7], such as P. multocida colonizing birds [52] and possibly even amoebas [53]. Most Pasteurellaceae can be isolated from healthy hosts and are considered part of the normal flora. However, in circumstances such as host stress, many Pasteurellaceae cause disease and are considered opportunistic pathogens. Most Pasteurellaceae are isolated from the respiratory tract and cause pulmonary diseases [52]. Others have been isolated from the oral cavity (e.g., A. actinomycetemcomitans, which causes periodontitis), the genital tract (e.g., H. ducreyi, which causes sexually transmitted chancroid), and the bovine rumen (e.g., A. lignieresii, which causes wooden tongue) [7]. Virulence is an undesirable trait for an industrial organism, and the relatedness of A. succinogenes and M. succiniciproducens to several pathogens cannot be ignored. With no reports of disease caused by the succinogens, their genome sequences are a convenient and logical starting point for assessing their pathogenic potential.
Many Pasteurellaceae virulence factors have been characterized. We manually compiled a list of 341 Pasteurellaceae virulence proteins (with some functional redundancy), including and expanding on those compiled by Challacombe and Inzana [54]. We then aligned their sequences against the A. succinogenes and M. succiniciproducens protein databases (additional file 3: Table S6). Our comparison focuses on gene products having functions in toxin production, synthesis of cell surface structures, and iron uptake (additional file 3: Table S6). We excluded from the comparison virulence factors in the categories of amino acid transporters, purine and pyrimidine biosynthetic enzymes, and enzymes for anaerobic metabolism. While these metabolic activities may affect host health [55][56][57], they are also necessary for nonvirulent processes. The major findings from our alignments are summarized in this section with more details available in additional file 4: Supplementary text. Raw data from the alignments are reported in additional file 3: Table S6.
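For such proteome-wide alignments, hits are conveniently binned by percent identity. The sketch below applies the homolog-calling convention quoted for the competency proteins in additional file 2: Table S2 (putative 60-75%, probable 40-59%, possible 25-39%, NA below 25% identity or below 25% query coverage); whether Table S6 used exactly the same bins is an assumption, as is treating hits above 75% identity as putative.

```python
# Minimal sketch of the homolog-calling convention described for Table S2.
# Inputs would come from a tabular BLAST report; the example call is fabricated.
def classify_homolog(pct_identity: float, aln_len: int, query_len: int) -> str:
    """Bin a BLASTP hit by identity, requiring >=25% query coverage.
    Hits above 75% identity are treated as putative here (an assumption;
    the published bins stop at 75%)."""
    if aln_len < 0.25 * query_len or pct_identity < 25:
        return "NA"      # insufficient alignment length or identity
    if pct_identity >= 60:
        return "putative"
    if pct_identity >= 40:
        return "probable"
    return "possible"

print(classify_homolog(47.0, 300, 450))  # -> "probable"
```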
Cell surface structures
Cell surface virulence factors used by pathogenic Pasteurellaceae include pili, adhesins, lipopolysaccharide (LPS), and capsules. Adherence to respiratory epithelial cells is the first stage of colonization by respiratory Pasteurellaceae pathogens, and it involves a number of cell surface mechanisms [64]. Both succinogens have possible homologs to OapA and B and all components of a type IV pilus (pilABCD), which is involved in host surface binding in H. influenzae [65,66]. Because type IV pili are also part of the H. influenzae competence regulon [31], and because A. succinogenes was recently shown to be naturally competent (Joshi et al., manuscript in preparation), type IV pili might not be related to virulence in the succinogens. M. succiniciproducens has probable homologs of the A. actinomycetemcomitans pili needed for tight adherence (flp and tad loci) [67], whereas A. succinogenes does not. Both succinogens have several large ORFs that could encode HMW adhesins, including gene clusters that may be involved in hemagglutinin production (Asuc_1006-12 and MS1162-9). However, Asuc_1006 and 1008 have internal frameshifts. It is currently not known whether either succinogen makes an adhesin. It is also possible that the succinogens use this feature for survival in a competitive rumen environment, rather than to cause disease.
Nontypeable H. influenzae strains are able to evade host immune defenses by incorporating host sialic acid and choline into their LPS, thereby mimicking host cell surfaces [68,69]. The succinogens' genome sequences contain many genes involved in LPS synthesis and modification but key genes for choline and sialic acid incorporation are not present. None of these genes contain variable number tandem repeats, suggesting that the succinogens are not capable of LPS phase variation.
A. succinogenes has several LPS glycosyltransferases not found in M. succiniciproducens (e.g., Asuc_0524, 1375), suggesting that its LPS could be more complex. The two succinogens' LPSs might also differ in sugar composition. A. succinogenes is one of only four Pasteurellaceae that encodes the L-rhamnose synthesis pathway. L-rhamnose is a common component of the LPS O-antigen [70,71], and the A. succinogenes L-rhamnose biosynthetic pathway (Asuc_0826-32) is encoded just downstream of the LPS biosynthesis genes (Asuc_0821-24). Because LPS O-antigens are mostly studied in pathogenic bacteria, it is unclear how often non-pathogenic bacteria contain rhamnose in their LPS. For this reason, the possible presence of rhamnose in A. succinogenes LPS is by no means indicative of a virulence trait. In contrast, M. succiniciproducens encodes proteins likely involved in L-rhamnose transport (RhaT, MS2326) and catabolism (RhaBAD, MS2327-29), as well as the L-rhamnose-dependent regulators RhaS (MS2322) and RhaR (MS2323). None of these genes are found in A. succinogenes. Thus, the two succinogens have evolved completely different L-rhamnose pathways: a biosynthetic one in A. succinogenes and a catabolic one in M. succiniciproducens.
P. multocida, A. pleuropneumoniae, M. haemolytica, and typeable H. influenzae produce a capsule that is important for virulence [72][73][74][75][76]. Both succinogens have possible homologs to, at most, two of the four capsule biosynthesis and export proteins, suggesting that they are not capsulated bacteria. However, non-typeable H. influenzae strains are non-capsulated yet still virulent.
Iron uptake mechanisms
Iron acquisition is a common trait in most bacteria, making it difficult to associate iron uptake with virulence. Nonetheless, some insight can be gained from the form of iron transported. Some pathogenic Pasteurellaceae can use the mammalian iron sources transferrin and hemoglobin [72,77], but the succinogens have possible homologs to only a few of the proteins required for transferrin, heme/hemopexin, or hemoglobin uptake. A. succinogenes does not have homologs to the hemin receptor HemR or to the heme utilization protein Hup [66], while M. succiniciproducens has possible homologs of each. Still, BLAST searches suggest that both succinogens can assimilate other forms of iron, including iron bound by various siderophores, and that they contain the heme biosynthetic pathway from L-glutamate [78] (additional file 3: Table S6). In both succinogens the potential hemagglutinin production system mentioned above is encoded alongside genes involved in ferrous iron transport, including feoAB. FeoA and B are not encoded in any sequenced Pasteurellaceae other than the succinogens and A. minor NM305, but in more distantly related bacteria they have been implicated in virulence and in colonization of mammalian intestines [79,80]. This genetic region may be a worthwhile target for deletion, provided it does not contain essential genes for growth and succinate production.
Other virulence proteins
Both succinogens have a putative homolog to the inner membrane protein, ImpA, involved in autoaggregation [59]. Some Pasteurellaceae have a urease, which is a known virulence factor of gastroduodenal and urinary tract pathogens [81], but the succinogens have no urease homologs, and A. succinogenes tested negative for urease activity [1]. There is also no homolog in either succinogen genome sequence to the H. influenzae Iga protease, which cleaves immunoglobulin A1, helping H. influenzae avoid host defenses at mucosal surfaces [82,83].
We want to stress that nonpathogenicity cannot be concluded from the analysis of a genome sequence. Most Pasteurellaceae species cause respiratory diseases. The virulence factors associated with a hypothetical rumen succinogen-caused disease would likely be different. For example, the FeoAB iron uptake system, which is important for the virulence of some intestinal pathogens, is found only in the succinogens and in A. minor NM305 (part of the normal pig respiratory tract flora) among the thirty-two partially and fully sequenced Pasteurellaceae. This system, though, could also be important for a commensal relationship with the mammalian host.
Conclusions
Sequencing of the A. succinogenes genome confirms many of our earlier results based on growth experiments, enzyme assays, and metabolic flux studies [6,9,34,35]. For example, A. succinogenes lacks a complete TCA cycle as well as a glyoxylate pathway, and PEP carboxykinase is the only PEP-carboxylating enzyme in this organism. The genes missing in the glutamate, cysteine, and methionine biosynthetic pathways represent possible positive markers that can be used in genetic engineering strategies. The fact that the A. succinogenes genome encodes uptake and degradation pathways for a variety of carbohydrates reflects the variety of carbohydrate substrates available in the rumen, A. succinogenes's natural habitat. It also suggests that many different carbon sources can be used as feedstock for succinate production by A. succinogenes. The abundance of USS repeats in A. succinogenes and the possible presence of the necessary machinery for natural competence suggested that A. succinogenes is naturally competent, a feature that was recently demonstrated in our laboratory. It is encouraging that the succinogens' genome sequences lack a considerable number of the virulence genes used by their relatives, and that there are no reports of disease caused by A. succinogenes or M. succiniciproducens. The lack of pathogenicity of these two succinogens is an exciting prospect not just for industrial purposes, but because comparisons with pathogenic Pasteurellaceae could lead to a better understanding of Pasteurellaceae virulence.
Additional material
Additional file 1: Figures S1 to S6. Figure S1: Phylogenetic tree of representative Pasteurellaceae with complete genomes based on 16S rRNA sequences. 16S rRNA phylogeny was determined using the Michigan State University Ribosomal Database Project tools [19]. Figure S2: Hierarchical clusterings of Pasteurellaceae species according to COG, Pfam, Enzyme, and TIGRfam classifications. Hierarchical clustering of Pasteurellaceae genomes was done according to COG, Pfam, Enzyme, and TIGRfam functional profiles at the JGI's Integrated Microbial Genomes website [21]. The four functional profile clustering approaches place the two succinogens in a clade separate from other Pasteurellaceae. Figure S3: NUCmer and PROmer alignments of A. succinogenes and M. succiniciproducens, P. multocida, and A. pleuropneumoniae L20. Synteny plots of the whole-genome alignments of A. succinogenes and M. succiniciproducens, A. succinogenes and P. multocida, and A. succinogenes and A. pleuropneumoniae L20 at the nucleotide level (NUCmer) and at the protein level (PROmer). Alignments were performed using the MUMmer software package [15]. These plots give overviews of the rearrangements that have taken place at the genome level between two bacterial species. Red lines from the bottom left to upper right indicate conservation of nucleotide (NUCmer) or protein (PROmer) sequence, reading in the same direction in both species. Blue lines from upper left to lower right indicate sequence conservation but with sequence inversion between the two species. NUCmer and PROmer comparisons of A. succinogenes with H. influenzae KW20, H. influenzae 028NP, H. somnus, H. ducreyi, and A. pleuropneumoniae JL03 were also performed, but are not shown in this Figure. The NUCmer plots show little to no conservation of genome structure at the nucleotide level between A. succinogenes and any other Pasteurellaceae. PROmer plots reveal that A. succinogenes and M. succiniciproducens are more related to each other than to other Pasteurellaceae. The PROmer plot of A. succinogenes vs. M. succiniciproducens shows that drastic changes in genome structure have occurred as A. succinogenes and M. succiniciproducens evolved divergently from their last common ancestor, indicating that the two succinogens are more distantly related than their functional traits would suggest. Figure S4: Comparison of nucleotide frequencies in Pasteurellaceae uptake signal sequences. Figure S4 shows nucleotide frequencies in the USSs of six representative Pasteurellaceae species containing either USS1 (A. succinogenes, M. succiniciproducens, A. aphrophilus NJ8700, and H. somni 129PT) or USS2 (A. pleuropneumoniae L20 and H. ducreyi 3500HP). USS 9-mer cores were counted and their surrounding sequences reported using a Perl script. The output was pasted into a Microsoft Excel spreadsheet to calculate the frequency of each nucleotide occurring at each position, upstream and downstream of the USS core. Nucleotide frequencies in the USSs of sixteen more Pasteurellaceae species containing USS1 (H. influenzae Rd KW20, 028NP, PittEE, PittAA, PittGG, PittHH, PittII, 22.1-21, 22.4-21, 3655, R2846, 2866, and R3021; P. multocida; A. actinomycetemcomitans; and H. somni 2336) and four more Pasteurellaceae species containing USS2 (A. pleuropneumoniae JL03 and 4074, M. haemolytica PHL213, and H. parasuis 29775) were also calculated, but are not shown here. These data are available upon request.
Figure S5: A. succinogenes has incomplete pathways for assimilatory sulfate reduction and methionine synthesis. Four-digit numbers are Asuc_ORF (locus tag) numbers and are followed by E.C. numbers. Hyphenated locus tag numbers indicate that the enzyme is encoded by several successive genes. Reaction names: see additional file 2: Table S4. XH, reduced thioredoxin; X+, oxidized thioredoxin. Arrow and number colors: black, product function assumed; blue, probable function assumed; red, possible function assumed. Bold arrows indicate central metabolic pathways. Dotted arrows indicate that A. succinogenes is missing the gene for that function. Figure S6: A. succinogenes has incomplete pathways for biotin, nicotinic acid, pantothenic acid, and pyridoxine synthesis. Four-digit numbers are Asuc_ORF (locus tag) numbers and are followed by E.C. numbers. Hyphenated locus tag numbers indicate that the enzyme is encoded by several successive genes. Reaction names: see additional file 2: Table S4. Arrow and number colors: black, product function assumed; green, putative function assumed; blue, probable function assumed; red, possible function assumed. Bold arrows indicate central metabolic pathways. Gray dotted arrows indicate that A. succinogenes is missing the gene for that function. Metabolites: Alac, 2-acetolactate; AON, 8-amino-7-oxononanoate; APP, 3-amino-2-oxopropyl phosphate; CoA, coenzyme A; Dbio, dethiobiotin; DCoA, dephospho-CoA; DhP, 2-dehydropantoate; DMB, 2,3-dihydroxy-3-methylbutanoate; dNAD+, deamido-NAD+; DON, 7,8-diaminononanoate; DXP, 1-deoxyxylulose-5-phosphate; Er4P, erythronate-4-phosphate; HPB, 2-oxo-3-hydroxy-4-phosphobutanoate; IAsp, iminoaspartate; MOB, 3-methyl-2-oxobutanoate; NRS, nicotinate ribonucleoside; NRT, nicotinate ribonucleotide; Pan, pantoate; PCA, pimeloyl-CoA; PHT, O-phospho-4-hydroxythreonine; Pim, pimelate; PNP, pyridoxine phosphate; Ppc, 4'-phosphopantothenoyl-cysteine; Ppt, 4'-phosphopantothenate; Ppth, 4'-phosphopantetheine; QNL, quinolinate. Other abbreviations are as in Figure 2.
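The per-position tally behind Figure S4 (done in the original work with a Perl script and Excel) can be reproduced in a few lines. The sketch below is a minimal stand-in: it assumes fixed-width flanking windows have already been extracted around each USS core, and the toy sequences are fabricated.

```python
# Python stand-in for the Perl + Excel procedure described for Figure S4:
# tally nucleotide frequencies at each position across flanking windows.
from collections import Counter

def position_frequencies(flanks: list[str]) -> list[dict[str, float]]:
    """Return, for each column, the fraction of A/C/G/T observed."""
    n = len(flanks)
    freqs = []
    for col in zip(*flanks):  # iterate over alignment columns
        counts = Counter(col)
        freqs.append({base: counts.get(base, 0) / n for base in "ACGT"})
    return freqs

flanks = ["CATTG", "CTTAA", "CATTT"]  # toy 5-nt downstream windows
for i, f in enumerate(position_frequencies(flanks), start=1):
    print(i, {b: round(v, 2) for b, v in f.items()})
```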
Additional file 2: Tables S1 to S5. Table S1: A. succinogenes ORFs encoding sugar transporters and degradation pathways. Table S1 lists all the A. succinogenes transporters, enzymes, and regulatory proteins potentially involved in sugar transport and assimilation, based on our manual annotation of the genome. Annotation criteria are described in the materials and methods section. The ORFs putatively encoding sugar transport and degradation pathways encompass all the sugars A. succinogenes is known to use, except arabitol. The A. succinogenes genome also encodes transporters and degradation pathways for carbon sources A. succinogenes does not metabolize (e.g., pectin). Table S2: A. succinogenes homologs of H. influenzae competency proteins. List of the H. influenzae competency genes and their A. succinogenes homologs, with the likelihood that the A. succinogenes homologs have the same function. A. succinogenes homologs are considered putative if they share 60-75% amino acid identity with the query sequence, probable if they share 40-59% amino acid identity with the query sequence, and possible if they share 25-39% amino acid identity with the query sequence. NA indicates that no suitable homolog was identified in A. succinogenes either due to insufficient alignment length (less than 25% of the query sequence length) or to no hits retrieved from the BLAST search. Table S3: A. succinogenes ORFs encoding central metabolic enzymes. List of A. succinogenes genes encoding enzymes of central metabolism with their locus names and EC numbers. Enzyme names are based on our manual annotation of the genome, using the criteria described in the materials and methods section. Table S4: Partial biosynthetic pathways present in A. succinogenes for amino acids and vitamins required for growth. Cysteine, glutamate, methionine, biotin, nicotinic acid, pantothenate, and pyridoxine are required for A. succinogenes's growth on defined medium. Table S4 lists the components of the cysteine, methionine, biotin, nicotinic acid, pantothenate, and pyridoxine biosynthetic pathways that are present in A. succinogenes. Enzyme names are based on our manual annotation of the genome, using the criteria described in the materials and methods section. This list confirms that A. succinogenes contains an incomplete assimilatory sulfate reduction pathway, but that it is able to synthesize cysteine from sulfide or thiosulfate. It also suggests that A. succinogenes is unable to synthesize
"year": 2010,
"sha1": "40581261d54e2a845b5a826c848762a5acc1ccb3",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-11-680",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "40581261d54e2a845b5a826c848762a5acc1ccb3",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Adverse effects of AMP-activated protein kinase α2-subunit deletion and high-fat diet on heart function and ischemic tolerance in aged female mice
AMP-activated protein kinase (AMPK) plays a role in metabolic regulation under stress conditions, and inadequate AMPK signaling may also be involved in the aging process. The aim was to find out whether AMPK α2-subunit deletion affects heart function and ischemic tolerance of adult and aged mice. AMPK α2-/- (KO) and wild type (WT) female mice were compared at the age of 6 and 18 months. KO mice exhibited negligible myocardial AMPK α2-subunit protein levels, but no difference in the AMPK α1-subunit was detected between the strains. Both α1- and α2-subunits of AMPK and their phosphorylation decreased with advanced age. Left ventricular fractional shortening was lower in KO than in WT mice of both age groups and this difference was maintained after high-fat feeding. Infarct size induced by global ischemia/reperfusion of isolated hearts was similar in both strains at 6 months of age. Aged WT but not KO mice exhibited improved ischemic tolerance compared with the younger group. High-fat feeding for 6 months during aging abolished the infarct size reduction in WT without affecting KO animals; nevertheless, the extent of injury remained larger in KO mice. The results demonstrate that adverse effects of AMPK α2-subunit deletion and high-fat feeding on heart function and myocardial ischemic tolerance in aged female mice are not additive.
Introduction
AMP-activated protein kinase (AMPK) is a heterotrimeric serine/threonine kinase expressed in most mammalian tissues including myocardium. It acts as a cellular fuel gauge in response to a depletion of ATP levels (Hardie 2003) and its activation is essential for the control of whole body energy homeostasis during physiological and pathological stresses such as exercise, pressure overload, nutritional deprivation, hypoxia or ischemia. Once activated, AMPK phosphorylates a number of target proteins resulting in a stimulation of ATP-producing processes and an inhibition of energy-consuming biosynthetic pathways. Increased glucose uptake, glycogenolysis and glycolysis as well as increased fatty acid transport and oxidation are the main acute metabolic actions of AMPK aiming at a restoration of cellular energy balance (for review see Hardie and Carling 1997; Steinberg and Kemp 2009; Wang et al. 2012; Zaha and Young 2012). In addition, AMPK inhibits protein synthesis, stimulates protein degradation and promotes autophagy in line with its role in providing fuel during energy deprivation (Zaha and Young 2012).
Rapid activation of AMPK during myocardial ischemia (Kudo et al. 1995; Folmes et al. 2009) may help to preserve cardiac function and viability by stimulating glycolytic ATP production. On the other hand, the AMPK-dependent stimulation of fatty acid oxidation at reperfusion occurs at the expense of glucose oxidation with potentially harmful consequences due to acidosis (Liu et al. 2002; Dyck and Lopaschuk 2006). Indeed, a number of studies, though not all, have demonstrated beneficial effects of AMPK against various manifestations of acute ischemia/reperfusion (I/R) injury (for review see Zaha and Young 2012) and this issue is still a matter of debate.
Cardiovascular aging and senescence are associated with complex alterations at the molecular level resulting in unfavorable myocardial biochemical and structural remodeling and eventually in impaired cardiac contractility and pump function (Lakatta and Sollott 2002; Ferrari et al. 2003). It has been repeatedly demonstrated that aged hearts are more susceptible to I/R injury, and their endogenous protective mechanisms activated by various forms of pre- and postconditioning are attenuated or lost (for review see Boengler et al. 2009). The cause is obviously multifactorial and still poorly understood (Ashton et al. 2006).
AMPK controls various signaling pathways involved in the aging process (Salminen and Kaarniranta 2012) and its chronic pharmacological activation has been proposed as a strategy for delaying aging and extending the lifespan (McCarty 2004). Senescent mice exhibited a significant reduction in both AMPK α1 and α2 isoform activities in left ventricular myocardium (Turdi et al. 2010) and the stimulation of AMPK α2 activity was blunted in skeletal muscle of old rats (Reznick et al. 2007). It has been shown that AMPK deficiency exacerbated cardiac contractile dysfunction in senescent mice (Turdi et al. 2010). In addition, AMPK has been implicated in the mechanism of the pronounced protective effect of caloric restriction against myocardial I/R injury in aged mice (Edwards et al. 2010). On the other hand, Gonzales et al. (2004) suggested that the age-associated decline in myocardial hypoxic tolerance is caused by neither changes in AMPK activity nor a blunted AMPK response to hypoxia.
The purpose of the present study was to find out whether AMPK α2-subunit deletion would affect heart function and ischemic tolerance of adult and aged mice. As high circulating levels of fatty acids can contribute to myocardial I/R injury (Lopaschuk et al. 2007) and the AMPK α2-subunit plays an important role in fatty acid uptake (Abbott et al. 2012) and in the prevention of metabolic disorders induced by high-fat (HF) feeding (Fujii et al. 2008), we also assessed functional changes and the extent of I/R injury in hearts of mice fed HF diet for 6 months at advanced age. We hypothesized that deletion of the AMPK α2-subunit, which is the predominant AMPK α-subunit expressed in mouse hearts (Li et al. 2006), would impair heart function and ischemic tolerance of aged mice and that these effects would be further exacerbated by HF diet.
Mice were housed in a controlled environment (21°C; 12-h light-dark cycle) with free access to water and standard chow diet (extruded Ssniff R/M-H diet; Ssniff Spezialdieten GmbH, Soest, Germany). Some mice were randomly assigned to a corn oil-based HF diet from the 12th to the 18th month of age. Composition of the diets is given in Table 1 (for further details, see Kuda et al. 2009). All mice were used in the ad libitum fed state. The study was conducted in accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication no. 85-23, revised 1996). The experimental protocols were approved by the Animal Care and Use Committee of the Institute of Physiology of the Czech Academy of Sciences.
Quantification of AMPK
Mice were killed by cervical dislocation, hearts were dissected and frozen in liquid nitrogen. The heart lysates were prepared by homogenization in liquid nitrogen. The total contents of the catalytic α1- and α2-subunits of AMPK and the phosphorylated form of AMPK were determined by Western blotting as described previously (Kůs et al. 2008; Matějková et al. 2004).
Echocardiography
The echocardiographic evaluation of the geometrical and functional parameters of the LV was performed using the GE Vivid 7 Dimension (GE Vingmed Ultrasound, Horten, Norway) with a 12 MHz linear matrix probe M12L. The animals were anesthetized by the inhalation of 2% isoflurane (Aerrane, Baxter SA) and their rectal temperature was maintained within 36.5 and 37.5°C by a heated table throughout the measurements. For the baseline evaluation, the following diastolic and systolic dimensions of the LV were measured: the posterior wall thickness (PWTd and PWTs), anterior wall thickness (AWTd and AWTs), and the cavity diameter (LVDd and LVDs). From these dimensions, the main functional parameter, fractional shortening (FS), was derived by the following formula: FS [%] = 100 × (LVDd - LVDs) / LVDd.
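A worked example of the fractional-shortening formula above; the dimensions are illustrative values, not measurements from this study.

```python
# Fractional shortening from LV diastolic and systolic cavity diameters.
def fractional_shortening(lvdd_mm: float, lvds_mm: float) -> float:
    """FS [%] = 100 * (LVDd - LVDs) / LVDd."""
    return 100.0 * (lvdd_mm - lvds_mm) / lvdd_mm

print(f"FS = {fractional_shortening(4.0, 2.8):.1f} %")  # -> FS = 30.0 %
```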
Coronary flow was measured by timed collection of coronary effluent and normalized to heart weight. After 20 min of stabilization, the spontaneously beating hearts were subjected to 45 min of global no-flow normothermic ischemia and 60 min of reperfusion.
Infarct size determination
A 2 ml bolus of 1% 2,3,5-triphenyltetrazolium chloride (TTC) was injected through the aorta, followed by incubation of the heart in TTC for 20 min at 25°C and fixation overnight in 10% neutral formaldehyde solution. After the separation of the right ventricle (RV), the left ventricle (LV, including the septum) was cut perpendicularly to the long axis into 0.5 mm thick slices. The infarct size (TTC-negative) and the size of the LV were determined from photographs by a computerized planimetric method using the software Ellipse (ViDiTo, Slovakia). The infarct size was normalized to the size of the LV.
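The normalization step can be illustrated with a short sketch: per-slice infarct and LV areas (as produced by planimetry software) are summed over slices before taking the ratio. All numbers below are fabricated for illustration.

```python
# Sum TTC-negative (infarct) and total LV areas over all slices, then express
# infarct size as a percentage of the LV. Areas (e.g., mm^2) are illustrative.
slices = [  # (infarct_area, lv_area) per 0.5 mm slice
    (1.2, 10.5), (2.0, 11.0), (1.5, 10.8), (0.6, 9.9),
]
infarct_total = sum(i for i, _ in slices)
lv_total = sum(lv for _, lv in slices)
print(f"Infarct size: {100 * infarct_total / lv_total:.1f} % of LV")
```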
Statistical analysis
Analyses were performed using GraphPad Prism software (version 2005; GraphPad Inc., San Diego, CA). A two-way ANOVA (with genotype and experimental conditions as categories) was carried out to determine significant interactions, followed by a Tukey's post-hoc multiple-comparisons test to examine differences between groups. All values are expressed as means ± SEM with p < 0.05 considered as statistically significant.
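A minimal sketch of the described analysis in Python with statsmodels, assuming data arranged in long format; the data frame values are fabricated, and statsmodels is a stand-in for the GraphPad Prism workflow actually used.

```python
# Two-way ANOVA (genotype x condition, with interaction) followed by Tukey's
# post-hoc multiple comparisons. All values below are fabricated.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "fs":        [38, 36, 30, 29, 35, 33, 27, 26],  # e.g., fractional shortening
    "genotype":  ["WT", "WT", "KO", "KO"] * 2,
    "condition": ["STD"] * 4 + ["HF"] * 4,
})

model = ols("fs ~ C(genotype) * C(condition)", data=df).fit()
print(anova_lm(model, typ=2))            # main effects plus interaction term

df["group"] = df["genotype"] + "_" + df["condition"]
print(pairwise_tukeyhsd(df["fs"], df["group"], alpha=0.05))
```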
Basic characteristics
Body weight was significantly higher in aged mice than in adult ones and it was further increased by HF diet-feeding without any effect of the genotype. Heart weight was also higher in mice kept on HF diet and the increase was more pronounced in the KO group.
However, no difference among groups was observed in relative heart weight. Aging was associated with a significant decrease in hematocrit level regardless of the genotype or diet (Table 2).
Protein expression and phosphorylation of AMPK
Western blot analysis of AMPK in the hearts from standard diet-fed mice (Fig. 1A) revealed an age-dependent decrease in the levels of both the α1-subunit (Fig. 1B) and the α2-subunit (Fig. 1C). Whereas the level of the AMPK α1-subunit in KO mice was comparable to that of WT mice (Fig. 1B), a negligible amount of the α2-subunit was present in the KO mice, independent of age (Fig. 1C). Phosphorylated AMPK levels were markedly reduced in KO compared with WT mice and they were also decreased during aging in both genotypes (Fig. 1D).
Heart function
Echocardiography was used to assess effects of age, diet and AMPK deletion in separate groups of mice. LVDs significantly increased in response to AMPK deletion in all groups. Both LVDd and LVDs were significantly larger in the HF diet-fed as compared with the standard diet-fed aged mice, but this effect reached statistical significance only in the KO animals. Wall thickness measurements did not show any significant differences among groups, except for a slight decrease in PWTs in aged KO compared to WT mice fed standard diet (Table 3).
FS was lower in all groups of KO mice compared to corresponding WT mice and it was not significantly affected by aging. Feeding HF diet decreased this index of LV systolic function in WT animals without having a significant effect in KO mice, despite the fact that 3 animals out of 13 in this latter group exhibited a marked drop of FS to around 20%. Only a combination of aging and HF diet resulted in a significantly (p = 0.047) decreased FS in KO mice (Fig. 2).
Coronary flow and infarct size
Baseline preischemic coronary flow normalized to heart weight was comparable among groups regardless of the age, diet or genotype, except for slightly but significantly higher values in aged WT hearts compared to adult ones. Coronary flow at the end of reperfusion was lower compared with preischemic values in all groups, but the difference was least pronounced in the aged WT group. AMPK α2-subunit deletion negatively affected the flow recovery in both age groups kept on standard diet. The HF diet-feeding tended to decrease the flow at reperfusion, but this effect reached statistical significance in the WT hearts only (Table 4).
Infarct size was similar in both strains at the age of 6 months. Surprisingly, aging resulted in a significant infarct size-sparing effect in WT mice that was absent in animals with AMPK α2-subunit deletion. The HF diet abolished the age-associated improvement of myocardial ischemic tolerance in WT mice without significantly affecting KO mice.
Nevertheless, the extent of injury was larger in the latter group compared to WT animals (Fig. 3).
Discussion
The results of the present study provide further evidence for the important role of the AMPK α2-subunit in the regulation of processes associated with heart aging. Aged female mice exhibited decreased myocardial levels of both AMPK α1- and α2-subunits and of AMPK phosphorylation compared with adult littermates, the latter effect being more pronounced in KO animals. The major finding is that AMPK α2-subunit deletion and HF feeding significantly impaired both cardiac contractile function and tolerance to acute I/R injury in aged mice, but the negative effects of these two interventions were not additive to each other.
Increasing evidence suggests that AMPK activity can slow down the aging process and extend the lifespan. AMPK is involved in a complex network of signaling pathways that control a number of cellular events helping to maintain energy balance under various stress conditions, and the loss of AMPK responsiveness may contribute to age-related metabolic disturbances (Salminen and Kaarniranta 2012). However, reports concerning changes of AMPK expression and activity during aging and senescence are rather controversial. For example, Reznick et al. (2007) observed the loss of AMPK activation in skeletal muscle by AICAR or exercise in aged rats without any change in the expression of the AMPK α1- and α2-subunits. Similarly, aging impaired phosphorylation of the AMPK α-subunit in rat skeletal muscle but not the expression of either the α1- or α2-subunit (Qiang et al. 2007). In contrast, an increased AMPK activity with aging was observed in cultured human fibroblasts (Wang et al. 2003). Whereas the basal activity of the AMPK α1-, but not the α2-subunit, was higher in livers from old mice compared to young animals, hypoxia-induced activation was blunted with aging (Mulligan et al. 2005). Concerning the heart, neither the basal activity of the AMPK α1- and α2-subunits, nor their stimulation by AMP, was affected by age in mice (Gonzales et al. 2004), and no effect of aging on AMPK phosphorylation was found in human atrial tissue (Niemann et al. 2013). On the other hand, recent reports showed that aging or senescence did not affect murine myocardial AMPK expression, but it decreased its phosphorylation and activity as well as the specific activities of both the AMPK α1- and AMPK α2-subunits (Turdi et al. 2010; Aurich et al. 2013). In the present study, we found significant decreases in protein levels of both α-subunit isoforms and of phosphorylated AMPK, indicating its reduced basal activity in the hearts of aged mice, which was further attenuated in animals with α2-subunit deletion.
Although the reason for these differences is unclear, our data are in line with the view that AMPK function is likely compromised in aged hearts.
Despite the fact that AMPK is highly expressed in the myocardium, its role in the pathogenesis of heart dysfunction associated with aging has not been fully understood. Our echocardiographic data clearly show that the left ventricular systolic function in both adult and aged KO mice was lower compared to WT animals, as indicated by a decreased fractional shortening. This observation is in agreement with the study of Turdi et al. (2010), who demonstrated that the impairment of calcium handling and contractility of myocytes isolated from aged murine hearts was more pronounced in transgenic animals overexpressing a dominant negative AMPK α2 subunit (kinase dead). Moreover, aging-induced contractile defects were attenuated by treatment with the AMPK activator metformin. These data suggest that AMPK deficiency may contribute to age-induced cardiac dysfunction. It likely involves oxidative stress, impaired intracellular calcium handling and disrupted mitochondrial function (Turdi et al. 2010), but the complex mechanism remains to be elucidated.
The majority of studies that investigated an impact of aging on intrinsic cardiac tolerance to I/R injury demonstrated its impairment with advanced age, possibly as a consequence of enhanced oxidative stress (for review see Boengler et al. 2009). Our observation of a reduced infarct size in female WT mice aged 18 months compared to their younger littermates can be, therefore, considered rather surprising. However, available evidence shows not only that aging is not always associated with exacerbated I/R injury (Azhar et al. 1999; Peart et al. 2007) but also that myocardial ischemic tolerance can improve with aging or senescence. For instance, several studies demonstrated infarct size reduction in aged rats (Sniecinski and Liu 2004), mice (Gould et al. 2002; Boengler et al. 2007; Przyklenk et al. 2008) or guinea-pigs (Rhodes et al. 2012). These discrepancies may be, in part, due to marked differences in the age of animals used in various studies, often regardless of their sex.
Our preliminary observation that the infarct size reduction is absent in 18-month-old male mice (Slámová et al. 2012) points to an important role of sex. Interestingly, Willems et al.
(2005) reported biphasic changes in the extent of myocardial injury caused by I/R in mice that suggest a decreasing tolerance with aging and increasing tolerance with senescence; the developmental profile of these changes differed between males and females. It seems, therefore, that a certain sex-related window may exist during the aging process when intrinsic protective mechanisms are more active, allowing the heart to better survive an acute I/R insult than at younger stages. It has been proposed that aging-associated cardioprotection may be linked to an attenuation of mitochondrial calcium overload (Rhodes et al. 2012), but the underlying mechanism is unknown at present. Although in the present study the AMPK α2-subunit deletion did not significantly worsen the extent of myocardial injury in young animals, the absence of the infarct-sparing effect of aging in KO mice suggests that the AMPK pathway plays a role in this phenomenon.
Consistent with the view that high intake of fatty acids may cause cardiac lipotoxicity (Lopaschuk et al. 2007), here we show that feeding WT mice with HF diet for 6 months during aging decreased LV fractional shortening and impaired ischemic tolerance compared to their age-matched littermates fed standard diet. These results support a number of the previous reports indicating HF diet-induced cardiac contractile dysfunction (Ouwens et al. 2005; Relling et al. 2006; Turdi et al. 2011; Guo et al. 2013) and exacerbation of I/R injury by increased levels of fatty acids (Lopaschuk et al. 2007; Thakker et al. 2008). However, it should be mentioned that other reports failed to demonstrate cardiac lipotoxicity and dysfunction following long-term exposure to HF diet (Nascimento et al. 2011; Brainard et al. 2013) and this issue remains controversial. Besides the source of diet, the age of animals likely plays a role due to the inability of old myocytes to adapt to a high fatty acid load (Aurich et al. 2013). In the present study, the switch to HF diet took place at the age of 12 months, when the reproductive function of female C57BL/6J mice starts to cease (Felicio et al. 1984) in association with neuroendocrine and hormonal changes that may also influence effects of the diet.
It has been shown that AMPK α2-subunit activity is important in the regulation of fatty acid uptake in HF diet-fed mice (Abbott et al. 2012). A limitation of our work is that we could not measure the activity of AMPK subunits in aged mice fed HF diet. However, earlier studies have demonstrated that HF diet decreases myocardial AMPK phosphorylation status and activity (Guo et al. 2013; Lindholm et al. 2013), these effects being more pronounced with advanced age (Aurich et al. 2013). In addition, AMPK α2-subunit deficiency exaggerates insulin resistance (Fujii et al. 2008), cardiac contractile dysfunction and impaired intracellular calcium handling (Turdi et al. 2011) induced by HF diet-feeding in middle-aged mice. In our experiments, the LV systolic dysfunction and ischemic intolerance in mice on HF diet were more pronounced in the KO compared to the WT group. However, these unfavourable effects of the HF diet did not reach statistical significance on the background of AMPK α2-subunit deletion.
The reason for the absence of additive effects of HF diet and AMPK deficiency is unclear, but it can be related to the fact that both heart function and ischemic tolerance of aged KO mice fed standard diet were already compromised compared to WT animals, whereas any notable effect of AMPK deficiency itself was not observed in the study of Turdi et al. (2011) on younger mice. Alternatively, the defect in AMPK function may be apparent only under the conditions promoting lipogenesis, i.e. in the animals fed standard diet, when activation of AMPK can increase fatty acid oxidation to preserve intracellular energy status, whereas AMPK inactivation likely remains silent when lipogenesis is heavily suppressed in response to HF diet-feeding. Our previous results on the functional significance of AMPK in the liver support the latter possibility (Jeleník et al. 2010).
Conclusions
Here we demonstrate that aging resulted in significant AMPK downregulation and improved ischemic tolerance of female murine hearts. Global genetic ablation of the AMPK α2-subunit or long-term feeding of HF diet similarly resulted in cardiac dysfunction and abolished the anti-ischemic protection. However, the effects of AMPK α2-subunit deletion were not further potentiated by HF diet. Our findings support the view that AMPK activity plays a role in normal heart aging, suggesting this kinase as a potential target for cardioprotective interventions.
Table 1
Macronutrient composition and energy content of diets and fatty acid composition of dietary lipids.
Table 2
Weight parameters and hematocrit in adult and aged AMPK α2-/- and wild-type mice fed standard or high-fat diet.
"year": 2016,
"sha1": "5d110dfb183759add3043d382d6db94ed0dab076",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.33549/physiolres.932979",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5d110dfb183759add3043d382d6db94ed0dab076",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Validation of the illustrated questionnaire on food consumption for Brazilian schoolchildren (QUACEB) for 6- to 10-year-old children
Introduction: Evaluating the food consumption of school-aged children is crucial to monitor their dietary habits, promote targeted interventions, and inform public policies aimed at healthy eating. In this context, our objective was to develop and validate the Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren (QUACEB) aged 6 to 10 years, which is a self-reported illustrated recall.
Methods: Validity was obtained in four stages as follows: selection of foods, validation of items, validation of illustrations, and pretest. Foods were selected by considering the data from the main surveys that have been conducted with the Brazilian population and schoolchildren in recent years, the degree of food processing, and the main foods from each of the country's five macroregions. The content of the items was validated by comparing the children's and their parents' responses. For this, the questionnaire was published in an online format, and 6- to 10-year-old elementary schoolchildren were recruited using the snowball technique. The first part of the questionnaire was answered by the parent after the child's lunch, and the second was completed by the child the following day. Thirty-two parent and child dyads participated. Sensitivity, specificity, area under the curve (AUC), and kappa (k) tests were performed.
Results: Of the 30 foods presented on the questionnaire, 15 were reported as consumed. High sensitivity (mean of 88.5%), high specificity (average of 92.0%), substantial agreement (k = 0.78), low disagreement (6.2%), and an AUC of 0.90 were found. The illustrations were validated in a focus group with fourth-grade children from a school chosen for convenience. The food illustrations were designed for children, who were asked to name the food. Eighteen children participated, and it was verified that the images were representative of the foods. In the pretest, three schools chosen for convenience announced the link to the online questionnaire in WhatsApp groups of parents with students from first to fifth grade. Fifteen children answered the questionnaire and 86.7% (n = 13) judged it excellent or good.
Conclusion: Thus, the food consumption questionnaire is valid for elementary schoolchildren of 6 to 10 years old and can be applied in research to assess the dietary patterns of children in Brazil.
1. Introduction
Childhood obesity has reached epidemic levels and is a modern public health problem (1, 2). In Brazil, data from the Food and Nutrition Surveillance System (SISVAN in Portuguese) indicate an unfavorable trend toward obesity in Brazilian children between 5 and 10 years old, with prevalence rising from 10.45% in 2008 to 16.96% in 2021 (3). Although the etiology of childhood obesity is complex, poor diet is an important independent risk factor for the development of non-communicable diseases (NCDs) and obesity (4).
The data are related to changes in the quality of the Brazilian diet in recent years, marked by an increase in the consumption of ultra-processed foods (5). In general, ultra-processed foods are highly palatable, are low in fiber, contain excess sugar and/or sodium, and have high levels of total and saturated fats, which add greater energy value to the diet and could increase the risk of chronic diseases (6)(7)(8).
The unfavorable dietary nutrient profile of ultra-processed foods impacts the quality of the diet negatively and has direct consequences on health, and their consumption should be continuously evaluated and monitored at this stage of life (9). The most common methods to monitor the food consumption of Brazilian children are the Food Record, the 24-h Dietary Recall (R24 h), and the Food Frequency Questionnaire (10). Structured food consumption questionnaires, such as the R24 h, are a good option for studies that assess student health, as they are simple, practical, and low-cost methods (11).
Most of the tools developed to assess food consumption are not validated, are not intended for school-aged children, or have an outdated food list (12)(13)(14). Most of the recent questionnaires validated for school-aged children are from other countries, including Poland (15), Japan (16), Turkey (16), Malaysia (17), Lebanon (18), England (19), Spain (20), Chile (21), and Europe (22). Specifically in Brazil, instruments validated for school-aged children are rare. Studies with children from São Paulo (23, 24), Salvador (25), and the Western Amazon (26) stand out. However, all of these questionnaires are semi-quantitative, and none are illustrated. In addition, in most of them the respondents are parents (15, 16, 18, 19, 21, 23, 24, 26, 27), and only a few apply the questionnaire directly to children under parental or teacher guidance (17, 18, 20, 22). Burrows et al. (28) showed that the Food Frequency Questionnaire reported by children of 8 to 12 years of age was closer to the gold standard measure (the doubly labeled water method) than parents' reports of their children's food consumption.
However, especially for children aged 6 to 10 years, few instruments are available to collect information on food consumption, especially when the objective is for the child to be the informant. This is explained by the fact that assessing food consumption is challenging: children in this age group are not able to provide reliable information on usual intake and serving size, and recall places demands on memory, attention span, motivation, and cognition (29).
The instruments proposed to fulfill this objective are the Previous Day Food Questionnaire (PDFQ) and the Food Consumption and Physical Activity Questionnaire for schoolchildren (Web-CAAFE). The PDFQ is a questionnaire designed for schoolchildren that uses an illustrated recall to qualitatively analyze food consumption on the previous day (30). However, the instrument does not include regional Brazilian foods, whose presence is important to reflect regional culture, habits, and food traditions. Web-CAAFE is software for the qualitative measurement of food consumption through recall of the previous day. The instrument includes more food options than the PDFQ, including regional ones. However, access is restricted to a system requiring a login and password (31).
The importance of research that evaluates the food consumption of schoolchildren for carrying out epidemiological studies is undeniable. However, this assessment must be carried out with adequate, updated, and validated instruments, which consider the cognitive limitations of each age. Thus, the objective of this study is to validate an accessible qualitative questionnaire on food consumption for Brazilian children of 6 to 10 years of age.
2. Methods
We developed and validated an illustrated questionnaire to investigate the food consumption of elementary schoolchildren between 6 and 10 years of age. This is a quantitative and qualitative study and was carried out in four stages as follows: (1) selection of foods to develop the questionnaire; (2) validity test of the chosen foods by comparing the children's self-report and their parents' observation; (3) two focus groups with children to validate the illustrations; and (4) pretest.
The project was approved by the Ethics Committee on Research with Humans of the Faculty of Health Sciences of the University of Brasilia (Protocol CAAE 25866919.4.0000.0030). The parents or guardians agreed with the free and informed consent form and the children with the free and informed assent form.
The proposed questionnaire was given the acronym QUACEB, corresponding to the initials of the name in Portuguese: "Questionário de Consumo Alimentar para Crianças Escolares Brasileiras" (Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren).
2.1. Stage 1: QUACEB development
The questionnaire was created according to the following criteria: (a) the most consumed foods were included according to data from the Family Budget Survey (POF in Portuguese) 2017-2018 (32) and the 2015 National School-based Health Survey (PeNSE in Portuguese) (33); (b) to choose the foods, the food groups and the degree of food processing were considered, according to the Dietary Guidelines for the Brazilian Population (7); (c) later, representative foods from all Brazilian macroregions were inserted (34, 35); and (d) the name of the food was added in uppercase letters as a caption for the figures. The figures were designed by a graphic designer specializing in products for children. Initially, 30 foods or food groups were included, among those most consumed according to the national surveys (Table 2).
2.2. Stage 2: tests to validate the foods in the QUACEB
Evidence of the validity of the illustrated foods was obtained through comparison tests between the report of the parent (father, mother, or guardian), who observed the food consumed by the child at lunch, and the child's self-report, on the following day, of the food they had eaten for lunch. The parents' report was considered the gold standard. We chose only one meal to make the instrument faster, simpler, and more accessible. Lunch was selected because it is the most consumed meal among Brazilians (32).
The sample was selected for convenience and consisted of dyads of Brazilian parents and children between 6 and 10 years of age who had access to the Internet and were enrolled in elementary schools in Brazil. The sample size was calculated based on the estimation of kappa for two raters, with an expected kappa (k) equal to 0.75, an expected precision of 0.3, a proportion of the outcome (p) of 0.5 (considering the same probability of having or not having the binary outcome), and a confidence level of 99%. The sample size was calculated at 33, and we added 10% to cover dropouts, totaling 37 dyads of parents and children. We used an online calculator available at https://wnarifin.github.io/ssc/sskappa.html (36). Study participants were recruited using the snowball methodology (37), a sampling technique in which participants recommend the survey to other individuals in their network (37). For this, a research poster containing the QR code for accessing the questionnaire link was prepared and publicized on the researcher's and the university's social networks. In addition, the survey was publicized in WhatsApp groups of parents from some schools known to the researchers.
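For readers who want to reproduce the sample-size reasoning without the online calculator, the following Python sketch estimates the required number of dyads by Monte Carlo: it repeatedly simulates paired binary ratings with a population kappa of 0.75 and a prevalence of 0.5, and finds the smallest n for which the 99% confidence half-width of the kappa estimate stays within the stated precision of 0.3. This is an illustrative alternative to, not a reproduction of, the closed-form method used by the cited calculator; all function names are ours.

```python
import numpy as np
from scipy.stats import norm

def simulate_kappa(n, kappa, p, rng):
    """Draw n paired binary ratings whose population Cohen's kappa is
    `kappa` and whose common marginal prevalence is `p`; return the
    sample kappa estimate."""
    # Joint cell probabilities for two raters with equal marginals p:
    # agreement cells are inflated by kappa relative to independence.
    p11 = p * p + kappa * p * (1 - p)
    p10 = p * (1 - p) * (1 - kappa)
    p01 = p10
    p00 = (1 - p) ** 2 + kappa * p * (1 - p)
    n11, n10, n01, n00 = rng.multinomial(n, [p11, p10, p01, p00])
    po = (n11 + n00) / n                    # observed agreement
    p1 = (n11 + n10) / n                    # rater 1 marginal
    p2 = (n11 + n01) / n                    # rater 2 marginal
    pe = p1 * p2 + (1 - p1) * (1 - p2)      # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def n_for_kappa_precision(kappa=0.75, p=0.5, precision=0.3,
                          conf=0.99, reps=2000, seed=0):
    """Smallest n whose Monte Carlo confidence half-width for kappa is
    at most `precision`."""
    z = norm.ppf(0.5 + conf / 2)
    rng = np.random.default_rng(seed)
    for n in range(10, 500):
        ks = [simulate_kappa(n, kappa, p, rng) for _ in range(reps)]
        if z * np.std(ks) <= precision:
            return n
    return None

print(n_for_kappa_precision())
```

Under these settings the search should terminate near n = 33, consistent with the value reported above.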
On the first day, parents completed their questionnaire immediately after the child's lunch. The questionnaire for the guardians included four screens as follows: (1) the free and informed consent form; (2) identification data about the child (date of birth, sex, and initials of the name); (3) characterization of the parent (gender, marital status, age, education, family income in minimum wages, and state of residence), the education network of the child's school, and the child's school year; and (4) information on the child's food consumption. The included questions were as follows: whether the child had lunch that day; a list of 30 foods for the parent to mark which of these the child consumed at lunch; and an open-ended question to write down any foods that were consumed but were not on the list.
At the end of the questionnaire, the parent was instructed that the child should answer the questionnaire the next day, in the morning, without any interference.
The child's questionnaire contained three screens as follows: (1) data to identify the child (date of birth, gender, and initials); (2) the assent form of the minor; and (3) information on food consumption. This screen asked what the child had eaten for lunch the day before and, for better understanding and adaptation to the age group, contained 30 illustrations of food with captions for the child to select which ones were consumed.
The researchers used the child identification data collected in both questionnaires to link the responses obtained over the 2 days of collection and identify the respective parent-child dyads. The questionnaire was accessible for 31 days from September to October 2020.
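As an illustration, linking the two exports into dyads could look like the following pandas sketch; the file and column names are hypothetical, since the paper does not specify the layout of the exported spreadsheets.

```python
import pandas as pd

# Hypothetical column names; both exports are assumed to carry the
# child's initials, date of birth, and sex.
KEY = ["initials", "birth_date", "sex"]

parents = pd.read_excel("parents_day1.xlsx")    # parent report (gold standard)
children = pd.read_excel("children_day2.xlsx")  # child self-report

# Normalize the identification fields before matching.
for df in (parents, children):
    df["initials"] = df["initials"].str.strip().str.upper()
    df["birth_date"] = pd.to_datetime(df["birth_date"], dayfirst=True)

# An inner join keeps only complete parent-child dyads.
dyads = parents.merge(children, on=KEY, suffixes=("_parent", "_child"))
print(f"{len(dyads)} dyads matched")
```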
The responses to the questionnaires automatically generated a database in Microsoft Office Excel format, which was exported for use in the analysis. The tests were performed using the MedCalc software, adopting a significance level of 5%. Data are presented as absolute (n) and relative (%) frequencies.
For the external validity of the questionnaire, the values of sensitivity (the ability to detect consumption that actually occurred, i.e., true positives divided by the sum of true positives and false negatives), specificity (the ability to indicate no consumption when there actually was none, i.e., true negatives divided by the sum of true negatives and false positives), the area under the curve (AUC), and their respective 95% confidence intervals (95% CI) were calculated using the parents' report as the gold standard. The closer the AUC value is to 1, the better the instrument performed (38). Kappa statistics (k) with their 95% CI were also calculated to assess the agreement between the responses of the parent and the child, considering k = 0 as an absence of agreement; k between 0.41 and 0.60 as moderate agreement; k between 0.61 and 0.80 as substantial agreement; k between 0.81 and 0.99 as almost perfect agreement; and k = 1 as perfect agreement (39).
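The study computed these statistics in MedCalc; purely for illustration, the sketch below recomputes them for a single food item in Python, treating the parent's report as the gold standard. Note that for a single binary test the empirical ROC curve has one operating point, so the AUC reduces to the average of sensitivity and specificity.

```python
import numpy as np

def item_validity(parent, child):
    """Sensitivity, specificity, single-point AUC, and Cohen's kappa for
    one food item. `parent` and `child` are equal-length arrays of 0/1
    consumption flags; the parent is the gold standard."""
    parent = np.asarray(parent, dtype=bool)
    child = np.asarray(child, dtype=bool)
    tp = np.sum(child & parent)
    fn = np.sum(~child & parent)
    tn = np.sum(~child & ~parent)
    fp = np.sum(child & ~parent)
    se = tp / (tp + fn)            # true positives / all real positives
    sp = tn / (tn + fp)            # true negatives / all real negatives
    auc = (se + sp) / 2            # empirical ROC of a single binary test
    n = len(parent)
    po = (tp + tn) / n             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance
    kappa = (po - pe) / (1 - pe)
    return se, sp, auc, kappa

# Toy check: 32 dyads for one food item with roughly 90% agreement.
rng = np.random.default_rng(1)
parent = rng.integers(0, 2, 32)
child = np.where(rng.random(32) < 0.9, parent, 1 - parent)
print(item_validity(parent, child))
```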
2.3. Stage 3: focus groups to substantiate the validity of QUACEB illustrations
To validate whether the illustrations were representative of the foods, two focus groups were conducted in October 2021 with fourth-grade children (9 years old) from a public school in the Federal District, Brazil, chosen for convenience; the fourth-grade class was chosen by the school principal. Two sessions were held in the classroom, and each session lasted ∼50 min. The second session reached information saturation. Each session included nine children and was conducted by three researchers (one moderator and two observers). The sessions were audio-recorded with the children's permission.
The group sessions were organized according to the following steps: presentation of the researchers and the research; clarification of the dynamics of the participatory discussion and request for consent for participation and audio recording; presentation of illustrations and individual active listening; and closing with thanks. The group dynamics to assess the students' understanding of the figures occurred through the projection on a classroom wall of 43 illustrations without captions (33 food illustrations after modifications based on the results of the earlier validity test and 10 more regional food groups of Brazil). The group moderator provided the following guidance: "Let's play a guessing game. I'm going to show you some pictures and I would like you to tell me the names of what you see." Then, each figure of food was presented separately. Next, they were asked: "Do you recognize this food? And what is the name of this food?" After the question, the children were advised to raise their hand to tell the name of the food being projected. To ensure that everyone participated, we randomly chose a few children to say what they observed. The participants were free to discuss among themselves and to actively listen.
FIGURE 2 Additional module with figures of regional foods that can be inserted in the Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren (QUACEB).
2.4. Stage 4: QUACEB pretest
The Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren was built on the Google Forms platform, with items, wording, and illustrations modified according to the results of the previous stages. In addition, information on the age and gender of the child was included, as well as the number of meals eaten on a typical day and possible foods consumed the previous day which were not included in the questionnaire. Three schools were chosen for convenience: one public school located in the Federal District, Brazil, and two private schools located in a small municipality in the State of Goiás, Brazil. The schools were contacted via telephone and agreed to publicize the research link in the WhatsApp groups of parents of elementary school students (from the first to the fifth school year). The questionnaire link, along with a text publicizing it, was provided to the schools. The link was available for 20 days in November 2021. The pretest was applied to test the application format, i.e., whether children could fill out the online questionnaire alone or under their guardian's supervision just to help read the questions. For this, an orientation was written for parents to hand over their cell phones or computers so the children could fill out the questionnaire alone and, where necessary, adults could read the questions to them without interfering with the child's answers. At the end of the questionnaire, the children were asked what they thought of the questionnaire, with response options on a five-point Likert scale (ranging from 1 "great" to 5 "very bad"), and children were also asked to fill in an open question with suggestions to improve the questionnaire.
3. Results
As a result of the four stages, the instrument developed is an illustrated self-reported recall intended for 6- to 10-year-old Brazilian children. It contains a list of 33 groups of national food figures (Figure 1), with the option of adding 10 illustrations of regional fruits and vegetables (Figure 2).
In the online study to validate the content of the QUACEB items (the second stage of the study), 32 parent-child dyads participated, most of them from the Federal District (59.38%). Most of the parents were women (93.75%), between 35 and 54 years old (71.87%), had a graduate degree or more (56.25%), were married/in a stable relationship (68.75%), and had a monthly family income above 10 minimum wages (equivalent to R$ 10,450.00 or U$ 2,061) (53.12%). Half of the children were girls (50.00%) and most studied in private schools (78.13%) (Table 1), with a mean age of 8 ± 0.85 years.
Of the 30 food groups initially listed in the instrument, stage 2 of the QUACEB development elucidated that only 15 had a minimum consumption frequency that would allow statistical tests for validation. Comparisons of the responses between children and their guardians indicated frequent consumption (more than 37.5%) of rice, beans, beef/pork/chicken, and lettuce/tomato. Of the food groups that had low consumption (<3.13%), 10 were not mentioned by parents or children (nuggets/hamburger/pizza/instant noodles, coffee, milk, cookie/packaged sweet cake, breakfast cereal, packaged bread, rolls, couscous/tapioca, cheese bread/coxinha/pig in blanket, and snack chips); three groups were reported only by children (cheese, boxed or powdered chocolate milk/industrialized yogurt, and salami/sausage/baloney/ham); one group was reported only by parents (mango/papaya); and one group was reported by both parents and children (soup) (Table 2).
There was low disagreement between the answers of the parents and children, with an average of 6.2% and a variation from 0 (broccoli/chayote/kale, egg, fish/shrimp, and soup) to a maximum of 25.0% (juice) (Table 2).
Sensitivity values, i.e., the probability that the children reported what they actually ate as reported by their parents, indicated an average of 88.5% across all food groups. The lowest sensitivity value (50.0%) was found in the soda group, and maximum values (100.0%) occurred in the broccoli/chayote/kale, beef/pork/chicken, egg, fish/shrimp, sweets, apple/grape/banana/orange, and pasta groups (Table 2).
The specificity values (average of 92.02%) demonstrated that the questionnaire was able to indicate non-consumption of foods when, in fact, there was no consumption. The beef/pork/chicken group had the lowest value for specificity (62.50%), while the beans, broccoli/chayote/kale, egg, fish/shrimp, and soda groups had the highest values (100%) (Table 2).
The indices of the area under the curve (AUC) were employed to verify the global accuracy of the questionnaire, as this parameter considers the simultaneous analysis of the specificity and sensitivity measures for each food item. As shown in Table 2, the egg and fish/shrimp food items had maximum values of sensitivity and specificity and, thus, the highest values for AUC (1.00). On the other hand, the soda and juice groups had the lowest value (0.75). The mean of the 15 groups analyzed was 0.90, indicating the good performance of the instrument for these food items.
The kappa test between the child's and the parent's reports was significant for all items that presented satisfactory consumption for validation (more than 6.25% consumption), with an average of k = 0.78 (38). Of the 15 food items analyzed, 7 groups (beans, broccoli/chayote/kale, egg, fish/shrimp, soup, sweets, and pasta) had an "almost perfect or perfect" agreement (k ≥ 0.81) and only juice obtained a kappa value with a moderate classification (k = 0.41-0.60) (Table 2).
Based on the results found for the validity of the items, the following changes were made to the questionnaire. Ten illustrations of regional foods were added: five fruits and five vegetables from each Brazilian macroregion (northern region: cupuaçu/açaí/mangaba/jambo and jambu/chicory; northeast: cashew/jambo/sugar-apple/jocote and roselle/João-Gomes; southeast: strawberry/pineapple and taioba/arugula; central-west: pequi/jackfruit and heart of palm/gherkin; and south: pine nut/peach/blackberry/plum and cabbage/chicory). This consequently caused the pequi to be removed from the squash and carrot group. Further changes were the reformulation of the groups of raw vegetables, placing tomato and chayote in a specific grouping with cucumber, and the inclusion of a group of dark green vegetables, including broccoli and kale, and of light green vegetables, containing lettuce and cabbage. Toasted manioc, soup, nuggets, hamburgers, açaí preparation, popsicle, pasta, salami, and pigs in blankets were removed to shorten the list, along with tangerine and avocado in the fruit grouping, potato chips, mayonnaise and ketchup, margarine, frozen tea, fried pastry, and baked pastry. With these changes, QUACEB now contains 43 illustrations of food groups, 33 of which are national food groups and 10 regional food groups.
In the focus groups for the illustration validation stage, the 43 updated illustrations were presented to the children. The figures that were difficult to recognize were raw cassava/manioc, jocote, açaí, and mangaba, due to the disproportionate size of the food in the drawing, and the group of pine nuts. The students gave suggestions to improve the images, which were accepted, and the drawings were redone. Therefore, in the final version of QUACEB, a bowl of boiled cassava/manioc was used; jocote, açaí, and mangaba were resized; and only a single pine nut was depicted. Some images of regional foods, including heart of palm, gherkin, taioba, jambu, chicory, roselle, and João-Gomes, were not recognized because the children were unaware of the food itself. During the focus groups, the children also presented suggestions for the captions. From this, the following changes were made: the term "pão de sal" was included in the caption of the image of rolls; both terms "biscoito" and "bolacha" (regional words for cookie) were included in the corresponding image; and the nomenclature of industrialized yogurt was changed to flavored yogurt and that of boxed chocolate to chocolate milk. With the focus groups, we concluded that the students satisfactorily understood most of the images; the need to change some figures and legends was raised, providing final improvements to the questionnaire.
In the pretest, the 15 children who participated had a mean age of 9 ± 1.13 years and were mostly boys (66.7%); 8 studied in the public school and 7 in the two private schools. Most reported an average consumption of 4 ± 0.80 meals per day, and all reported eating breakfast and lunch. Of the 33 national food groups listed, 29 had a frequency of consumption reported on the previous day. Among these, the most consumed were beef, pork, or chicken (86.67%); rice (80.00%); beans (66.67%); and milk (66.67%). The least consumed foods (6.67%) were broccoli or kale; egg; packaged salty snacks or crackers; instant noodles, frozen lasagna, or pizza; fried or baked snacks (coxinha, pastel, and empada); French fries; and cheese bread. Regarding regional foods, only one child (6.67%) reported consumption of the following groups: cupuaçu, açaí, mangaba, or jambo; and pine nuts, peaches, blackberries, or plums (Table 3). Four children recorded the consumption of other foods that were not on the list, namely, water, cotton candy, cheese cracker, and macaroni. Regarding the evaluation of the questionnaire, 86.7% of the participating children judged it as excellent or good and did not register suggestions.
4. Discussion
The present study demonstrated that the Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren (QUACEB) was valid for schoolchildren. Overall, the instrument achieved good performance, according to the sensitivity, specificity, kappa, and AUC indices, in addition to having a low discordance value. Furthermore, the graphic representations proved to be understandable and attractive to the children.
Currently, validated and illustrated questionnaires with children from southern Brazil in which the child is the respondent include the Previous Day Food Questionnaire (PDFQ) (31, 40, 41); the Typical Day of Physical Activity and Food Intake (DAFA) questionnaire (42) and its electronic version, the WEBDAFA (43); and the Food Intake and Physical Activity of School Children (Web-CAAFE) (31, 44). The PDFQ and DAFA contain a list of foods that is repeated for the six daily meals (breakfast, morning snack, lunch, afternoon snack, dinner, and bedtime snack). The difference is that the PDFQ assesses the consumption of the previous day and the DAFA that of a regular day (30, 42). WEBDAFA has the same structure as the printed instrument but is hosted on a website; however, the interface is currently not available (43). Web-CAAFE was enhanced from the PDFQ and DAFA experience, is a recall of the previous day, and is hosted on a website. However, it is necessary to register in the system to be issued a password, and currently the system only monitors schools in the municipal education network of Florianópolis (31).
Another aspect considered necessary for a food consumption assessment questionnaire for schoolchildren is that it allows the analysis of consumption according to the degree of food processing. The NOVA classification, adopted in the Dietary Guidelines for the Brazilian Population (7), has already been widely described in the literature as important for public health, considering that the consumption of ultra-processed foods has been associated with several chronic diseases at different stages of life (6, 45, 46). Among the ultra-processed foods evaluated, the consumption of industrialized juices, sweets, and soda was observed at lunch. The other foods in this category that were initially included in the list of 30 food groups in the questionnaire had no consumption reported by the participants. A study that evaluated the intake of ultra-processed foods in 105 schoolchildren of 7 to 10 years of age from a public district located in Teresina, Piauí, highlights the participation of these three groups in the list of the ultra-processed foods most consumed by the public evaluated (45). This again reinforces the need for instruments such as the questionnaire developed and validated here, which can detect food consumption according to the degree of processing (30, 48).
Although there are no population studies in Brazil with children aged 5 to 9 years, other studies find similar patterns of food consumption among schoolchildren. For example, a study carried out in a Brazilian municipality in 2007, with children aged 7 to 10 years, found that the most consumed foods at lunch and dinner were rice, beef or poultry, beans, soft drinks, and pasta (49). In another study conducted in Brazil in 2017, with children aged 7 to 13 years, the foods that had the highest average daily frequency of consumption were rice, bread, beef/chicken, and beans (50). Furthermore, the foods most commonly consumed by the Brazilian population are rice, beans, beef or poultry, and bread (32). In this study, the most consumed foods were beef, pork or chicken, rice, beans, and milk.
It is important to emphasize that comparisons between the results for the validation of this questionnaire and other validated instruments are limited due to methodological differences, especially in the reference method used and the age group covered. Even so, the kappa values were similar to the validation results of the last version of the PDFQ (30), obtaining the same value of the agreement test for the fruit group (k = 0.78). Other foods, such as the meat and pasta groups, were also similar in both studies for this variable, with k = 0.69 and k = 0.81 found in the PDFQ, respectively, for the items mentioned, and k = 0.71 and k = 0.84 obtained in this validation.
The kappa value was lower for the juice group, and it had the greatest disagreement in responses between the reports of the children and their parents. The data highlight a discrepancy, which could stem from different interpretations of this drink by parents and children within the analyzed group. The caption of this group, "juice", could be interpreted as encompassing various preparations, such as fresh juice, industrialized juice, and concentrated drink. The initial illustration only contained an image of a box of juice, which could be interpreted exclusively as this type of preparation. Thus, we understand that a more specific description of this item and the adequacy of the illustration were necessary to reduce different interpretations and produce better levels of agreement. The developed questionnaire is inexpensive and easy to apply, which has been demonstrated in previous studies with similar questionnaires (30, 43). The computerized format saves application time, eliminates interviewer-related biases, and ensures automated storage of the collected information. Few validated online instruments are available that assess food consumption, especially for school-aged children (29, 30, 43).
The access of Brazilian students to new information and communication technologies has increased in significant proportions (51), which makes online self-report instruments useful and promising to assess the eating habits of this public. Studies state that the application of an online questionnaire is a promising alternative that helps to keep children's attention on the research (52, 53).
Another advantage of the developed and validated questionnaire is that, unlike traditional paper questionnaires, the online format enables data collection from all Brazilian regions. This will allow the inclusion of multiple cultural spheres and facilitate the generalization of results to the general population in future studies with a larger sample size. The possibility of including fruits and vegetables typical of all Brazilian regions, such as pequi, jackfruit, avocado, heart of palm, and gherkin from the central-west; cashew, jambo, jambu, sugar-apple, jocote, roselle, and João-Gomes from the northeast; cupuaçu, açaí, mangaba, jambo, jambu, and chicory from the north; strawberry, pineapple, avocado, taioba, and arugula from the southeast; and pine nut, peach, blackberry, plum, cabbage, and chicory from the south, is also noteworthy, allowing the regionalization of the questionnaire (33, 34). This work differs from other validation studies of children's questionnaires in that it evaluates children's food consumption outside the school context, both in person and online; considers the classification of foods according to the degree of processing recommended by the Dietary Guidelines for the Brazilian Population (7); allows the inclusion of regional foods; and describes the food in the caption. Thus, the instrument can serve as support material for future epidemiological studies of health and nutrition and for nutritional intervention programs for this age group.
When using QUACEB, food consumption can be analyzed according to different markers, for example, the NOVA score (46), the classification proposed by the Dietary Guidelines for the Brazilian Population (41), the food diversity score (54), markers of protective foods and of risk of excess body fat (55)(56)(57)(58), the nutritional profile represented by nutrient sources in food groups (19), the food-based classification of eating episodes (FBCE) (42, 59), identifying dietary patterns (60), or describing the consumption of regional foods.
The weaknesses of the developed and validated questionnaire include the lack of information about portion size and the impossibility of estimating the child's energy consumption. However, it allows a qualitative assessment of children's consumption, in a brief and straightforward way, in which the researcher can obtain reliable data on food consumption while seeking to avoid biasing the child's memory. Other limitations of this work include the lack of assessment of internal validity (only external validity was assessed, due to the small sample size and the presence of foods with infrequent consumption; however, the most frequent foods were validated, and this would possibly also hold for other foods); the absence of sensitivity, specificity, AUC, and kappa analyses by gender and age, due to the insufficient sample; and the lack of exploration of factors associated with disagreements, due to the low prevalence and sample size. Furthermore, validation covered only one meal and a 1-day period. Thus, future studies should be carried out with a larger sample, all meals of the day (to assess consumption of items not reported at lunch), and also test-retest style analyses, whereby participants fill in the questionnaire on multiple days to see how accurate it is beyond a single day. Despite acknowledging that children within the study's age group lack purchasing power and that parents are conscious of their children's dietary consumption, we recommend that future validation studies incorporate direct observation throughout the entire day to assess the child's food consumption.
In conclusion, the illustrated online food consumption questionnaire demonstrated adequate concordance, sensitivity, specificity, and area-under-the-curve values for assessing the food consumption of 6- to 10-year-old Brazilian schoolchildren when compared with the parents' report. The QUACEB is a valid, simple, brief, practical, easy-to-apply questionnaire available on the Internet in any Brazilian region, which can be adopted for epidemiological research to assess the diet of this population. This tool is specifically designed to be appropriate for Brazil because it represents the foods most consumed by Brazilian schoolchildren.
statistical analyses. All authors helped with the data interpretation and writing of the article, reviewed and approved the final version, and verified that they were responsible for all aspects of the study in guaranteeing the accuracy and integrity of any part of the study.
TABLE 1 Research stage to validate the content of the Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren (QUACEB). * Minimum wage at the time of the survey was R$ 1,045.00, equivalent to U$ 206.00. Source: compiled by authors.
TABLE 2 Analysis of disagreement, sensitivity, specificity, area under the curve, and kappa between the reports of children and their parents participating in the validation survey of the Illustrated Questionnaire on Food Consumption for Brazilian Schoolchildren.
| 2023-09-24T15:12:54.676Z | 2023-09-22T00:00:00.000 | {
"year": 2023,
"sha1": "6a5452ff0d909434713eff32166e895a992fd62b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1051499/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1c11c29280bde59c74c6dfed82ad2965dee1883",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
19863123 | pes2o/s2orc | v3-fos-license | Controlling overestimation of error covariance in ensemble Kalman filters with sparse observations: A variance limiting Kalman filter
We consider the problem of an ensemble Kalman filter when only partial observations are available. In particular we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables of which only their climatic variance and mean are given. To limit the variance of the latter poorly resolved variables we derive a variance limiting Kalman filter (VLKF) in a variational setting. We analyze the variance limiting Kalman filter for a simple linear toy model and determine its range of optimal performance. We explore the variance limiting Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating the information of the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
1. Introduction
In data assimilation one seeks to find the best estimate of the state of a dynamical system given a forecast model with a possible model error and noisy observations at discrete observation intervals (Kalnay, 2002). This process is complicated on the one hand by the often chaotic nature of the underlying nonlinear dynamics leading to an increase of the variance of the forecast, and on the other hand by the fact that one often has only partial information of the observables. In this paper we address the latter issue. We consider situations whereby noisy observations are available for some variables but not for other unresolved variables.
However, for the latter we assume that some prior knowledge about their statistical climatic behaviour such as their variance and their mean is available.
A particularly attractive framework for data assimilation are ensemble Kalman filters (see for example Evensen (2006)). These straightforwardly implemented filters distinguish themselves from other Kalman filters in that the spatially and temporally varying background error covariance is estimated from an ensemble of nonlinear forecasts. Despite the ease of implementation and the flow-dependent estimation of the error covariance, ensemble Kalman filters are subject to several errors and specific difficulties (see Ehrendorfer (2007) for a recent review). Besides the problem of estimating model error, which is inherent to all filters, and inconsistencies between the filter assumptions and reality, such as non-Gaussianity, which render all Kalman filters suboptimal, ensemble based Kalman filters have the specific problem of sampling errors due to an insufficient ensemble size. These errors usually lead to an underestimation of the error covariances, which may ultimately cause filter divergence when the filter trusts its own forecast and ignores the information given by the observations.
To counteract the associated small spread of the ensemble, several techniques have been developed. To deal with errors in ensemble filters due to sampling errors we mention two of the main algorithms, covariance inflation and localisation. To avoid filter divergence due to an underestimation of error covariances, the concept of covariance inflation was introduced, whereby the prior forecast error covariance is increased by an inflation factor (Anderson and Anderson, 1999). This is usually done in a global fashion and involves careful and expensive tuning of the inflation factor; however, recently methods have been devised to adaptively estimate the inflation factor from the innovation statistics (Anderson, 2007, 2009; Li et al., 2009). Too small ensemble sizes also lead to spurious correlations associated with remote observations. To address this issue the concept of localization has been introduced (Houtekamer and Mitchell, 1998, 2001; Hamill et al., 2001; Ott et al., 2004; Szunyogh et al., 2005), whereby only spatially close observations are used for the innovations.
To take into account the uncertainty in the model representation we mention here isotropic model error parametrization (Mitchell and Houtekamer, 2000), stochastic parametrizations (Buizza et al., 1999) and kinetic energy backscatter (Shutts, 2005). A recent comparison between those methods is given in Houtekamer et al. (2009) and Charron et al. (2010); see also Hamill and Whitaker (2005). The problem of non-Gaussianity is for example discussed in Pires et al. (2010) and Bocquet et al. (2010).
Whereas the underestimation of error covariances has received much attention, relatively little has been done about a possible overestimation of error covariances. Overestimation of covariance is a finite ensemble size effect which typically occurs in sparse observation networks (see for example Liu et al. (2008); Whitaker et al. (2009)). Uncontrolled growth of error covariances which is not tempered by available observations may progressively spoil the overall analysis. This effect is even exacerbated when inflation is used; in regions where no observations influence the analysis, inflation can lead to unrealistically large ensemble variances progressively degrading the overall analysis (see for example Whitaker et al. (2004)). This is particularly problematic when inappropriate uniform inflation is used. Moreover, it is well known that covariance localization can be a significant source of imbalance in the analyzed fields (see for example Kepert (2009); Houtekamer et al. (2009)). Localization artificially generates unwanted gravity wave activity which in poorly resolved spatial regions may lead to an unrealistic overestimation of error covariances. Being able to control this should help filter performance considerably.
When assimilating current weather data in numerical schemes for the troposphere, the main problem is underestimation of error covariances rather than overestimation. This is due to the availability of radiosonde data which assures wide observational coverage. However, in the pre-radiosonde era there were severe data voids, particularly in the southern hemisphere and in the vertical resolution, since most observations were taken at the surface level in the northern hemisphere. There is an increased interest in so-called climate reanalysis (see for example Bengtsson et al. (2007); Whitaker et al. (2004)), which has the challenge of dealing with large unobserved regions. Historical atmospheric observations are reanalyzed by a fixed forecast scheme to provide a global homogeneous dataset covering the troposphere and stratosphere for very long periods. A remarkable effort is the international Twentieth Century Reanalysis Project (20CR) (Compo et al., 2011), which produced a global estimate of the atmosphere for the entire 20th century (1871 to the present) using only synoptic surface pressure reports and monthly sea-surface temperature and sea-ice distributions. Such a dataset could help to analyze climate variations in the twentieth century or the multidecadal variations in the behaviour of the El Niño-Southern Oscillation. An obstacle for reanalysis is the overestimation of error covariances if one chooses to employ ensemble filters (see Whitaker et al. (2004), where multiplicative covariance inflation is employed).
Overestimation of error covariances also occurs in modern numerical weather forecast schemes, for which the upper lid of the vertical domain is constantly pushed towards higher and higher levels to incorporate the mesosphere, with the aim of better resolving processes in the polar stratosphere (see for example Polavarapu et al. (2005); Sankey et al. (2007); Eckermann et al. (2009)). The energy spectrum in the mesosphere is, contrary to the troposphere, dominated by gravity waves. The high variability associated with these waves causes very large error covariances in the mesosphere, which can be 2 orders of magnitude larger than at lower levels (Polavarapu et al., 2005), rendering the filter very sensitive to small uncertainties in the forecast covariances. Being able to control the variances of mesospheric gravity waves is therefore a big challenge.
The question we address in this work is how the statistical information available for some data, which are otherwise not observable, can be effectively incorporated in data assimilation to control the potentially high error covariances associated with the data void. We will develop a framework to modify the familiar Kalman filter (see for example (Evensen, 2006; Simon, 2006)) for partial observations with only limited information on the mean and variance, with the effect that the error covariance of the unresolved variables cannot exceed their climatic variance and their mean is controlled by driving it towards the climatological value.
The paper is organized as follows. In Section 2 we will introduce the dynamical setting and briefly describe the ensemble transform Kalman filter (ETKF), a special form of an ensemble square root filter. In Section 3 we will derive the variance limiting Kalman filter (VLKF) in a variational setting. In Section 4 we illustrate the VLKF with a simple linear toy model for which the filter can be analyzed analytically. We will extract the parameter regimes where we expect the VLKF to yield optimal performance. In Section 5 we apply the VLKF to the 40-dimensional Lorenz-96 system (Lorenz, 1996) and present numerical results illustrating the advantage of such a variance limiting filter. We conclude the paper with a discussion in Section 6.
2. Setting
Assume an N-dimensional dynamical system whose dynamics is given by

ż = f(z), (1)

with the state variable z ∈ R^N. We assume that the state space is decomposable according to z = (x, y) with x ∈ R^n and y ∈ R^m and n + m = N. Here x shall denote those variables for which direct observations are available, and y shall denote those variables for which only some integrated or statistical information is available. We will coin the former observables and the latter pseudo-observables. We do not incorporate model error here and assume that (1) describes the truth. We apply the notation of Ide et al. (1997) unless stated explicitly otherwise. Let us introduce an observation operator H : R^N → R^n which maps from the whole space into the observation space spanned by the designated variables x. We assume that observations of the designated variables x are given at equally spaced discrete observation times t_i with the observation interval ∆t_obs. Since it is assumed that there is no model error, the observations y^o ∈ R^n at discrete times t_i = i∆t_obs are given by

y^o = H z(t_i) + r^o,

with independent and identically distributed (i.i.d.) observational Gaussian noise r^o ∈ R^n. The observational noise is assumed to be independent of the system state, and to have zero mean and constant covariance R_o ∈ R^{n×n}.
We further introduce an operator h : R^N → R^m which maps from the whole space into the space of the pseudo-observables spanned by y. We assume that the pseudo-observables have variance A_clim ∈ R^{m×m} and constant mean a_clim ∈ R^m. This is the only information available for the pseudo-observables, and it may be estimated, for example, from climatic measurements. The error covariance of those pseudo-observations is denoted by R_w ∈ R^{m×m}.
The model forecast state z_f at each observation interval is obtained by integrating the state variable with the full nonlinear dynamics (1) for the time interval ∆t_obs. The background (or forecast) involves an error with covariance P_f ∈ R^{N×N}.
Data assimilation aims to find the best estimate of the current state given the forecast z_f with variance P_f and observations y^o of the designated variables with error covariance R_o. Pseudo-observations can be included following the standard Bayesian approach once their mean a_clim and error covariance R_w are known. However, the error covariance R_w of a pseudo-observation is in general not equal to A_clim. In Section 3 we will show how to derive the error covariance R_w in order to ensure that the analysis variance does not exceed the prescribed variance A_clim. We do so in the framework of Kalman filters and shall now briefly summarize the basic ideas behind constructing such a filter for the case of an ensemble square root filter (Tippett et al., 2003), the ensemble transform Kalman filter (Wang et al., 2004).
2.1. Ensemble Kalman filter
In an ensemble Kalman filter (EnKF) (Evensen, 2006) an ensemble of k members,

Z = [z_1, z_2, . . . , z_k] ∈ R^{N×k},

is propagated by the full nonlinear dynamics (1), which is written as

Ż = f(Z), (2)

with f(Z) = [f(z_1), f(z_2), . . . , f(z_k)]. The ensemble is split into its mean

z̄ = (1/k) Z e,

where e = [1, . . . , 1]^T ∈ R^k, and its ensemble deviation matrix

Z' = Z − z̄ e^T = Z T,

with the constant projection matrix

T = I − (1/k) e e^T.

The ensemble deviation matrix Z' can be used to approximate the ensemble forecast covariance matrix via

P(t) = (1/(k − 1)) Z'(t) Z'(t)^T.

Given the forecast ensemble Z_f = Z(t_i − ǫ) and the associated forecast error covariance matrix (or the prior) P_f(t_i − ǫ), the actual Kalman analysis (Kalnay, 2002; Evensen, 2006; Simon, 2006) updates a forecast into a so-called analysis (or the posterior). Variables at times t = t_i − ǫ are evaluated before taking the observations (and/or pseudo-observations) into account in the analysis step, and variables at times t = t_i + ǫ are evaluated after the analysis step, when the observations (and/or pseudo-observations) have been taken into account. In the first step of the analysis the forecast mean z̄_f is updated to the analysis mean

z̄_a = z̄_f − K_o (H z̄_f − y^o) − K_w (h z̄_f − a_clim), (3)

where the Kalman gain matrices are defined as

K_o = P_a H^T R_o^{−1}, K_w = P_a h^T R_w^{−1}. (4)

The analysis covariance P_a is given by the addition rule for variances, typical in linear Kalman filtering (Kalnay, 2002),

P_a^{−1} = P_f^{−1} + H^T R_o^{−1} H + h^T R_w^{−1} h. (5)

To calculate an ensemble Z_a which is consistent with the error covariance after the observation P_a, and which therefore needs to satisfy

P_a = (1/(k − 1)) Z'_a (Z'_a)^T,

we use the method of ensemble square root filters (Simon, 2006). In particular we use the method proposed in (Tippett et al., 2003; Wang et al., 2004), the so-called ensemble transform Kalman filter (ETKF), which seeks a transformation S ∈ R^{k×k} such that

Z'_a = Z'_f S. (6)

Alternatively one could have chosen the ensemble adjustment filter (Anderson, 2001), in which the ensemble deviation matrix Z'_f is pre-multiplied with an appropriately determined matrix A ∈ R^{N×N}. However, since we are mainly interested in the case k ≪ N we shall use the ETKF. Note that the matrix S is not uniquely determined for k < N. The transformation matrix S can be obtained either by using continuous Kalman filters (Bergemann et al., 2009) or directly (Wang et al., 2004) by

S = C̃ (Γ̃ + I)^{−1/2} C̃^T. (7)

Here C Γ C^T is the singular value decomposition of

(1/(k − 1)) (Z'_f)^T (H^T R_o^{−1} H + h^T R_w^{−1} h) Z'_f.

The matrix C̃ ∈ R^{k×(k−1)} is obtained by erasing the last zero column from C ∈ R^{k×k}, and Γ̃ ∈ R^{(k−1)×(k−1)} is the upper left (k − 1) × (k − 1) block of the diagonal matrix Γ ∈ R^{k×k}. The deletion of the 0 eigenvalue and the associated column in C assures that Z'_a e = 0, and therefore that the analysis mean is given by z̄_a. Note that S is symmetric and ST = TS = S, which assures that Z'_a = Z_a T, implying that the mean is preserved under the transformation. This is not necessarily true for general ensemble transform methods of the form (6).
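To make the update above concrete, here is a minimal NumPy sketch of one ETKF analysis step for ordinary observations only (the pseudo-observable terms h, a_clim, R_w enter (3)-(7) in exactly the same way and are omitted for brevity). For simplicity the zero eigendirection is kept with unit weight instead of being deleted from C; since Z'_f e = 0, this has the same effect on the analysis ensemble. The equations were reconstructed from the surrounding text, so treat the sketch as illustrative rather than as the authors' code.

```python
import numpy as np

def etkf_analysis(Zf, H, Ro, yo):
    """One ETKF analysis step (minimal sketch).
    Zf : N x k forecast ensemble; H : n x N observation operator;
    Ro : n x n observational error covariance; yo : observation vector."""
    N, k = Zf.shape
    zbar = Zf.mean(axis=1)
    Zp = Zf - zbar[:, None]                 # deviations, Zp @ e = 0
    Pf = Zp @ Zp.T / (k - 1)                # forecast covariance
    # Mean update; this gain is algebraically equal to Pa H^T Ro^{-1}.
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + Ro)
    za = zbar + K @ (yo - H @ zbar)
    # Ensemble transform: eigendecomposition of the k x k matrix
    # (1/(k-1)) Zp^T H^T Ro^{-1} H Zp = C Gamma C^T.
    M = Zp.T @ H.T @ np.linalg.solve(Ro, H @ Zp) / (k - 1)
    Gamma, C = np.linalg.eigh(M)            # symmetric positive semi-definite
    Gamma = np.clip(Gamma, 0.0, None)       # clip tiny negative round-off
    S = C @ np.diag(1.0 / np.sqrt(1.0 + Gamma)) @ C.T
    return za[:, None] + Zp @ S             # analysis ensemble

# Tiny smoke test: N = 3 state, n = 2 observed components, k = 5 members.
rng = np.random.default_rng(0)
Zf = rng.normal(size=(3, 5))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(etkf_analysis(Zf, H, 0.1 * np.eye(2), np.array([0.5, -0.2])).shape)
```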
A new forecast Z(t_{i+1} − ǫ) is then obtained by propagating Z_a with the full nonlinear dynamics (2) to the next time of observation. The numerical results presented later in Sections 4 and 5 are obtained with this method.
In the next Section we will determine how the error covariance R_w used in the Kalman filter is linked to the variance A_clim of the pseudo-variables.
3. Derivation of the variance limiting Kalman filter
One may naively believe that the error covariance of the pseudo-observables R_w is determined by the target variance of the pseudo-observables A_clim simply by setting R_w = A_clim. In the following we will see that this is not true, and that the expression for R_w which ensures that the variance of the pseudo-observables in the analysis is limited from above by A_clim involves all error covariances.
We formulate the Kalman filter as a minimization problem of a cost function (e.g. Kalnay (2002)). The cost function for one analysis step as described in Section 2.1, with a given background z_f and associated error covariance P_f, is typically written as

J(z) = (1/2) (z − z_f)^T P_f^{−1} (z − z_f) + (1/2) (y^o − H z)^T R_o^{−1} (y^o − H z) + (1/2) (a_clim − h z)^T R_w^{−1} (a_clim − h z), (8)

where z is the state variable at one observation time t_i = i∆t_obs. Note that the part involving the pseudo-observables corresponds to the notion of weak constraints in variational data assimilation (Sasaki, 1970; Zupanski, 1997; Neef et al., 2006). The analysis step of the data assimilation procedure consists of finding the critical point of this cost function. The thereby obtained analysis z = z̄_a and the associated variance P_a are then subsequently propagated to the next observation time t_{i+1} to yield z_f and P_f at the next time step, at which a new analysis step can be performed. The equation for the critical point with ∇_z J(z) = 0 is readily evaluated to be

P_f^{−1} (z − z_f) − H^T R_o^{−1} (y^o − H z) − h^T R_w^{−1} (a_clim − h z) = 0, (9)

and yields (3) for the analysis mean z̄_a, and (5) for the analysis covariance P_a with Kalman gain matrices given by (4).
To control the variance of the unresolved pseudo-observables a_clim = h z̄ we set

h P_a h^T = A_clim.

Introducing

P^{−1} = P_f^{−1} + H^T R_o^{−1} H, (10)

and upon applying the Sherman-Morrison-Woodbury formula (see for example Golub and Loan (1996)), we obtain

R_w^{−1} = A_clim^{−1} − (h P h^T)^{−1}, (11)

which is yet again a reciprocal addition formula for variances. Note that the naive expectation that R_w = A_clim is true only for P_f → ∞, but is not generally true. For sufficiently small background error covariance P_f, the error covariance R_w as defined in (11) is not positive semi-definite. In this case the information given by the pseudo-observables has to be discarded. In the language of variational data assimilation the criterion of positive definiteness of R_w^{−1} determines whether the weak constraint is switched on or off. To determine those eigendirections for which the statistical information available can be incorporated, we diagonalize R_w^{−1} = V D V^T and define D̃ with D̃_ii = D_ii for D_ii ≥ 0 and D̃_ii = 0 for D_ii < 0. The modified R_w^{−1} = V D̃ V^T then uses information of the pseudo-observables only in those directions which potentially allow for improvement of the analysis. Noting that P denotes the analysis covariance of an ETKF (with R_w^{−1} = 0), we see that equation (11) states that the variance constraint switches on for those eigendirections whose corresponding singular eigenvalues of h P h^T are larger than those of A_clim. Hence the proposed VLKF as defined here incorporates the climatic information of the unresolved variables in order to restrict the posterior error covariance of those pseudo-observables to lie below their climatic variance and to drive the mean towards their climatological mean.
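Assuming the reconstruction of (10) and (11) above, the computation of the modified R_w^{−1}, including the eigendirection truncation, fits in a few lines of NumPy. The sketch assumes a full-rank forecast covariance; in an ensemble filter with k ≤ N the explicit inverses would have to be replaced by suitable pseudo-inverses.

```python
import numpy as np

def variance_limiting_Rw_inv(Pf, H, Ro, h, A_clim):
    """Modified pseudo-observation precision Rw^{-1} of the VLKF (sketch).
    Implements Rw^{-1} = A_clim^{-1} - (h P h^T)^{-1} with
    P^{-1} = Pf^{-1} + H^T Ro^{-1} H, zeroing the eigendirections in which
    the result would not be positive semi-definite."""
    P = np.linalg.inv(np.linalg.inv(Pf) + H.T @ np.linalg.solve(Ro, H))
    hPh = h @ P @ h.T
    Rw_inv = np.linalg.inv(A_clim) - np.linalg.inv(hPh)
    # The constraint is switched on only where hPh^T exceeds A_clim.
    D, V = np.linalg.eigh((Rw_inv + Rw_inv.T) / 2)   # symmetrize for safety
    return V @ np.diag(np.clip(D, 0.0, None)) @ V.T

# Example: 2 pseudo-observables with unit climatic variance, inflated forecast.
Pf = np.diag([0.5, 3.0, 4.0])
H = np.array([[1.0, 0.0, 0.0]])
Ro = np.array([[0.2]])
h = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(variance_limiting_Rw_inv(Pf, H, Ro, h, np.eye(2)))
```

In this example the pseudo-observed variances (3 and 4) exceed the climatic value 1, so both eigendirections survive the truncation and the constraint is fully active.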
4. Analytical linear toy model
In this Section we study the VLKF for the following coupled linear skew product system for two oscillators:

dx = (A − Γ_x) x dt + Λ y dt + σ_x dW_t,
dy = (B − Γ_y) y dt + σ_y dB_t, (12)

where A, B and Λ are all skew-symmetric, σ_{x,y} and Γ_{x,y} are all symmetric, and dW_t and dB_t are independent two-dimensional Brownian processes. We assume here for simplicity that

Γ_x = γ_x I, Γ_y = γ_y I, σ_x = σ_x I, σ_y = σ_y I,

with the identity matrix I, and that A and B are proportional to the skew-symmetric matrix

J = (0 1; −1 0).

Note that our particular choice for the matrices implies R_w = R_w I.
The system models two noisy coupled oscillators, x and y. We assume that we have access to observations of the variable x at discrete observation times t_i = i∆t_obs, but have only statistical information about the variable y. We assume knowledge of the climatic mean µ_clim and the climatic covariance σ²_clim of the unobserved variable y. The noise is of Ornstein-Uhlenbeck type (Gardiner, 2003), and may represent either model error or parametrize highly chaotic nonlinear dynamics. Without loss of generality, the coupling is chosen such that the y-dynamics drives the x-dynamics but not vice versa. The form of the coupling is not essential for our argument, and it may be oscillatory or damping with Λ = λI. We write this system in the more compact form

dz = (M − Γ + C) z dt + σ dU_t

for z = (x, y), with M = diag(A, B), Γ = diag(Γ_x, Γ_y), σ = diag(σ_x, σ_y), the coupling matrix C containing Λ in its upper right block, and U = (W, B). The solution of (12) can be obtained using Itô's formula and, introducing the propagator L(t) = exp((M − Γ + C) t), which commutes with σ for our choice of the matrices, is given by

z(t) = L(t) z(0) + ∫_0^t L(t − s) σ dU_s,

with covariance

Σ(t) = ∫_0^t L(s) σ σ^T L^T(s) ds.

The climatic mean µ_clim ∈ R^4 and covariance matrix Σ_clim ∈ R^{4×4} are then obtained in the limit t → ∞ as

µ_clim = 0, Σ_clim = ∫_0^∞ L(s) σ σ^T L^T(s) ds. (13)

In order for the stochastic process (12) to have a stationary density and for Σ(t) to be a positive definite covariance matrix for all t, the coupling has to be sufficiently small with λ² < 4γ_x γ_y. Note that the skew product nature of the system (12) is not special in the sense that a non-skew product structure where x couples back to y would simply lead to a renormalization of C. However, it is pertinent to mention that although in the actual dynamics of the model (12) there is no back-coupling from x to y, the Kalman filter generically introduces back-coupling of all variables through the inversion of the covariance matrices (cf. (5)).
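Numerically, the climatic covariance (13) of this linear SDE is most easily obtained as the solution of the continuous Lyapunov equation (M − Γ + C) Σ_clim + Σ_clim (M − Γ + C)^T + σσ^T = 0, and the finite-time covariance follows from the identity Σ(t) = Σ_clim − L(t) Σ_clim L^T(t). The sketch below demonstrates this with illustrative parameter values of our own choosing (not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Illustrative parameters; lam**2 < 4*gx*gy ensures a stationary density.
gx, gy, lam, sx, sy = 0.5, 0.4, 0.3, 1.0, 0.8
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))

F = np.block([[J - gx * I2, lam * I2],      # drift M - Gamma + C:
              [Z2, J - gy * I2]])           # y drives x, not vice versa
Q = np.block([[sx**2 * I2, Z2],             # noise covariance rate sigma sigma^T
              [Z2, sy**2 * I2]])

# Stationary covariance: F Sig + Sig F^T + Q = 0.
Sig_clim = solve_continuous_lyapunov(F, -Q)

# Finite-time covariance Sigma(t), checked against the integral form.
t = 1.0
L = expm(F * t)
Sig_t = Sig_clim - L @ Sig_clim @ L.T
print(np.diag(Sig_clim), np.diag(Sig_t))
```

The identity used for Σ(t) follows from differentiating Σ_clim − L(t) Σ_clim L^T(t) and noting that it satisfies the same covariance ODE with Σ(0) = 0.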
We will now investigate the variance limiting Kalman filter for this toy model. In particular we will first analyze under what conditions R_w is positive definite and the variance constraint will be switched on, and second we will analyze when the VLKF yields a skill improvement when compared to the standard ETKF. We start with the positive definiteness of R_w. When calculating the covariance of the forecast in an ensemble filter we need to interpret the solution of the linear toy model (12) as

z_j(t_{i+1}) = L(∆t_obs) z_j(t_i) + ∫_{t_i}^{t_{i+1}} L(t_{i+1} − s) σ dU_s,

where z_j(t_{i+1}) is the forecast of ensemble member j at time t_{i+1} = t_i + ∆t_obs = (i + 1)∆t_obs before the analysis, propagated from its initial condition z_j(t_i) = z̄^a(t_i) + ξ_j with ξ_j ∼ N(0, P^a(t_i)) at the previous analysis. The equality here is in distribution only, i.e. members of the ensemble are not equal in a pathwise sense as their driving Brownian motions will be different, but they will have the same mean and variance. The covariance of the forecast can then be obtained by averaging with respect to the ensemble and with respect to realizations of the Brownian motion, and is readily computed as

P^f(t_{i+1}) = L(∆t_obs) P^a(t_i) L^T(∆t_obs) + Σ(∆t_obs),   (14)

where L^T(t) = exp((−M − Γ + C^T) t) denotes the transpose of L(t). The forecast covariance of an ensemble with spread P^a is typically larger than the forecast covariance Σ of one trajectory with a non-random initial condition z_0. The difference is most pronounced for small observation intervals, when the covariance of the ensemble P^f will be close to the initial analysis covariance P^a, whereas a single trajectory will not have acquired much variance Σ. In the long-time limit, both P^f and Σ will approach the climatic covariance Σ_clim (cf. (13)).
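In code, one analysis-to-forecast covariance propagation step under (14) may be sketched as follows; SciPy's matrix exponential stands in for the propagator L, and the model-noise covariance Σ(∆t_obs) is assumed to have been precomputed, e.g. by quadrature of (13). This is a sketch under the reconstructed form of (14), not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

def forecast_covariance(P_a, F, Sigma_dt, dt_obs):
    """One covariance propagation step, cf. (14):
    P^f = L P^a L^T + Sigma(dt_obs), with propagator L = expm(F * dt_obs),
    where F = M - Gamma + C is the drift matrix of (12)."""
    L = expm(F * dt_obs)
    return L @ P_a @ L.T + Sigma_dt
```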
In the following we restrict ourselves to the limit of small observation intervals ∆t obs ≪ 1.
In this limit, we can approximate P^a(t_i) ≈ P^f(t_{i+1}) and explicitly solve for the forecast covariance matrix P^f using (14). This assumption requires that the analysis is stationary in the sense that the filter has lost the memory of the initial background covariance provided by the user to start up the analysis. We have verified the validity of this assumption for small observation intervals and for a range of initial background variances. This assumption renders (14) a matrix equation for P^f. To derive analytical expressions we further Taylor-expand the propagator L(∆t_obs) and the covariance Σ(∆t_obs) for small observation intervals ∆t_obs. This is consistent with our stationarity assumption P^a(t_i) ≈ P^f(t_{i+1}). The very lengthy analytical expression for P^f(t_{i+1}) can be obtained with the aid of Mathematica (Mathematica Version 7.0, 2008), but is omitted from this paper. In filtering one often uses variance inflation (Anderson and Anderson, 1999) to compensate for the loss of ensemble variance due to finite size effects, sampling errors and the effects of nonlinearities. We do so here by introducing an inflation factor δ > 1 multiplying the forecast variance P^f. Having determined the forecast covariance matrix P^f we are now able to write down an expression for the error covariance of the pseudo-observables R_w. As before we limit the variance and the mean of our pseudo-observable y to be A_clim = σ²_clim and a_clim = µ_clim. Then, upon using the definitions (10) and (11), we find that the error covariance for the pseudo-observables R_w is positive definite provided the observation interval ∆t_obs is sufficiently large. In particular, in the limit R_o → ∞, we find a critical value of ∆t_obs, given by condition (16), above which the variance constraint will be switched on. Note that for δ > 1 the critical ∆t_obs above which R_w is positive definite can be negative, implying that the variance constraint will be switched on for all (positive) values of ∆t_obs. If no inflation is applied, i.e. δ = 1, the condition simplifies considerably. Because 4γ_xγ_y − λ² > 0, the critical observation interval ∆t_obs is smaller for non-trivial inflation with δ > 1 than if no variance inflation is incorporated. This is intuitive, because the variance inflation will increase the number of instances with |hP^a h^T| > |σ²_clim|. We have numerically verified that inflation is beneficial for the variance constraint to be switched on. It is pertinent to mention that for sufficiently large coupling strength λ or sufficiently small values of γ_x, Equation (16) may not be consistent with the assumption of small observation intervals ∆t_obs ≪ 1. We have checked analytically that the derivative of R_w^{-1} is positive at the critical observation interval ∆t_obs, indicating that the frequency of occurrence of the variance constraint being switched on increases monotonically with the observation interval ∆t_obs, in the limit of small ∆t_obs. This has been verified numerically with the application of the VLKF to (12) and is illustrated in Figure 1. At this stage it is important to mention effects due to finite size ensembles. For large observation intervals ∆t_obs → ∞ and large observational noise R_o → ∞, we have P^f → Σ_clim, and our analytical formulae would indicate that the variance constraint should not be switched on (cf. (10) and (11)). However, in numerical simulations of the Kalman filter we observe that for large observation intervals the variance constraint is switched on for almost all analysis times.
This is a finite ensemble size effect, due to the ensemble estimate of the forecast variance adopting values larger than the climatic value, implying positive definite values of R_w. The closer the ensemble mean approaches the climatic variance, the more likely fluctuations will push the forecast covariance above the climatic value. However, we observe that the actual eigenvalues of R_w decrease for ∆t_obs → ∞ and for ensemble size k → ∞.
The analytical results obtained above are for the ideal case with k → ∞. As mentioned in the introduction, in sparse observation networks finite ensemble sizes cause the overestimation of error covariances (Liu et al., 2008;Whitaker et al., 2009), implying that R w is positive definite and the variance limiting constraint will be switched on. This finite size effect is illustrated in Figure 2, where the maximal singular value of hP a h T , averaged over 50 realizations, is shown for ETKF as a function of ensemble size k for different observational noise variances. Here we used no inflation, i.e. δ = 1, in order to focus on the effect of finite ensemble sizes. It is clearly seen that the projected covariance decreases for large enough ensemble sizes. The variance will asymptote from above to hΣ clim h T in the limit k → ∞. For sufficiently small observational noise, the filter corrects too large forecast error covariances by incorporating the observations into the analysis leading to a decrease in the analysis error covariance.
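This finite-size effect can be reproduced with a few lines of NumPy. The covariance below is an arbitrary stand-in for Σ_clim (not the toy model's actual climatology), and we sample directly from the climatology rather than running the filter, so only the qualitative decay with k is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma_clim = np.diag([1.0, 1.0, 0.5, 0.5])       # illustrative 4x4 climatology
h = np.zeros((2, 4)); h[0, 2] = h[1, 3] = 1.0    # projection onto y-variables

for k in (5, 10, 20, 80, 320):
    vals = []
    for _ in range(50):                          # 50 realizations, as in Figure 2
        Z = rng.multivariate_normal(np.zeros(4), Sigma_clim, size=k)
        P = np.cov(Z, rowvar=False)              # sample covariance of k members
        vals.append(np.linalg.eigvalsh(h @ P @ h.T).max())
    # the mean maximal singular value decays from above towards
    # the true projected value h Sigma_clim h^T = 0.5
    print(k, np.mean(vals))
```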
However, the fact that the variance constraint is switched on does not necessarily imply that the variance limiting filter will perform better than the standard ETKF. In particular, for very large observation intervals ∆t_obs, when the ensemble will have acquired the climatic mean and covariance, VLKF and ETKF will have equal skill. We now turn to the question under what conditions VLKF is expected to yield improved skill compared to standard ETKF. To this end we introduce as skill indicator the (squared) RMS error between the truth z^truth and the ensemble mean analysis z̄^a (the square root is left out here for convenience of exposition),

E = E_t E_dW [(z^truth − z̄^a)^T G (z^truth − z̄^a)].

Here E_t denotes the temporal average over analysis cycles, and E_dW denotes averaging over different realizations of the Brownian paths dW. We introduced the norm ⟨a, b⟩_G = a^T G b to investigate the overall skill using G = I, the skill of the observed variables using G = H^T H, and the skill of the pseudo-observables using G = h^T h.

Figure 2: Maximal singular value of hP^a h^T for (12) using standard ETKF without inflation, with R_o = 0.25 (dashed curve) and R_o = 2 (solid curve). Parameters are σ_x = σ_y = γ_x = γ_y = 1, λ = 0.2, ∆t_obs = 1, for which the climatic variance is hΣh^T ≈ 0.505. We used 50 realizations for the averaging.

Using the Kalman filter equation (3) for the analysis mean with K_w = 0, we obtain the expression (19) for the ETKF. Solving the linear toy-model (12) for each member of the ensemble and then performing an ensemble average, we obtain z̄^a; substituting a particular realization of the truth z^truth(t), and performing the average over the realizations, we finally arrive at (19), which involves the mutually independent normally distributed random variables ξ_{t_i} and η_{t_i}. We have numerically verified the validity of our assumptions on the statistics of ξ_{t_i} and η_{t_i}. Note that for ξ_{t_i} to have mean zero and variance P^a(t_i), filter divergence has to be excluded.
Similarly we obtain for the VLKF the expression (21), which involves a further normally distributed random variable ζ_{t_i}, where we used that a_clim = 0. Note that, using our stationarity assumption to calculate P^f, we have ζ_{t_i} equal in distribution to (1/k) ξ_{t_{i−1}}. Again we have numerically verified the statistics for ζ_{t_i}. The expression (21) for the RMS error of the VLKF can be considerably simplified. Since for large ensemble sizes k → ∞ the random variable ζ_{t_i} becomes a deterministic variable with mean zero, we may neglect all terms containing ζ_{t_i}; this yields the simplified expression (23). For convenience we have omitted superscripts on K_o and ξ_{t_{i−1}} in (19) and (23) to denote whether they have been evaluated for ETKF or VLKF. But note that, although the expressions (19) and (23) are formally the same, one generally has E_ETKF ≠ E_VLKF, because the analysis covariance matrices P^a are calculated differently for the two methods, leading to different gain matrices K_o and different statistics of ξ_t in (19) and (23).
We can now estimate the skill improvement defined as S = E ETKF /E VLKF with values of S > 1 indicating skill improvement of VLKF over ETKF. We shall choose G = h T h from now on, and concentrate on the skill improvement for the pseudo-observables.
Recalling that E_ETKF ≈ E_VLKF for large observation intervals ∆t_obs, we expect skill improvement for small ∆t_obs. We perform again a Taylor expansion of the skill improvement S in small ∆t_obs. The resulting analytical expressions are very lengthy and cumbersome, and are therefore omitted for convenience. We found that there is indeed skill improvement S > 1 in the limit of either γ_y → ∞ or γ_x → 0. This suggests that the skill is controlled by the ratio of the time scales of the observed and the unobserved variables. If the time scale of the pseudo-observables is much smaller than that of the observed variables, VLKF will exhibit superior performance over ETKF. This can be intuitively understood since 1/(2γ_y) is the time scale on which equilibrium (i.e. the climatic state) is reached for the pseudo-observables y. If the pseudo-observables have relaxed towards equilibrium within the observation interval ∆t_obs, and their variance has acquired the climatic covariance hP^a h^T = σ²_clim, we expect the variance limiting to be beneficial.
Furthermore, we found analytically that the skill improvement increases with increasing observational noise R obs (at least in the small observation interval approximation). In particular we found that ∂S/∂R obs > 0 at R obs = 0. The increase of skill with increasing observational noise can be understood phenomenologically in the following way. For R obs = 0 the filter trusts the observations, which as a time series carry the climatic covariance. This implies that there is a realization of the Wiener process such that the analysis can be reproduced by a model with the true values of γ x,y and σ x,y . Similarly, this is the case in the other extreme R obs → ∞, where the filter trusts the model. For 0 ≪ R obs ≪ ∞ the analysis reproducing system would have a larger covariance σ x than the true value. This slowed down relaxation towards equilibrium of the observed variables can be interpreted as an effective decrease of the damping coefficient γ x . This effectively increases the time scale separation between the observed and the unobserved variables, which was conjectured above to be beneficial for skill improvement.
As expected, the skill improves with increasing inflation factor δ > 1. The improvement is exactly linear for ∆t obs → 0. This is due to the variance inflation leading to an increase of instances with hP a h T > σ 2 clim , for which the variance constraint will be switched on.
In Figure 3 we present a comparison of the analytical results (19) and (23) with results from a numerical implementation of ETKF and VLKF for varying damping coefficient γ_y. Since γ_y controls the time scale of the y-process, we cannot use the same ∆t_obs for a wide range of γ_y without violating the small observation interval approximation used in our analytical expressions. We choose ∆t_obs as a function of γ_y such that the singular values of the first-order approximation of the forecast variance remain a good approximation for this ∆t_obs. For Figure 3 we have ∆t_obs ∈ (0.005, 0.01) to preserve the validity of the Taylor expansion. Besides the increase of the skill with γ_y, Figure 3 shows that the value of S increases significantly for larger values of the inflation factor δ > 1.
We will see in the next Section that the results we obtained for the simple linear toy model (12) hold as well for a more complicated higher-dimensional model, where the dynamic Brownian driving noise is replaced by nonlinear chaotic dynamics.

Figure 3: Dependency of the skill improvement S of VLKF over ETKF on the damping coefficient γ_y of the pseudo-observable. We show a comparison of direct numerical simulations (open circles) with analytical results using (21) (continuous curve) and the approximation of large ensemble size (23) (dashed curve). Parameters are γ_x = 1, λ = 2, σ_x = σ_y = 1, R_obs = 0.25. We used an ensemble size of k = 20 and averaged over 1000 realizations. Left: no inflation with δ = 1. Right: inflation with δ = 1.02².
Numerical results for the Lorenz-96 system
We illustrate our method with the Lorenz-96 system (Lorenz, 1996) and show its usefulness for sparse observations in improving the analysis skill and stabilizing the filter. In (Lorenz, 1996) Lorenz proposed the following model for the atmosphere,

ż_i = (z_{i+1} − z_{i−2}) z_{i−1} − z_i + F,   (24)

with z = (z_1, ···, z_D) and periodic z_{i+D} = z_i. This system is a toy model for midlatitude atmospheric dynamics, incorporating linear damping, forcing and nonlinear transport. The dynamical properties of the Lorenz-96 system have been investigated, for example, in (Lorenz and Emanuel, 1998; Orrell and Smith, 2003; Gottwald and Melbourne, 2005), and in the context of data assimilation it was investigated in, for example, (Ott et al., 2004; Fisher et al., 2005; Harlim and Majda, 2010). We use D = 40 modes and set the forcing to F = 8. These parameters correspond to a strongly chaotic regime (Lorenz, 1996). For these parameters one unit of time corresponds to 5 days in the earth's atmosphere, as calculated by calibrating the e-folding time of the asymptotic growth rate of the most unstable mode with a time scale of 2.1 days (Lorenz, 1996). Assuming the length of a midlatitude belt to be about 30,000 km, the spatial scale corresponding to a discretization of the circumference of the earth along the midlatitudes into D = 40 grid points corresponds to a spacing between adjacent grid points z_i of approximately 750 km, roughly equalling the Rossby radius of deformation at midlatitudes. We estimated from simulations the advection velocity to be approximately 10.4 m/sec, which compares well with typical wind velocities in the midlatitudes. In the following we will investigate the effect of using VLKF on improving the analysis skill when compared to a standard ensemble transform Kalman filter, and on stabilizing the filter and avoiding blow-up as discussed in (Ott et al., 2004; Kepert, 2004; Harlim and Majda, 2010). We perform twin experiments using a k = 41-member ETKF and VLKF with the same truth time series, the same set of observations and the same initial ensemble. We have chosen an ensemble with k > D in order to eliminate the effect that a finite-size ensemble can only fit as many observations as the number of its ensemble members (Lorenc, 2003). Here we want to focus on the effect of limiting the variance.
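The right-hand side of (24) can be written in a few lines of vectorized Python; this generic sketch is not tied to any particular filter implementation.

```python
import numpy as np

def lorenz96_rhs(z, F=8.0):
    """Right-hand side of the Lorenz-96 model (24),
    dz_i/dt = (z_{i+1} - z_{i-2}) * z_{i-1} - z_i + F,
    with the periodic boundary conditions handled by np.roll."""
    return (np.roll(z, -1) - np.roll(z, 2)) * np.roll(z, 1) - z + F
```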
The system is integrated using the implicit mid-point rule (see for example Leimkuhler and Reich (2005)) to a time T = 30 with a time step dt = 1/240. The total time of integration corresponds to an equivalent of 150 days, and the integration timestep dt corresponds to half an hour. We measured the approximate climatic mean and variance, µ clim and σ 2 clim , respectively, via a long time integration over a time interval of T = 2000 which corresponds roughly to 27.5 years. Because of the symmetry of the system (24), the mean and the standard deviation are the same for all variables z i , and are measured to be σ clim = 3.63 and µ clim = 2.34.
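A minimal sketch of one implicit mid-point step is shown below. The fixed-point iteration is our own choice of nonlinear solver (the paper does not specify one); it is adequate for the small time step dt = 1/240 used here. With rhs = lorenz96_rhs from the sketch above, repeated application of this step generates the trajectory.

```python
def implicit_midpoint_step(z, dt, rhs, n_iter=4):
    """One step of the implicit mid-point rule,
    z_{n+1} = z_n + dt * rhs((z_n + z_{n+1}) / 2),
    solved approximately by fixed-point iteration from an Euler predictor."""
    z_new = z + dt * rhs(z)                    # explicit Euler predictor
    for _ in range(n_iter):
        z_new = z + dt * rhs(0.5 * (z + z_new))
    return z_new
```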
The initial ensemble at t = 0 is drawn from an ensemble with variance σ 2 clim ; the filter was then subsequently spun up for sufficiently many analysis cycles to ensure statistical stationarity. We assume Gaussian observational noise of the order of 25% of the climatological standard deviation σ clim , and set the observational error covariance matrix R o = (0.25σ clim ) 2 I. We find that for larger observational noise levels the variance limiting correction (11) is used more frequently. This is in accordance with our finding in the previous section for the toy model.
We study first the performance of the filter and its dependence on the time between observations ∆t obs and the proportion of the system observed 1/N obs . N obs = 2 means only every second variable is observed, N obs = 4 only every fourth, and so on.
We have used a constant variance inflation factor δ = 1.05 for both filters. We note that the optimal inflation factor at which the RMS error E is minimal is different for VLKF and ETKF. For ∆t_obs = 5/120 (5 hours) and N_obs = 4 we find that δ = 1.06 produces minimal RMS errors for VLKF and δ = 1.04 produces minimal RMS errors for ETKF. For δ < 1.04 filter divergence occurs in ETKF, so we chose δ = 1.05 as a compromise between controlling filter divergence and minimizing the RMS errors of the analysis. Figure 4 shows a sample analysis using ETKF with N_obs = 5, ∆t_obs = 0.15 and R_o = (0.25σ_clim)² I for an arbitrary unobserved component (top panel) and an arbitrary observed component (bottom panel) of the Lorenz-96 model. While the figure shows that the analysis (continuous grey line) tracks the truth (dashed line) reasonably well for the observed component, the analysis is quite poor for the unobserved component. Substantial improvements are seen for the VLKF when we incorporate information about the variance of the unobserved pseudo-observables, as can be seen in Figure 5. We set the mean and the variance of the pseudo-observables to be the climatic mean and variance, a_clim = µ_clim e and A_clim = σ²_clim I, to filter the same truth with the same observations as used to produce Figure 4. For these parameters (and in this realization) the quality of the analysis in both the observed and unobserved components is improved. As for the linear toy model (12), finite ensemble sizes exacerbate the overestimation of error covariances. In Figure 6 the maximal singular value of hP^a h^T, averaged over 150 realizations, is shown for ETKF as a function of ensemble size k. Again we use no inflation, i.e. δ = 1, in order to focus on the effect of finite ensemble sizes. The projected covariance clearly decreases for large enough ensemble sizes. However, here the limit of the maximal singular value of hP^a h^T for k → ∞ underestimates the climatic variance σ²_clim = 13.18. To quantify the improvement of the VLKF filter we measure the site-averaged RMS error E between the truth z^truth and the ensemble mean z̄^a over the L = ⌊T/∆t_obs⌋ analysis cycles, where the average is taken over 500 different realizations, and D_o ≤ D denotes the length of the vectors z̄^a.
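One plausible reading of this site-averaged error, with arrays of shape (L, D_o) for the truth and analysis trajectories, is sketched below; the exact normalization of the paper's formula is not recoverable from the text, so this is indicative only.

```python
import numpy as np

def site_averaged_rmse(truth, analysis):
    """Site-averaged RMS error over L = floor(T / dt_obs) analysis cycles;
    truth and analysis are arrays of shape (L, D_o)."""
    per_cycle = np.sqrt(np.mean((truth - analysis) ** 2, axis=1))
    return per_cycle.mean()
```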
In Table 1 we display E for the ETKF and VLKF, respectively, as a function of N_obs and ∆t_obs. The increased RMS error for larger observation intervals ∆t_obs can be linked to the increased variance of the chaotic nonlinear dynamics generated during longer integration times between analyses. Figure 7 shows the average proportional improvement of the VLKF over ETKF, obtained from the values of Table 1. Figure 7 shows that the skill improvement is greatest when the system is observed frequently. For large observation intervals ∆t_obs, ETKF and VLKF yield very similar RMS errors. We checked that for large observation intervals ∆t_obs both filters still produce tracking analyses. Note that the observation intervals ∆t_obs considered here are all much smaller than the e-folding time of 2.1 days. The most significant improvement occurs when one quarter of the system is observed, that is for N_obs = 4, and for small observation intervals ∆t_obs. The dependency of the skill of VLKF on the observation interval is consistent with our analytical findings in Section 4. We have tested that the increase in skill as depicted in Figure 7 is not sensitive to incomplete knowledge of the statistical properties of the pseudo-observables by perturbing A_clim and a_clim and then monitoring the change in RMS error. We performed simulations where we drew A_clim and a_clim independently from uniform distributions (0.9 A_clim, 1.1 A_clim) and (0.9 a_clim, 1.1 a_clim). We found that for parameters N_obs = 2, 4, 6, η = 0.05, 0.25, 0.5 (with η measuring the amount of the climatic variance used through R_o = (η σ_clim)² I), and ∆t_obs = 0.025, 0.05, 0.25 (corresponding to 3, 6 and 30 hours), over a number of simulations there was on average no more than a 7% difference in the analysis mean and the singular values of the covariance matrices between the control run, where A_clim = σ²_clim I and a_clim = µ_clim e are used, and runs where A_clim and a_clim are simultaneously perturbed.
An interesting question is how the relative skill improvement is distributed over the observed and unobserved variables. This is illustrated in Figures 8 and 9. In Figure 8 we show the proportional skill improvement of VLKF over ETKF for the observed variables and the pseudo-observables, respectively. Figure 8 shows that the skill improvement is larger for the pseudo-observables than for the observables, which is to be expected. In Figure 9 we show the actual RMS error E for ETKF and VLKF for the observed variables and the pseudo-observables. The skill improvement is better for the unobserved pseudo-observables for all observation intervals ∆t_obs. In contrast, VLKF exhibits an improved skill for the observed variables either for small observation intervals for all values of N_obs, or for all (sufficiently small) observation intervals when N_obs = 4, 5. We have, however, checked that the analysis is still tracking the truth reasonably well, and that the discrepancy with ETKF is not due to the analysis no longer tracking the truth. As expected, the RMS error asymptotes for large observation intervals ∆t_obs to the standard deviation of the observational noise 0.25 σ_clim ≈ 0.91 for the observables, and to the climatic standard deviation σ_clim = 3.63 for the pseudo-observables (not shown), albeit slightly reduced for small values of N_obs due to the impact of the surrounding observed variables (see Figure 10).
Note that there is an order of magnitude difference between the RMS errors for the observables and the pseudo-observables for large N_obs (cf. Figure 9). This suggests that the information of the observed variables does not travel too far away from the observational sites. However, the nonlinear coupling in the Lorenz-96 system (24) allows information of the observed components to influence the error statistics of the unobserved components. Therefore the RMS errors of pseudo-observables adjacent to observables are better than those far away from observables. Moreover, the specific structure of the nonlinearity introduces a translational symmetry-breaking (one may think of the nonlinearity as a finite difference approximation of an advection term zz_x), which causes those pseudo-observables to the right of an observable to have a more reduced RMS error than those to the left of an observable. This is illustrated in Figure 10, where the RMS error is shown for each site when only one site is observed. The advective time scale of the Lorenz-96 system is much smaller than ∆t_obs, which explains why the skill is not equally distributed over the sites, and why, especially for large values of N_obs, we observe a big difference between the site-averaged skills of the observed and unobserved variables.
In Figure 11 we show how the RMS error behaves as a function of the observational noise level. We see that for N_obs = 4 VLKF always has a smaller RMS error than ETKF. The results confirm again the results from our analysis of the toy model in Section 4, that VLKF yields best performance for small observation intervals ∆t_obs and for large noise levels. For large observation intervals ETKF and VLKF perform equally well, since then the chaotic model dynamics will have led the ensemble to acquire the climatic variance during the time of propagation.

Figure 6: Maximal singular value of hP^a h^T as a function of ensemble size k, using standard ETKF without inflation. All other parameters are as in Figure 4. We used 150 realisations for the averaging.
In (Ott et al., 2004) it was observed that if not all variables z_i are observed the Kalman filter diverges, exhibiting blow-up. Similar behaviour was observed in (Harlim and Majda, 2010). In (Ott et al., 2004) the authors suggested that the sparsity of observations leads to an inhomogeneous background error, which causes an underestimation of the error covariance. We study here this catastrophic blow-up divergence (as opposed to filter divergence, when the analysis diverges from the truth) and its dependence on the time between observations ∆t_obs and the proportion of the system observed 1/N_obs. We note that blow-up divergence appears only in the case of sufficiently small observational noise and moderate values of ∆t_obs. Once ∆t_obs is large enough (in fact, larger than the e-folding time corresponding to the most unstable Lyapunov exponent, in our case 2.1 days) we notice that no catastrophic divergence occurs, independent of N_obs. This probably occurs because for large observation intervals the ensemble acquires enough variance through the nonlinear propagation. We prescribe Gaussian observational noise of the order of 5% of the climatological standard deviation σ_clim, and set the observational error covariance matrix to R_o = (0.05 σ_clim)² I. The initial ensemble at t = 0 is drawn again from an ensemble with variance σ²_clim. To study the performance of VLKF when blow-up occurs in ETKF simulations, we count the number N_b of blow-ups that occur before a total of 100 simulations have terminated without blow-up. The proportion of blow-ups for the respective filter is then given by N_b/(N_b + 100). We tabulate this proportion in Table 2 for the ETKF and VLKF respectively, and the proportional improvement in Table 3. The 'x's' in the table represent cases where no successful simulations could be obtained due to blow-up.
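This bookkeeping is straightforward to script; in the sketch below, run_once is a hypothetical callable (not from the paper) that performs one twin experiment and returns True if the run blew up.

```python
def blowup_proportion(run_once):
    """Count blow-ups N_b until 100 simulations terminate without blow-up,
    and return the proportion N_b / (N_b + 100)."""
    n_blowups, n_ok = 0, 0
    while n_ok < 100:
        if run_once():          # True if this filter run blew up
            n_blowups += 1
        else:
            n_ok += 1
    return n_blowups / (n_blowups + 100)
```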
Both filters suffer from severe filter instability for N obs = 6, i.e. for very sparse observational networks, at small observation intervals ∆t obs . No blow-up occurs for either filter when every variable is observed. Note the reduction in occurrences of blow-ups for large observation intervals ∆t obs as discussed above. We have checked that for all N obs there is no blow-up for ETKF (and VLKF) for sufficiently large ∆t obs (not shown); the larger N obs the smaller the upper bound of ∆t obs such that no blow-ups occur. Collapse is most prominent for ETKF (and for VLKF, but to a much lesser extent) for larger values of N obs and at intermediate observation intervals which depend on N obs . Tables 2 and 3 clearly show that incorporating information about the pseudo-observables strongly increases the stability of the filter and suppresses blow-up. However, we note that despite the gain in stability VLKF has a skill less than the purely observational skill in the cases when blow-up occurs for ETKF, because the solutions become non-tracking. Further research is under way to improve on this in the VLKF framework.
The fact that incorporating information about the variance of the un-observed variables improves the stability of the filter is in accordance with the interpretation of filter divergence of sparse observational networks provided in (Ott et al., 2004).
Discussion
We have developed a framework to include information about the variance of unobserved variables in a sparse observational network. The filter is designed to control overestimation of error covariances typical in sparse observation networks, and limits the posterior analysis covariance of the unresolved variables to stay below their climatic variance. We have done so in a variational setting and found a relationship between the error covariance of the variance constraint R w and the assumed target variance of the unobserved pseudo-observables A clim .
We illustrated the beneficial effects of the variance limiting filter in improving the analysis skill when compared to the standard ensemble square root Kalman filter. We expect the variance limiting constraint to improve data assimilation for ensemble Kalman filters when finite size effects of too small ensemble sizes overestimate the error covariances, in particular in sparse observational networks. In particular we found that the skill will improve for small observation intervals ∆t obs and sufficiently large observational noise. We found substantial skill improvement for both observed and unobserved variables. These effects can be understood with a simple linear toy model which allows for an analytical treatment. We further established numerically that VLKF reduces the probability of catastrophic filter divergence and improves the stability of the filter when compared to the standard ensemble square root Kalman filter.
We remark that the idea of the variance limiting Kalman filter is not restricted to ensemble Kalman filters but can also be used to modify the extended Kalman filter. However, for the examples we used here the nonlinearities were too strong and the extended Kalman filter did not yield satisfactory results, even in the variance limiting formulation.
The effect of the variance limiting filter to control unrealistically large error covariances of the poorly resolved variables due to finite ensemble sizes may find useful applications. We mention here that the variance constraint is able to adaptively damp unrealistic excitation of ensemble spread in underresolved spatial regions due to inappropriate uniform inflation. This may be an alternative to the spatially adaptive schemes which were recently developed (Anderson, 2007; Li et al., 2009). In addition, it is known that localization of covariance matrices in EnKF leads to imbalance in the analyzed fields (see, e.g., Kepert (2009) for a recent study). Filter localization typically excites unwanted gravity waves which, when uncontrolled, can substantially degrade filter performance. One may construct balance constraints as pseudo-observations and thereby potentially reduce this undesired aspect of covariance localization. As more specific applications, we mention climate reanalysis and data assimilation for the mesosphere. It would be interesting to see how the proposed variance limiting filter can be used in climate reanalysis schemes to deal with the vertical sparsity of observational data and the less dense observation network in the southern hemisphere in the pre-radiosonde era (see Whitaker et al. (2004)). One would need to establish though whether the historical observation intervals ∆t_obs are sufficiently small to allow for a skill improvement. Similarly, it may help to control the dynamically dominant gravity wave activity in the mesosphere as the upper lid of models is pushed further and further up (see for example Polavarapu et al. (2005)). However, a word of caution is required here. In some atmospheric data assimilation problems, it is not at all uncommon to have an ensemble prior variance for certain variables that is significantly larger than the climatological variance, when the atmosphere is locally far away from equilibrium. One relevant example would be the vicinity of strong fronts over the southern ocean. In such a case, it may not be appropriate to limit the variance to the climatological value.
In this work we have studied systems where for sufficiently large observation intervals ∆t_obs the variables acquire their true climatological mean and variance when the model is run. In particular we have not included model error. It would be interesting to see whether the variance limiting filter can help to control model error in the case that the free running model would produce unrealistically large forecast covariances. Usually numerical schemes do underestimate error covariances, but this is often caused by severe divergence damping (Durran, 1999) which is artificially introduced into the model to control unwanted gravity wave activity and to stabilize the numerical scheme. The stabilization may be achieved with a much smaller amount of divergence damping by implementing the variance limiting constraint in the data assimilation procedure. The VLKF would in this case act as an effective adaptive damping scheme, counteracting the model error.
"year": 2011,
"sha1": "29595bca1c63a8470adcb4373b1a163bcc7770de",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1108.5801",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "29595bca1c63a8470adcb4373b1a163bcc7770de",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
Novel investigations in retinoic-acid-induced cleft palate about the gut microbiome of pregnant mice
Introduction: Cleft palate (CP) is one of the most common congenital birth defects in the craniofacial region, and retinoic acid (RA) gavage is the most common method for inducing a cleft palate model. Although several mechanisms have been proposed to illuminate RA-induced cleft palate during embryonic development, these findings are far from sufficient. Many efforts remain to be devoted to studying the etiology and pathogenesis of cleft palate. Recent research is gradually shifting the focus to the effect of retinoic acid on the gut microbiota. However, few reports focus on the relationship between the occurrence of CP in embryos and the gut microbiota. Methods: In our research, we used RA to induce a cleft palate model at E10.5; the feces of 5 RA-treated pregnant mice and 5 control pregnant mice were respectively collected for metagenomic analysis. Results: Compared with the control group, Lactobacillus in the gut microbiome of the RA group was significantly increased. GO, KEGG and CAZy analyses of differentially expressed unigenes demonstrated the most abundant metabolic pathways in the different groups, including lipopolysaccharide biosynthesis and histidine metabolism. Discussion: Our findings indicated that changes in the maternal gut microbiome might affect palatal development, which might be related to changes in Lactobacillus and its metabolite lactate. These results provide a new direction for research into the pathogenesis of CP induced by RA.
Introduction
Cleft palate (CP) is one of the most common congenital birth defects in the craniofacial region, with an average occurrence of 1/1000 newborns around the world (Wang et al., 2019a). It is universally acknowledged that CP has a connection with genetic background and environmental factors. As similarities in palatogenesis between humans and mice have been noted, the mouse model considerably contributes to the study of the etiology of cleft palate in humans (Peng et al., 2020).
Nowadays, many studies have shown that retinoic acid (RA) gavage is the most common method for inducing a cleft palate model other than knockout mice (Abbott and Birnbaum, 1990; Degitz et al., 1998). RA is one of the crucial trace elements in embryonic development, and plays an essential role in the regulation of morphology, cell proliferation and differentiation, and the production of extracellular matrix (Wang and Kirsch, 2002). The proliferation of palatal mesenchymal cells was inhibited by RA at embryonic day (E) 10.5, resulting in cleft palate without apoptosis of palatal epithelial cells (Goulding and Pratt, 1986). Although several mechanisms have been proposed to illuminate RA-induced cleft palate during embryonic development, these findings focus on changes in the embryonic palate. As RA acts first on the pregnant mice and only subsequently influences the embryos, further efforts on the role of RA in pregnant mice in the etiology and pathogenesis of cleft palate are needed. Current research is gradually shifting the focus to the effect of retinoic acid on the gut microbiota. The "microbiota" consists of microbial communities on the mucosal surfaces and lumen of the respiratory tract, gastrointestinal (GI) tract, urinary tract, and reproductive tract. The GI tract has the greatest density of microbiota, defined as the "gut microbiota" (Alipour et al., 2016; Sililas et al., 2021). The gut microbiota is essential for digesting food, producing short-chain fatty acids, synthesizing vitamins, and protecting the mucosa. To a large extent, host metabolism and immune responses are due to the interaction between host cells and the gut microbiota (Quan et al., 2020). Intestinal microbiota imbalance is mainly manifested by an increase in harmful bacteria and a decrease in beneficial bacteria. Microbiota imbalances were associated with several underlying diseases, for instance, metabolic syndrome, allergic diseases, some kinds of cancer, and neurological diseases (Schroeder and Bäckhed, 2016; Wang et al., 2019b; Han et al., 2020). As an important modulator of innate immune cells, RA plays an essential role in defending the intestinal immune system (Czarnewski et al., 2017). Research on Alzheimer's disease has found that RA could influence the gut by regulating the commensal microbiota; the microbiota in turn could interfere with retinoid metabolism and, via the gut-brain axis, with Alzheimer's disease pathology within the brain (Endres, 2019).
However, the dangers of disrupted gut flora are not limited to these. Recent studies found that pregnant women with an imbalance in gut flora might be at risk not only for their own health but also for their fetuses. During pregnancy, the maternal gut environment could fine-tune energy homeostasis, which is a key factor in preventing metabolic syndrome in offspring (Kimura et al., 2020). In addition, the mother's gut flora is also a source of immunity for offspring. Neonatal mice lacking the ability to produce IgG could be protected against enterotoxigenic Escherichia coli infection by the mother's natural IgG antibody to Escherichia coli, which was transmitted through the placenta or breast milk (Zheng et al., 2020). In the past few years, the emergence of metagenomics has made it possible to associate the microflora with genes, thereby better illustrating the mechanisms by which microflora disorders regulate disease (Wang and Jia, 2016). For example, the maternal gut microbiome has been proven to be a vital signal in developing brain neurons through microbially regulated metabolites, which promote the growth of fetal thalamocortical axons (Vuong et al., 2020).
Recent studies suggest that RA disrupts the intestinal flora, while alterations in maternal intestinal flora are closely related to embryonic development (Zhou et al., 2021), but it is not known whether such a relationship exists during palatal development. Therefore, we selected E10.5 pregnant mice to construct the fetal cleft palate model by intragastric administration of RA and collected feces samples of all pregnant mice at E16.5 for metagenomics analysis. Our work aims to provide a new direction in the pathogenesis of CP induced by RA.
Materials and methods
Animals and sample collection

C57BL/6J mice were purchased from the Sibeifu Company (Beijing, China). Female C57BL/6J mice (age, 9-10 weeks; weight, 20-25 g) and mature male mice (age, 9-10 weeks; weight, 20-25 g) were mated overnight. The noon following mating with detection of a vaginal plug was designated as E0.5. Ten pregnant mice were equally divided into the control group and the RA group. The sample size calculation followed ARRIVE guidelines. As oral gavage of 100 mg/kg RA in pregnant mice was reported to induce CP in almost 100% of mouse embryos without maternal deaths, and is considered a common dose for inducing embryonic CP (Campbell et al., 2004; Dong et al., 2017; Gao et al., 2017; Peng et al., 2020), female mice at E10.5 were administered RA (100 mg/kg; Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) dissolved in corn oil by oral gavage. Control mice were given an equal amount of corn oil. RA-treated and untreated pregnant mice were sacrificed via cervical dislocation at E16.5. All mouse experiments were performed in the Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, were approved by the Animal Care and Use Committee of the School of Stomatology, Capital Medical University (Beijing, China, permit number: KQYY-202109-006), and met the relevant regulatory standards.
To ensure that the initial flora of each group of mice was consistent, specific pathogen-free (SPF) mice were selected and raised in a barrier system, and the mice were fed sterilized litter, food, and water in the hospital's SPF-standard experimental animal rooms. In addition, we raised the mice individually in cages and collected each mouse's feces separately. All eligible fecal samples were sent to the laboratory immediately after collection (Santiago et al., 2014); each sample was divided into 3 parts, loaded into 3 cryopreservation tubes, frozen overnight in liquid nitrogen, and then stored at −80°C.
Stereomicroscopic observation and hematoxylin-eosin staining
Embryos from RA-treated mice and control mice were isolated at E16.5. Palate tissues of half of the embryos were detached with ophthalmic shears and then observed under a stereomicroscope (Olympus, Japan). The other embryos were fixed in 4% paraformaldehyde at room temperature for 24 h. All fixed samples were dehydrated through an ethanol gradient and, after 4 h in n-butanol, embedded in paraffin wax and sectioned at 5 μm to make tissue sections. After dewaxing, the structure was observed by H&E staining.
Total DNA extraction and DNA library construction

DNA from 5 RA samples and 5 control samples was extracted with the E.Z.N.A.® Stool DNA Kit (D4015-02, Omega, Inc., USA). Sample blanks composed of unused swabs were processed through DNA extraction and tested to confirm the absence of DNA amplicon contamination. The total DNA was eluted in 50 µl elution buffer according to the manufacturer's (QIAGEN) instructions and quantified by LC-BIO TECHNOLOGIES (HANGZHOU) CO., LTD., Hangzhou, Zhejiang Province. Then, the DNA library was constructed with the TruSeq Nano DNA LT Library Preparation Kit (FC-121-4001). First, DNA was randomly broken into 200-500 bp fragments and the DNA ends were repaired. Second, an A base was added to the 3' end of each DNA fragment and an adapter was ligated to the end of the fragment. Finally, the ligation product was purified and amplified by PCR.
Metagenomic sequencing and data analysis
After passing the quality inspection of the library, a NovaSeq 6000 was used for high-throughput sequencing. The sequencing mode was PE150, and the sequencing kit was the TruSeq Nano DNA LT Library Preparation Kit-Set A (FC-121-4001). Valid data were obtained by preprocessing the sequenced raw data (adapter removal and quality filtering). Once quality-filtered reads were obtained, they were de novo assembled to construct the metagenome for each sample. All coding regions (CDS) of the metagenomic contigs were predicted in order to obtain unigenes. The specific data analysis procedures are listed in Supplementary Table 1. Subsequently, unigenes were compared with the NR database to obtain species annotation information, and with the protein sequences of the GO/KEGG/CAZy databases to obtain functional annotation information. The database information is listed in Supplementary Table 2.
Lactate amount test
The lactate amount in plasma, amniotic fluid, and palatal tissue was tested by CheKine ™ Lactate Colorimetric Assay Kit (Abbkine Scientific Co, China). Plasma and amniotic fluid were extracted from E16.5 pregnant mice. Palatal tissue was extracted from E16.5 fetal mice. According to the weight of the palatal tissue (1 mL lactate assay buffer/0.1 g tissue), the extraction liquid was prepared and homogenized on ice, centrifuged at 12,000 g for 5 min at 4℃, then the supernatant was used for assay. The optical absorbance values were measured by a SpectraMax Paradigm microplate reader (Molecular Devices, CA, US) at 450 nm.
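Readings from a colorimetric kit of this kind are converted to amounts via a standard curve; the sketch below is generic, and all numbers in it are hypothetical placeholders rather than values from our assay.

```python
import numpy as np

# Hypothetical standard-curve points for the A450 readout
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])     # lactate standards (nmol/well)
std_od = np.array([0.05, 0.18, 0.31, 0.58, 1.10])  # corresponding A450 readings

slope, intercept = np.polyfit(std_conc, std_od, 1)  # linear fit: OD = a*c + b

def od_to_lactate(od):
    """Invert the fitted standard curve to obtain lactate amount from A450."""
    return (od - intercept) / slope

print(od_to_lactate(0.42))  # amount for a hypothetical sample reading
```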
Statistical analyses
For metagenomic sequencing, the statistical method for alpha diversity (observed species, Shannon, Simpson, and Chao1) was the Mann-Whitney U test (R, v3.4.4; p < 0.05 was considered statistically significant). Beta diversity (PCA and PCoA) was analyzed by ANOSIM analysis (−1 ≤ R ≤ 1; an R value close to 1 indicated that the difference between groups was greater than that within groups, and an R value close to −1 indicated that there was no significant difference between and within groups; p < 0.05 was considered statistically significant). The linear discriminant analysis effect size (LEfSe) method was run in Python. Based on the taxonomic and functional annotation of unigenes, differential analysis was carried out at each taxonomic level (family, genus, and species), and enrichment analysis (GO and KEGG) was performed, using the Mann-Whitney U test (R, v3.4.4; p < 0.05 was considered statistically significant). As for gene expression differences, the default threshold for significantly different genes was |log2(fold_change)| ≥ 1 by the Mann-Whitney U test (R, v3.4.4); p < 0.05 was considered statistically significant.
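Although our analyses were run in R (v3.4.4), the alpha-diversity comparison can be sketched equivalently in Python; the count table below is a random placeholder, not our sequencing data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def shannon(counts):
    """Shannon diversity index of one sample's taxon count vector."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
counts_ra = rng.poisson(50, size=(5, 30))    # placeholder: 5 RA mice x 30 taxa
counts_ctrl = rng.poisson(50, size=(5, 30))  # placeholder: 5 control mice

alpha_ra = [shannon(row) for row in counts_ra]
alpha_ctrl = [shannon(row) for row in counts_ctrl]
stat, p = mannwhitneyu(alpha_ra, alpha_ctrl, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")  # p < 0.05 -> significant
```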
Statistical analyses of the lactate amount test were performed with GraphPad Prism software (version 9, by MacKiev Software, Boston, MA, USA). Differences between the two groups were compared by an independent two-tailed Student's t-test; p < 0.05 was considered statistically significant. The D'Agostino-Pearson test was used to verify whether the sample data came from a normal distribution. There were 5 biological replicates per group for metagenomic sequencing. The samples of amniotic fluid, plasma, and palate tissue were separately collected from 3 different pregnant mice, and each sample was measured 3 times, yielding 9 data points per group.
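The lactate comparison follows the same pattern; SciPy's normaltest implements the D'Agostino-Pearson check, and the nine readings per group below are hypothetical placeholders, not our measured values.

```python
from scipy.stats import normaltest, ttest_ind

ra = [1.8, 2.1, 1.9, 2.3, 2.0, 2.2, 1.7, 2.4, 2.1]    # hypothetical readings
ctrl = [1.2, 1.4, 1.1, 1.5, 1.3, 1.2, 1.4, 1.3, 1.5]  # hypothetical readings

# D'Agostino-Pearson normality test (warns for n < 20 but still runs)
print(normaltest(ra), normaltest(ctrl))
print(ttest_ind(ra, ctrl))  # independent two-tailed t-test
```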
Results
The successful construction of the RA-induced cleft palate model

E10.5 pregnant mice were used to construct the fetal cleft palate model by intragastric administration of RA (100 mg/kg) (Dong et al., 2017; Gao et al., 2017; Peng et al., 2020). At E16.5, when the normal palate was primarily formed, the palates of fetal mice in the control group were normally fused, and the incidence of cleft palate in the RA group was approximately 97% (38/39). In the palate shelf tissue and histological sections of control E16.5 embryos, the opposed palatal shelves had come into contact and fused during normal development (Figures 1A, C). Meanwhile, RA-treated palatal shelves remained small in volume, failed to rise and fuse with the contralateral side, and were vertically oriented beside the tongue (Figures 1B, D, Supplementary Figure 1).

The biodiversity of the pregnant mice microbiome had no difference between RA and control groups

To detect the composition and structure of the microbial community in RA-treated pregnant mice and controls, we conducted analyses of alpha and beta diversity of the microbiome in the pregnant mice feces samples. Alpha diversity reflects the diversity of intestinal microflora within individuals and does not involve comparison between individuals. Results on alpha diversity verified that the indices of observed species, Shannon, Simpson, and Chao1 showed no differences between the RA group and the control group (Figures 2A-D). Beta diversity is adopted to illustrate phylogenetic differences in microbial communities between the diseased and control groups; this method presents the bacterial difference between two groups based on distance. PCA analysis failed to demonstrate a significant difference in distribution between the two groups, with principal components of 91.5% and 5.65% (Figure 2E). The result of PCoA revealed a similar bacterial environment between controls and RA-treated pregnant mice (R = −0.04, P = 0.531, Figure 2F).
The expression of Lactobacillus was increased in the RA group

Subsequently, the relative abundance of microbial taxa at the phylum, class, order, family, genus, and species levels was confirmed. Among them, Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria occupied the main positions in both groups at the phylum level (Figures 3A, B). At the family and genus levels, Lactobacillaceae and Lactobacillus were the most abundant in the RA group (Figures 4A, B); other up-regulated bacteria are listed in Supplementary Table 3 and Supplementary Table 4. At the species level, the expression of Lactobacillus intestinalis, Lactobacillus paragasseri, Lactobacillus sp. ASF360 and Lactobacillus amylovorus was increased in the RA group (Figure 4C); other abundant bacteria are listed in Supplementary Table 5. To further explore the influence of bacteria on the control and RA groups, the LEfSe method was used to reveal the influence of significantly different bacteria on the two groups. Lactobacillales, Lactobacillaceae, and Lactobacillus were enriched in the RA-treated group compared with the control (Figure 4D).
Metabolism-related functions played a major role in the RA and control groups
Next, we investigated the total gene expression of fecal samples in the two groups. Compared with the control group, the total gene expression level changed dramatically under RA treatment, which affected metabolic pathways via gene expression regulation. Figure 5A displays that, in the RA versus (vs) control comparison, 5824 unigenes were up-regulated and 11467 were down-regulated among the differentially expressed genes.
The gene ontology (GO) analysis provided controlled vocabularies of defined terms representing gene product properties. Molecular functions and biological processes were clearly influenced by RA compared to the control group (Figure 5B). The results of the Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis (Figure 5C) suggested that metabolism-related pathways took the lead, including global and overview maps, carbohydrate metabolism, and amino acid metabolism. We therefore focused on carbohydrate metabolism. Dysregulation of the microbiota caused alterations in carbohydrate-active enzymes (CAZymes), which interfere with carbohydrate metabolism (Onyango et al., 2021). The main role of CAZymes is to generate and break down complex carbohydrates and glycoconjugates, allowing them to exert a huge number of biological effects (Cantarel et al., 2009). CAZymes are primarily studied through the CAZy database. Our experiments illustrated that the order of the proportions in the experimental group and the control group from high to low was glycoside hydrolases (GH) > glycosyltransferases (GT) > carbohydrate-binding modules (CBM) > carbohydrate esterases (CE) > polysaccharide lyases (PL) > auxiliary activities (AA) (Figure 5D).
Figure 3: Gut microbiome structure analysis. Component proportions of bacterial phylum in the RA and control groups by heatmap (A) and stacked bar (B); n = 5 for the RA group and n = 5 for the control group.
Figure 4: The relative abundance of microbial taxa at the family, genus, and species levels.
Metabolically related pathways were enriched in the RA and control groups
Since a previous study found that Lactobacillus could cause metabolic disorders (Tokarek et al., 2021), and since our results detected that Lactobacillus was enriched in the RA-treated group compared with the control and that metabolism-related pathways were the most differentially expressed pathways between the two groups by KEGG, we analyzed the relevant pathways reflecting changes in the digestive tract in order to further investigate the relationship between microbiota and metabolism in both RA-induced and control pregnant mice. We integrated the functional information related to each gene in the GO and KEGG databases. In the GO enrichment analysis, the tricarboxylic acid (TCA) cycle, succinate dehydrogenase activity, porin activity, and anaerobic respiration were enriched in both the RA and control groups (Figure 6A); in the KEGG enrichment analysis, ribosome, metabolic pathways, lipopolysaccharide biosynthesis, and histidine metabolism were enriched in both the RA and control groups (Figure 6B).

Figure 6: The enrichment analysis in RA and control group. (A) GO enrichment analysis of differentially expressed unigenes between RA and control groups. (B) Pathway classification based on KEGG enrichment analysis of differentially expressed unigenes between RA and control groups. Rich factor is the ratio of the number of differentially expressed genes (DEGs) to the number of total genes in the pathway.

Analysis of differential gene function (GO, CAZy, and KEGG pathway) and gene set enrichment analysis (GO and KEGG pathway) manifested significant changes in carbohydrate metabolism and energy metabolism in both the RA and control groups. The reason might be that changes in the abundance of Lactobacillus could cause metabolic disorders, which might play a key role in the potential interaction between CP in fetal mice and the gut microbiome of pregnant mice.
The amount of lactate, a product of Lactobacillus, was up-regulated in pregnant mice plasma, amniotic fluid and fetal palatal tissue

As a product of Lactobacillus, the lactate amount might reflect the metabolic function of Lactobacillus (Mikelsaar et al., 2016). Thus, we tested the lactate amount in plasma, amniotic fluid, and palatal tissues. Plasma and amniotic fluid were extracted from E16.5 pregnant mice; palatal tissues were extracted from E16.5 fetal mice. The lactate amount was up-regulated in palatal tissue, plasma, and amniotic fluid in the RA group (Figures 7A-C), consistent with our metagenomic sequencing analysis.
Discussion
The gut microbiota is one of the causative factors affecting metabolic syndrome. It acts as a critical part in regulating dietary fat absorption and lipid metabolism by influencing bile acid metabolism, producing short-chain fatty acids, and regulating the intestinal endocrine system (Yu et al., 2019). Current research indicated that the administration of RA to pregnant sows ameliorated developmental defects in Hoxa1-/- fetal pigs, and maternal RA administration restored bacterial ecological dysbiosis in Hoxa1-/- neonates and altered the bacterial composition of the small intestine in non-Hoxa1-/- neonates (Zhou et al., 2021). However, it remains unclear whether fetal CP induced by RA is related to the maternal gut microbiome in the absence of genetic disorders. As one of the main drugs causing CP, RA was reported to increase the relative abundance of Lactobacillus spp. in the gut (Abdelhamid and Luo, 2018). Consistently, in our research, we first set up the RA-induced CP model in fetal mice, whose palate shelves failed to elevate completely into a horizontal position, in accordance with previous reports (Campbell et al., 2004; Dong et al., 2017; Gao et al., 2017; Wang et al., 2017; Peng et al., 2020). We then found that the expression of Lactobacillus was significantly increased in the RA group, including Lactobacillus intestinalis, Lactobacillus sp. ASF360, Lactobacillus paragasseri, and Lactobacillus amylovorus. These results indicated that the formation of fetal cleft palate might be associated with the excess increase of Lactobacillus in the gut microbiota of RA-treated pregnant mice.
It is well known that a major characteristic of Lactobacilli is their ability to metabolize glycogen-derived products under anaerobic conditions to produce lactate (Witkin and Linhares, 2017). This might be the route by which the altered microbiota affects the whole body. A retrospective observational study showed that various peripartal risk factors (e.g., uterine rupture, placental abruption, chorioamnionitis, and pre-eclampsia) might have contributed to higher lactate values, and that lactate levels in maternal cord blood were associated with a mixed metabolic acidosis in the fetus after birth (Gaertner et al., 2021). Also, maternal lactate and umbilical arterial and venous lactate concentrations were significantly higher in intrauterine growth-retarded infants compared with normal infants (Marconi et al., 1990). A further novel finding was that, as an end product of glycolysis, lactate had a controlling function in the fate of mouse embryonic stem cells (Tian and Zhou, 2022). In our results, it was notable that the production of lactate was up-regulated in both pregnant mice and fetal mice treated with RA.

Figure 7: The lactate amount in RA and control group. (A) The lactate amount in palate tissue between RA and control groups (n=3). (B) The lactate amount in plasma between RA and control groups (n=3). (C) The lactate content in amniotic fluid between RA and control groups (n=3). *p < 0.05, **p < 0.01 compared with control group.
Studies have found, by measuring molecule levels in maternal and fetal blood as well as in the fetal brain, that specific metabolites were often reduced or absent when the gut flora was deficient during pregnancy, which subsequently affected fetal brain development (Vuong et al., 2020). In addition, the maternal gut microbiota was associated with the offspring's metabolic phenotype. During pregnancy, the SCFA-GPR41 and SCFA-GPR43 axes could convey influences of the mother's gut microbiota to offspring to make them resistant to obesity. GPR41 and GPR43 in the sympathetic nerve, intestinal tract, and pancreas of the embryo could sense SCFAs derived from the maternal gut microbiota, thereby affecting the prenatal development of the metabolic and neural systems (Kimura et al., 2020). Our research also detected that the increase of Lactobacillus in the gut of pregnant mice in the RA group promoted lactate levels in the palatal tissue of fetal mice via the "gut-plasma-amniotic fluid" pathway, which then caused cleft palate. However, the specific mechanism still needs to be further investigated.
Metagenomics is an effective way to clarify the relationship between the gut microbiome and pathogenesis. In our work, to further explore the mechanisms involved in CP formation between pregnant mice and fetuses, GO and KEGG enrichment analyses implied that metabolism-related pathways were significantly enriched in the gut microbiome of pregnant mice, including metabolic pathways, the TCA cycle, and anaerobic metabolism. Different sequencing results revealed an association between Lactobacillus and metabolism. Functional proteomics and metaproteomics showed that Lactobacillus altered metabolic pathways (e.g., carbohydrate transport and metabolism, pyruvate metabolism, the proteolytic system, amino acid metabolism, and protein synthesis) to a large extent (De Angelis et al., 2016). 16S rRNA results also showed a correlation between Lactobacillus and glycolysis enzymes (Brandt and Barrangou, 2018). In addition, an earlier study on chicken and mouse embryos confirmed that energy metabolism is tightly regulated during development (Oginuma et al., 2020). Mouse early preimplantation embryos did not rely on glucose as their primary energy source, but engaged the TCA cycle and produced ATP using pyruvate and lactate (Bénazéraf et al., 2010). An important metabolic shift occurs during embryo implantation, resulting in increased glucose uptake and enhanced glycolytic activity. At this point, most of the glycolytic activity co-exists with an active TCA cycle and oxidative phosphorylation, causing the production of lactic acid. However, with the formation of organs, the intense glycolytic activity of embryos declines, and respiration becomes the main way of generating energy (Henrique et al., 2015; Oginuma et al., 2017). CAZy analysis demonstrated that the percentages of GH and GT family enzymes in the control group and the experimental group were higher than those of other enzymes. GT and GH play important roles in glycosylation (Cantarel et al., 2009). In some genetic disorders, individuals have abnormalities in glycosylation caused by genetic mutations that can lead to a variety of symptoms, including epilepsy, cleft palate, and heart defects (Lukacs et al., 2019). A report showed that during mammalian organ formation, golgin subfamily B member 1 (Golgb1) mutant embryos developed cleft palate in mice, owing to reduced hyaluronan accumulation and impaired protein glycosylation in the palatal mesenchyme (Lan et al., 2016). All of these results indicated that the formation of CP by RA was related to metabolism, and that the maternal environment also affected the development of the fetal palate.
To sum up, our results suggest that RA-induced alterations of Lactobacillus in the maternal gut microbiome cause lactate variation in embryonic palate shelves, which may affect fetal palate development through metabolic change. This study has some limitations. Firstly, the number of mouse samples in each group was five; including more samples might have allowed more in-depth results. Secondly, we did not validate the sequencing results directly, as the Lactobacillus strains that were increased in the RA group were not commercially available. For this reason, we measured the amount of lactate, the product of Lactobacillus, in pregnant mice plasma, amniotic fluid, and fetal palatal tissue to confirm the results indirectly. Further studies on the microbiome and metabolism in CP, which we are now pursuing, are needed for full clarification. It would also be valuable to explore the relationship between the human maternal gut microbiome and CP, if feasible.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Ethics statement
The animal study was reviewed and approved by Animal Care and Use Committee of the School of Stomatology, Capital Medical University (Beijing, China, permit number: KQYY-202109-006). | 2022-12-15T14:10:34.675Z | 2022-12-15T00:00:00.000 | {
"year": 2022,
"sha1": "14027e5e0e0a7b4ca1de2681383ed421cecfaea3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "14027e5e0e0a7b4ca1de2681383ed421cecfaea3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
218859512 | pes2o/s2orc | v3-fos-license | Does Motor Cortex Engagement During Movement Preparation Differentially Inhibit Nociceptive Processing in Patients with Chronic Whiplash Associated Disorders, Chronic Fatigue Syndrome and Healthy Controls? An Experimental Study
Background: Patients with chronic fatigue syndrome (CFS) and chronic whiplash associated disorders (cWAD) present a reduced ability to activate central descending nociceptive inhibition after exercise, compared to measurements before exercise. It was hypothesised that a dysfunctional motor-induced inhibition of nociception partly explains this dysfunctional exercise-induced hypoalgesia. This study investigates if engagement of the motor system during movement preparation inhibits nociception-evoked brain responses in these patients as compared to healthy controls (HC). Methods: The experiment used laser-evoked potentials (LEPs) during three conditions (no task, mental task, movement preparation) while recording brain activity with a 32-channel electroencephalogram in 21 patients with cWAD, 20 patients with CFS and 18 HC. Two-factor mixed design analyses of variance were used to evaluate differences in LEP amplitudes and latencies. Results: No differences in N1, N2, N2P2, and P2 LEP amplitudes were found between the HC, CFS, and cWAD groups. After nociceptive stimulation, N1, N2 (only at hand location), N2P2, and P2 LEP amplitudes significantly decreased during movement preparation compared to no task (within-group differences). Conclusion: Movement preparation induces a similar attenuation of LEPs in patients with CFS, patients with cWAD and HC. These findings do not support reduced motor-induced nociceptive inhibition in these patients.
Introduction
The experience of pain can limit the ability to perform motor tasks, an effect caused by nociception-motor interactions [1]. It has been demonstrated that proprioception, force steadiness, muscle activity and coordination are altered by the experimental induction of pain [2,3]. Further, the level of pain sensitivity can be decreased by motor cortex stimulation through epidural electrodes, repetitive transcranial magnetic stimulation, or the performance of physical exercise [4][5][6][7][8]. Engagement of the motor cortex, however, does not only occur when carrying out movements, but also during the movement-preparation phase [9][10][11]. Previous research has indicated that engagement of the primary motor cortex during movement preparation reduces pain and nociception-evoked potentials, suggesting that engagement of the motor cortex exerts an inhibitory effect on nociception [12].
Two syndromes with a generalised hyper-responsiveness of the central nervous system to a variety of stimuli are chronic fatigue syndrome (CFS) and chronic whiplash associated disorders (cWAD). Patients with cWAD suffer from a variety of clinical manifestations such as pain, fatigue, concentration difficulties, and headaches [13,14]. CFS is characterised by a medically unexplained disabling fatigue that persists for more than 6 months (primary symptom) [15]. Besides fatigue, multi-joint pain, impaired concentration, and post-exertional malaise also occur frequently [16,17]. The influence of pain on motor function in chronic pain patients can also be seen in the dysfunctional responses to exercise. These patients appear to exhibit reduced central descending nociceptive inhibition while performing exercises (inferred from differences between measurements before and after exercise), whereas such inhibition is normally seen in healthy controls [18,19]. Up to now, the exact mechanism of this impaired exercise-induced hypoalgesia is unknown. However, it has been suggested that an impaired inhibition of nociception induced by the motor cortex can possibly partly explain this dysfunctional response [20].
De Pauw et al. (2019) tested the hypothesis that changes in brain morphology are an underlying process of motor impairment in patients with WAD. They revealed a decrease in gray matter volume of the precentral gyrus, which is part of the motor cortex, compared to healthy controls. Additionally, an association was found between the volume of the precentral gyrus and both neuromuscular control and strength [21]. Another study evaluated motor cortical excitability in patients with CFS during repetitive finger movements. It was found that patients with CFS do not show normal fluctuations of motor cortical excitability during and after the exercise [22]. Additionally, patients with CFS might have a deficit in motor preparatory areas of the brain, a hypothesis that was developed based on the slowness of simple reaction times in these patients [23].
Whether nociception-motor interactions are disrupted in patients with CFS and cWAD has not yet been evaluated. To address this gap in knowledge, the goal of this study was to investigate whether motor cortex engagement during movement preparation inhibits the cortical responses to nociceptive stimuli in patients with CFS and patients with cWAD compared to healthy controls. Nociceptive laser stimuli were used as nociceptive stimuli. Laser-evoked potentials (LEPs) can be used to conduct a functional evaluation of the nociceptive afferent pathways [24]. Up to now, findings regarding LEPs in patients with enhanced pain sensitivity are still inconclusive. However, in some chronic pain patients, enhanced LEPs are suggested to be related, at least in part, to central sensitisation [25,26]. In patients with fibromyalgia, studies have already reported increased amplitudes of the N2 and P2 waves of LEPs [27]. In this study, it was hypothesised that movement preparation would result in an attenuation of LEP amplitudes (i.e., an inhibitory effect on the nociceptive system) only in healthy controls and not in the patient populations (impaired exercise-induced hypoalgesia). Therefore, the main outcome measure was the difference in LEP N2P2 amplitude between rest and movement preparation in both patient populations. Similarly, it was hypothesised that pain intensity ratings for nociceptive stimuli during movement preparation would be reduced in healthy controls and not in the patient populations. In addition to comparing the elicited responses in the time domain, a time-frequency analysis was conducted to characterize transient stimulus-evoked modulations of oscillatory electroencephalogram activity. Therefore, an experiment was set up using laser-evoked potentials (LEPs) in 62 study participants (patients with cWAD, patients with CFS and healthy controls (HC)).
Participants
Twenty-one patients with cWAD, 21 patients with CFS and 20 HC participated in this study. Before study participation, all participants provided written informed consent. The study was conducted according to the revised Declaration of Helsinki (1998). Approval of the study protocol was obtained from the Ethics Committee of the University of Antwerp (approval number B300201214521).
Patients were recruited through advertisement on the website of Pain in Motion (international research group; http://www.paininmotion.be), from a medical database obtained from previous studies, from the medical database of the local Red Cross medical care unit (for patients with cWAD) and via private physician practices (for patients with CFS). Patients with cWAD were only eligible for inclusion if they experienced chronic symptoms resulting from a whiplash trauma (e.g., motor vehicle accident or fall) and fulfilled the diagnostic criteria of WAD grade I to III as defined by the Quebec Task Force classification [28]. Chronicity was defined as complaints persisting for at least 3 months. Patients who were classified as WAD grade IV [28] were excluded. Patients with CFS were only eligible if they were diagnosed by a physician according to the 1994 Centers for Disease Control and Prevention criteria [15]. This implies that any other medical condition (cardiovascular, neurological, psychiatric or haematological) possibly explaining the debilitating fatigue and pain was excluded prior to establishing the diagnosis of CFS.
Healthy controls were recruited among the university college staff, family members and acquaintances of the researchers. Participants were not eligible if they previously experienced a whiplash trauma, suffered from persistent pain or neck-shoulder-arm symptoms, or had sought medical help for neck-shoulder-arm symptoms in the past 6 months. Additionally, healthy participants were excluded if they were suffering from an acute or chronic disease, or when they were experiencing pain on the day of the assessment.
Participants were excluded if they were pregnant, or if they suffered from any cardiovascular or neurological disease. All participants were asked to discontinue non-opioid analgesic and anti-inflammatory drugs 48 h before testing. Additionally, participants were asked to avoid physical exertion and to refrain from consuming nicotine, alcohol and caffeine 24 h before the assessment. To limit confounding of the study findings, we aimed to recruit 3 groups with a comparable age distribution.
Demographics of the 3 groups were compared with Kruskal-Wallis tests and chi-squared tests, depending on the normality and variance of the data.
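As a minimal sketch of these group comparisons (the software used for this step is not specified in the text), the snippet below applies a Kruskal-Wallis test to a continuous demographic variable and a chi-squared test to a categorical one, using SciPy and hypothetical placeholder data.

import numpy as np
from scipy.stats import kruskal, chi2_contingency

# Hypothetical ages for the three groups (placeholders, not study data)
age_hc = np.array([46, 51, 39, 48, 44])
age_cfs = np.array([44, 40, 47, 42, 45])
age_cwad = np.array([45, 48, 43, 50, 41])
h_stat, p_age = kruskal(age_hc, age_cfs, age_cwad)  # non-parametric 3-group comparison

# Hypothetical sex distribution as a 3x2 contingency table (rows: groups; columns: male, female)
sex_table = np.array([[7, 11], [2, 18], [11, 10]])
chi2, p_sex, dof, expected = chi2_contingency(sex_table)
print(f"Kruskal-Wallis p = {p_age:.3f}, chi-squared p = {p_sex:.3f}")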
Experimental Design
Study participation required one study visit at the Institute of Neuroscience (Université catholique de Louvain, Brussels, Belgium) during which LEPs were recorded. Before the start of the assessment, participants completed a demographic questionnaire, the Pain Catastrophizing Scale (PCS), the Beck Depression Inventory (BDI) and the Pain Disability Index (PDI). Then, the electroencephalographic (EEG) recording started. In total, participants received 180 stimuli, of which 90 were given on the left hand and 90 on the left foot. On each location (hand and foot), three different conditions were applied (resting versus calculation versus movement task). In each condition, three blocks of 10 laser stimuli were delivered. During the rest task, no specific action was required from the participants. During the calculation task, participants performed a mental calculation task out loud (counting backwards from 1000 to 0 in steps of 9). The calculation task was added to the protocol to evaluate the effect of distraction. During the movement task, participants raised the right index finger as fast as possible. Participants were informed about the type of the required task at the beginning of each condition. A randomisation procedure was used to determine whether the conditions of the hand or the foot were tested first. Further, within each location, the order of the three experimental conditions (containing 30 stimuli each) was also randomised using a simple randomisation procedure (throwing a dice) [29]. A schematic overview of the experimental setup with the different stimuli and interval times can be found in Figure 1.
Figure 1. Schematic overview of the experimental setup. After 800 ms, a second visual stimulus was provided. The second visual stimulus was followed by a resting period, a movement preparation task or a calculation task (randomised order). Abbreviations: rand: randomisation.
A visual warning signal (i.e., a light diode) lasting 200 ms preceded each laser stimulus and was initiated 1 s before stimulus administration (Figure 1). Eight hundred ms after the laser stimulus, a second visual signal of 200 ms was delivered. The first visual stimulus represented a warning stimulus and the second visual stimulus an imperative stimulus [11,12,30]. After this second visual stimulus, the action depended on the test task. The inter-stimulus interval ranged between 10 and 15 s in order to avoid habituation effects to nociceptive stimulation. The protocol of the current study is based on the study of Le Pera et al. (2007) among healthy volunteers [12].
At the end of each block, participants were asked to rate the intensity elicited by the laser stimuli using a visual analogue scale (VAS) ranging from "no detection" (VAS = 0) to "maximum pain" (VAS = 100) at the appropriate ends by drawing a vertical line on separate 100-mm horizontal lines. At the middle of the scale (VAS = 50) an anchor marked the borderline between non-painful and painful domains of sensation [31]. Two-factor mixed design ANOVAs were used to compare VAS values between the three conditions and populations.
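A two-factor mixed ANOVA of this kind can be sketched as follows; the pingouin package is used here only as one convenient tool (the paper does not name the software for this step), and the long-format table is filled with simulated placeholder values.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for s, g in enumerate(np.repeat(["HC", "CFS", "cWAD"], 6)):  # 6 hypothetical subjects per group
    for c in ["rest", "counting", "movement"]:
        vas = 40 + rng.normal(0, 5) - (4 if c == "counting" else 0)  # simulated VAS rating
        rows.append({"subject": s, "group": g, "condition": c, "vas": vas})
df = pd.DataFrame(rows)

# condition = within-subject factor, group (population) = between-subject factor
aov = pg.mixed_anova(data=df, dv="vas", within="condition",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])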
Sample Size Calculation
An a priori sample size calculation was performed for the entire protocol, based on results from previous studies [12,27]. At least 20 subjects per group (total of 60 subjects) were required for reaching an effect size of 0.5 with a repeated measures ANOVA at the 5% level with a power of 80%.
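The calculation can be approximated as below with statsmodels' one-way ANOVA power solver; note that this only approximates the repeated-measures case reported in the paper, for which dedicated tools (e.g., G*Power) give more exact numbers, so the resulting N differs somewhat from the 20 per group used here.

from statsmodels.stats.power import FTestAnovaPower

# Solve for the total sample size at Cohen's f = 0.5, alpha = 0.05, power = 0.80,
# three groups; a one-way approximation of the repeated-measures design.
n_total = FTestAnovaPower().solve_power(effect_size=0.5, k_groups=3,
                                        alpha=0.05, power=0.80)
print(f"total N = {n_total:.0f} (about {n_total / 3:.0f} per group)")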
Demographic Characteristics
Information on participants' age, sex, time since accident (cWAD), pain level and medication use was evaluated. Medication was divided into non-opioids, opioids, tricyclic antidepressants and selective serotonin reuptake inhibitors (SSRI)/serotonin and norepinephrine reuptake inhibitors (SNRI). Pain intensity at the moment of study participation and pain intensity during the last seven days were measured using a VAS (i.e., by drawing a vertical line on separate 100-mm horizontal lines).
Questionnaires
The Pain Catastrophizing Scale was used to measure participants' level of pain catastrophizing thoughts. This questionnaire consists of 13 pain-related cognitive items that are scored on a 5-point Likert scale (0 = not at all, 4 = all the time) [32]. Scores ≥30/52 indicate a clinically relevant level of pain catastrophizing [33]. The internal consistency, test-retest reliability and validity have been found to be acceptable [34,35].
The Beck Depression Inventory was used to evaluate depressive thoughts. Total score ranges between 0 and 63, with a higher score indicating more severe depression. This questionnaire is a reliable and valid tool for the assessment of depressive symptoms in chronic pain patients [36].
The Pain Disability Index provides an indication of the impact of pain on daily living activities. This questionnaire consists of 7 items scored on an 11-point Likert scale (0 = no disability, 10 = completely disabled). Total scores range from 0 to 70, with a higher score indicating higher levels of perceived disability. Differences of 8.5 to 9.5 points are considered to be clinically relevant [37]. The Dutch version of this questionnaire is a valid tool with good internal consistency and test-retest reliability [38].
Total scores on the questionnaires were used in the analyses. Comparisons between patient groups were made with Independent Samples t-tests and Mann-Whitney U tests depending on the normality and variance of the data.
Laser Stimulation
Laser stimuli were delivered by a CO2 laser designed and built in the Department of Physics of the Université catholique de Louvain, Louvain-La-Neuve, Belgium [31]. Stimuli were applied to the dorsum of the left hand (C6-C7 skin dermatomes) and left foot (L5-S1 skin dermatomes). The CO2 laser system generates a highly collimated infrared beam (wavelength: 10.6 µm). The power output is continuously adjustable between 1 and 25 W. Heat pulse duration was 20 ms. Laser beam diameter was 4 mm. The laser stimulus is highly reproducible (variation <1%). The stimulation site was visualised with a He-Ne laser beam aligned with the CO2 laser beam. To avoid skin burns and nociceptor fatigue [39], the location on the skin where the laser stimulus was delivered was moved slightly between 2 successive stimulations. For each subject, the intensity threshold to elicit detections related to the activation of Aδ fibers was determined by measuring the reaction time to the laser stimulation [40]. A series of increasing and decreasing stimulus intensities was delivered to the left-hand dorsum. Participants were asked to push a button as fast as possible when they felt the laser stimulation. Laser stimulus intensity was set at the intensity that repetitively provoked a reaction time below 600 ms (minimum 3 consecutive stimuli). With this procedure, the stimulus intensity was supraliminal for Aδ fiber activation, as confirmed by reaction times compatible with peripheral nerve conduction velocities within the range of myelinated small fibers. This preliminary session of stimulation was only applied to the dorsum of the left hand, since it had been shown in healthy subjects that the threshold for Aδ-nociceptors was somewhat lower at the foot dorsum [41]. Thus, a supramaximal stimulus intensity for activation of Aδ-nociceptors at the level of the hand should be adequate for the activation of Aδ-nociceptors at the level of the foot dorsum.
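The intensity-titration rule lends itself to a short sketch; the following is illustrative logic only, not the authors' acquisition software, and deliver_stimulus is a hypothetical callback returning the measured reaction time.

def select_laser_intensity(deliver_stimulus, intensities_watts,
                           n_required=3, rt_cutoff_ms=600):
    """deliver_stimulus(intensity) -> reaction time in ms (hypothetical hardware callback)."""
    for intensity in intensities_watts:  # e.g. an ascending series within the 1-25 W range
        rts = [deliver_stimulus(intensity) for _ in range(n_required)]
        if all(rt < rt_cutoff_ms for rt in rts):  # 3 consecutive fast detections
            return intensity  # taken as supraliminal for A-delta fiber activation
    return None  # no intensity repetitively produced fast detections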
LEP Recording
LEPs were recorded from 32 Ag-AgCl scalp electrodes, placed according to the International 10-20 system for electrode positioning. Signals were digitized at a sampling rate of 1000 Hz. The signals were referenced to the average of all scalp electrodes. Eyeblinks were simultaneously recorded with a pair of surface electrodes positioned diagonally over the right eye. Electrode impedance was kept below 10 kΩ, with a target below 5 kΩ. During the assessment, participants were seated in a comfortable chair in a silent room. They were asked to relax their muscles, sit as still as possible and gaze at a light diode.
Time-Domain Analysis of LEPs
Offline data pre-processing was performed with the Letswave 6 EEG toolbox (http://letswave.org). EEG recordings were filtered (0.3-30 Hz, Butterworth filter) and segmented into epochs of 6 s (−3 to +3 s relative to laser stimulus onset). Electrooculographic artefacts were removed using Independent Component Analysis: independent components having a frontal scalp distribution and a time course compatible with eyeblink artefacts were removed (1-5). Afterwards, all epochs were visually inspected to remove remaining artefacts (±100 µV). Finally, epochs were baseline corrected, using the time interval from −1.5 to −1 s as reference (i.e., before the onset of the warning visual stimulus). For each subject, epochs were averaged according to the location (foot versus hand) and condition (rest versus movement versus calculation). LEP components were identified on the basis of their latency and polarity and labelled according to Valeriani et al. 2012 [26]. Peak latencies and amplitudes of the N2 and P2 components were measured at the vertex (Cz) and defined as the largest negative and positive deflections between 150-350 ms and 300-500 ms, respectively [42]. The LEP N1 component was evaluated at the T8 electrode with the Fz electrode as reference, within a targeted time frame of 100-300 ms post-stimulus [42,43].
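The peak measurements can be illustrated with a few lines of NumPy; avg_cz below is a random placeholder standing in for a baseline-corrected average epoch at Cz (1000 Hz, −3 to +3 s), and the search windows follow the definitions above.

import numpy as np

fs = 1000
t = np.arange(-3.0, 3.0, 1 / fs)  # epoch time axis in seconds, t = 0 at laser onset
avg_cz = np.random.default_rng(1).normal(size=t.size)  # placeholder averaged LEP (uV)

def peak_in_window(signal, t, lo_s, hi_s, polarity):
    win = (t >= lo_s) & (t <= hi_s)
    idx = np.argmin(signal[win]) if polarity == "neg" else np.argmax(signal[win])
    return signal[win][idx], t[win][idx] * 1000  # amplitude, latency in ms

n2_amp, n2_lat = peak_in_window(avg_cz, t, 0.150, 0.350, "neg")  # largest negative deflection
p2_amp, p2_lat = peak_in_window(avg_cz, t, 0.300, 0.500, "pos")  # largest positive deflection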
The amplitudes and latencies of the N1, N2 and P2 waves of LEPs obtained following stimulation of the hand and foot during the three conditions (rest, calculation and movement task) were compared between the three groups (HC, CFS, and cWAD) with two-factor mixed design ANOVAs. In addition to comparing latencies and amplitudes of the N1, N2 and P2 waves of LEPs, the entire LEP waveforms obtained in the different groups were also compared using point-by-point ANOVAs. These were performed separately in each patient group for each location to evaluate the effect of the different conditions within that specific group (total number of tests: 6). The steps are described in van den Broeke et al. [44][45][46] and briefly reviewed here. As a first step, the LEP waveforms of the three conditions were compared by a point-by-point F-statistic. Then, adjacent samples in time above the critical F-value for parametric two-sided tests were identified and clustered. Additionally, an estimate of the magnitude of each cluster was calculated by summing the F-values constituting each cluster. Then, a reference distribution of maximum cluster magnitude was obtained by random permutation testing (100 permutations) of the LEP waveforms of the different conditions. The last step entails calculating the proportion of random partitions that has a larger cluster-level statistic than the observed one. Post-hoc testing with paired t-tests was applied to determine which condition was significantly different in each patient population. This analysis was conducted separately for hand and foot stimulation, as the latencies of the elicited responses may be expected to differ when stimulating the hand and foot because of the differences in peripheral conduction distance. Clusters were considered significant if p < 0.05.
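The cluster-level permutation logic can be sketched for a single channel as follows; the array shapes and random data are placeholders, the permutation relabels conditions within each subject (consistent with the repeated-measures design), and 100 permutations are used as in the text.

import numpy as np
from scipy.stats import f_oneway, f as f_dist

rng = np.random.default_rng(0)
n_subj, n_time = 18, 600
data = rng.normal(size=(3, n_subj, n_time))  # conditions x subjects x time (placeholder)

def max_cluster_mass(data, f_crit):
    f_vals = f_oneway(*data, axis=0).statistic  # point-by-point F across conditions
    mass = best = 0.0
    for supra, f in zip(f_vals > f_crit, f_vals):
        mass = mass + f if supra else 0.0  # sum F over contiguous supra-threshold samples
        best = max(best, mass)
    return best

f_crit = f_dist.ppf(0.95, 2, 3 * n_subj - 3)  # critical F for 3 conditions
observed = max_cluster_mass(data, f_crit)
null = []
for _ in range(100):  # reference distribution of maximum cluster magnitude
    perm = data.copy()
    for s in range(n_subj):
        perm[:, s] = perm[rng.permutation(3), s]  # relabel conditions within each subject
    null.append(max_cluster_mass(perm, f_crit))
p_cluster = float(np.mean([m >= observed for m in null]))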
Time-Frequency Analysis of LEPs
A time-frequency analysis of the recorded EEG signals was performed to characterize and compare non-phase-locked stimulus-induced changes in the power of ongoing EEG oscillations. A short-time fast Fourier transform (STFFT) [47] with a fixed Hanning window of 500 ms was used. The analysis was performed using the signal recorded at Cz vs. average reference and T8 vs. Fz. The obtained single-trial time-frequency maps were then averaged across trials. A baseline correction was then performed using the interval ranging from −1750 to −1250 ms relative to stimulus onset (i.e., before the onset of the first visual stimulus), to identify decreases (event-related desynchronisation, ERD) and increases (event-related synchronisation, ERS) of oscillation amplitude relative to baseline [48].
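A bare-bones version of this analysis with SciPy might look as follows, using a 500 ms Hanning window and the −1750 to −1250 ms baseline described above; the trial matrix is a random placeholder.

import numpy as np
from scipy.signal import stft

fs = 1000
trials = np.random.default_rng(2).normal(size=(30, 6000))  # trials x samples, -3 to +3 s

maps = []
for trial in trials:
    f, seg_t, z = stft(trial, fs=fs, window="hann", nperseg=500, noverlap=450)
    maps.append(np.abs(z))  # single-trial oscillation amplitude per time-frequency bin
tf = np.mean(maps, axis=0)  # average across trials (keeps non-phase-locked activity)

t = seg_t - 3.0  # re-centre the time axis on laser onset
base = (t >= -1.750) & (t <= -1.250)  # pre-stimulus baseline interval
baseline = tf[:, base].mean(axis=1, keepdims=True)
ers_erd = 100 * (tf - baseline) / baseline  # % change: negative = ERD, positive = ERS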
Comparison of the time-frequency maps obtained in the different groups was performed using 6 point-by-point ANOVAs (without permutation testing) to evaluate differences in condition within each patient group. In parallel to the analysis in the time-domain, this analysis was conducted separately for hand and foot stimulation. Clusters were considered significant if p < 0.05.
All statistical analyses were performed with Letswave 6/7 and RStudio version 0.99.903. Normality was checked with the Shapiro-Wilk test and QQ-plots, and equality of variances with Levene's tests.
Results
Sixty-two participants (21 patients with CFS, 21 patients with cWAD and 20 HC) took part in this experimental study. Data from one person with CFS and one HC were lost due to recording/processing issues with the EEG data. One HC was excluded after study participation as he reported neck pain (VAS of 31) on the day of testing. One patient with CFS terminated the experiment due to an upcoming headache, so that not all conditions were available for this patient. For other patients with CFS, no triggers were detected during one condition, resulting in data for only one location. In one patient from the CFS group, the EEG signal of one condition contained too many artifacts; as a consequence, we interpolated P7 and P8 with 3 neighbouring electrodes to keep this person in the analysis. Therefore, the analysis was performed on 18 HC, 20 patients with CFS and 21 patients with cWAD. Figure 2 presents the flow chart of this study.
Group characteristics
Participants had a median age of 46.8 years in the healthy group, 43.4 years in the CFS group and 45.8 years in the cWAD group. The proportions of male participants were 38.9%, 10% and 52.4% in the healthy, CFS and cWAD groups, respectively. Concerning medication use, in the cWAD group 10 patients took non-opioids (paracetamol, NSAID, benzodiazepine), 3 patients took opioids, 1 patient took tricyclic antidepressants and 6 patients took SSRI/SNRI. In the CFS group, 9 patients took non-opioids (paracetamol, NSAID, benzodiazepine), 1 patient took opioids and 9 patients took SSRI/SNRI. Other group characteristics and self-reported measurements are listed in Table 1.
Pain Intensity Ratings
A two-factor mixed ANOVA was conducted to evaluate the effect of condition and population on VAS pain intensity ratings following laser stimulation at both locations. For the hand location, no statistically significant interaction effect was found between population and condition (F = 0.803, p = 0.525). Main effect analyses for condition revealed a significant effect (F = 10.366, p < 0.001), while the main effect for population was not significant (F = 0.450, p = 0.640). Simple main effects analysis for condition showed that VAS ratings during the resting condition (41.06 (95% CI 35.85 to 46.27)) were significantly higher than during the counting condition (36.25 (95% CI 30.91 to 41.58)) (p < 0.001). Pain intensity ratings during the movement condition (40.40 (95% CI 35.26 to 45.55)) were also significantly higher than during the counting condition (p = 0.001).
For the foot location, no statistically significant interaction between population and condition was found (F = 0.730, p = 0.569). Main effect analysis for condition revealed a significant effect (F = 4.675, p = 0.012). Main effect for population was not significant (F = 0.353, p = 0.704). Simple main effect analysis for condition revealed a higher pain intensity rating for the resting condition (
Laser-Evoked Brain Potentials: Time Domain Analysis
Grand average LEP waveforms recorded at the vertex (Cz) and at the contralateral temporal electrode (T8) are presented in Figure 3. In all three conditions, the laser stimulus elicited a clear negative-positive complex (N2-P2) maximal at the scalp vertex. This complex was flanked by additional responses triggered by the two visual stimuli, which were presented 1000 ms before and 800 ms after the laser stimulus. The magnitude of the second visual ERP was considerably reduced as compared to the first visual ERP, for both hand and foot locations during all conditions. The N2-P2 complex of LEPs was preceded by an earlier N1 wave, which was less clear but visible at electrode T8. Visual stimuli have a dominant negative peak that is highest in the occipital region, while the negative peak for the laser stimulus is highest at the vertex. Table 2 and Figure 4 present the LEP amplitudes and latencies, separated by condition and stimulus location.
Table 2. LEP latencies, amplitudes and VAS ratings, separated by type of condition, location and population for nociceptive laser stimulation.
Two-factor mixed ANOVAs were performed to reveal differences in LEP amplitudes and latencies between the three conditions (resting versus counting versus movement preparation) and population (HC versus CFS versus cWAD) (Table 3).
Table 3. Two-factor mixed design ANOVAs for LEP latencies and amplitudes, separated by type of condition, location and population for nociceptive laser stimulation.
The results of the point-by-point analysis of the LEP waveforms after stimulating the hand in HC revealed two significant clusters at Cz, extending between 294 and 448 ms (p = 0.001) and between 468 and 482 ms (p = 0.008) post-stimulus. Post-hoc testing with paired t-tests for the first cluster revealed a significant difference between the rest and movement conditions (t = −4.64, p = 0.0002) and between rest and counting (t = −2.72, p = 0.018). For the second cluster, a significant difference was found between rest and movement (t = −4.48, p = 0.0004). In the CFS group, a significant cluster was found at Cz between 327 and 449 ms (p = 0.0097). Paired t-tests revealed a significant difference between rest and movement (t = −3.26, p = 0.005) and between rest and counting (t = −3.39, p = 0.004). In the cWAD group, two clusters were revealed, among which one at T8 between 258 and 273 ms post-stimulus (p = 0.008) and one at Cz extending between 297 and 446 ms post-stimulus (p = 0.0126). At T8, a significant difference was revealed between rest and counting (t = −2.46, p = 0.025) and between counting and movement (t = 3.42, p = 0.003). At Cz, a significant difference was found between rest and counting (t = −3.17, p = 0.024) and between rest and movement (t = −3.009, p = 0.01).
The results of the point-by-point analysis of the LEP waveforms after foot stimulation in HC revealed a significant cluster at Cz between 361 and 523 ms post-stimulus (p = 0.014). Post-hoc testing with paired-sample t-tests revealed a significant difference between rest and movement (t = −2.91, p = 0.024) and between rest and counting (t = −2.49, p = 0.025). In the CFS group, two clusters were found at Cz between 345 and 414 ms (p = 0.027) and between 427 and 481 ms post-stimulus (p = 0.026). Post-hoc testing indicated one significant difference between rest and counting (t = −2.566, p = 0.024), which extended across both clusters. In the cWAD group, a significant difference was detected between 345 and 513 ms post-stimulus at Cz (p = 0.0007). Post-hoc testing indicated differences between rest and counting (t = −2.6001, p = 0.044), between rest and movement (t = −2.86, p = 0.022) and between counting and movement (t = −2.52, p = 0.025).
Laser-Evoked Brain Potentials: Time-Frequency Analysis
The grand average time-frequency maps of the amplitude of ongoing EEG oscillations recorded at the scalp vertex (Cz) are presented in Figure 5. These maps show marked increases of low-frequency activity (<8 Hz) when stimulating the hand between −950 and −550 ms (activity related to the first visual stimulus) and between 150 and 500 ms post-stimulus (activity related to the laser stimulus). Furthermore, a long-lasting increase was revealed from 850 ms post-stimulus onwards. Additionally, a decrease of alpha-band oscillations was observed between −500 and −50 ms and from 1050 ms post-stimulus onwards. For stimulation of the foot, increases of low-frequency activity were revealed between −880 and −650 ms (visual-evoked activity) and between 200 ms and 450 ms (laser-evoked activity). Furthermore, a long-lasting increase was revealed from 950 ms post-stimulus onwards. Additionally, decreases of alpha-band oscillations could be detected between −700 ms and −50 ms and from 1000 ms onwards.
When stimulating the hand, the point-by-point comparison revealed a significant cluster in the HC group at Cz (p = 0.001) located at lower frequencies (<8 Hz) between 200 and 400 ms post-stimulus (Figure 6). Post-hoc testing revealed significant differences between rest and counting (t = −3.41, p = 0.004) and between rest and movement (t = −4.04, p = 0.001). Additionally, a significant cluster was revealed around 23-25 Hz from 330 to 550 ms post-stimulus. Post-hoc testing revealed significant differences between rest and counting (t = 3.55, p = 0.004) and between counting and movement (t = −3.52, p = 0.003). In the CFS group, a significant cluster was revealed (p = 0.008) at Cz between 180 and 380 ms post-stimulus in the lower frequencies (<5 Hz). Post-hoc testing revealed a significant difference between rest and counting (t = −4.82, p = 0.001). In the cWAD group, a significant cluster (p = 0.003) was found with a frequency below 5 Hz extending between 350 and 450 ms post-stimulus. Post-hoc testing revealed a significant difference between rest and counting (t = −3.36, p = 0.003) and between rest and movement (t = −3.20, p = 0.01). A second significant cluster was visible between 225 and 400 ms post-stimulus with a frequency around 17 Hz (p = 0.0015). Significant differences were revealed between rest and counting (t = 3.18, p = 0.005) and between counting and movement (t = −3.87, p = 0.002).
When stimulating the foot, a significant cluster was revealed (p = 0.016) between 110 and 400 ms post-stimulus with a frequency below 5 Hz in the HC group at Cz. Post-hoc testing indicated a significant difference between rest and movement (t = −2.56, p = 0.022). A second cluster was revealed (p = 0.015) with a frequency of around 24-26 Hz between −50 and 400 ms post-stimulus. There was a significant difference between movement and counting in the second cluster (t = −3.22, p = 0.007). No significant clusters were revealed for the CFS group. For the cWAD group, three clusters were revealed at Cz. The first cluster was located at a low frequency (<5 Hz) between 280 and 315 ms post-stimulus (p = 0.012), the second was located at a frequency of around 17-21 Hz (p = 0.014) between 70 and 400 ms post-stimulus, and the third was located between 22 and 25 Hz (p = 0.013), extending between 100 and 400 ms post-stimulus. Post-hoc testing found significant differences between rest and counting (t = −2.98, p = 0.008) and between rest and movement (t = −2.54, p = 0.02) for the first cluster. For the second cluster, a significant difference was revealed between rest and movement (t = −2.94, p = 0.009). Post-hoc testing for the third cluster revealed a difference between counting and movement (t = −3.31, p = 0.004).
Discussion
This is the first study that evaluated whether nociception-motor interactions (i.e., the effect of motor activation during motor preparation on the cortical responses to nociceptive stimuli) differ in patients with cWAD and CFS, compared to HC, using LEPs. EEG recordings performed just before the execution of a movement show a negative drift, which is associated with motor preparation and expectancy [10,12]. This slow negative cortical potential, elicited between a warning stimulus and an imperative stimulus, is thought to reflect the engagement of cortical motor areas related to movement preparation [49,50]. In this study, the protocol was constructed in such a way that laser stimuli were provided in a time interval that is able to elicit a contingent negative variation [49]. It was found that motor cortex engagement through movement preparation exerts an inhibitory effect on the nociceptive system, visible as an attenuation of LEP amplitudes in all three populations. No differences were found in the inhibitory effect of movement preparation between the three populations. Additionally, time-frequency maps revealed significant decreases in amplitude during movement preparation compared to rest.
No Reduced Motor-Induced Inhibition of Nociception in Patients Compared to Healthy Controls
Patients with CFS have been suggested to have reduced exercise-induced endogenous hypoalgesia, i.e., the activation of brain-orchestrated descending nociceptive inhibition in response to exercise [51]. The exact underlying mechanisms of exercise-induced endogenous hypoalgesia remain to be explored. However, several possible explanations, such as the hormonal system, the cardiovascular system and activation of the primary motor cortex, have been suggested to play a role in this phenomenon [20]. In this study, we focused on a possibly reduced inhibitory effect of engagement of the primary motor cortex on nociception as the underlying mechanism. This was evaluated with a movement preparation task, namely lifting of the index finger, rather than during the performance of a real physical exercise. During movement preparation of the right hand, the contralateral cortical motor area is engaged [11], which has neural connections with the anterior cingulate cortex (ACC) [52], possibly leading to cingulate cortex inhibition during this phase. Cingulate cortex inhibition could be responsible for the attenuated N2 and P2 components of LEPs, as these are believed to originate, at least in part, from the ACC [53]. Our finding that motor preparation induced a similar reduction of LEPs in patients with cWAD and patients with CFS compared to HC does not support the hypothesis of reduced motor-induced inhibition of nociception in these patients.
Influence of Movement Preparation on LEP Components
LEPs consist of an early N1 component, which is presumably generated in the primary and secondary somatosensory areas and the insular cortex [54,55]. Due to its earlier latency, the magnitude of the N1 component could be more directly related to the ascending nociceptive input than the later N2 and P2 components. Movement preparation was clearly able to reduce the P2 component after nociceptive laser stimulation of the hand and foot. This finding is in line with the findings of Le Pera et al. [12] in healthy volunteers, on which the protocol of the current study is based. Our results also indicate that motor preparation reduced the earlier N1 component. However, a study in 10 healthy volunteers could not reveal an effect of movement preparation on the N1 component [12]. Further research is necessary to unravel the influence of motor cortex activation on the N1 component.
Influence of Distraction on LEP Components
In this study, a mental task (backwards counting) was used to evaluate the effect of distraction. In line with the results of previous studies [56,57], the N2-P2 amplitude diminished during this mental task, which drew attention away from the laser stimulus. This effect was detected in all three groups, meaning that shifting attention away from a nociceptive stimulus in patients with chronic pain attenuates cortical responses in a similar way as in healthy volunteers. It is possible that the attenuation of LEPs induced by movement preparation is also partly due to this distraction effect, so the attenuation of LEPs can probably not be assigned solely to motor cortex inhibition.
Time-Frequency Analyses
Brief noxious stimuli are known to elicit increased neuronal activity at frequencies below 10 Hz, between 150 ms and 400 ms post-stimulus [54], which is in line with the results of this study. It is suggested that this activity originates from the sensorimotor cortex, insula, secondary somatosensory cortex and mid-/anterior cingulate cortex [58]. In this study, significant differences in transient stimulus-evoked modulations were found between rest and movement preparation in healthy controls and in patients with cWAD in the lower frequencies during this time frame. Based on these results, it might be suggested that, besides contextual modulations [59], movement preparation is also able to impact stimulus processing in healthy controls and patients with cWAD. These differences were not found in patients with CFS, potentially revealing a different impact of motor cortex activation on stimulus processing in this patient group. This phenomenon is worthy of further confirmation, as it could have implications for the construction of rehabilitation strategies for patients with CFS.
Motor Cortex Activation and Pain Relief
Finally, VAS ratings for pain intensity were slightly lower during movement preparation compared to rest when stimulating the foot. However, pain intensity during movement preparation still remained categorised as moderate pain according to the ICD-11 [60]. In a similar experiment with motor cortex activation in patients with fibromyalgia, VAS ratings did not differ between a rest condition and a movement task [61], which is in line with the unchanged VAS ratings during stimulation of the hand. Nevertheless, repetitive transcranial magnetic stimulation over the motor cortex is associated with significant pain relief [62]. Previously, the hypothesis was raised that the analgesic effects induced by experimental stimulation of the motor cortex could be attributed to actions remote from the stimulation site [63]. Apparently, the motor cortex activation induced by a simple task such as tapping or lifting a finger is not able to reduce the pain-related cortical responses [61]. Presumably, more physically demanding tasks are necessary to activate the remote processes that are related to motor cortex activation and responsible for the pain-relieving effect.
Clinical Implications
Patients with cWAD often present with motor system dysfunctions in clinical practice [64]. Impaired cervical movement control is also well-documented in these patients [65][66][67]. Additionally, a broad range of studies has explored altered central pain processing in patients with cWAD [68][69][70] and CFS [51,71]. However, studies evaluating the interaction between both systems are scarce. Therefore, this study focused on nociception-motor interactions and could not reveal differences compared to healthy volunteers when evaluating the effect of movement preparation on responses to nociceptive laser stimulation. Moreover, nociception-motor interactions can be activated when preparing a movement on the contralateral side of the painful area, but also on a remote side. The finding of similar responses in healthy volunteers and patients with cWAD and CFS further supports the use of regular body movement and exercise interventions for managing these conditions (due to their inhibitory effect on pain), which is in line with current treatment guidelines for both conditions [72][73][74][75].
Study Limitations
Previous studies have already revealed that the motor cortex is engaged during the performance of movements as well as during the movement-preparation phase [11]. However, future studies could use a verification tool (for example, measuring motor-evoked potentials) to ascertain motor cortex engagement during movement preparation. Alternatively, non-invasive brain stimulation techniques such as transcranial direct current stimulation could be used to more directly evaluate motor cortex inhibition of nociception in these populations. Additionally, the latency of the N1 LEP component elicited by stimulation of the foot was shorter than the latency of the N1 component elicited by stimulation of the hand. It has previously been noted that the N1 LEP component to foot stimulation is difficult to detect, which might have influenced these results [55]. To counter this aspect, both regular peak analyses and point-by-point analyses were performed, with results pointing in the same direction. In this study, there was no equal sex distribution, with a male/female ratio in favor of females in the CFS group. This inequality is in line with the knowledge that CFS primarily affects women, with percentages ranging from 65% up to 80% in favor of females [76]. A study in healthy volunteers concluded that there are no differences in amplitude or latency of LEPs according to sex [77]. However, more recently, a smaller study found lower LEP amplitudes in males compared to females [78]. Future studies are needed to elucidate the exact role of sex in LEPs. Furthermore, patients received no specific instructions about medication use (except for discontinuation of non-opioid analgesic and anti-inflammatory drugs 48 h before testing) and continued to take their current medication, due to ethical considerations and to avoid withdrawal symptoms. This could potentially have influenced our results, since previous studies have revealed an opioid-related N1, N2, and P2 amplitude reduction [79][80][81], as well as a decreased P2 amplitude caused by non-opioids [82].
Conclusions
Movement preparation induced an attenuation of the LEP waveform after nociceptive laser stimulation of the hand and foot. No significant differences in nociception-motor interactions, evaluated by activating the motor cortex through voluntary movement preparation, were found between patients with cWAD, patients with CFS, and healthy volunteers. | 2020-05-21T09:18:29.273Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "b946b358acfd912a1b902434006a564d7fbd6638",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/9/5/1520/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3f61fc6984a0d7bf387ba2b3acceff736dbaee9",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231966503 | pes2o/s2orc | v3-fos-license | Awareness and prevalence of e-cigarette use among Chinese adults: policy implications
Objective To assess the awareness and prevalence of electronic cigarettes (e-cigarettes) and the associated factors among Chinese adults (15 years and older). Method This study examined data from the Global Adult Tobacco Survey China Project, which was nationally representative and used a stratified, multiphase, cluster randomised sampling design. Data were collected in 2018 through a household survey with in-person interviews using tablet computers. A complex-sampling weighted analysis method was used. Results 48.5% of Chinese adults had heard of e-cigarettes. The proportions of Chinese adults who had ever used, had used in the last 12 months, and currently used e-cigarettes were 5.0%, 2.2% and 0.9%, respectively; people in the 15–24 age group showed the highest rates of ever use, last 12-month use and current use at 7.6%, 4.4% and 1.5%, respectively. Among males, higher e-cigarette use was associated with the 15–24 age group, college/university or above education, and daily use of combustible cigarettes. Among all e-cigarette users, 90.6% also used combustible cigarettes. The most common reason for e-cigarette use was smoking cessation (46.2%), while among ever smokers, 9.5% of ever e-cigarette users had quit smoking and 21.8% of never e-cigarette users had quit smoking (adjusted OR 0.454, 95% CI 0.290 to 0.712). Conclusion The prevalence of e-cigarette use among Chinese adults had increased since 2015, especially among young people aged 15–24. The high level of dual use and the lower quit rate among e-cigarette users indicate that e-cigarettes had not shown cessation utility at the population level in China. Regulation of e-cigarettes is needed to protect youth and minimise health risks.
BACKGROUND
Since 2003, 1 electronic cigarettes (e-cigarettes) have swept across the world, with sales having increased from US$20 million in 2008 to US$15 billion in 2018. 2 3 Although the long-term health effects of e-cigarette use are not yet clear, many studies have shown that e-cigarettes can expose users to toxic chemicals, including nicotine, carbonyl compounds and volatile organic compounds, which are known to have adverse health effects for both users and non-users. 4-7 E-cigarettes on their own are associated with increased risk of cardiovascular diseases, lung disorders and adverse effects on the development of the fetus during pregnancy. 8 9 Although the number and levels of known toxicants generated by the typical use of unadulterated electronic cigarettes are on average lower than in cigarette smoke, toxicant levels can vary enormously across and within brands and sometimes reach higher levels than in tobacco smoke. 10 Dual use of e-cigarettes and combustible cigarettes, which is the use pattern of a considerable number of e-cigarette users, 11-13 is increasingly found to be associated with critical short-term and long-term adverse health impacts. [14][15][16][17][18] It is of great public health concern that children and adolescents are increasingly taking up the use of e-cigarettes in some countries. [19][20][21] The addictive nature of nicotine can lead to dependence and may harm adolescents' brain development. 22 There is also a growing body of evidence showing that non-smoking adolescents who use e-cigarettes increase their chance of starting to smoke cigarettes. [23][24][25] Although most e-cigarettes are currently consumed in countries such as the USA and the UK, 26 e-cigarette use may be growing rapidly in China for various reasons, including the large number of smokers, growing concerns about the harms of cigarette smoking, the implementation of stronger subnational smoke-free legislation, the aggressive marketing of e-cigarettes, and the lack of strong laws and regulations governing e-cigarettes. Thus, monitoring the use of e-cigarettes among both youth and adults is important to inform public health policies on e-cigarettes and tobacco control in general. According to iiMedia, a third-party data mining and analysis organisation for new industries, the total domestic sales revenue of e-cigarettes was ¥3.2 billion (US$0.46 billion) in 2015 and had increased to ¥5.06 billion (US$720 million) in 2018 and ¥7.86 billion (US$1.12 billion) in 2019. In 2020 and 2021, it was expected to reach ¥8.38 billion (US$1.2 billion) and over ¥9 billion (US$1.29 billion), respectively. 27 28 Several studies have been conducted in China using city-level data and have provided valuable insights into the awareness and use of e-cigarettes. For example, the International Tobacco Control Survey in China, which was conducted in 10 cities, found that less than a third of Chinese adults were aware of e-cigarettes and 2% had tried the product. 29 Zhao et al looked at the prevalence of e-cigarettes in 14 major Chinese cities and analysed the correlates of e-cigarette awareness and use among urban residents. 30 Huang et al examined the awareness and use patterns of e-cigarettes, as well as the associated factors of e-cigarette use, in five Chinese cities. 31 Zhao et al examined the perception of e-cigarette use among adult users in Shanghai. 32
At the national level, Xiao et al assessed the prevalence of e-cigarettes among middle school students using data from China's Youth Tobacco Survey, and examined the factors associated with awareness and use. 33 Wang et al used a mobile app survey and analysed the perception and use of e-cigarettes among young Chinese adults of different smoking statuses. 34 Our study used the most recent nationally representative household survey data from 2018 and examined the awareness and prevalence of e-cigarettes, as well as the reasons, patterns and associated factors of e-cigarette use among Chinese adults (15 years and older). We also examined the change in e-cigarette prevalence between 2015 and 2018.
METHODOLOGY
Study design and participants
Data used in this paper were from the Global Adult Tobacco Survey China Project, which used a globally standardised methodology and was conducted in 2018 by the Chinese Center for Disease Control and Prevention. 35 A multistaged, geographically clustered sample design was used to produce nationally representative data. In total, 200 counties/districts from 31 provincial-level administrative jurisdictions of Mainland China were sampled. Nationally, a household survey method was used: a total of 24 370 households were sampled, and one individual was randomly selected from each participating household to complete the survey. The investigators used a handheld digital tablet to collect data through in-person interviews.
The subjects of this survey were Chinese residents aged 15 and older who used the household as their primary residence in the month before the survey. The survey excluded those who lived collectively in places such as student dormitories, nursing homes, military camps, prisons or hospitals.
Measures
Awareness of e-cigarettes was measured by the question: 'Before today, have you ever heard of electronic cigarettes?' If the answer was 'yes', then 'Where did you hear about electronic cigarettes?' was asked. 'Have you ever, even once, used an electronic cigarette?', 'During the past 12 months, have you ever, even once, used an electronic cigarette?' and 'Do you currently use electronic cigarettes daily, less than daily, or not at all?' were asked to measure e-cigarette use. In addition, 'What was the main reason that you used electronic cigarettes?' was asked of those who used e-cigarettes. Information on family income was collected in four income levels: 'less than ¥29 999', '¥30 000–¥49 999', '¥50 000–¥99 999', and '¥100 000 and above'. Perception of harms around smoking was measured by three questions: 'Based on what you know or believe, does smoking tobacco cause stroke?', 'Based on what you know or believe, does smoking tobacco cause heart disease?' and 'Based on what you know or believe, does smoking tobacco cause lung cancer?' If a participant answered 'yes' to all three questions, this indicator was coded 'yes', and 'no' otherwise.
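A minimal sketch of that coding rule (the column names are hypothetical; the actual processing was done in the survey's own pipeline):

```python
import pandas as pd

# Three 'yes'/'no' answers per respondent (hypothetical column names)
df = pd.DataFrame({
    "causes_stroke":      ["yes", "yes", "no"],
    "causes_heart":       ["yes", "no",  "yes"],
    "causes_lung_cancer": ["yes", "yes", "yes"],
})
cols = ["causes_stroke", "causes_heart", "causes_lung_cancer"]
# 'yes' only if all three questions were answered 'yes'
df["perceives_harm"] = (df[cols] == "yes").all(axis=1).map({True: "yes", False: "no"})
print(df["perceives_harm"].tolist())  # ['yes', 'no', 'no']
```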
Statistical analysis
Due to the complex sample design for the survey, each responding unit was assigned a unique survey weight that was used to produce estimates of population parameters. The 2018 population data from the National Statistics Bureau of China were used for poststratification. All computations were performed using the SAS V.9.4 complex survey data analysis procedure. Percentage or proportion was used for descriptive statistics. Logistic regression was conducted to explore factors associated with current e-cigarette use and the association between e-cigarette use and smoking cessation. A p<0.05 was considered statistically significant. The survey passed the review of the Ethical Committee of the Chinese Center for Disease Control and Prevention.
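A minimal sketch of a weighted logistic regression of this kind is shown below, using Python's statsmodels on simulated data. It treats the survey weights as simple frequency weights, which does not reproduce the design-based (complex-survey) standard errors that the SAS procedures compute, and all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age_15_24":    rng.integers(0, 2, n),
    "college":      rng.integers(0, 2, n),
    "daily_smoker": rng.integers(0, 2, n),
    "weight":       rng.uniform(0.5, 3.0, n),  # survey weight per respondent
})
# Simulate a binary outcome from a known logistic model
logit = -4 + 1.0*df["age_15_24"] + 0.8*df["college"] + 1.2*df["daily_smoker"]
df["ecig_current"] = rng.binomial(1, 1/(1 + np.exp(-logit)))

X = sm.add_constant(df[["age_15_24", "college", "daily_smoker"]])
fit = sm.GLM(df["ecig_current"], X, family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()
print(np.exp(fit.params))  # adjusted odds ratios
```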
RESULTS
Demographic characteristics
Out of a total of 24 370 selected households, 3193 empty households were eliminated. Finally, 19 640 households completed the survey and a total of 19 376 people completed the individual survey. The overall response rate was 91.5%. The surveyed 19 376 individuals represented 1 156 987 000 males and females aged 15 and above in China. Of these, 50.6% were male and 49.4% were female; 59.9% were from urban areas and 40.1% were from rural areas. Among surveyed participants, 34.0% had middle school as their highest level of education, 32.6% attained elementary school education. People with high school education accounted for 16.4% and those with college/university or above education were 17.0% (table 1).
Awareness of e-cigarettes
In 2018, 48.5% of adults aged 15 and older had heard of e-cigarettes (95% CI 46.0% to 51.0%), with the proportion higher among males (59.1%) than females (37.7%); higher among young people (69.9% for participants aged 15-24) than other age groups (62.5% for people aged 25-44, 37.2% for people aged 45-64, and 16.9% for people aged 65 and older); higher among people with higher education (77.0% of those with college/university or above) than with lower education (61.7% of those with high school education, 48.0% of those with middle school education, and 17.1% of those with elementary school or below education); higher among urban residents (56.3%) than rural residents (37.0%); and higher among current smokers (62.3%) than never smokers (42.8%). Regarding the information sources for hearing of e-cigarettes, the most common source was friends (63.9%), followed by the internet (44.8%) and television (42.7%); other sources were relatively less common, including shops (18.6%), newspapers/magazines (12.4%) and radio (7.4%). Since the rate of current e-cigarette use among female participants was too low, logistic regression was used to explore the factors associated with current use among male participants only. As shown in table 4, among male participants, being in the 15-24 age group, having a college/university or above education, and using combustible cigarettes daily were associated with a higher rate of e-cigarette use. There was no association between e-cigarette use and place of residence (urban vs rural) or income level. In addition, among current e-cigarette users, 90.6% used both e-cigarettes and combustible cigarettes.
Reasons for using e-cigarettes
The most common reason for e-cigarette use was smoking cessation, with 46.2% (95% CI 35.9% to 56.5%) of current users saying they used e-cigarettes to quit smoking; 11.7% (95% CI 6.0% to 17.4%) of current users reported that they used e-cigarettes because they were less harmful; 10.5% (95% CI 1.9% to 19.1%) indicated that e-cigarettes were fashionable; and 9.2% (95% CI 1.6% to 16.8%) reported that they used e-cigarettes because they liked the flavours. From the most common to the least common, these reasons were chosen in the same order by ever users and last 12-month users. Among people aged 15-24 who currently used e-cigarettes, the main reason for using e-cigarettes was smoking cessation (43.4%), followed by believing it was fashionable (27.9%).
Role of e-cigarettes in smoking cessation at population level
Among ever smokers, 9.5% of ever e-cigarette users had quit smoking, while the proportion of never e-cigarette users who had quit smoking was 21.8%. Controlling for variables such as gender, education level, age group, urban/rural residence, income and perception of harms around smoking in the logistic regression, e-cigarette users had a lower quit rate than never e-cigarette users (adjusted OR 0.454, 95% CI 0.290 to 0.712).
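As a quick consistency check (our arithmetic, not the paper's), the reported interval matches a standard Wald confidence interval on the log-odds scale:

$\hat\beta = \ln(0.454) \approx -0.790, \qquad \mathrm{SE} \approx \frac{\ln(0.712) - \ln(0.290)}{2 \times 1.96} \approx 0.229,$

$\mathrm{CI}_{95\%} = \exp(-0.790 \pm 1.96 \times 0.229) \approx (0.290,\ 0.712).$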
DISCUSSION
Between 2015 and 2018, awareness of e-cigarettes among Chinese adults had increased from 40.5% to 48.5%. Current use of e-cigarettes had almost doubled, from 0.5% to 0.9%, and ever use had increased from 3.1% to 5.0%. 36 Among current users, the percentage of daily users increased from 3.1% in 2015 to 10.7% in 2018. These rates indicate that between 2015 and 2018, there were an additional 4.4 million current users of e-cigarettes, bringing the total number of current adult e-cigarette users in China to over 10 million. Among them, an estimated 1.1 million people used e-cigarettes daily.
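These headline counts follow directly from the survey's weighted population base of 1 156 987 000 adults:

$1\,156\,987\,000 \times 0.9\% \approx 10.4\ \text{million current users}, \qquad 10.4\ \text{million} \times 10.7\% \approx 1.1\ \text{million daily users}.$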
There is an increasing body of evidence showing that young people who use e-cigarettes, who have never smoked before and are considered at low risk of later taking up smoking, increase their chance of smoking combustible cigarettes later in life by two- to four-fold. [23][24][25] It is particularly concerning that children and adolescents are increasingly taking up the use of e-cigarettes in some countries, [19][20][21] and this concern is merited in China, as shown in this study. We found that the rates of e-cigarette use among young people aged 15-24, by measures of current use and ever use, were consistently higher than among other age groups. This age group had also shown the most significant increase in prevalence since 2015. Similar to Western countries, 37 China's e-cigarette marketers target the younger generation. 38 E-cigarette companies often promote their products as fashionable accessories and emphasise their products' modern and stylish design. 38 The perception among young people that e-cigarette use is fashionable suggests that the marketing of e-cigarettes may have played a role in such beliefs and in the higher use rates among young people. Previous studies have also revealed that flavours have played an important role in the increase of e-cigarette use among youths. 39 40 Many of these same flavours for e-cigarettes are available in China, such as tobacco, mint/menthol, coffee, fruit or candy. 41 Thus, effectively regulating and limiting flavours, probably through the issuance of national product standards for e-cigarettes, as well as national laws or regulations banning the marketing (including advertising, promotion and sponsorship) of e-cigarettes, are urgently needed in China to prevent a further increase of e-cigarette use among young people. In August 2018, the State Tobacco Monopoly Administration and the State Administration for Market Regulation coissued a notice banning e-cigarette sales to minors. 42 In November 2019, the two government agencies coreleased another notice, further urging e-cigarette manufacturers and sellers to stop selling and advertising e-cigarettes through online channels. 43 In early November 2019, eight government agencies, including the National Health Commission, the Propaganda Department of the Communist Party of China, the Ministry of Education, the State Administration for Market Regulation, the National Radio and Television Administration, the State Tobacco Monopoly Administration, the Central Committee of the Communist Youth League and the All-China Women's Federation, jointly issued a notice on further strengthening youth tobacco control work. 44 In this notice, relevant agencies were discouraged from promoting e-cigarettes as a cessation tool. The notice also promoted the prohibition of e-cigarette use in public places and reiterated the ban on e-cigarette sales to minors, especially through the Internet. However, how these policies are being implemented, and their effects, are unclear. For example, according to iiMedia, many e-cigarette companies had turned to offline sales and marketing channels, such as convenience stores and shopping mall stores. 28 Future studies can examine the effects of these policies and offer lessons to other countries regarding policy interventions to prevent youth from using e-cigarettes.
Our study showed an association between e-cigarette use and daily use of combustible cigarettes among male users. This was consistent with previous studies. 30 33 For example, Wang et al 34 found that both current and former smokers had a higher OR of knowing about and using e-cigarettes than never smokers. 33 Zhao et al 30 found that among male current smokers, those who smoked more than 15 cigarettes per day were more likely to use e-cigarettes. 30 Similarly, our study found a high percentage of dual use: 90.6% of current e-cigarette users used both e-cigarettes and combustible cigarettes. This is also consistent with findings from previous studies. [31][32][33] Additionally, we found that the most common reason for using e-cigarettes was smoking cessation among people of all age groups who currently used e-cigarettes. Wang et al 34 also found that current smokers who had tried to quit were much more likely to use e-cigarettes than never smokers. 30 Zhao et al 32 found that male current smokers who had tried to quit in the past 12 months were more likely to use e-cigarettes. 33 Although some studies indicate that e-cigarettes might help smokers quit smoking combustible cigarettes, the scientific evidence regarding the effectiveness of e-cigarettes as a smoking cessation aid is still being debated. [45][46][47] In its 2019 report on the global tobacco epidemic, 48 WHO stated that there is 'insufficient independent evidence to support the use of e-cigarettes as a population-level tobacco cessation intervention to help people quit conventional tobacco use.' In our research, although the most common reason for e-cigarette use in China was cessation, never e-cigarette users had a much higher quit rate than ever e-cigarette users. This finding, along with the high percentage of dual users (90.6%) shown in our study, indicates that e-cigarettes had not shown cessation efficacy at the population level in China. Government agencies are currently discouraged from promoting e-cigarettes as a cessation tool, but there is no regulation prohibiting e-cigarette manufacturers and sellers from claiming or promoting the cessation utility of their products.
China remains the world's largest tobacco producer and consumer. 35 In 2018, 26.6% of Chinese adults (aged 15 and older) currently smoked. In 2019, the Chinese government published the Healthy China 2030 Action Plan, specifying tobacco control actions to decrease the smoking prevalence among people aged 15 and older to below 20% by 2030. 49 Achieving this target requires effective measures to help smokers quit and prevent children and young people from becoming smokers. The increase in e-cigarette use among Chinese adults, especially among younger adults, and the high prevalence of dual use have made the tobacco control situation even more complex. Without effective regulations, e-cigarette use could exacerbate the nicotine addiction epidemic, which may erode progress made in tobacco control. The findings of this paper call for policies such as banning the marketing of e-cigarettes, enforcing the ban of sales to minors and the ban of e-cigarette advertising and sales on the Internet, prohibiting the claim or promotion of the cessation utility of e-cigarettes before solid evidence is available, and regulating flavours to minimise the risks of e-cigarettes at the population level, especially among young people.
Limitations
In the household survey, one individual was randomly selected from each household to complete the survey. Because of urbanisation, many young people had moved to larger cities and, as a result, young people were under-represented in the sample, especially in rural areas. To account for this, weighting and poststratification adjustment were used in this study. In addition, because this survey excluded those who lived collectively in places such as student dormitories, nursing homes, military camps, prisons or hospitals, few college students may have been covered by the survey. Our study showed that people with a college/university or above education level were more likely to use e-cigarettes, so the prevalence of e-cigarettes among people aged 15-24 shown in our study might be underestimated.
What this paper adds
⇒ This paper describes the awareness and prevalence of e-cigarettes and analyses the associated factors of e-cigarette use among Chinese adults, using nationally representative data.
⇒ Consistent with the trend of increasing e-cigarette use in some other countries, our paper reveals an increase of e-cigarette use between 2015 and 2018 in China, especially among young people aged 15-24.
⇒ Considering the findings on the high percentage of dual users and the lower quit rate of ever e-cigarette users compared with never e-cigarette users, we raise questions about the effectiveness of e-cigarettes for smoking cessation at the population level.
⇒ Our findings suggest future research areas and call for effective regulations to protect youth and minimise the risks of e-cigarettes to public health. Finally, the study offers national-level data analysis that can be compared across countries. | 2021-02-20T14:07:30.925Z | 2021-02-19T00:00:00.000 | {
"year": 2021,
"sha1": "b657863620956a289ef3902f45ece440ddd7d55e",
"oa_license": "CCBYNC",
"oa_url": "https://tobaccocontrol.bmj.com/content/tobaccocontrol/early/2021/02/18/tobaccocontrol-2020-056114.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "379bacf6f122b4768e93ad67b8ff0eb81cc154f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125954305 | pes2o/s2orc | v3-fos-license | All-Flavor Searches for Dark Matter with the IceCube Neutrino Observatory (PoS(ICRC2015)1224)
Dark matter particles can be trapped in massive celestial bodies, such as the Sun or the Earth and their self-annihilations may produce standard model particles, including neutrinos of all flavors. So far, IceCube dark matter searches have focused on muon neutrinos due to their track-like topology and the resulting good pointing resolution. However, recent developments of reconstruction tools have allowed us to reconstruct electron and tau neutrino interactions with sufficiently good angle and energy resolutions and to estimate the corresponding uncertainties. IceCube’s in-fill array DeepCore, when using the outer IceCube detector as a veto, permits us to extend all-flavor dark matter searches to energies well below neutrino energies of 100 GeV. This is particularly important for the search of Weakly Interacting Massive Particles (WIMPs) that accumulate in the center of the Earth, as their annihilation rate is expected to be enhanced for WIMP masses around 50 GeV/c2. All-flavor neutrino searches, in principle, enhance IceCube’s sensitivity with respect to previous searches based solely on muon neutrinos. While this paper primarily focuses on demonstrating the applied methods, we will also present a sensitivity and discovery potential for an Earth WIMP search as well as data selection efficiencies for an ongoing solar analysis. We find that efficient neutrino flavor identification is challenging at low energies given that the signatures for tracks and showers are very similar. For the proposed event selection the signal rate increases by a factor of two. However, the worse angular resolution for cascades gives rise to a larger background in the signal region.
Introduction
The IceCube Neutrino Observatory, located at the geographic South Pole, consists of the IceCube neutrino telescope and the IceTop air shower array [1]. In the ice, a volume of one cubic kilometer is instrumented with 5160 digital optical modules (DOMs), deployed at depths between 1450 m and 2450 m. The denser low-energy infill array, DeepCore, considerably improves the detection of neutrinos with energies below 100 GeV due to the higher sensor granularity and the veto capacity of the surrounding IceCube strings.
IceCube can detect all flavors of active neutrinos through Cherenkov light emission from secondary particles created when a neutrino interacts in the ice. The primary background in the search for neutrinos originates from cosmic ray hadronic air showers produced in the Earth's upper atmosphere. The decay of pions, kaons (and charmed mesons) results in a continuous stream of neutrinos and penetrating muons. High energy muons are capable of traveling long distances through matter before they eventually decay, resulting in a downgoing muon flux at the IceCube detector. In contrast to earlier solar analyses, we do not restrict ourselves to periods where the Sun is below the horizon and the Earth forms a shield against cosmic ray muons. Instead, we take advantage of IceCube's veto capabilities, which limits the accessible WIMP mass range to below 1 TeV/c².
Muon neutrinos with extended track-like topologies are relatively easy to reconstruct, with degree-level pointing precision. The reconstruction of electron and tau neutrino interactions, which leave cascade-like signatures in the detector, is more challenging. Due to this better angular resolution, Earth WIMP searches [2] as well as solar WIMP searches [3] with IceCube have until now aimed at extracting solely ν µ events from the dataset. However, all-neutrino searches have become more important recently. The reasons are obvious [4]: the measured flux is enhanced, the neutrino energies may be determined to a better precision, backgrounds from atmospheric ν e and ν τ are smaller, and cosmic ray muons tend to be rejected better by requiring that the events have a cascade-like signature.
This paper discusses a study of IceCube's sensitivity to WIMP annihilations in the centers of the Earth and Sun with an analysis that is sensitive to all flavors of neutrinos. The methods presented in sections 3 and 4 have been developed for application to one year of 86-string configuration IceCube data from the 2011 season. Note that while both the Earth and the Sun may trap WIMPs, only the Sun is expected to be in equilibrium, making the annihilation rate directly proportional to the WIMP-nucleon scattering cross section. The lower mass of the Earth prohibits this equilibrium in most cases. Similar to the moderation process in nuclear reactors, WIMPs are slowed down most efficiently if their mass is close to that of their nuclear scattering partners. As iron is very abundant in the Earth's core, 50 GeV/c² mass WIMPs are most easily captured, thereby enhancing the annihilation rate. For the simulation of a WIMP-induced neutrino signal we use the WimpSIM package [5], which takes care of neutrino generation, propagation and oscillations.
Low Energy Cascade Reconstruction
Unlike the extended tracks caused by muons from charged-current (CC) ν µ events, ν e and ν τ leave an almost spherical pattern of hit DOMs in the detector. The e ± produced in CC ν e interactions are subject to successive bremsstrahlung energy losses and lead to electromagnetic cascades.
ν τ interactions and τ decays predominantly produce hadronic cascades, as do neutral-current interactions of all neutrino flavors. While the energy reconstruction benefits from the confined event signature, a good directional reconstruction of the spherically shaped cascade events demands significant computing resources and also an excellent description of the ice properties [6]. Energy E, position and orientation are estimated [7] by minimizing the negative log-likelihood
$-\ln L = \sum_i \left[ E\,\Lambda_i + \rho_i - k_i \ln\left( E\,\Lambda_i + \rho_i \right) \right],$
where k_i is the number of photons observed in DOM i and ρ_i is the expected number of noise photons.
The number of photons per unit energy for an assumed orientation and vertex, Λ_i, incorporates detailed information on the position-dependent absorption and scattering of photons in the ice. This information is available in the form of spline-fitted [8] tables obtained from a photon-tracking simulation using a ray-tracing algorithm modeling scattering and absorption. When iterating the minimization chain (in this analysis 32 times) and optimizing minimization parameters, the resulting angular resolutions are similar to the ones seeded by the true direction and vertex. The energy-dependent median spatial angle resolution is shown in Fig. 1. The analyses presented in the following sections focus on rather low energies where the discussed methods are competitive, but an efficient particle identification of cascades and tracks is not possible. Therefore all flavors, including muon (see Fig. 1, solid orange line), are first reconstructed with a cascade event hypothesis.
Individual event resolutions may vary from the average resolution (see Fig. 1) depending on the event's exact topology and the amount of light deposited in the detector volume. Since event-based resolutions allow for a reconstruction-quality-based event weighting, a resolution estimator based on the Cramer-Rao lower bound on the variance was coded. Assuming a set of parameters θ = (x_0, y_0, z_0, θ, φ), the vertex and directional angles, we formulate a Poissonian likelihood
$L(\vec{\theta}) = \prod_{i,j} \frac{\mu_h(\vec{\theta},i,j)^{\,n(i,j)}}{n(i,j)!}\, e^{-\mu_h(\vec{\theta},i,j)} \prod_k e^{-\mu_{nh}(\vec{\theta},k)},$
with µ_h being the expected number of photons in module i for time bin j (µ_nh for a non-hit module k, respectively) and n the actually measured number of photons.
In order to obtain resolution expectations for individual cascades, one can either scan the likelihood around its minimum or take advantage of the Cramer-Rao bound. Under certain conditions,
the latter provides the relation $V(\hat{\vec{\theta}}) \geq F^{-1}(\vec{\theta})$, where F is the Fisher information matrix, whose inverse bounds the covariance matrix. Applying the second derivative and exploiting n(i, j) = µ_h(θ, i, j), we obtain:
$F_{ab} = \sum_{i,j} \frac{1}{\mu_h(\vec{\theta},i,j)} \frac{\partial \mu_h(\vec{\theta},i,j)}{\partial \theta_a} \frac{\partial \mu_h(\vec{\theta},i,j)}{\partial \theta_b}.$
The expected number of registered photons for DOM i and time bin j, µ_h(θ, i, j), is obtained from the spline-fitted tables discussed above. In order to accelerate the algorithm, optical modules that were not hit, the actual number of detected photons, and their timing information are currently ignored. Standard deviations calculated from the diagonal entries of the covariance matrix correlate well with the actual resolutions. The estimate of the spatial resolution is approximated by
$\sigma \approx \sqrt{\sigma_\theta^2 + \sin^2(\theta_{reco})\,\sigma_\varphi^2},$
where σ_θ/ϕ are the zenith and azimuth uncertainties and θ_reco is the reconstructed zenith angle. Figure 2 shows the relation between the median resolution, taken from the difference of reconstructed and true event direction, and the estimate.
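The sketch below illustrates the structure of this Cramer-Rao estimate with a toy photon-expectation function; in the real reconstruction mu() comes from the spline-fitted ice-model tables, so everything here, including the parametrization, is a simplified stand-in:

```python
import numpy as np

def mu(theta, n_bins=50):
    """Toy expected photon counts per time bin, parametrized by (zenith, azimuth)."""
    zen, azi = theta
    t = np.linspace(0.0, 1.0, n_bins)
    return 5.0 + 3.0*np.cos(zen)*t + 2.0*np.sin(azi)*t**2

def fisher(theta, eps=1e-5):
    """F_ab = sum_bins (1/mu) dmu/dtheta_a dmu/dtheta_b for a Poisson model."""
    m = mu(theta)
    grads = []
    for a in range(len(theta)):
        step = np.zeros(len(theta)); step[a] = eps
        grads.append((mu(theta + step) - mu(theta - step)) / (2.0*eps))
    G = np.array(grads)                       # shape: (n_params, n_bins)
    return (G / m) @ G.T

theta_reco = np.array([0.8, 1.5])             # zenith, azimuth in radians
cov = np.linalg.inv(fisher(theta_reco))       # Cramer-Rao bound on the covariance
sig_zen, sig_azi = np.sqrt(np.diag(cov))
# Combine zenith and azimuth uncertainties into a spatial-angle estimate
sigma_psi = np.sqrt(sig_zen**2 + (np.sin(theta_reco[0])*sig_azi)**2)
print(sigma_psi)
```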
Search for dark matter annihilation in the center of the Earth
As a test case, we restrict ourselves in this study to 50 GeV/c² WIMPs. This has the advantage that we can make use of the enhanced cross section. The disadvantage is that neutrinos from the annihilation are very low in energy and thus are at the threshold of detection. The most energetic neutrinos at these energies stem from the annihilation into tau pairs, and for this reason we concentrate first on this channel. In order to extract this signal from the dataset, an event selection favoring potentially well-reconstructed, low-energy neutrino events, mainly from the northern hemisphere, was developed. For example, the reconstructed energy was required to be in the 5 to 30 GeV range. This selection is subdivided into levels, each combining requirements pursuing a similar objective.
The progress of the signal and background rates from Monte Carlo (MC) simulations and of the data rates through the selection levels is shown in Fig. 3. Data here refers to a subset of the full experimental dataset. The applied event selection achieves a signal efficiency of about 4% while the atmospheric muon background is reduced by seven orders of magnitude. As can be seen in the lower portion of Fig. 3, showing the data-MC agreement, the data rate falls short of the MC rate by about 25% at the final selection level. Outliers at intermediate levels can mostly be explained by unsimulated noise. This discrepancy is removed by subsequent noise-rejecting vetos. The precise MC description of experimental data in the low energy regime is a non-trivial issue which is currently subject to collaboration-wide investigations.
Since the event simulation and the simulation of the detector response near the detector's energy threshold are challenging, the method used to analyze a potential WIMP signal should not rely on precise MC background predictions. A further difficulty is particle identification at very low energies, as well as the directional reconstruction of low energy events in general and cascades in particular. A likelihood fitting procedure takes advantage of distinguishing features of cascade and track signatures and of the angular distribution of signal and backgrounds. The basic input for the algorithm are two-dimensional histograms of two reconstructed zenith angles: one from an algorithm developed to reconstruct cascade-like events, the other well suited for the reconstruction of tracks. For the case of signal events coming from the direction of the Earth's core, one would, e.g., expect the cascade likelihood to recover the direction of the cascade-like events while the algorithm using the incorrect track hypothesis smears this directional peak. This concept also reveals distributional differences between atmospheric electron and muon neutrinos and discriminates against the atmospheric muon background. In order to pass this distributional information efficiently to the fitter, the two-dimensional histograms are binned such that signal regions are well resolved while the background-dominated parts are merged. For the sake of simplicity, and in order to increase the MC statistics, the three signal channels are combined into one all-flavor flux χ. Fig. 4 shows a comparison of the histograms of WIMP-induced neutrinos and atmospheric neutrinos and demonstrates the discrimination potential of this method. The histogram for experimental data is analyzed by an algorithm maximizing the Poissonian likelihood, with the MC prediction and data k_i in bin i and the physical fit parameter α revealing which annihilation rate is compatible with the experimental data. Nuisance parameters included in this likelihood account for the relevant systematic uncertainties. These are the absolute flux normalizations of the atmospheric backgrounds and the pion-to-kaon ratio in the generation of atmospheric neutrinos, which is contained in the weight ∆r π/K,ν in Eq. 3.2. Thus, the final log-likelihood function includes Gaussian
All-Flavor Searches for Dark Matter Klaus Wiebe penalty factors for the nuisance parameters, reads: The incorporated nuisance parameters together with their priors and uncertainties are summarized in Table 1 To determine the sensitivity, evidence and discovery potential, a likelihood-ratio test is performed on simulated data using the test statistic w = 2 • ln maxL max H 0 L , where max H 0 L denotes the maximum likelihood under the null hypothesis while maxL refers to the maximum likelihood under the alternative, i.e. signal, hypothesis.From the comparison of the test statistic distribution for many simulated data realizations containing a certain signal strength with simulated background only data realizations the sensitivity, evidence and discovery potential were calculated for one year of IceCube data (see Table 2).
Since the Earth is not expected to have reached equilibrium between WIMP capture and annihilation, the annihilation rate is not directly correlated with the WIMP-nucleon scattering cross section. For this reason, the sensitivities are given in terms of annihilation rates, which preserves as much model independence as possible.
The sensitivity determined in this analysis is not yet competitive with a dedicated study for Earth WIMPs based solely on ν µ events [2]. The implementation of multivariate techniques promises the required enhancement of the efficiency, as can be seen in the following section.
Search for dark matter annihilation in the center of the Sun
The indirect detection of solar WIMP particles with IceCube places some of the most stringent limits on the spin-dependent nucleon-WIMP scattering cross-section [9,10]. Here we explore the capabilities of the all-flavor approach by taking advantage of the cascade reconstruction methods discussed in section 2.
Figure 5 shows the development of data and MC rates throughout the event selection. For the signal MC, a showcase WIMP candidate mass of 100 GeV/c² is chosen, while the complete analysis will cover a candidate mass range from 50 to 1000 GeV/c². Cut levels 3 and 4 aim at significantly reducing the dominant background from atmospheric muons, followed by filters that effectively remove noise clusters and coincident events. Similar to the Earth WIMP analysis, we see differences between data and Monte Carlo which we suspect to be due to an imperfect description of the optical module noise for the particular time period. Concerted efforts are under way in the collaboration to remedy this situation.
Table 3 (column headings recovered from the extraction): candidate mass, hard channel efficiency, soft channel efficiency.
Following level 5, a set of Boosted Decision Trees (BDTs) is trained to discriminate signal-like from background-like events, leading to level 6 by selecting events with a BDT score larger than 0.04 (see Fig. 6). Twelve variables are used as BDT input, including reconstructed direction, energy and vertex, reconstruction quality parameters, as well as veto and geometrical quantities. An overtraining check was performed, showing good agreement of the training and testing score distributions. The efficiency at level 6, compared to level 2, is ≈ 10⁻⁶ for atmospheric muons and 5.0% (8.4%) for signal neutrinos from the W⁺W⁻ (bb) channel. Signal efficiencies are shown in Table 3 and are competitive with the final-level efficiencies obtained with a track-based approach [10]. Ultimately, we want to include WIMP masses up to 1 TeV/c², but we concentrate here on the low-energy 50 and 100 GeV/c² candidates.
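In the same spirit, a minimal BDT-selection sketch with scikit-learn is given below; the three input features stand in for the twelve real variables, the labels are simulated, and the 0.04 threshold is only analogous to the quoted cut, since score scales differ between BDT implementations:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(0.0, 1.0, (n, 3))   # stand-ins for reconstructed zenith, energy, vertex
y = rng.integers(0, 2, n)          # 1 = signal MC, 0 = off-source data
X[y == 1] += 0.5                   # toy separation between the two classes

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)
score = bdt.decision_function(X)   # continuous BDT score per event
selected = score > 0.04            # analogous to the cut quoted above
print(selected.mean())
```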
For the determination of limits we will choose a likelihood approach [11] that will consider direction, directional uncertainty estimates as well as the reconstructed energy.
Figure 1: Angular resolution of the cascade reconstruction algorithm versus the energy of neutrinos. The resolution improves with energy for cascade signatures. For comparison, the dashed orange line shows the resolution potential for tracks reconstructed by an algorithm using the correct track hypothesis. (Curves: ν e, ν τ and ν µ from solar WIMPs under the cascade hypothesis; ν µ from solar WIMPs under the track hypothesis.)
Figure 2: Cascade resolution estimator: a successful quantitative modeling of the actually achieved resolution.
Figure 3: Rates of signal (arbitrarily scaled for comparison), background MC and data versus selection levels. The lower subplot shows the ratio of data over the sum of the background MC rates and its statistical uncertainty. The arrow indicates an outlier not visible at the scale chosen.
Figure 4: Two-dimensional histogram of reconstructed zenith angles with apt binning for atmospheric electron neutrinos (left) and for WIMP-induced neutrinos (right). The bin contents are normalized to the bin size to emphasize the relative contributions.
Figure 5: Data and Monte Carlo (MC) rates versus selection level. The simulated all-flavor signal rate is shown for a 100 GeV/c² WIMP mass, arbitrarily scaled assuming 10²⁵ annihilations/s. The bottom sub-plot shows the ratio of data and the sum of background MC including statistical uncertainties.
Figure 6: Score distribution after BDT training. Events with a spatial angle deviation from the solar position larger than 40 degrees are defined as "off-source". Such data (black) are used to train against the signal simulation (blue, shown here for a 100 GeV/c² candidate). We reserve 50% of the data and the Monte Carlo signal events for overtraining checks. Shown for comparison are atmospheric muons (purple) and ν µ (ν e ) in red (orange). The total background MC rate is depicted in green. The vertical dashed line shows the applied cut value at a BDT score of 0.04.
Table 1: Summary of the nuisance parameters explicitly implemented in the likelihood function, together with their priors and uncertainties.
Table 2: Sensitivity, evidence and discovery potential for one year of IceCube data taken with its 86-string configuration, assuming a 50 GeV/c² mass WIMP annihilating into τ⁺τ⁻. | 2018-12-05T06:12:11.076Z | 2016-08-18T00:00:00.000 | {
"year": 2016,
"sha1": "2f02e2d553cb17d1708a54c79313a63fa5e67d99",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/236/1224/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2f02e2d553cb17d1708a54c79313a63fa5e67d99",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221358425 | pes2o/s2orc | v3-fos-license | Predicting Inpatient Length of Stay in Iranian Hospital: Conceptualization and Validation
Objective: The length of stay is an important indicator of hospital performance and efficiency. Given its importance, this study aimed to design a structural model of inpatients' length of stay in the educational and therapeutic health care facilities of Iran in order to identify the influencing dimensions. Methods: The present study was an analytical and applied study. The face validity of the data-gathering tool was assessed by expert judgment, and the construct validity was examined using exploratory factor analysis. To verify the reliability of the tool, internal consistency was also tested using Cronbach's alpha. For ranking the influencing dimensions and factors, for examining the causal relationships between the variables in a coherent manner, and for presenting the final model, the structural equation modeling technique was used in AMOS software at a significance level of 0.05. Results: The structural model consists of 4 dimensions and 29 factors influencing the length of stay of hospitalized patients. The independent variables, in order of priority and importance, are: patients' conditions, the underlying factors, the clinical staff performance, and hospitals' service delivery; their relationship with the inpatients' length of stay was examined by second-order factor analysis. Conclusion: Attention by service providers in the country's therapeutic centers to each of the proposed dimensions, and to the role of each in preventing prolonged hospitalization, can be essential for the effectiveness of treatment and for cost reduction.
Introduction
The hospital is one of the components of the health care system, and its performance, in coordination with other factors, can contribute to the health of the community. Hospitals have a key role in providing health services because of their impact on the health system's efficiency (Khosravizadeh et al., 2016). In this regard, management of the health care system can be considered productive only when it provides high-quality and cost-effective services (Mohebbifar et al., 2014); achieving this goal is possible through the correct and reasonable use of resources, controlling hospital admissions and the length of stay of hospital patients, and the appropriate use of diagnostic and therapeutic services (Ravangard et al., 2010). The length of hospital stay is often considered a measure and indicator of the efficiency and effectiveness of hospital services, as well as of the effectiveness of the treatments used by physicians (Austin et al., 2002). Together with indices such as bed occupancy rate, bed turnover and bed turnover interval, the length of stay is one of the most important functional indicators of the hospital (Gohari et al., 2012). In fact, the length of stay is defined as the time between admission and discharge from the hospital, which measures the rate of bed use and the efficiency of admissions (Jimenez et al., 1999). This performance indicator therefore depends on health care delivery variables, including the availability of hospital beds, payment methods and hospital discharge policies, as well as on variables of the demand for health services, including the severity of illness, direct and indirect costs, and concurrent illnesses (Clarke, 1996). Hence, a length of stay that is more or less than the actual need of patients can affect the cost and quality of the provided care (Mawajdeh et al., 1997). Improving the patient's length of stay thus not only reduces costs and improves hospital performance, but also reduces unnecessary bed occupancy and increases hospital productivity (Haghgoshaei et al., 2012). In addition to being a crucial factor in analyzing the performance of both clinical and para-clinical units, the length of stay can also be used to compare clinical performance between two or more units, such as surgery, gynecology, emergency and pediatrics (Weingarten et al., 1994). Various studies have also shown that length of stay can be affected by several factors such as age, sex, income, status, education, marital status, severity of disease, type of insurance, race, number of beds, hospital size, the area in which the hospital is located, and the type of physician's activity as a family physician, private physician (in office) or hospital physician (Mawajdeh et al., 1997; Toyabe et al., 2006). Other studies have mentioned factors such as birthplace, place of residence, admission time, type of admission, hospitalization history, patient status at the time of discharge, and severity of illness as influencing the hospital length of stay (Gholivahidi et al., 2006; Rajaeefard and Rafiee, 2006). Also, findings from a study conducted in some hospitals in Japan indicated an indirect relationship between the length of stay and the availability of human resources during hospitalization, and a direct relationship between the length of stay and the hospitalization capacity and the rate of unnecessary and unwanted admissions (Imai et al., 2005).
As noted, previous studies on patients' length of stay have generally reviewed the clinical and non-clinical factors affecting it from a general perspective. In the majority of these studies, the variables surrounding patients' admission are not considered, especially within a model. Moreover, these studies have not presented a comprehensive model that considers all the clinical and non-clinical factors affecting patients' length of stay. In the current study, a model is presented from a wider perspective, taking into consideration all factors affecting the length of stay. The results can be useful for better planning of services and appropriate decision making in the field of hospital services, especially regarding the length of stay, the proper use of resources and maximal productivity at the hospital level.
Materials and Methods
In terms of purpose, this study was descriptive-analytic and cross-sectional. Because many appropriate solutions and approaches can be presented and applied based on the results of the study, the research is also applied in nature. The present study was composed of three phases.
A comprehensive review of studies
At this phase, we reviewed the previous studies on the length of stay of hospitalized patients using databases. The tool used at this stage was a data collection form.
This form was used in order to ensure integrity, reduce bias, and increase reliability and validity. Required data were collected from various databases, including Irandoc, Embase, Google Scholar, IranMedex, SID, Magiran, PubMed, Scopus and WOS. Keywords such as length of stay, hospital, patient admission, affecting factors, and their Persian equivalents were used. The articles were then reviewed in terms of quality, the factors influencing the length of stay were classified, and the most closely related factors were grouped into a single category. The proposed basic model of this study was obtained by systematically analyzing the results of this comprehensive review. In this model, the components of service delivery, medical personnel, patients and background are conceptualized as independent dimensions, and the length of stay of inpatients is considered the dependent dimension.
Designing a tool and data collection
The questionnaire was compiled from the information obtained in the first phase as well as the opinions of the supervisors and counselors. In this study, expert judgment was used to evaluate face validity, and exploratory factor analysis was used to assess construct validity. Cronbach's alpha was used to measure the reliability of the questionnaire; its value was 0.917. A five-point Likert scale was used to determine the importance of factors affecting the length of stay, with the importance of the factors rated as very low, low, medium, high or very high. After collecting the data, the factors influencing the length of stay were ranked. The study population consisted of all the staff of the clinical units, including physicians, nurses, paramedics, financial experts and clerks of educational and therapeutic centers in Tehran province, and quota sampling was performed for each hospital. Since the size of the study population is unlimited, the following formula was used to determine the sample size: $n = \frac{z_{1-\alpha/2}^{2}\,\sigma^{2}}{\varepsilon^{2}}$. In this formula, α is the estimation error, equal to 0.05, and ε is the possible error rate or the accuracy required in the survey, also equal to 0.05. Therefore, taking into account a 95% confidence level, a standard deviation of 0.5 and a margin of error of ±5%, the sample size was calculated to be 384 participants. Finally, 390 questionnaires were distributed. Descriptive statistics were also analyzed using AMOS software.
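Plugging the stated values into this sample-size formula reproduces the quoted figure:

$n = \frac{z_{1-\alpha/2}^{2}\,\sigma^{2}}{\varepsilon^{2}} = \frac{1.96^{2} \times 0.5^{2}}{0.05^{2}} = \frac{0.9604}{0.0025} \approx 384.16 \approx 384.$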
Validation and presentation of the final model
At this phase, the final model was presented based on the quantitative findings. In order to study the causal relationships between variables in a coherent way and to present the final model, the structural equation modeling technique was used. This technique consists of five stages: model specification (constructing the original model), model estimation (data collection and creation of the variable matrices), goodness-of-fit assessment (a general review of the model's appropriateness and feasibility, and of the need for modification), model modification, and model interpretation. The above steps were carried out in AMOS software, and the final validated model was presented. (Figure 1: second-order factor analysis of the variables of patients' length of stay; all paths are significant.) According to Table 1, the values obtained for indices such as chi-square divided by the degrees of freedom, GFI, RMSEA and CFI were not within the defined range. Therefore, it was concluded that the fitness of the model was not appropriate at this phase, so some changes to the model were considered necessary for better fitness. These changes were implemented in the proposed model and the fitness indices improved (Figure 2).
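For readers without AMOS, an equivalent model can be sketched in Python with the semopy package; the lavaan-style syntax, the item names (p1, b1, ...) and the data file below are hypothetical stand-ins for the study's questionnaire items, and semopy's fit-statistics output may vary by version:

```python
import pandas as pd
from semopy import Model, calc_stats

desc = """
# measurement part: latent dimensions measured by observed items
patients   =~ p1 + p2 + p3
background =~ b1 + b2 + b3
# structural part: length of stay regressed on the latent dimensions
LOS ~ patients + background
"""
df = pd.read_csv("survey_items.csv")   # hypothetical item-level data
model = Model(desc)
model.fit(df)
print(calc_stats(model).T)             # includes chi2, GFI, CFI, RMSEA, etc.
```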
Results
42.5% of the participants were male and 57.95% were female. Most of the respondents were in the 31-40 age group, most had a bachelor's degree in nursing, and the work experience of the majority was between 10 and 20 years. Reviewing the findings for the independent variables of the study showed that the highest mean belonged to the dimension of patients' conditions and the lowest to the performance of the care providers in the studied educational and therapeutic centers. To evaluate the compatibility and consistency of the model with the research data, the fitness of the model was verified; the fitness of the conceptual model was studied in two steps. The first step was the assessment of the measurement part of the model, and the second the assessment of the fitness of its structural part. To assess the measurement part, the validity and reliability of the model were evaluated. The second-order factor analysis was performed to determine the relationship between the patients' length of stay and its dimensions. Also, the effects of each factor on this variable were prioritized according to the standard estimation coefficients of the second-order confirmatory factor analysis of the patients' length-of-stay variable. The significance levels of the patients' length of stay and the influencing factors are presented in Table 2. The correlation coefficients between these four factors and the patients' length of stay indicated the effect of these variables on the length of stay. Based on the priority and the extent of the effect of these dimensions on the length of stay, the patients variable, with a value of 643.9 (greater than 1.96), showed that the relationship between the patients' length of stay and the patients' conditions is significant at the 95% confidence level, ranking it first in importance among the variables. It was followed by the background variable and then the variables of the clinical staff and hospital services. Also, considering the linear and significant relationships between the independent and dependent dimensions, the final model obtained with the structural equation technique included 4 independent dimensions and 29 factors influencing the length of stay of inpatients, as presented in Figure 3.
Discussion
Increasing patient populations, scarce resources and high treatment expenditures draw the attention of hospital administrators and patients to the length of stay in hospital (Ameri et al., 2015). The length of stay in the hospital is an important indicator that demonstrates the hospital's performance and efficiency and is influenced by various clinical and non-clinical factors (Xiao et al., 1997). In the present study, these factors are conceptualized and classified into four categories based on their priority and importance: patients' conditions, background factors, the performance of the clinical staff, and hospital service delivery in Iran.
Component of the patients' conditions
The variable of patients' conditions included the following elements: gender, age and race; type and severity of illness; clinical status of patients at admission; the ability to accept medical treatments; background and record of admission and hospitalization; level of satisfaction and trust in the clinical staff; and the patients' correct decision in the choice of hospital services. In support of these conceptualized elements, Deister et al. (2017) pointed out that the patient's age determines the length of his stay in a hospital. It can therefore be argued that older patients need more time to recover, because most of them have chronic illnesses, while younger patients are more likely to present with acute illnesses, which have shorter treatment periods (Lim and Tongkumchum, 2009). Statistics released by the US Department of Statistics revealed that the incidence of various types of diseases is not equal in men and women (Zand et al., 2010), and the length of stay varies according to gender. Confirming the impact of race on the length of stay, Turgeman et al. (2017) acknowledged that race has an effect on the length of the patient's stay. The type and severity of illness are among the factors affecting the length of stay in the hospital; in line with this conclusion, Carter considered the severity of the illness to influence the length of stay in the hospital (Carter et al., 2016). In Japan, Junko Niimura et al. noted that the severity of the illness is one of the predictors of the length of stay of patients (Niimura et al., 2017). The clinical status of patients on admission is another factor influencing the length of stay: the patient's vital signs, physical examination, level of consciousness and background are underlying conditions that determine the length of stay in the hospital (Wuerz et al., 2000). In agreement with this conceptualized element, a study by Carter demonstrated that the patient's clinical status on admission can influence the length of stay (Carter et al., 2016). Turgeman et al. have also emphasized that the patient's clinical condition on admission was one of the factors influencing the length of stay (Turgeman et al., 2017). The ability to accept medical treatment refers to the patient's informed decision and agreement with medical advice and treatment procedures; refusing treatment can lead to exacerbation and an increase in the severity of a disease or its signs and symptoms, which is significantly associated with prolonged hospital stays. In confirmation of this conceptualized component, Alosco's study suggested that refusing some therapeutic procedures, especially in patients with heart failure, increases the length of stay in the hospital (Alosco et al., 2014). The history of previous admission and hospitalization should be recognized on admission; several studies confirm this conceptualized element, for example, Gruskay et al. confirmed that the history of a disease and its background affect the patients' length of stay (Gruskay et al., 2015). The level of satisfaction and trust in the clinical staff is also an important component: patients are at the heart and main focus of the hospital, and all hospital services are performed for them, so their satisfaction is an important indicator of the quality of health care services.
In this regard, a study by Kazemi et al. (2010) confirmed that the facilities and services of the operating room and other units have the greatest effect on patients' satisfaction, and that there was a significant relationship between the length of stay and total patient satisfaction. Correct and specific decision making by patients in choosing health care services is also important, and granting them the right to decide on their own health, along with providing the training and information necessary for correct actions, is one of the basic principles of patient support. Research and experience have shown that informing patients and involving them in decision-making in relation to their treatment, while respecting their rights, improves their health status and wellbeing and also reduces the patient's length of stay in the hospital (Leenen, 1996). Studies in this area demonstrated that knowledge of the services influences patients' decision making and can improve the effectiveness of therapeutic processes (Khosravizadeh et al., 2017).
Components of infrastructure and background
The infrastructure and background elements include the proper and sustainable financing of health care provision, cooperation with insurers, the support of other public sector organizations for the health care system, cultural factors of the target community associated with the utilization of health care services, awareness of the target community in using healthcare services, and the income level and employment of the target community. On the whole, proper financing of health care services plays a unique role, and financing and payment methods for treatment expenditures are among the factors influencing the length of stay of patients. In confirmation of this conceptualized element, a study by Yin et al. (2013) indicated that there is a significant relationship between the level of activity-based financing for service delivery and inpatients' length of stay in the hospital.
Cooperation with insurers for paying health care expenditures is also one of the conceptualized elements. One of the desirable goals of any healthcare system is to provide appropriate mechanisms for cooperating with insurers to support households in their demand for health services. In confirmation of this component, the results of the study by Sepehri et al. (2006) demonstrated that a compulsory insurance plan and an insurance plan for the poor increase the length of stay of hospitalized patients.
The support of other public sector organizations for the provision of healthcare services is also a priority. Among the factors influencing the length of stay of patients, it is noteworthy to mention the cooperation and support of government organizations such as health insurance funds, the armed forces, the Imam Khomeini relief committee and other insurance organizations, which can affect the utilization and availability of health care services for the people and, consequently, influence the length of stay. The results of a study by Vatankhah et al. (2004) showed that the length of stay of patients varies with the type of health insurance, so that patients covered by the relief committee insurance had a longer length of stay, possibly because treatment is free of charge under this insurance. On the other hand, people without insurance had the shortest length of stay, presumably to avoid catastrophic payments, since they are forced to pay healthcare expenditures out of pocket (Vahidi et al., 2006). The culture of the target community toward the utilization of healthcare services is another conceptualized notion, because health services are formed and shaped by cultures, traditions, payment mechanisms and patient expectations. Various studies have shown that the culture of individuals, their education and their knowledge have an important impact on individuals' access to treatment services (Khayatan et al., 2011). Also, informing patients about the healthcare services offered, and their participation in decisions about their own treatment, improves their health status and well-being and affects the inpatient's length of stay; Sheaffer et al. (2018) stated that preoperative training reduces the length of stay. On the other hand, based on the research evidence, among the socioeconomic factors, the financial resources of the patient and his or her type of job and occupation can be considered an obstacle or a facilitator (Rahimian and Besharat, 2010). A study by Arab et al. emphasized that there was a significant relationship between the average length of stay in the hospital and individuals' occupation.
Functional components of the clinical staff
The function and performance of the clinical staff include the following: the expertise and experience of the medical staff, effective planning and decision making in clinical processes and procedures, the use of new and effective therapies, coordination and collaboration with other levels of service delivery, compliance with clinical guidelines, avoidance of supplier-induced demand and moral hazard, and communication with patients. The expertise and experience of the medical staff are among the factors influencing the length of stay: by upgrading the experience and expertise of the clinical staff, patients' recovery will improve and, subsequently, the length of stay will be reduced. Studies have indicated that the expertise and experience of physicians and clinical staff affect the length of stay (Ravangard et al., 2010; Ghiyasvandian et al., 2013).
Effective planning and decision making in clinical processes are also important. In confirmation of this component, a study by Choy et al. (2007) revealed the impact of decision making on the effectiveness of service delivery and treatment. The utilization of new and effective therapies not only reduces the length of stay but also improves patient satisfaction. Xiao et al. (1997) considered the effectiveness and efficiency of treatments an influential factor in the length of stay of patients in Australian hospitals.
Coordination and collaboration with other levels of service delivery affect the efficiency of service delivery and reduce unnecessary and prolonged inpatient stays. In acknowledgment of this conceptual component, Wee and Hopman (2005) pointed out that the variety of procedures and processes in different parts of the hospital, the quality and quantity of communication and inter-sectoral collaboration, including the link between admission and discharge, and the support of units are among the influential factors on the length of stay, which has been confirmed in various studies.
Observing clinical guidelines is an important factor in reducing unnecessary and prolonged lengths of stay. In confirmation of this conceptualized component, the study of Deister et al. (2017) suggested that administrative delays in admission increase the length of stay in a hospital.
Avoiding supplier-induced demand reduces unnecessary expenses and, on the other hand, increases the capacity for admitting actual patients; this result is consistent with the study by Lux et al. (2011). Communication with patients is also an important component in determining the length of stay. A study by Baggs and Ryan (1990) showed that effective communication between the patient and the caregiver leads to a reduction in the length of stay in the hospital.
Components of hospital service delivery
The components of health service delivery include the structure and capacity of health services provision, the type of health care services provided, the treatment equipment and facilities, the process of admission and discharge, the status of financial and performance indicators, the quantity and quality of clinical and non-clinical health services, observance of the patient rights charter and the patient safety culture, the physical environment and amenities, and respect for the rules governing the health care system. The structure and capacity of health care provision is an important factor in responding to needs, which in turn has an impact on the length of stay; Minana et al. (2017) also considered the hospital's capacity an influencing factor on patients' length of stay in their study. The type of health care services provided is also important and should be based on the needs and opinions of the patients in healthcare organizations. Niimura et al. (2017) reported a significant relationship between the type of services and the length of stay of inpatients in Japan; they also demonstrated that providing home-care services can reduce the length of stay in the hospital. Equipment and facilities are also among the factors influencing the reduction of unnecessary and prolonged stays, and the hospital should continuously evaluate the quality of the services provided in order to determine and guarantee the existence of sufficient equipment and materials in the units. A recent study by Ameri et al. (2015) indicated that a lack of sufficient diagnostic facilities or a lack of adequate training for patients after discharge can lead to patient re-admission and a longer length of stay.
The sequence of admission, hospitalization and discharge is also important, and any disruption in these processes will lead to unnecessary and prolonged stays in the hospital. Falana et al. (2018) indicated that unnecessary diagnostic tests for the diagnosis and follow-up of treatment increase the length of stay. The status of financial and operational (performance) indicators is also an important factor. What can be deduced from similar studies is that performance indicators represent the efficacy, effectiveness and productivity of an educational and therapeutic hospital, and each of them can have a two-way effect on the length of stay of patients. The quality and quantity of clinical and non-clinical care services are also paramount; if treatment procedures are not performed, or are provided with poor quality, they will cause harm and, as a result, recurrence, re-admission and prolonged hospitalization. García-Romero et al. (2017) showed that quality improvement in clinical services and the promotion of medical research can reduce the length of stay in a hospital.
In order to ensure the quality of healthcare services, observance of the standards of medical ethics and patients' rights in the provision of health services is indispensable, and increasing the observance of patients' rights will improve the quality of health care services. In this regard, the study of Nasiripour and Jafari (2016) found a significant relationship between patient safety and ethics and the length of stay as a performance indicator of the hospital.
Also, the physical environment can be a good basis for approaching medical standards and gaining the satisfaction of staff and patients. Compliance with the rules governing the provision of healthcare services also has an impact on reducing unnecessary stays in the hospital and improves the effectiveness of patient treatment. In confirmation of this conceptualized component, a study by Ameri et al. (2015) demonstrated that non-compliance with therapeutic protocols and guidelines leads to patient re-admission, and Minana et al. (2017) indicated that there was a significant relationship between the careful implementation of health care policies and the length of stay.
In conclusion, the current study has conceptualized the dimensions and factors influencing inpatients' length of stay in the therapeutic centers of Iran. Ultimately, the structural model consisted of 4 dimensions and 29 factors influencing the length of stay of hospitalized patients. According to the statistical analysis, the relationship between the inpatients' length of stay and these four dimensions and their components was significant; moreover, considering the importance and priority among the four dimensions (patients' conditions, background and underlying components, hospital service delivery, and clinical staff performance), the patients' condition has the greatest influence on the length of stay of patients in the medical centers of the country. The proposed structural model was also validated in the form of a general structure of the components. Therefore, this model can be used as an appropriate tool for assessing the importance of different factors affecting the length of stay of patients in Iranian health care centers, in order to support effective decisions at the political and executive levels.
"year": 2020,
"sha1": "e1e02917bead3f47ab40d375385840511ed32b9d",
"oa_license": "CCBY",
"oa_url": "http://journal.waocp.org/article_89238_245402dcaeee75b9d7a39415545c2883.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad66bfa48d7d96d0cce69011e58b26c1f5e1d2ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Metformin Improves Insulin Signaling in Obese Rats via Reduced IKKβ Action in a Fiber-Type Specific Manner
Metformin is a widely used insulin-sensitizing drug, though its mechanisms are not fully understood. Metformin has been shown to activate AMPK in skeletal muscle; however, its effects on the inhibitor of κB kinase β (IKKβ) in this same tissue are unknown. The aim of this study was to (1) determine the ability of metformin to attenuate IKKβ action, (2) determine whether changes in AMPK activity are associated with changes in IKKβ action in skeletal muscle, and (3) examine whether changes in AMPK and IKKβ function are consistent with improved insulin signaling. Lean and obese male Zucker rats received either vehicle or metformin by oral gavage daily for four weeks (four groups of eight). Proteins were measured in white gastrocnemius (WG), red gastrocnemius (RG), and soleus. AMPK phosphorylation increased (P < .05) in WG in both lean (57%) and obese (106%) rats, and this was supported by an increase in phospho-ACC in WG. Further, metformin increased IκBα levels in both WG (150%) and RG (67%) of obese rats, indicative of reduced IKKβ activity (P < .05), and was associated with reduced IRS1-pSer307 (30%) in the WG of obese rats (P < .02). From these data we conclude that metformin treatment appears to exert an inhibitory influence on skeletal muscle IKKβ activity, as evidenced by elevated IκBα levels and reduced IRS1-Ser307 phosphorylation in a fiber-type specific manner.
Introduction
Insulin resistance is considered a characteristic feature of a clustering of diseases often referred to as the metabolic syndrome [1]. Such associated pathologies include not only cardiovascular disease but also Type II Diabetes Mellitus and, more recently, inflammatory-related metabolic diseases. In this regard, obesity is often associated with insulin resistance, notably in skeletal muscle [2,3]. This has immense clinical and economic significance, as obesity has reached epidemic proportions not only in the US but in Western society at large [4]. Although increased physical activity and nutritional interventions have been used to slow the progression of or reverse obesity (and hence improve insulin action), these approaches have often met with limited success due to low long-term patient compliance. In addition, many obese individuals do not possess the physiological capacity for increased physical activity. These circumstances leave pharmacological interventions as an attractive adjunct or alternative therapeutic intervention for treating not only obesity but also associated type II diabetes. Currently, the insulin-sensitizing drug metformin, a member of the biguanide drug class, is widely prescribed. Yet despite its widespread prescription, its basic mechanism of action is poorly understood, slowing the development of even more effective insulin-sensitizing molecules.
The link between obesity and insulin resistance can be partly explained by the NF-κB pathway and its associated upstream kinases, whose activity is stimulated as a result of the chronic and excessive lipid circulation and accumulation in nonadipose tissue sites, a prominent feature of obesity [5,6]. Lipid accumulation in skeletal muscle has been shown to be associated with insulin resistance [7], and weight loss appears to reverse this effect [8,9]. Ectopic fat deposition can lead to activation of the inhibitor of κB kinase β (IKKβ) and induction of its downstream substrate, the inflammatory transcription factor NF-κB [3]. Specifically, NF-κB is sequestered in the cytoplasm in an inactive state while complexed with the inhibitor protein called inhibitor κBα (IκBα). Upon stimulation by IKKβ, which is itself stimulated by both cytokines and lipids [10], IκBα is phosphorylated and degraded, resulting in the liberation of NF-κB, which migrates into the nucleus and activates transcription of inflammatory cytokine genes. Interestingly, in addition to activating NF-κB, IKKβ has also been shown to directly phosphorylate serine 312 (human)/307 (rodent) on insulin receptor substrate (IRS)-1, leading to a decrease in insulin signal transduction [2,[11][12][13].
In contrast to the insulin-resistant effects of excessive IKKβ activity, AMP-activated protein kinase (AMPK), a prominent metabolic enzyme, acts to improve insulin sensitivity, potentially counteracting the deleterious effects of lipid excess. Additionally, many commonly prescribed antidiabetic medications, such as metformin and the thiazolidinediones, are known to increase AMPK activity [14,15]. In the past few years, research has revealed a role for AMPK as an inhibitor of IKKβ activity in certain cell types. AMPK has been shown to inhibit both fatty acid- and TNFα-induced increases in IKKβ activity in cultured endothelial cells [16,17], macrophages [18], and astrocytes [19]. Furthermore, metformin treatment has also displayed anti-IKKβ properties in the liver and has been shown to prevent lipid-induced insulin resistance [20].
Inasmuch as skeletal muscle is the main consumer of glucose in vivo, studying the effects of metformin on IKKβ activity in skeletal muscle, and the role of AMPK in possibly mediating this response, holds great potential for addressing the obesity-diabetes relationship in skeletal muscle. Metformin has been shown to upregulate AMPK activity in both human and rat skeletal muscle [15,21]. Work in endothelial cells and liver has shown that metformin can reduce IKKβ activity, raising the possibility that activation of AMPK may be a mechanism by which metformin improves insulin sensitivity in skeletal muscle. Therefore, the aim of this study was to (1) determine whether metformin attenuates IKKβ activity in skeletal muscle of obese rats, (2) determine whether changes in AMPK are associated with reduced IKKβ activity in skeletal muscle, and (3) examine whether a decrease in IKKβ activity coincides with reduced phosphorylation of Ser 307 on IRS1 in skeletal muscle and whether this translates into increased glucose tolerance.
Materials.
All antibodies used in this investigation were obtained from commercial sources. Antirabbit monoclonal IRS-1 IP antibody for immunoprecipitation was purchased from Santa Cruz Biotechnology (Santa Cruz, CA). IRS-1 and phospho-IRS1 Ser307 antibodies were purchased from Millipore (Billerica, MA). Antibodies for phospho-AMPK, AMPK, phospho-ACC, and ACC were purchased from Cell Signaling Technologies (Beverly, MA), and IκBα and actin antibodies from Santa Cruz Biotechnology (Santa Cruz, CA). Horseradish peroxidase-conjugated antirabbit secondary antibodies were purchased from Cell Signaling Technologies (Beverly, MA).
Animals and Housing.
All protocols for animal use were approved by the Animal Care and Use Committee at East Carolina University. Lean and obese male Zucker rats were obtained from Harlan (Indianapolis, IN) and were housed under controlled temperature (23 °C) and lighting (12 hours of light, 0600-1800 hours; 12 hours of dark, 1800-0600 hours) with free access to water and standard rat chow. Animals were fasted 10 hours before the oral glucose tolerance tests (OGTTs).
Metformin Treatment and Oral Glucose Tolerance Testing.
Obese and lean Zucker rats were randomly assigned to receive either control (saline) or metformin (320 mg/kg/day) by daily gavage for four weeks (N = 8 per group), with the final dose given 4 hours prior to sacrifice. On experimental days, rats were anesthetized with 0.1 mL/100 g body wt of a mixture containing 90 mg/mL ketamine and 10 mg/mL xylazine. With blood flow intact, white gastrocnemius (WG), red gastrocnemius (RG), and soleus muscles were harvested from the hind limbs. Samples were rapidly dissected, cleaned, frozen within seconds in liquid nitrogen, and stored at −80 °C until analysis.
Preparation of Skeletal Muscle Homogenates.
Frozen muscle samples (50-80 mg) were homogenized in ice-cold lysis buffer (50 mM HEPES, 50 mM Na+ pyrophosphate, 100 mM Na+ fluoride, 10 mM EDTA, 10 mM Na+ orthovanadate, 1% Triton X-100, and protease and phosphatase (1 and 2) inhibitor cocktails; Sigma, St. Louis, MO). Homogenates were sonicated for 10 seconds and then rotated for 2 hours at 4 °C. After centrifugation for 25 minutes at 14,000 g, supernatants were extracted, protein content was determined using a BCA protein assay (Pierce, Rockford, IL), and individual homogenate volumes were separated into 50 μg aliquots of protein before being frozen in liquid nitrogen and stored at −80 °C until used for immunoblotting.
2.5. Immunoblotting.
For IRS-1, homogenates were incubated overnight with 10 μL of IRS-1 monoclonal IP antibody (Santa Cruz Biotechnology, Santa Cruz, CA), then coupled to protein A sepharose beads (Amersham Biosciences, Uppsala, Sweden), rotated for 2 hours, and eluted with sample buffer. Samples were separated by SDS-PAGE using 7.5% or 10% Tris·HCl gels and then transferred to PVDF membranes for probing with the appropriate antibodies. Following incubation with primary antibodies, blots were incubated with the appropriate horseradish peroxidase-conjugated secondary antibodies. Horseradish peroxidase activity was assessed with ECL solution (Thermo Scientific, Rockford, IL) and exposed to film. The image was scanned and band densitometry was assessed with Gel Pro Analyzer software (Media Cybernetics, Silver Spring, MD). Equal loading of proteins was ensured by probing for actin. Content of phospho-proteins (using phospho-specific antibodies) was calculated as the density of the phospho-protein band divided by the density of the total-protein band using the appropriate antibody.
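As an illustration of the normalization just described, the snippet below computes a phospho/total densitometry ratio for hypothetical band densities, with actin used as a loading check. The sample names and values are invented placeholders; a real analysis would use the densities exported from Gel Pro Analyzer.

```python
# Hypothetical band densities (arbitrary units) from densitometry.
phospho_ampk = {"lean_ctrl": 1240.0, "lean_met": 1950.0, "obese_met": 2555.0}
total_ampk   = {"lean_ctrl": 2100.0, "lean_met": 2080.0, "obese_met": 2120.0}
actin        = {"lean_ctrl": 1500.0, "lean_met": 1480.0, "obese_met": 1510.0}

reference = actin["lean_ctrl"]
for sample in phospho_ampk:
    ratio = phospho_ampk[sample] / total_ampk[sample]  # phospho/total
    # Actin is only a loading check; flag lanes deviating by more than ~20%.
    loading_ok = abs(actin[sample] / reference - 1.0) < 0.2
    print(f"{sample}: pAMPK/AMPK = {ratio:.2f}"
          + ("" if loading_ok else "  (check loading)"))
```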
2.6. Statistical Analysis.
All data are presented as means ± SEM. Two-way analysis of variance (ANOVA) was used to compare group and treatment effects (SPSS, Chicago, IL). Where an interaction was observed, post hoc analysis (Bonferroni and Tukey's HSD) was used to determine significance. Correlation analysis was performed by the Pearson product-moment method. The significance level was established a priori at P ≤ .05.
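A minimal sketch of the analysis pipeline just described, written with Python (statsmodels and scipy) rather than SPSS; the CSV file and column names are placeholders for the study's per-animal measurements, not the authors' actual data files.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import pearsonr

df = pd.read_csv("zucker_wg.csv")  # hypothetical per-animal data

# Two-way ANOVA with interaction: phenotype (lean/obese) x treatment
fit = ols("p_ampk ~ C(phenotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # post hoc tests would follow a
                                      # significant interaction term

# Pearson product-moment correlation, e.g., IkBa levels vs. pAMPK
r, p = pearsonr(df["ikba"], df["p_ampk"])
print(f"r = {r:.3f}, p = {p:.3f}")
```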
3. Results
3.1. Metformin and AMPK Activation.
AMPK activation was determined by measuring phosphorylation of AMPK and its substrate acetyl-CoA carboxylase (ACC) while controlling for total levels of both proteins. No difference in AMPK phosphorylation was observed between lean and obese controls in any of the three muscles. Moreover, chronic metformin treatment did not significantly affect AMPK phosphorylation in RG or soleus in either the lean or the obese rodents (Figure 1(a)), and this was further evident from the absence of any change in pACC levels in these muscles (Figure 2(b)). In contrast, metformin resulted in a significant increase in the phosphorylation of both AMPK and ACC in both the lean (57% for pAMPK and 525% for pACC) and obese (106% for pAMPK and 710% for pACC) rats in WG compared with control animals (Figures 1(a) and 1(b); P < .05).
3.2. Metformin and IκBα.
Inasmuch as metformin has been shown to reduce IKKβ activity in various tissues, IκBα was measured as an indicator of IKKβ action. Several groups have shown that IκBα levels are closely associated with IRS1-Ser phosphorylation and IKKβ activity [22][23][24]. No differences in IκBα levels were observed between control animals in either soleus or RG, and chronic metformin treatment had no significant effect in soleus from the lean or obese animals (Figure 2(a)). However, IκBα levels were significantly lower in WG of obese animals when compared with lean. Further, metformin treatment increased IκBα levels in both RG and WG of obese animals by 67% and 150%, respectively, to a level similar to that seen in lean animals (P < .05 for RG, P < .005 for WG). When combining all data from WG, IκBα showed a strong correlation with AMPK phosphorylation (r = 0.755, P < .005; Figure 2(b)).
3.3. IRS1-Serine 307 Phosphorylation.
Due to the implicated role of serine phosphorylation of IRS1 in inhibiting insulin signaling, we measured IRS1-pSer307 while controlling for total IRS1. Similar to our observation with IκBα, there was no significant effect of treatment on soleus pSer307 levels in either lean or obese rodents (Figure 3(a)). Similarly, no differences were observed in RG, although pSer307 in obese rats tended to be lower in the metformin-treated versus control animals (P = .061). In contrast, pSer307 levels in control animals were significantly higher in obese WG compared with lean (P < .05). Moreover, pSer307 levels were significantly reduced (30%) with metformin treatment in WG from obese rodents (P < .05). Inasmuch as IκBα and pSer307 appeared to follow similar trends in WG, we performed a correlation analysis between the two variables. When comparing pSer307 and IκBα across all groups, no significant correlation was observed (r = −0.536, P = .067; data not shown). However, in WG alone the correlation between pSer307 and IκBα reached significance (r = −0.789, P < .01; Figure 3(b)), indicating an inverse relationship in WG.
Discussion
The main finding of the present study is that the beneficial effects of metformin treatment on glucose tolerance in obese, insulin-resistant rodents are associated not only with an elevation in AMPK action but also with elevated IκBα levels in WG from obese, insulin-resistant rats. This finding is an indication of reduced IKKβ activity and suggests that metformin treatment is able to reduce IKKβ activity and restore IκBα protein levels within the muscle. Moreover, these observations are associated with improved insulin signaling, as evidenced by reduced IRS1-pSer307 levels. Lastly, levels of IκBα and IRS1-Ser307 were significantly and inversely correlated in metformin-treated obese rats, but only in white muscle, suggesting a fiber-type specific action of metformin on the IKKβ signaling pathway.
Recent work has supported the role of IKKβ as an inhibitor of insulin action that results in insulin resistance. In particular, salicylate is known to inhibit the activity of IKKβ, and pretreatment with salicylate in lipid-infused rats rescues glucose tolerance back to levels similar to those seen in control rats when compared with lipid infusion alone [25]. Further, Yuan et al. [13] showed that Ikkβ+/− mice have lower fasting glucose and insulin concentrations than Ikkβ+/+ littermates when on high-fat diets. Additionally, IKKβ KO mice experience no decrement in insulin sensitivity in response to lipid infusion compared to control mice [26]. Collectively, these findings demonstrate that activation of IKKβ is associated with a negative impact on insulin sensitivity. Given these findings, and those from other groups demonstrating significant correlations between IKKβ activity, IκBα levels, and IRS1-Ser307 phosphorylation [9,[22][23][24], we are optimistic that the surrogate outcomes used to determine IKKβ action, namely IκBα and IRS1-Ser307 phosphorylation, IKKβ's immediate downstream effectors, adequately reflect IKKβ action.
In addition to IKKβ, other lipid-sensitive serine kinases have been implicated in phosphorylating IRS1-Ser312. In particular, certain protein kinase C (PKC) isoforms have been shown to be associated with changes in insulin signaling in a manner similar to IKKβ, and some groups present evidence that they are indispensable in potentiating the lipid-induced decay of the insulin signal [23,[27][28][29]. Notably, in addition to observing a reduction in IκBα levels, Itani et al. [23] observed an increase in membrane-associated PKCβ in the muscle of patients during an acute lipid infusion. However, we found no differences in membrane-associated PKCβ levels when comparing animals or treatments (data not shown). The discrepancy between these data is possibly due to the diverse conditions and models (e.g., acute lipid infusion versus diet-induced obesity, human versus rodent). In support of our findings, Huang et al. [30] demonstrated that muscle PKCβ levels did not differ between control and diet-induced obese mice. Contrary to the reduced insulin sensitivity associated with IKKβ activity, AMPK plays a key role in improving glucose handling and increasing insulin action. In muscles from sedentary rats, coincubation with insulin and the compound 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside (AICAR; an AMP mimetic and AMPK activator) has been shown to induce a twofold greater glucose uptake compared to insulin alone [31]. Similarly, AICAR-perfused rat hindlimbs have been shown to increase glucose uptake compared with controls [32].
Research investigating the novel role of AMPK as a mediator in IKKβ's ability to inhibit IRS-1 function in endothelial cells has revealed that AMPK inhibits both fatty acid-and TNFα-induced increases in NF-κB in cultured endothelial cells [16]. When endothelial cell cultures were incubated with palmitate, inflammatory markers increased, but this response was attenuated in the presence of AICAR. Moreover, AICAR acted to prevent NF-κB activation in the presence of TNFα. This provides strong evidence in support of a role for AMPK in attenuating inflammation and associated cellular metabolic dysfunction. Additionally, metformin, which leads to AMPK activation, dose-dependently inhibits TNFα-induced NF-κB activation, whereas blocking signaling through AMPKα1 (via small interfering RNA) attenuates metformin-and AICAR-induced inhibition of NF-κB activation by TNFα, further supporting a role for AMPK attenuating the inflammation response [17]. A similar relationship was explored in macrophages [33], where treatment of RAW264.7 cells with berberine, a known AMPK activator, was shown to inhibit expression of inflammatory genes including IL-1β and MCP-1 via an AMPK-dependent mechanism.
Research involving metformin has also displayed anti-IKKβ properties in the liver. Cleasby et al. [20] observed elevated levels of IκBα in liver from metformin-treated rodents, providing indirect evidence that AMPK may be serving as an IKKβ inhibitor in an insulin sensitive tissue. These findings provide the rationale for exploring the potential role of AMPK as an IKKβ inhibitor in skeletal muscle inasmuch as muscle represents the main site of insulin-dependent glucose uptake.
The idea of AMPK attenuating inflammatory activity in skeletal muscle has only recently been investigated and, due to a scarcity of data, a consensus has yet to be reached. For example, Steinberg et al. [34] observed that muscle cells with constitutively active AMPK were protected from TNFα-induced suppression of insulin-stimulated glucose uptake (TNFα has been shown to elicit an increase in IKKβ activity). In ob/ob mice, it was found that AMPK activity was significantly reduced compared with lean controls and that TNFα neutralization in ob/ob mice restored AMPK activity to that of lean controls. Furthermore, obese mice exhibited reduced fatty acid oxidation, a defect that was not observed following TNFα neutralization. Lastly, ob/ob mice lacking a functional TNFα receptor (TNF−/−) exhibit greater insulin sensitivity than control ob/ob mice, and AMPK activity was observed to be higher in obese TNF−/− mice relative to obese controls. Whereas the evidence provided by Steinberg et al. [34] places inflammatory mediators upstream of AMPK, the work by Hattori et al. [17] offers the opposite perspective: that AMPK inhibits NF-κB pathway activity. Moreover, the observations of the current study extend the work by Hattori et al. [17] in that we provide evidence that AMPK may attenuate IKKβ action in skeletal muscle. Specifically, we observed an increase in AMPK phosphorylation, an increase in IκBα levels (suggesting reduced IKKβ activity), and, finally, reduced levels of IRS1-pSer307 in white muscle from metformin-treated obese rats.
In contrast, Ho et al. [35] explored the effects of AICAR-stimulated AMPK activation in rats in vivo and found no reduction in IKKβ phosphorylation 60 minutes following an intraperitoneal injection of AICAR, despite a robust increase in AMPK activity in skeletal muscle. Further, they did not observe any change in IKKβ phosphorylation in isolated rat EDL muscles treated with AICAR. These findings appear to indicate that AMPK does not directly regulate IKKβ activity; however, the acute nature of that study design must be considered. The current study utilized a chronic treatment intervention, and its findings support the possibility that metformin-mediated activation of AMPK attenuates IKKβ activity, as has been established in hepatic tissue [20].
Interestingly, metformin-induced activation of AMPK occurs in the absence of any changes in ATP/ADP ratio, indicating that a decrease in cellular energy charge is not the link between metformin and AMPK activation [36]. In this regard, synergistic or independent mechanisms should also be considered. For example, metformin has also been shown to partially inhibit mitochondrial complex I and subsequent free radical production [37], raising the possibility that an additional, unique effect of metformin in improving IκBα and IRS1-pSer 307 levels may be related to its effects on mitochondrial function in addition to that of AMPK activation. An association between free radical production, IKKβ activity, and insulin sensitivity has been established and might prove to be a fruitful area of investigation on this topic.
It is noteworthy that we observed neither an increase in AMPK activity nor a reduction in IKKβ activity or IRS1-pSer 307 in soleus with metformin treatment. Similarly, no differences were noted in either AMPK or IRS1-pSer 307 levels in RG, although IκBα levels increased in obese RG with metformin treatment. The singular change in IκBα levels without an accompanying reduction in IRS1-Ser 307 phosphorylation in RG was unexpected, though the comparison of IRS1-pSer 307 levels between metformin-treated and control obese animals did approach statistical significance (P = .061). Future studies seem warranted then to determine if both statistical and physiological significance would be realized in red muscle following a longer treatment time as would be expected in human subjects under chronic metformin prescription.
In contrast to the observations in soleus and RG, AMPK, IκBα, and IRS1-pSer307 levels were all affected by chronic metformin treatment in WG from the obese rats. Specifically, metformin treatment increased AMPK activity and was associated with both an increase in IκBα protein levels and a reduction in IRS1-pSer307. In accordance with these findings, it should be recognized that other investigators have noted differences in the IKKβ signaling pathway between muscle fiber types. For example, Bhatt et al. [38] observed a reduction in IκBα levels with diet-induced obesity in rat skeletal muscle in a fiber-type dependent manner: obesity was associated with decreased levels of IκBα in the superficial vastus (white, fast twitch-glycolytic), whereas the soleus (red, slow twitch-oxidative) appeared to be protected from such an effect. Additionally, Iglesias et al. [39] observed an AICAR-induced improvement in glucose uptake in white muscle that was not evident in red muscle. Inasmuch as metformin may exert its action by altering mitochondrial function, it is possible that, given the prevalence of mitochondria in red muscle, a greater metformin dose is required to elicit the same response as observed in white muscle.
Conclusion
In summary, these findings demonstrate that IRS1-Ser307 phosphorylation is elevated in certain muscle types of obese, highly insulin-resistant Zucker rats. Moreover, metformin treatment is associated with increased AMPK function and IκBα levels, at least in white muscle. These findings are novel in that they offer support for the hypothesis that AMPK may influence IKKβ action. Indeed, under the experimental conditions of the current study, AMPK appears to exert an inhibitory effect on IKKβ in a fiber-type dependent manner, which contributes to the change in muscle insulin signaling. Future studies investigating the dose and treatment time of metformin seem warranted when considering potential effects on the IKKβ signaling pathway in muscles of red fiber type composition.
"year": 2010,
"sha1": "0f42cd478c50ce779be0160f93bd4c83c1e4659a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jobe/2010/970865.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8796eb38b0c1c085c88fb37eb74a5e98d57a475",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Temporal influence of endocrine therapy with tamoxifen and chemotherapy on nutritional risk and obesity in breast cancer patients
The effect of endocrine therapy with tamoxifen (TMX) on weight gain has been reported in the literature, but the outcomes are still controversial. Moreover, previous treatment options, such as chemotherapy (CT), also induce body changes. The focus of this study was to verify the temporal influence of endocrine therapy with TMX on nutritional risk and obesity, and its association with CT, in breast cancer patients. In this cross-sectional study, 84 women surviving breast cancer were evaluated during endocrine therapy with TMX. Anthropometric, biochemical and body composition parameters were measured. A generalized estimating equation (GEE) was used to examine the association between CT and groups of women using TMX categorized by the duration of treatment (group 1, women using TMX for the first 3 years; group 2, women using TMX between 3 and 4 years; and group 3, women using TMX for more than 4 years). The interaction of CT with duration of TMX use showed a significant effect on Body Mass Index (BMI), waist circumference (WC) and body fat percentage (BFP) (GEE p-value = 0.002, 0.000, 0.000, respectively). Women from group 1 who underwent CT presented higher values of the body variables compared to women from group 2 who also underwent CT (BMI = 29.14 ± 0.93 vs. 26.76 ± 0.85 kg/m2; WC = 94.45 ± 1.96 vs. 91.07 ± 2.44 cm; BFP = 36.36 ± 1.50 vs. 33.43 ± 1.66%, respectively). On the other hand, women from group 1 who did not undergo CT presented lower values of the body variables compared to women from group 2 who also did not undergo CT (BMI = 25.29 ± 0.46 vs. 28.40 ± 0.95 kg/m2; WC = 85.84 ± 0.90 vs. 97.75 ± 0.88 cm; BFP = 30.32 ± 0.43 vs. 42.95 ± 1.03%, respectively). Women on endocrine therapy with TMX are mostly overweight or obese, most evidently women who received CT and who were at the beginning of treatment. Women who did not undergo CT, despite presenting lower values of the body variables in the first 3 years, still deserve special attention, because significantly higher values were observed in women between 3 and 4 years of therapy.
Background
Breast cancer (BC) accounts for 29% of all new cases of cancer in women and is the second leading cause of cancer death [1]. In patients treated with surgery, adjuvant endocrine therapy with tamoxifen (TMX), a selective estrogen receptor modulator, has been widely used in individuals expressing estrogen and/or progesterone receptors [2], substantially prolonging disease-free intervals and improving survival outcomes [3].
Changes in body weight are described as side effects during treatment [4][5][6]. Both initial overweight and the amount of weight gained during treatment negatively influence the prognosis, survival and quality of life of women with BC [7][8][9]. In endocrine therapy, even though this gain is more modest (1 to 2 kg) [10,11] than during the CT period (3 to 7 kg) [12][13][14], it is a major concern regarding non-adherence to endocrine therapy [15]. Furthermore, even without weight gain, these women are affected by changes in body composition, with loss of muscle mass and an increase in body fat percentage (BFP) [10,16]. Excess BFP in postmenopausal women results in increased estrogen and androgen concentrations in adipose tissue [17], which can stimulate cancer cells [18], change circulating levels of pro-inflammatory cytokines [19], and also impact the efficiency of TMX [20]. However, these results are still unclear and need to be further investigated.
Furthermore, metabolic alterations at the beginning of treatment for BC include impairment of glucose metabolism and dyslipidemia [21], and these extend into survivors on endocrine therapy with TMX [22][23][24]. Together with weight gain, these implications are important because of the cardiovascular diseases that may develop over time in postmenopausal women on endocrine therapy with TMX [25,26]. However, even in the face of these implications, the overall beneficial effects of treatments for BC are already established [2,3]. Also, the combination of treatments for BC, such as CT plus TMX, promotes substantial benefits compared to CT alone, producing a further reduction in recurrence risk [2].
Considering the recommendation to use endocrine therapy with TMX for up to 10 years [3], the impact of body modifications on survival and disease recurrence during endocrine therapy is poorly understood [27,28]. In this sense, knowing the potential long-term effects of previous treatments such as CT [12,13], it is necessary to understand their influence on the TMX side effects related to anthropometric parameters and BFP at different moments of endocrine therapy. This understanding will enable the development of targeted multidisciplinary interventions throughout the treatment.
We hypothesized that women who underwent CT were more obese and that the degree of obesity was more evident at the beginning of TMX therapy. Thus, the objective of this study was to analyze the temporal influence of endocrine therapy with TMX on nutritional risk and obesity and its association with CT in BC patients, evaluated by means of anthropometric variables and body composition.
Ethical aspects
A cross-sectional study was conducted at a Brazilian university hospital (HC-UFU, Uberlandia, Minas Gerais, Brazil), comprising one assessment of BC patients during endocrine therapy with TMX in the period from August 2015 to March 2016.
This study was approved by the Human Research Ethics Committee (protocol number 907.129/14) and the entire study was conducted based on the standards of the Helsinki Declaration. All participants signed a free and informed consent form.
Sample size calculation
The sample size required for this study was determined using the G*Power software, version 3.1 [29]. The sample size calculation was based on an F test for linear multiple regression with an effect size f of 0.15, an alpha level of 0.05, 95% power and 3 predictors. Given these parameters, a total sample of 84 women was required for the final analysis.
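A G*Power-style computation of this kind can be checked with a short scipy script. The sketch below treats the reported effect size as Cohen's f² for the regression omnibus F test (an assumption, since reports sometimes quote f rather than f²) and simply prints the power achieved at a given sample size rather than asserting the required n.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2, alpha=0.05):
    """Power of the omnibus F test in multiple regression (G*Power-style)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    nc = f2 * n  # noncentrality parameter, lambda = f^2 * N
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, nc)  # P(noncentral F > critical F)

# Achieved power at the study's final sample under the assumed effect size.
print(regression_power(n=84, n_predictors=3, f2=0.15))
```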
Eligibility criteria
The study included women diagnosed with BC with an indication for endocrine therapy with TMX and with the verbal and cognitive capacity to respond to the instruments used for data collection. Women aged 80 years or older, or 18 years or younger, were excluded from the study, as were patients with locoregional or distant BC recurrence; a diagnosis of any other type of cancer; autoimmune diseases and/or use of corticosteroids; diabetes mellitus; thyroid diseases; depressive syndrome; pregnancy or postpartum status; admission to palliative care programs; institutionalization; lack of telephone contact; previous use of TMX; and/or a change to the use of aromatase inhibitors.
Participants for recruitment
The active medical records of patients being treated with TMX in March 2015 were analyzed (n = 412), and 231 patients were classified as eligible for the study. Using a table of random numbers, 84 patients were invited to participate in the study, according to the previously calculated sample size. Groups were set according to the duration of TMX use, obtained by stratification into tertiles of usage time (groups 1, 2 and 3) with equivalent duration ranges: group 1 included 32 women using TMX for the first 3 years; group 2 included 22 women using TMX between 3 and 4 years; and group 3 included 30 women using TMX for more than 4 years (maximum 6 years and 6 months). After strict application of the eligibility criteria, all three groups included both women who underwent chemotherapy and women who did not (Fig. 1). The invitation to participate was made by phone and the evaluations were carried out at the oncology department of the clinical hospital.
Anthropometric assessment
A mechanical scale with a sensitivity of 100 g was used to measure weight; height was measured with a vertical stadiometer with a 1 mm precision scale; and waist circumference (WC) was measured with a flexible, inelastic tape, following the protocol recommended by the World Health Organization [30]. After obtaining these measurements, the Body Mass Index (BMI) was calculated by dividing weight by height squared (kg/m2), taking into consideration specific cutoffs for elderly women over 60 years of age [31].
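A small sketch of the BMI computation and age-specific classification described above. The elderly cutoffs shown follow Lipschitz (1994), a common reference for age-specific BMI classification; whether reference [31] uses exactly these bounds is an assumption, so the values should be checked against the study's actual source.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify_bmi(value: float, age_years: int) -> str:
    if age_years >= 60:  # elderly cutoffs (Lipschitz, assumed for ref. [31])
        if value < 22.0:
            return "underweight"
        if value <= 27.0:
            return "eutrophic"
        return "overweight"
    # Adult cutoffs (WHO)
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "eutrophic"
    if value < 30.0:
        return "overweight"
    return "obese"

print(classify_bmi(bmi(72.0, 1.60), age_years=65))  # hypothetical participant
```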
Horizontal tetrapolar bioelectrical impedance analysis (BIA; Biodynamics model 450) was used to evaluate body compartments, using a cutoff point for excess BFP in women of ≥24% [32]. Participants were guided regarding the test protocol [33].
Quantitative dietary assessment
Properly trained nutritionists collected information about food consumption by means of a 24-hour dietary recall (24HR) applied through telephone interviews, according to the technique used in the Vigitel Study [34] with adaptations. For each participant, three nonconsecutive 24HRs were applied, including one weekend day, in order to better reflect the eating habits of the participants. From the 24HRs, the mean intakes of total energy, carbohydrate, protein and lipid were estimated. Quantification of nutrients was performed with the Dietpro® software, version 5.7, using preferentially the Brazilian Table of Food Composition as a reference [35]. For foods not found in this table, the international reference, the table from the United States Department of Agriculture, was used [36].
Laboratory assays
Venous blood was collected at the time of the interview, between 7 am and 10 am, after overnight fasting and under standard conditions, for analysis of total cholesterol, LDL cholesterol (LDL-C), HDL cholesterol (HDL-C) (mg/dL), triglycerides (TG) (mg/dL), fasting glucose (mg/dL), C-reactive protein (CRP) (mg/dL), and a complete blood count. The results were evaluated according to recommendations established in the literature [37][38][39].
Statistical analyses
First, the Kolmogorov-Smirnov normality test was performed. Parametric tests were applied to variables with a normal distribution and non-parametric tests to variables without a normal distribution. Generalized estimating equations (GEE) were used to examine the association between the TMX/CT groups and nutritional risk and obesity at the first, second and third usage times, adjusting for age, smoking, alcohol consumption, physical activity, energy intake (kcal) and clinical stage. An interaction term between CT and time was included in the model. The GEE model accounts for correlations among the within-subject outcome variables of BMI, WC and BFP and provides consistent estimates of the parameters and of the standard errors using robust estimators. The adjustment method for multiple comparisons was sequential Sidak. All statistical analyses were run using the SPSS® software package (SPSS Statistics for Windows, version 21; SPSS, Inc., Chicago, USA) and a p-value ≤0.05 was considered statistically significant.
Fig. 1. Diagram reporting the number of women with breast cancer on endocrine therapy with tamoxifen screened and recruited during the study conducted at a university hospital in the city of Uberlandia, Minas Gerais, Brazil, 2015-2016 (n = 84). Group 1, women using tamoxifen for the first 3 years; group 2, women using tamoxifen between 3 and 4 years; group 3, women using tamoxifen for more than 4 years; CT, chemotherapy; TMX, tamoxifen.
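A minimal sketch of the GEE specification described above, written with Python's statsmodels rather than SPSS. It assumes a hypothetical long-format table in which each woman contributes one row per outcome (BMI, WC, BFP), so that the subject identifier defines the within-subject clusters; all column and file names are placeholders, not the study's actual dataset.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("tmx_long.csv")  # hypothetical long-format dataset

model = smf.gee(
    "value ~ C(ct) * C(group) + age + C(smoking) + C(alcohol)"
    " + C(physical_activity) + energy_kcal + C(clinical_stage)",
    groups="subject_id",            # clusters the three outcomes per woman
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())  # GEE reports robust (sandwich) standard errors
```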
Results
Regarding the anthropometric parameters, the current BMI values of 63.1% of the participants were above the eutrophic range for adults and the elderly (26.79 ± 4.59 and 28.16 ± 4.53 kg/m2, respectively). When comparing the groups, the BMI values of adults were significantly higher among women in group 1 (28.38 ± 4.12 kg/m2, p = 0.018) than in the other groups. No statistically significant difference was found between the groups for the BMI of the elderly. In addition, among the women who underwent CT (n = 74), 62.2% (n = 46) were classified as overweight or obese and 37.8% (n = 28) as neither overweight nor obese, considering adults and the elderly. Among those who did not undergo CT (n = 10), 70.0% (n = 7) were classified as overweight and 30.0% (n = 3) as non-overweight. The BFP and WC presented mean values above the recommendations (35.23 ± 7.55% and 90.63 ± 11.07 cm, respectively), but without significant differences between the groups (Table 2).
Blood analysis of the lipid parameters showed slightly altered mean values of TG and HDL-C (TG 153.49 mg/dL; Table 2).
Regarding food intake, we did not find a statistically significant difference for the average amount of energy, carbohydrate and protein ingested among the three groups. However, lipids had significantly higher mean values in group 1 than in groups 2 and 3 (66.74 ± 25.93, 48.61 ± 18.14, 56.61 ± 19.06 g, respectively, p = 0.012).
In the GEE analyses, we did not find significant isolated effects of CT on BMI, WC and BFP (p = 0.102, p = 0.084, p = 0.607, respectively). However, significant effects were observed when we evaluated the duration of TMX use (determined by the three groups) on WC and BFP (p = 0.003 and p = 0.001, respectively). Furthermore, the interaction between these two factors (CT and duration of TMX use) was significant for all anthropometric and body composition parameters (p < 0.05) (Table 3). Table 4 shows the post hoc comparisons of the evaluated variables between women who did or did not undergo CT across groups 1, 2 and 3. Analyses of the univariate effects showed that in group 1, women who underwent CT, when compared with those who did not, presented significantly higher values of BMI (29.14 ± 0.93 vs. 25.29 ± 0.46 kg/m2, p = 0.003), WC (94.45 ± 1.96 vs. 85.84 ± 0.90 cm, p = 0.001) and BFP (36.36 ± 1.50 vs. 30.32 ± 0.43%, p = 0.001). In group 2, the tendency was inverted, i.e., women who underwent CT presented lower values of BMI, WC and BFP, although only BFP was significantly lower (33.43 ± 1.66 vs. 42.95 ± 1.03%; p = 0.000).
Discussion
In our study, we observed that the majority of women on endocrine therapy with TMX were classified as overweight or obese, and we investigated the associations among CT, duration of TMX use, and three body parameters (BMI, WC and BFP). Although we did not find an isolated effect of CT, the interaction of CT with duration of TMX use showed a significant effect on BMI, WC and BFP. In our study, women from group 1 who did not undergo CT presented lower values of the body variables compared to women who also did not undergo CT but were using TMX between 3 and 4 years (group 2). On the other hand, women from group 1 who underwent CT presented higher values of the body variables compared to women who also underwent CT but were using TMX between 3 and 4 years (group 2). Thus, our study provides relevant knowledge for understanding the need for specific, targeted conduct at different times of endocrine therapy.
In the present study we found values above the recommendations for weight and body fat in women on endocrine therapy with TMX, results similar to those observed in the literature [40,41]. These body modifications related to increased adipose tissue lead to unsatisfactory outcomes, especially in postmenopausal women with BC [42][43][44][45]. However, the outcomes of weight gain during endocrine treatment with TMX are still controversial and need to be further investigated [13,46,47]. One of those outcomes could be an abnormally high expression of the aromatase enzyme in the breast, an enzyme responsible for increased local estrogen production, thus predisposing the mammary tissue to hyperplasia and cancer [18], as well as a bioenergetic adaptation of the cancer cells [48,49]. In this sense, given the important association of overweight with the prognosis of the disease [7-9, 27, 28], it is necessary to identify possible predictors of the body changes that occur during endocrine therapy with TMX. In the present study, we found that CT alone showed no effect on nutritional risk and obesity. It is known, however, that adjuvant CT for BC acts as an independent prognostic factor for bodily modifications with a potential long-term effect and may therefore affect the period of endocrine therapy [12,13]. However, when we evaluated the interaction between CT and duration of TMX use, we verified a significant effect on all body parameters evaluated, which demonstrates the relevance of this interaction in body changes over the years of endocrine treatment, given the growing number of long-term BC survivors and the many years of established endocrine therapy [3].
[Table 2 legend: BMI, body mass index; WC, waist circumference; TG, triglycerides; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; WBC, white blood cell count; CRP, C-reactive protein; SD, standard deviation; group 1, women using tamoxifen for the first 3 years; group 2, women using tamoxifen between 3 and 4 years; group 3, women using TMX for more than 4 years. The cutoff points of the biochemical parameters were evaluated according to recommendations [37][38][39]; p < 0.05 was considered significant, calculated by ANOVA: 1 p = 0.006; 2 p = 0.049; 3 p = 0.010.]
In this study, considering only women from group 1 (using TMX for the first 3 years), those who previously underwent CT had higher values of body fat and were more obese than those who had not undergone CT. The effect of the interaction of CT at different times of endocrine therapy with TMX on body parameters had not been reported in the literature before. Considering that the use of TMX starts in most cases after CT, the worse results for women who underwent CT may be due to the prolonged effects of chemotherapy and not to the effect of TMX. In a prospective, observational study of 272 French women treated with CT, greater weight changes were reported at 6 and 12 months after the end of this treatment [50], and the average weight gain in the first year after the end of CT was 3 kg [51]. Such body modifications may be explained in part by CT-induced reductions in energy expenditure [52], changes in the perception of food due to the effects of nausea and altered palatability [53], and negative nitrogen balance [54]. In addition, we may consider that the side effect of endocrine therapy with TMX on body weight, although still controversial, may exert an influence in this process.
However, it is difficult to attribute body modifications entirely to TMX, since most studies reporting weight gain did not have a comparison group [13,55,56].
Also, an important aspect of CT is the induction of ovarian failure by treatment toxicity, especially in women approaching menopause [57,58]; in Brazil, the mean age at menopause is 51 years [59]. A study of women with BC undergoing CT found an immediate reduction of ovarian blood flow after treatment, demonstrating a postmenopausal profile in most patients, accompanied by related symptoms [60]. Thus, perimenopausal women who undergo CT, especially with anthracycline-based regimens compared to CMF [61], may enter menopause more frequently and present earlier, treatment-induced symptoms already known from the climacteric, such as changes in body composition [62,63].
When analyzing women who did not undergo CT in this study, we found that the highest nutritional risk and body fat occurred not in the group with at most 3 years of TMX use, but in women in the intermediate-duration group, between 3 and 4 years. A cross-sectional study of American women found that the highest percentage of weight gain occurred after 3 years of TMX use; however, CT was not considered [4]. These results suggest that women who have not undergone CT may respond differently from one another. Notably, the concentration of important metabolites of TMX oxidative metabolism, such as endoxifen, is related to the occurrence of side effects from drug use [4], suggesting the need for prospective studies to determine whether different concentrations occur throughout treatment and how they relate to previous treatments, such as CT.
Food intake is also an important modifiable factor contributing to changes in nutritional status and the risk of obesity in women with BC [52-54]. In our study, however, we found no statistically significant differences in mean energy, carbohydrate or protein intake among the three groups evaluated; only mean lipid intake was significantly higher in women in group 1. Possibly, a preference for more palatable foods in this period, still resulting from the cytotoxicity experienced by those who underwent CT [53], may have influenced this result. In addition, given the several years proposed for endocrine therapy [3], it has been shown that psychological factors such as anxiety and depression are common during endocrine therapy [64] and may interfere with changes in dietary patterns [65]. Prospective studies are needed to evaluate and consider these factors to better explain these findings in view of the negative effects of obesity. Additionally, we found altered values for TG and HDL-C, with HDL-C showing inadequate values in women in the first 3 years and between 3 and 4 years of treatment, with statistically significant differences between the three groups. Since central obesity is associated with several biochemical alterations, including decreased glucose tolerance, elevated serum insulin levels and lipid changes [41-43], blood assessments are important in this population, as such changes are risk factors for many diseases, including diabetes mellitus and cardiovascular disease [66,67].
In general, adjuvant therapy with AIs is associated with better outcomes than TMX for postmenopausal women with endocrine-responsive BC. However, in many public hospitals in Brazil (as in our case), taking cost issues into consideration, AIs are reserved for high-risk early BC patients. This approach is not without merit, considering findings from the Breast International Group Trial 1-98 comparing adjuvant TMX with letrozole, which showed considerably less benefit of AIs over TMX in patients with a lower risk of recurrence [68,69]. Nevertheless, many postmenopausal women with endocrine-responsive BC still receive TMX in low-resource hospitals. In addition, TMX is often chosen for patients with moderate to severe osteoporosis (which is not uncommon in postmenopausal women). It is also important to mention that central obesity becomes more prevalent after menopause, which may have distorted the results.
Possible limitations of this study should be considered. One limitation is the use of populations unequally weighted with respect to menopausal status, which may at least in part limit the generalizability of the study. Moreover, the cross-sectional design makes it impossible to establish causal relationships between changes in body composition and the duration of TMX use, along with the other variables. It would therefore have been important to obtain the usual weight before the start of BC treatment, since obesity may have been correlated with BC incidence and may have affected the treatment regimen chosen in the first place, a possible confounder when interpreting the data from this study. We also did not evaluate CT according to the specific chemotherapeutic agents used, which may differ in their effects on weight gain [70].
Conclusions
Our results suggest that women in endocrine therapy with TMX require nutritional monitoring throughout treatment, with targeted interventions needed at specific times. Women who have undergone CT prior to initiating endocrine therapy deserve special attention in the first 3 years of treatment. However, women who did not undergo CT had a higher nutritional risk in the intermediate treatment period (between 3 and 4 years). In view of the well-established benefit of endocrine therapy with TMX, which exceeds the negative effects on body composition, these results reinforce the importance of nutritional guidelines and multidisciplinary follow-up, taking into account previous treatments such as CT, thus ensuring that BMI and body composition are reduced or maintained within a healthy range. In addition, these strategies may contribute to greater adherence to treatment and better medication action.

Fig. 2 Distribution of women using endocrine therapy with TMX categorized according to groups (1, 2 and 3) and according to whether or not CT was performed. Distribution of women with breast cancer, according to groups of TMX usage duration, in a university hospital in the city of Uberlandia, Minas Gerais, Brazil, 2015-2016 (n = 84, BMI and WC; n = 74, BFP). Group 1, women using tamoxifen for the first 3 years; Group 2, women using tamoxifen between 3 and 4 years; Group 3, women using tamoxifen for more than 4 years; BMI, body mass index; WC, waist circumference; BFP, body fat percentage; CT, chemotherapy; *p < 0.05 calculated by ANOVA, post hoc comparison (Sidak method). | 2017-08-30T05:38:50.632Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "2ea30499c78adcf0029c227450c2872fe027a4a7",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-017-3559-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2ea30499c78adcf0029c227450c2872fe027a4a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
384520 | pes2o/s2orc | v3-fos-license | Multilingual Language Processing From Bytes
We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label], where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than language-specific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning "from scratch" in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.
Introduction
The long-term trajectory of research in Natural Language Processing has seen the replacement of rules and specific linguistic knowledge with machine learned components. Perhaps the most standardized way that knowledge is still injected into largely statistical systems is through the processing pipeline: Some set of basic language-specific tokens are identified in a first step. Sequences of tokens are segmented into sentences in a second step. The resulting sentences are fed one at a time for syntactic analysis: Part-of-Speech (POS) tagging and parsing. Next, the predicted syntactic structure is typically used as features in semantic analysis, Named Entity Recognition (NER), Semantic Role Labeling, etc. While each step of the pipeline now relies more on data and models than on hand-curated rules, the pipeline structure itself encodes one particular understanding of how meaning attaches to raw strings.
One motivation for our work is to try removing this structural dependence. Rather than rely on the intermediate representations invented for specific subtasks (for example, Penn Treebank tokenization), we are allowing the model to learn whatever internal structure is most conducive to producing the annotations of interest. To this end, we describe a Recurrent Neural Network (RNN) model that reads raw input string segments, one byte at a time, and produces output span annotations corresponding to specific byte regions in the input 1 . This is truly language annotation from scratch (see Collobert et al. (2011) and Zhang and LeCun (2015)).
Two key innovations facilitate this approach. First, Long Short-Term Memory (LSTM) models (Hochreiter and Schmidhuber, 1997) allow us to replace the traditional independence assumptions in text processing with structural constraints on memory. While we have long known that long-term dependencies are important in language, we had no mechanism other than conditional independence to keep sparsity in check. The memory in an LSTM, however, is not constrained by any explicit assumptions of independence. Rather, its ability to learn patterns is limited only by the structure of the network and the size of the memory (and of course the amount of training data). Second, sequence-to-sequence models (Sutskever et al., 2014) allow for flexible input/output dynamics. Traditional models, including feedforward neural networks, read fixed-length inputs and generate fixed-length outputs by following a fixed set of computational steps. Instead, we can now read an entire segment of text before producing an arbitrary number of outputs, allowing the model to learn a function best suited to the task.
We leverage these two ideas with a basic strategy: Decompose inputs and outputs into their component pieces, then read and predict them as sequences. Rather than read words, we are reading a sequence of unicode bytes 2 ; rather than producing a label for each word, we are producing triples [start, length, label], that correspond to the spans of interest, as a sequence of three separate predictions (see Figure 1). This forces the model to learn how the components of words and labels interact so all the structure typically imposed by the NLP pipeline (as well as the rules of unicode) are left to the LSTM to model.
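As a minimal sketch of this decomposition (in Python; the function and symbol names are our own illustration, not the authors' code, and spans are assumed to be given in byte offsets), a labeled segment can be serialized as follows:

# Minimal sketch: serialize a labeled text segment into byte inputs and
# a flat target sequence of [start, length, label] components.
def encode_example(text, spans):
    """text: a unicode string; spans: list of (start_byte, byte_length, label)."""
    input_bytes = list(text.encode("utf-8"))  # raw UTF-8 bytes, values 0..255
    targets = []
    for start, length, label in spans:
        # each span becomes three separate vocabulary entries
        targets += [("START", start), ("LENGTH", length), ("LABEL", label)]
    targets.append(("STOP",))  # terminates the output sequence
    return input_bytes, targets

inp, tgt = encode_example("John lives in Kyiv", [(0, 4, "PER"), (14, 4, "LOC")])
# inp begins [74, 111, 104, 110, ...]; tgt is the flattened triple sequence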
Decomposed inputs and outputs have a few important benefits. First, they reduce the size of the vocabulary relative to word-level inputs, so the resulting models are extremely compact (on the order of a million parameters). Second, because unicode is essentially a universal language, we can train models to analyze many languages at once. In fact, by stacking LSTMs, we are able to learn representations that appear to generalize across languages, improving performance significantly (without using any additional parameters) over models trained on a single language. This is the first account, to our knowledge, of a multilingual model that achieves good results across many languages, thus bypassing all the language-specific engineering usually required to build models in different languages 3 . We describe results similar to or better than the stateof-the-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources).
The rest of this paper is organized as follows. Section 2 discusses related work; Section 3 describes our model; Section 4 gives training details including a new variety of dropout (Hinton et al., 2012); Section 5 gives inference details; Section 6 presents results on POS tagging and NER across many languages. Finally, we summarize our contributions in Section 7.
Related Work
One important feature of our work is the use of byte inputs. Character-level inputs have been used with some success for tasks like NER (Klein et al., 2003), parallel text alignment (Church, 1993), and authorship attribution (Peng et al., 2003) as an effective way to deal with n-gram sparsity while still capturing some aspects of word choice and morphology. Such approaches often combine character and word features and have been especially useful for handling languages with large character sets (Nakagawa, 2004). However, there is almost no work that explicitly uses bytes - one exception uses byte n-grams to identify source code authorship (Frantzeskou et al., 2006) - and there is nothing, to the best of our knowledge, that exploits bytes as a cross-lingual representation of language. Work on multilingual parsing using Neural Networks that share some subset of the parameters across languages (Duong et al., 2015) seems to benefit the low-resource languages; however, we share all the parameters among all languages.
Recent work has shown that modeling the sequence of characters in each token with an LSTM can more effectively handle rare and unknown words than independent word embeddings (Ling et al., 2015; Ballesteros et al., 2015). Similarly, language modeling, especially for morphologically complex languages, benefits from a Convolutional Neural Network (CNN) over characters to generate word embeddings (Kim et al., 2015). Rather than decompose words into characters, Chitnis and DeNero (2015) encode rare words with Huffman codes, allowing a neural translation model to learn something about word subcomponents. In contrast to this line of research, our work has no explicit notion of tokens and operates on bytes rather than characters.
Our work is philosophically similar to Collobert et al.'s (2011) experiments with "almost from scratch" language processing. They avoid task-specific feature engineering, instead relying on a multilayer feedforward (or convolutional) Neural Network to combine word embeddings to produce features useful for each task. In the Results section, below, we compare NER performance on the same dataset they used. The "almost" in the title actually refers to the use of preprocessed (lowercased) tokens as input instead of raw sequences of letters. Our byte-level models can be seen as a realization of their comment: "A completely from scratch approach would presumably not know anything about words at all and would work from letters only." Recent work with convolutional neural networks that read character-level inputs (Zhang et al., 2015) shows some interesting results on a variety of classification tasks, but because their models need very large training sets, they do not present comparisons to established baselines on standard tasks.
Finally, recent work on Automatic Speech Recognition (ASR) uses a similar sequence-to-sequence LSTM framework to produce letter sequences directly from acoustic frame sequences (Chan et al., 2015;Bahdanau et al., 2015). Just as we are discarding the usual intermediate representations used for text processing, their models make no use of phonetic alignments, clustered triphones, or pronunciation dictionaries. This line of work -discarding intermediate representations in speech -was pioneered by Graves and Jaitly (2014) and earlier, by Eyben et al. (2009).
Model
Our model is based on the sequence-to-sequence model used for machine translation (Sutskever et al., 2014), an adaptation of an LSTM that encodes a variable length input as a fixed-length vector, then decodes it into a variable number of outputs 4 .
Generally, the sequence-to-sequence LSTM is trained to estimate the conditional probability P(y_1, ..., y_T′ | x_1, ..., x_T), where (x_1, ..., x_T) is an input sequence and (y_1, ..., y_T′) is the corresponding output sequence, whose length T′ may differ from T.

The encoding step computes a fixed-dimensional representation v of the input (x_1, ..., x_T), given by the hidden state of the LSTM after reading the last input x_T. The decoding step computes the output probability P(y_1, ..., y_T′) with the standard LSTM formulation for language modeling, except that the initial hidden state is set to v:

P(y_1, ..., y_T′ | x_1, ..., x_T) = ∏_{t=1}^{T′} P(y_t | v, y_1, ..., y_{t−1})    (1)

Sutskever et al. used a separate LSTM for the encoding and decoding tasks. While this separation permits training the encoder and decoder LSTMs separately, say for multitask learning or pre-training, we found our results were no worse if we used a single set of LSTM parameters for both encoder and decoder.
Vocabulary
The primary difference between our model and the translation model is our novel choice of vocabulary. The set of inputs includes all 256 possible bytes, a special Generate Output (GO) symbol, and a special DROP symbol used for regularization, which we will discuss below. The set of outputs includes all possible span start positions (byte 0..k), all possible span lengths (0..k), all span labels (PER, LOC, ORG, MISC for the NER task), as well as a special STOP symbol. A complete span annotation includes a start, a length, and a label, but as shown in Figure 1, the model is trained to produce this triple as three separate outputs. This keeps the vocabulary size small and, in practice, gives better performance (and faster convergence) than if we use the cross-product space of the triples.
More precisely, the prediction at time t is conditioned on the full input and all previous predictions (via the chain rule). By splitting each span annotation into a sequence [start, length, label], we are making no independence assumption; instead we are relying on the model to maintain a memory state that captures the important dependencies.
Each output distribution P (y t |v, y 1 , ..., y t−1 ) is given by a softmax over all possible items in the output vocabulary, so at a given time step, the model is free to predict any start, any length, or any label (including STOP). In practice, because the training data always has these complete triples in a fixed order, we seldom see malformed or incomplete spans (the decoder simply ignores such spans). During training, the true label y t−1 is fed as input to the model at step t (see Figure 1), and during inference, the argmax prediction is used instead. Note also that the training procedure tries to maximize the probability in Equation 1 (summed over all the training examples). While this does not quite match our task objectives (F1 over labels, for example), it is a reasonable proxy.
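The vocabulary described above is small enough to enumerate directly. The following sketch (our own id assignment, assuming the segment length k = 60 used below) illustrates its size:

# Sketch of the input and output vocabularies implied above; the id
# layout is our own assumption, not the authors' code.
K = 60
input_vocab = {i: ("BYTE_%d" % i) for i in range(256)}
input_vocab[256] = "GO"    # prompts the decoder to begin emitting spans
input_vocab[257] = "DROP"  # replaces bytes during byte-dropout training

output_vocab = (["START_%d" % i for i in range(K + 1)]     # start positions 0..k
                + ["LENGTH_%d" % i for i in range(K + 1)]  # span lengths 0..k
                + ["PER", "LOC", "ORG", "MISC", "STOP"])
print(len(input_vocab), len(output_vocab))  # 258 inputs, 127 outputs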
Independent segments
Ideally, we would like our input segments to cover full documents so that our predictions are conditioned on as much relevant information as possible. However, this is impractical for a few reasons. From a training perspective, a Recurrent Neural Network is unrolled to resemble a deep feedforward network, with each layer corresponding to a time step. It is well-known that running backpropagation over a very deep network is hard because it becomes increasingly difficult to estimate the contribution of each layer to the gradient, and further, RNNs have trouble generalizing to different length inputs (Erhan et al., 2009).
So instead of document-sized input segments, we make a segment-independence assumption: We choose some fixed length k and train the model on segments of length k (any span annotation not completely contained in a segment is ignored). This has the added benefit of limiting the range of the start and length label components. It can also allow for more efficient batched inference since each segment is decoded independently. Finally, we can generate a large number of training segments by sliding a window of size k one byte at a time through a document. Note that the resulting training segments can begin and end mid-word, and indeed, mid-character. For both tasks described below, we set the segment size k = 60.
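A sketch of this segment-generation procedure (hypothetical helper names; spans are assumed to be byte-indexed) might look like:

# Slide a k-byte window one byte at a time through a document; spans not
# fully contained in a window are dropped, and contained spans are
# re-indexed relative to the window start.
def make_segments(doc_bytes, spans, k=60):
    segments = []
    for offset in range(max(1, len(doc_bytes) - k + 1)):
        contained = [(s - offset, n, lab) for (s, n, lab) in spans
                     if s >= offset and s + n <= offset + k]
        segments.append((doc_bytes[offset:offset + k], contained))
    return segments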
Sequence ordering
Our model differs from the translation model in one more important way. Sutskever et al. found that feeding the input words in reverse order and generating the output words in forward order gave significantly better translations, especially for long sentences. In theory, the predictions are conditioned on the entire input, but as a practical matter, the learn-ing problem is easier when relevant information is ordered appropriately since long dependencies are harder to learn than short ones.
Because the byte order is more meaningful in the forward direction (the first byte of a multibyte character specifies the length, for example), we found somewhat better performance with forward order than reverse order (less than 1% absolute). But unlike translation, where the outputs have a complex order determined by the syntax of the language, our span annotations are more like an unordered set. We tried sorting them by end position in both forward and backward order, and found a small improvement (again, less than 1% absolute) using the backward ordering (assuming the input is given in the forward order). This result validates the translation ordering experiments: the modeling problem is easier when the sequence-to-sequence LSTM is used more like a stack than a queue.
Model shape
We experimented with a few different architectures and found no significant improvements in using more than 320 units for the embedding dimension and LSTM memory and 4 stacked LSTMs (see Table 4). This observation holds for both models trained on a single language and models trained on many languages. Because the vocabulary is so small, the total number of parameters is dominated by the size of the recurrent matrices. All the results reported below use the same architecture (unless otherwise noted) and thus have roughly 900k parameters.
Training
We trained our models with Stochastic Gradient Descent (SGD) on mini-batches of size 128, using an initial learning rate of 0.3. For all other hyperparameter choices, including random initialization, learning rate decay, and gradient clipping, we follow Sutskever et al. (2014). Each model is trained on a single CPU over a period of a few days, at which point, development set results have stabilized. Distributed training on GPUs would likely speed up training to just a few hours.
Dropout and byte-dropout
Neural Network models are often trained using dropout (Hinton et al., 2012), which tends to improve generalization by limiting correlations among hidden units. During training, dropout randomly zeroes some fraction of the elements in the embedding layer and the model state just before the softmax layer (Zaremba et al., 2014).
We were able to further improve generalization with a technique we are calling byte-dropout: We randomly replace some fraction of the input bytes in each segment with a special DROP symbol (without changing the corresponding span annotations). Intuitively, this results in a more robust model, perhaps by forcing it to use longer-range dependencies rather than memorizing particular local sequences.
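A minimal sketch of byte-dropout (the DROP id and function name are our own assumptions):

import random

DROP_ID = 257  # id of the special DROP symbol in our assumed layout

def byte_dropout(input_ids, rate=0.3, rng=random):
    """Replace a random fraction of input byte ids with DROP; the span
    annotations are left unchanged, as described above."""
    return [DROP_ID if rng.random() < rate else b for b in input_ids]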
It is worth noting that noise is often added at training time to images in image classification and speech in speech recognition where the added noise does not fundamentally alter the input, but rather blurs it. By using a byte representation of language, we are now capable of achieving something like blurring with text. Indeed, if we removed 20% of the characters in a sentence, humans would be able to infer words and meaning reasonably well.
Inference
We perform inference on a segment by (greedily) computing the most likely output at each time step and feeding it to the next time step. Experiments with beam search show no meaningful improvements (less than 0.2% absolute). Because we assume that each segment is independent, we need to choose how to break up the input into segments and how to stitch together the results.
The simplest approach is to divide up the input into segments with no overlapping bytes. Because the model is trained to ignore incomplete spans, this approach misses all spans that cross segment boundaries, which, depending on the choice of k, can be a significant number. We avoid the missed-span problem by choosing segments that overlap such that each span is likely to be fully contained by at least one segment.
For our experiments, we create segments with a fixed overlap (k/2 = 30). This means that with the exception of the first segment in a document, the model reads 60 bytes of input, but we only keep predictions about the last 30 bytes.
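The overlap-and-stitch logic can be sketched as follows, where decode_segment stands in for the trained model's greedy decoder (names are illustrative):

# Decode k-byte segments that overlap by k/2; keep only spans starting
# in the second half of each segment (all of the first segment is kept),
# so each byte position is covered exactly once.
def stitch_predictions(doc_bytes, decode_segment, k=60):
    spans, step = [], k // 2
    for offset in range(0, len(doc_bytes), step):
        for start, length, label in decode_segment(doc_bytes[offset:offset + k]):
            abs_start = offset + start
            if offset == 0 or abs_start >= offset + step:
                spans.append((abs_start, length, label))
    return spans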
Results
Here we describe experiments on two datasets that include annotations across a variety of languages. The multilingual datasets allow us to highlight the advantages of using byte-level inputs: First, we can train a single compact model that can handle many languages at once. Second, we demonstrate some cross-lingual abstraction that improves performance of a single multilingual model over each single-language model. In the experiments, we refer to the LSTM setup described above as Byte-to-Span or BTS.
Most state-of-the-art results in POS tagging and NER leverage unlabeled data to improve a supervised baseline. For example, word clusters or word embeddings estimated from a large corpus are often used to help deal with sparsity. Because our LSTM models are reading bytes, it is not obvious how to insert information like a word cluster identity. Recent results with sequence-to-sequence autoencoding (Dai and Le, 2015) seem promising in this regard, but here we limit our experiments to use just annotated data.
Each task specifies separate data for training, development, and testing. We used the development data for tuning the dropout and byte-dropout parameters (since these likely depend on the amount of available training data), but did not tune the remaining hyperparameters. In total, our training set for POS Tagging across 13 languages included 2.87 million tokens and our training set for NER across 4 languages included 0.88 million tokens. Recall, though, that our training examples are 60-byte segments obtained by sliding a window through the training data, shifting by 1 byte each time. This results in 25.3 million and 6.0 million training segments for the two tasks.
Part-of-Speech Tagging
Our part-of-speech tagging experiments use Version 1.1 of the Universal Dependency data 5 , a collection of treebanks across many languages annotated with a universal tagset (Petrov et al., 2011). The most relevant recent work (Ling et al., 2015) uses different datasets, with different finer-grained tagsets in each language. Because we are primarily interested in multilingual models that can share language-independent parameters, the universal tagset is important, and thus our results are not immediately comparable. However, we provide baseline results (for each language separately) using a Conditional Random Field (Lafferty et al., 2001) with an extensive collection of features, with performance comparable to the Stanford POS tagger (Manning, 2011). For our experiments, we chose the 13 languages that had at least 50k tokens of training data. We did not subsample the training data, though the amount of data varies widely across languages, but rather shuffled all training examples together. These languages represent a broad range of linguistic phenomena and character sets, so it was not obvious at the outset that a single multilingual model would work. Table 1 compares the baselines with (CRF+) and without (CRF) externally trained cluster features to our model trained on all languages (BTS) as well as on each language separately (BTS*). The single BTS model improves on average over the CRF models trained using the same data, though clearly there is some benefit in using external resources. Note that BTS is particularly strong in Finnish, surpassing even CRF+ by nearly 1.5% (absolute), probably because the byte representation generalizes better to agglutinative languages than word-based models, a finding validated by Ling et al. (2015). In addition, the baseline CRF models, including the (compressed) cluster tables, require about 50 MB per language, while BTS is under 10 MB. BTS improves on average over BTS*, suggesting that it is learning some language-independent representation.
Named Entity Recognition
Our main motivation for showing POS tagging results was to demonstrate how effective a single BTS model can be across a wide range of languages. The NER task is a more interesting test case because, as discussed in the introduction, it usually relies on a pipeline of processing. We use the 2002 and 2003 CoNLL shared task datasets 6 for multilingual NER because they contain data in 4 languages (English, German, Spanish, and Dutch) with consistent annotations of named entities (PER, LOC, ORG, and MISC). In addition, the shared task competition produced strong baseline numbers for comparison. However, most published results use extra information beyond the provided training data, which makes fair comparison with our model more difficult.
The best competition results for English and German (Florian et al., 2003) used a large gazetteer and the output of two additional NER classifiers trained on richer datasets. Since 2003, better results have been reported using additional semi-supervised techniques (Ando and Zhang, 2005) and, more recently, Passos et al. (2014) claimed the best English results (90.90% F1) using features derived from word embeddings. The 1st place submission in 2002 (Carreras et al., 2002) commented that without extra resources for Spanish, their results drop by about 2% (absolute).
Perhaps the most relevant comparison is the overall 2nd place submission in 2003 (Klein et al., 2003). They use only the provided data and report results with character-based models which provide a useful comparison point to our byte-based LSTM. The performance of a character HMM alone is much worse than their best result (83.2% vs 92.3% on the English development data), which includes a variety of word and POS-tag features that describe the context (as well as some post-processing rules). For English (assuming just ASCII strings), the character HMM uses the same inputs as BTS, but is hindered by some combination of the independence assumption and smaller capacity. Collobert et al.'s (2011) convolutional model (discussed above) gives 81.47% F1 on the English test set when trained on only the gold data. However, by using carefully selected word-embeddings trained on external data, they are able to increase F1 to 88.67%. Huang et al. (2015) improve on Collobert's results by using a bidirectional LSTM with a CRF layer where the inputs are features describing the words in each sentence. Either by virtue of the more powerful model, or because of more expressive features, they report 84.26% F1 on the same test set and 90.10% when they add pretrained word embedding features. Dos Santos et al. (2015) represent each word by concatenating a pretrained word embedding with a character-level embedding produced by a convolutional neural network.
There is relatively little work on multilingual NER, and most research is focused on building systems that are unsupervised in the sense that they use resources like Wikipedia and Freebase rather than manually annotated data. Nothman et al. (2013) use Wikipedia anchor links and disambiguation pages joined with Freebase types to create a huge amount of somewhat noisy training data and are able to achieve good results on many languages (with some extra heuristics). These results are also included in Table 2.
While BTS does not improve on the state-of-the-art in English, its performance is better than the best previous results that use only the provided training data. BTS improves significantly on the best known results in German, Spanish, and Dutch even though these leverage external data. In addition, the BTS* models, trained separately on each language, are worse than the single BTS model (with the same number of parameters as each single-language model) trained on all languages combined, again suggesting that the model is learning some language-independent representation of the task.
One interesting shortcoming of the BTS model is that it is not obvious how to tune it to increase recall. In a standard classifier framework, we could simply increase the prediction threshold to increase precision and decrease the prediction threshold to increase recall. However, because we only produce annotations for spans (non-spans are not annotated), we can adjust a threshold on total span probability (the product of the start, length, and label probabilities) to increase precision, but there is no clear way to increase recall. The untuned model tends to prefer precision over recall already, so some heuristic for increasing recall might improve our overall F1 results.
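A sketch of the precision-side thresholding described above (our own formulation, computed in log space; probabilities are floored to avoid log(0)):

import math

# Score each emitted span by the product of its start, length, and label
# probabilities and keep only spans above a threshold. As noted above,
# raising the threshold trades recall for precision, but there is no
# analogous knob for raising recall.
def filter_spans(scored_spans, threshold=0.5):
    """scored_spans: iterable of (span, p_start, p_length, p_label)."""
    log_t = math.log(threshold)
    kept = []
    for span, ps, pl, plab in scored_spans:
        logp = sum(math.log(max(p, 1e-300)) for p in (ps, pl, plab))
        if logp >= log_t:
            kept.append(span)
    return kept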
Dropout and Stacked LSTMs
There are many modeling options and hyperparameters that significantly impact the performance of Neural Networks. Here we show the results of a few experiments that were particularly relevant to the performance obtained above. First, Table 3 shows how dropout and byte-dropout improve performance for both tasks. Without any kind of dropout, the training process starts to overfit (development data perplexity starts increasing) relatively quickly. For POS tagging, we set dropout and byte-dropout to 0.2, while for NER, we set both to 0.3. This significantly reduces the overfitting problem. Second, Table 4 shows how performance improves as we increase the size of the model in two ways: the number of units in the model's state (width) and the number of stacked LSTMs (depth). Increasing the width of the model improves performance less than increasing the depth, and once we use 4 stacked LSTMs, the added benefit of a much wider model has disappeared. This result suggests that rather than learning to partition the space of inputs according to the source language, the model is learning some language-independent representation at the deeper levels.
To validate our claim about language-independent representation, Figure 2 shows a t-SNE plot of the LSTM's memory state when the output is one of PER, LOC, ORG, MISC across the four languages. While the label clusters are neatly separated, the examples of each individual label do not appear to be clustered by language. Thus rather than partitioning each (label, language) combination, the model is learning unified label representations that are independent of the language.
Conclusions
We have described a model that uses a sequence-to-sequence LSTM framework that reads a segment of text one byte at a time and then produces span annotations over the inputs. This work makes a number of novel contributions: First, we use the bytes in variable-length unicode encodings as inputs. This makes the model vocabulary very small and also allows us to train a multilingual model that improves over single-language models without using additional parameters. We introduce byte-dropout, an analog to added noise in speech or blurring in images, which significantly improves generalization. Second, the model produces span annotations, where each is a sequence of three outputs: a start position, a length, and a label. This decomposition keeps the output vocabulary small and marks a significant departure from the typical Begin-Inside-Outside (BIO) scheme used for labeling sequences.
Finally, the models are much more compact than traditional word-based systems and they are standalone - no processing pipeline is needed. In particular, we do not need a tokenizer to segment text in each of the input languages. | 2015-12-02T08:37:38.323Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "4dabd6182ce2681c758f654561d351739e8df7bf",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/N16-1155.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "9210b3a0bc84b6aeeb40a145e9fd288e8c928a11",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
253244371 | pes2o/s2orc | v3-fos-license | On the role of transverse detonation waves in the re-establishment of attenuated detonations in methane-oxygen
The problem of detonation attenuation in stoichiometric methane-oxygen and its re-establishment following its interaction with obstacles was investigated using high resolution numerical simulation. The main focus was on the role of the transverse detonation in the re-establishment of the detonation wave. We applied an efficient thermochemically derived four-step global combustion model within an Euler simulation framework to investigate the critical regimes present. While past attempts at using one- or two-step models have failed to capture transverse detonations, for this scenario, our simulations have demonstrated that the four-step combustion model is able to capture this feature. We suggest that to correctly model detonation re-initiation in characteristically unstable mixtures, an applied combustion model should contain at least an adequate description to permit the correct ignition and state variable response when changes in temperature and pressure occur. Our simulations reveal that there is a relationship between the critical outcomes possible and the mixture cell size, and while pockets of unburned gas may exist when a detonation re-initiates, it is not the direct rapid consumption of these pockets that gives rise to transverse detonations. Instead, the transverse detonations are initiated through pressure amplification of reaction zones at burned/unburned gas interfaces whose combustion rates have been enhanced through Richtmyer-Meshkov instabilities associated with the passing of transverse shock waves, or by spontaneous ignition of hot spots, which can form into detonations through the Zel'dovich gradient mechanism. In both situations, non-uniform ignition delay times are found to play a role. Finally, we found that the transverse detonations are in fact Chapman-Jouguet detonations, but whose presence contributes to overdriving the re-initiated detonation along the Mach stem.
Introduction
In this investigation, we revisit the problem of detonation attenuation and its re-establishment following its interaction with obstacles, which has been investigated previously both experimentally [1,2,3,4,5,6,7] and numerically [4,6,8,9]. In particular, this investigation focuses on the role of the transverse detonation in the re-establishment of the detonation wave, and investigates the importance of applying a numerical combustion model that responds appropriately to the thermodynamic state behind the complex shock wave dynamics. This problem is particularly important for the development and validation of numerical strategies to simulate and predict the final stages of deflagration-to-detonation transition (DDT), as critical shock-flame complex regimes may be established close to the choked Chapman-Jouguet (CJ) deflagration velocity. It is currently believed that a sufficient condition for DDT to occur is that flame propagation reaches this velocity limit [10].
In early experiments, transverse shock waves were believed to play an important role in the re-establishment of the detonation wave [2,3]. In Radulescu and Maxwell [4], transverse detonations were observed during the re-establishment of detonation waves in acetylene-oxygen, yet such a feature could not be captured numerically at the time. We note here that this transverse detonation is also a key component in critical regimes of detonation diffraction [11] and also in marginal or spinning detonation propagation [12,13]. In more recent experiments, Bhattacharjee et al. [6] investigated several possible mechanisms that contribute to detonation re-initiation. In general, they found that forward jetting of combustion products behind the Mach shock plays an important role in triggering rapid ignition and coupling to the Mach shock. In some critical cases, a large pocket of unburned reactive gas remained behind, while in more sensitive reactive mixtures it was believed that rapid reaction of this pocket led to the establishment of the transverse detonation. Although the burn-up of these pockets has not yet been formally linked to the transverse detonation feature, it has been determined through numerical simulation that the burning rate of such pockets can be influenced by the strength of transverse shock waves during detonation propagation in irregular reactive mixtures [14]. Moreover, the burning rate of these pockets has been found to influence the cellular structure in methane-oxygen [15].
Although Euler simulations coupled to one- or two-step Arrhenius combustion kinetics have been attempted to capture detonation re-initiation as observed in experiments [4,6,8], the recovered solutions were found to depend highly on the resolution adopted. Moreover, self-sustained transverse detonations were never observed in these simulations. This was, to some extent, believed to be a consequence of using simplified chemical mechanisms, which are generally tuned to give the appropriate ignition delay for only a particular state, i.e., the von Neumann state, but do not correctly reproduce the detailed reaction zone structures. Moreover, the application of the calorically perfect gas assumption also leads to significant errors in predicting the state behind the various shock wave dynamics present. In more recent work, which also adopted a one-step combustion modeling approach, Maxwell et al. [9] found that adequate closure of the turbulent combustion resulted in improved prediction of the re-initiation of a detonation, but also did not predict any transverse detonation features. Recent simulations of highly irregular detonation propagation [16], however, have shown that the application of a reduced detailed elementary reaction mechanism can indeed reproduce the various re-initiation regimes observed experimentally. It is clear from all of this past work that in order to capture the complete set of features observed during the critical re-establishment of a detonation wave, a sufficiently detailed description of the chemical reactions is required. Moreover, the need exists to develop low-memory and low-overhead strategies to investigate detonation phenomena at high resolution and at larger scales.
In the current study, we address the problem of simulating detonation quenching and re-initiation following its interaction with a cylindrical obstacle in methane-oxygen, as observed by Bhattacharjee et al. [6], by attempting to capture the transverse detonation phenomenon using a more detailed, but minimal, global description of the chemistry. Specifically, we use a thermally perfect four-step global reaction mechanism [17], with temperature-dependent properties, which has been calibrated to reproduce methane-oxygen reaction characteristics over a wide range of temperatures and pressures [18,19,20]. Through this approach we aim to determine whether such a minimal thermochemically derived combustion model can capture important features of detonation initiation, i.e., transverse detonations. We also aim to discover the mechanisms of formation and the roles of these waves, and to what extent rapid burning of reactive gas pockets contributes to the formation of these transverse detonation waves.
Governing equations and combustion model
In the current study, the two-dimensional reactive Euler equations were solved, which thus explicitly ignores diffusion effects. Instead, deflagrative burning on reaction surfaces was driven through numerical diffusion associated with the finite-volume scheme adopted. The complete set of conservation laws for mass, momentum, total energy, and ith chemical species solved here is ∂ρ ∂t + ∇ · (ρu) = 0 (1) where ρ, u, p, Y i ,ω i , refer to the density, velocity vector, pressure, mass fraction of the ith species, and the reaction rate of of the ith species, respectively. The total specific energy for a thermally perfect gas is given by where h i is the enthalpy of the ith species, and the temperature (T ) is determined by the ideal gas law, where R is the specific gas constant. Finally, the speed of sound is computed using the chemically frozen ratio of specific heat capacities, γ = c p /c v , through The specific heat capacities, c p and c v , and enthalpies for each species, h i , are determined by the usual temperature dependent NASA polynomial approximations [21] for a multi-component gas. Since complete detailed hydrocarbon chemistry descriptions are not amenable to high resolution simulations, we instead applied a thermochemically derived four-step global reaction mechanism [17], which has been calibrated to reproduce various constant-volume and one-dimensional combustion characteristics for methane-oxygen mixtures [18,19,20]. While reduced elementary reactions mechanisms have been successfully applied at micro-scale resolutions to study transverse detonations in methane-oxygen mixtures [16], for example using only 13 species and 35 reactions, the 4-step model was adopted instead owing to its much lower overhead.
The low overhead of the four-step model permitted hundreds of simulations to be performed for a wide range of quiescent pressures and resolutions in a timely manner. In this model, we considered only the evolution of the global species R0, R1, P1, and P2, into which the reactant and product species were lumped as equivalent groups. As a result of this grouping, the NASA coefficients of each group of species were determined from the sum of the individual species coefficients weighted by their mole fraction in the species group [17]. The reaction paths were built by fitting reference data from constant-volume processes computed using the detailed GRI-3.0 mechanism [22] in Cantera [23]. Although newer mechanisms have been developed for high-pressure C1-C4 combustion [24,25], the selected GRI-3.0 mechanism was deemed appropriate since the conditions encountered in this study are moderate, up to T = 1500 K and p = 3.5 atm in the unburned gas, well within the range of pressures for which the GRI-3.0 mechanism was optimized. The reaction path fitting was done by substituting the global species of a reactive mixture (R, P1, and P2) into the process while conserving the overall thermodynamic properties. The reaction paths, and the corresponding reaction rates and orders, were acquired by modeling the reaction as having two thermally neutral induction-regime paths, two irreversible exothermic reaction paths that convert R to P1 and P2 separately, and an additional equilibrium step between P1 and P2. In this scheme, the absolute reaction rate constants k_i1, k_i2, k_r1, k_r2, k_ef, and k_er, and the reaction order s_0, depend only on the local thermal state of the mixture, while the stoichiometric coefficients follow [17,18,19,20]. For completeness, Fig. 1 demonstrates the four-step model's ability to capture constant-volume ignition delay times for stoichiometric methane-oxygen over a wide range of initial temperatures and pressures when compared to the GRI-3.0 mechanism. Also, Fig. 2 compares the temperature profiles obtained behind a Mach 6.35 (2262.6 m/s) shock in stoichiometric methane-oxygen at T_0 = 300 K and p_0 = 5.5 kPa using the four-step model, conventional one- and two-step calorically perfect gas models [8], a one-step model with temperature-dependent heat capacities, and the detailed GRI-3.0 mechanism [22]. Here we first note that the conventional one- and two-step perfect gas models [8] do not capture the correct post-shock or post-reaction state variables (i.e., temperature). Although their induction lengths have been tuned to the conditions behind the given incident shock strength, we point out that such tuning was effectively performed at the wrong temperature (and pressure). Should a second shock form in the shocked mixture, then, since the induction lengths were tuned only to the conditions behind the first shock, it is very likely that the ignition time would not be correct, as the state variables would deviate further from the detailed chemistry. We also note that the one-step model performs poorly at minimizing heat release in the induction zone, which of course impacts the local ignition delay times and their gradients behind the shock. In fact, it was previously demonstrated that the temperature gradients capable of allowing detonations to form are much shallower when calculated using detailed chemical models than those predicted by simple chemical models [26].
This is likely due to the sensitivity of local ignition delay times and coupling of shock and reaction zones to the temperature of the gas.
Although a one-step combustion model with temperature-dependent heat capacities performs better at capturing the post-shock states, as shown, and could be tuned to reproduce the ignition delays over a wide range of temperatures and pressures, we note that it yields an incorrect product state. In this simple model, we considered only the reaction R0 → P1, governed by an Arrhenius reaction rate law with pre-exponential factor A = 2 × 10^12, exponents m = 0.2 and n = −0.6, and activation temperature (E_a/R) = 20,562 K.
In this model, equilibrium with products forming species P2 was not considered, yet the formation of such incomplete combustion products is known to be heavily dependent on the state variables and also highly influential on the final enthalpy obtained [17]. This shortcoming would likely lead to incorrect detonation velocities, since the enthalpy change (or heat release) thus differs significantly from the detailed chemistry.
The four-step model used, on the other hand, is a minimal global combustion model with equilibrium effects that is able to reproduce the detailed detonation structure, with the exception of minor departures in the reaction zone stiffness, as shown in the figure. the four-step model [18] and compared to the detailed GRI-3.0 mechanism [22] for a wide range of initial densities (ρ) and temperatures (T ).
Finally, to solve the governing equations, Eqs. (1)-(4), the second order HLLC method [27] was applied, using the van Albada slope limiter [28]. The usual operator splitting approach was applied, where the hydrodynamic evolution was solved first using a CFL number of 0.4, followed by adding the first order source term evaluation across the same time-step. The source terms (ω i ) were evaluated using the implicit backward Euler method based on Newton iteration, and implemented using the Sundials CVODE libraries [29]. Adaptive mesh refinement (AMR) [30] was also applied to compute detailed solutions only in regions of interest, such as the shocked and un- at T 0 = 300 K and p 0 = 5.5 kPa computed using conventional one-and two-step perfect gas models [8], a one-step model with temperature dependent heat capacities, the four-step model [18], and the detailed GRI-3.0 mechanism [22]. burned gas. For this study, a computational cell was flagged as needing refinement if Y R1 > 0.001, or if Y R0 > 0.99 and ρ > 1.1ρ 0 , where ρ 0 is the density of the quiescent fluid. Cells were also flagged as needing refinement when density changes of more than 10% occurred between grid levels. Finally, the grid was always refined along the boundary of the internal half-cylinder geometry. When a cell was flagged as 'bad', or needing refinement, the badness was also diffused by a few cells to ensure smooth solutions across fine-course cell boundaries. The base grid resolution for all cases was 10 mm, with anywhere between 4 to 8 additional levels of refinement applied, depending on the desired minimum grid resolution.
Domain, initial and boundary conditions
This study simulated stoichiometric methane-oxygen detonation interactions with a half-cylinder obstacle, corresponding to the past experiments [5,6]. A 150 mm radius half-cylinder was modeled in a two-dimensional channel of 200 mm height and 1.75 m length, as shown in Fig. 3. An initially overdriven Zel'dovich-von Neumann-Döring (ZND) solution was imposed at x = 0, oriented to propagate to the right in the positive x-direction, while the left boundary is placed at x = −40 mm. In order to overcome startup errors associated with sharp discontinuities, the ZND solution was given an overdrive factor (f) of 1.2, where f = (U_s/U_CJ)². Here, U_s is the overdriven shock speed, while U_CJ corresponds to the CJ-detonation speed. The right boundary condition is of zero-gradient type, while the remaining boundaries, including the cylinder surface, are of symmetry type, in which only the normal velocity components are reversed. The left boundary condition thus deliberately creates a Taylor wave structure [31], whose intention is to slow the overdriven wave down to the CJ-speed prior to its interaction with the cylinder. Once the CJ-speed is reached, the flow of products becomes choked; beyond this, the expansion wave has no effect on the detonation wave front. The leading edge of the cylinder is placed 500 mm from the initial ZND wave. This distance was found to be sufficiently long to permit the detonation wave to settle to within 3% of the CJ-detonation speed by the time the wave reached the throat of the cylinder, for all initial pressures and resolutions considered. Finally, the cylinder surface was treated using a conventional staircase approach. Implications of this approach, including our justification for its adoption, are discussed in Appendix A. In order to observe the different regimes expected (detonation quenching, critical ignition, critical detonation re-initiation, and transmission), the initial pressure was varied from p_0 = 3.5 kPa to 16 kPa. This choice of pressure range was based on the experimental results of Bhattacharjee [5]. The initial temperature for all simulations was T_0 = 300 K.
Regimes observed at the first shock reflection
In this section, we present an overview of the several different outcomes observed when varying the initial pressure. The minimum resolution used here was 78 μm, which was found sufficient to capture the different regimes and flow features of interest. At this resolution, 21.6 to 55.7 grid points per ZND induction length (or 4.7 to 11.9 grid points per reaction length) were captured, depending on the initial quiescent pressure.
This resolution is also consistent with past numerical investigations that used one- and two-step combustion modeling [6,8]. The effects of grid resolution are presented in a separate appendix. In general, six different possible outcomes were observed, which were found to depend somewhat on the initial pressure. To classify each case, both qualitative and quantitative information was used to determine the category of behavior seen in each simulation. Figure 4a shows detonation quenching immediately after clearing the obstacle: a detonation is quenched when the cellular pattern disappears, signaling a decoupling of the shock front and reaction zone that is never re-established into a detonation. Figure 4b shows critical ignition (CI) without detonation re-initiation. In this case, significant burning of the reactants occurred behind the Mach shock, but a reaction zone did not couple to the shock; this can be seen in the soot foil image, where the detonation took longer to fully quench. Figure 4c shows the main regime of interest in this study, critical detonation re-initiation (CDR). This is characterized by an area of complete quenching, followed by re-initiation that features one or more transverse detonations. In the particular case shown, the transverse detonations are characterized by dark bands that started at the center and propagated toward the upper and lower boundaries of the channel; in other cases, a transverse detonation started near the lower boundary and propagated upwards. Figure 4d shows critical detonation re-initiation without transverse detonation (CDR-NTD), which also had an area of complete quenching, but in which a transverse detonation was not observed before the detonation became fully established again. It is important to note that this regime was a less frequent numerical outcome than CDR, except at the lowest resolution of 625 μm; Bhattacharjee likewise reported this experimental outcome to be rare and not easily reproduced [5]. In the simulation shown, a localized explosion occurred near the top boundary, but this was not the mechanism that triggered detonation re-initiation along the Mach shock front. Figure 5 gives a quantitative picture of the overall combustion regime behavior. Unattenuated detonation transmission (Fig. 5f) began overdriven (up to ∼69.9% above U_CJ) and then decayed to an average propagation speed of 2417.54 m/s, ∼3.5% above U_CJ. The initially high speeds for the latter five cases are due to a shock reflection along the bottom boundary.
Detonation quenching
In Fig. 6a, we show that at sufficiently low initial pressures, for example at p_0 = 9 kPa, the simulated density field of the fully quenched detonation wave resulting from diffraction around the obstacle compares well qualitatively with the schlieren photograph of a past experiment [5]. The figure shows a clear separation between the shock front and reaction zone. This separation occurs because the detonation front is allowed to expand after the obstacle, leading to an increased surface area of the shock.
The increased area leads to a weakened shock strength, which lengthens the ignition delay times and therefore increases the distance between the shock front and reaction zone [32]. The various shock dynamics, including the incident shock, Mach shock, transverse, and reflected waves, are all captured well compared to the experiment. The slip line and forward jetting are also captured. We note that although the experimental image was captured at a much lower pressure (p_0 = 5.5 kPa), few experiments were conducted in the range of p_0 = 6 to 10 kPa, and thus an exact quenching limit was not found experimentally. Quantitatively, the normalized Mach shock speed on the bottom wall (U_s/U_CJ) was found to compare favorably to values measured experimentally by Bhattacharjee [6], as shown in Fig. 7. In this figure, the abscissa is the normalized distance from the center of the cylinder, S = (x - x_c)/D, where x_c is the cylinder center location and D is the cylinder diameter.
From our simulations, failure always occurred for p_0 ≤ 8 kPa at the 78 µm resolution, and sometimes up to p_0 = 10.5 kPa; in this regard, the outcome appears to be stochastic in this pressure range. By measuring the detonation cell size just prior to its interaction with the cylinder, using an autocorrelation procedure [33], we find that the limit to ensure failure, at this resolution, is (d_H/λ)_fail < 4.3, where d_H is the size of the gap between the cylinder and the top boundary, and λ is the cell size evaluated just before the interaction. This result is about 5 to 10 times greater than Bhattacharjee's result, where (d_H/λ)_fail = 0.5 to 1.0 [5]. Since experiments were not documented between p_0 = 6 and 10 kPa, it is likely that the actual limit for (d_H/λ)_fail is higher than reported experimentally.
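To illustrate the kind of cell-size measurement used above, the following Python sketch estimates a dominant cell spacing from a one-dimensional cut of a (here synthetic) numerical soot foil via its autocorrelation. It is a generic autocorrelation estimate under our own assumptions, not the specific procedure of Ref. [33]; the 50 mm gap d_H follows from the 200 mm channel height and 150 mm cylinder radius stated earlier.

```python
import numpy as np

def cell_size_autocorr(signal, dy):
    # Estimate the dominant cell spacing from the first significant
    # off-zero peak of the signal's autocorrelation.
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[s.size - 1:]
    ac /= ac[0]
    ac = np.convolve(ac, np.ones(5) / 5, mode="same")   # light smoothing
    for k in range(1, ac.size - 1):
        if ac[k] > 0.2 and ac[k] > ac[k - 1] and ac[k] > ac[k + 1]:
            return k * dy        # lag of first strong peak = cell size
    return None

rng = np.random.default_rng(42)
dy = 78e-6                       # m, grid spacing (78 um resolution)
y = np.arange(0.0, 0.06, dy)
# synthetic soot-foil cut: ~4 mm cells plus noise (illustrative only)
foil = np.cos(2 * np.pi * y / 4e-3) + 0.3 * rng.standard_normal(y.size)

lam = cell_size_autocorr(foil, dy)
d_H = 0.200 - 0.150              # m, gap between cylinder and top wall
print(f"cell size ~ {1e3 * lam:.2f} mm, d_H/lambda ~ {d_H / lam:.1f}")
```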
Critical ignition without detonation
In Fig. 6b, at a slightly elevated pressure of p_0 = 9.35 kPa, a critical ignition regime was observed, where significant burning occurred behind the Mach shock without a reaction zone coupling to it; a closer coupling of shock and reaction zone was, however, observed in the experiments [6]. The experiment showed nearly all of the gas behind the Mach shock as burned, whereas the simulation had a smaller area of burned gas. We note here that previous numerical modeling of the critical ignition case [34] has highlighted the importance of providing closure for turbulent diffusion in order to properly capture forward jetting and mixing of combustion reactants and products in this critical ignition regime. At this time, we have not provided such closure.
Critical detonation re-initiation
In the range 8.5 ≤ p_0 ≤ 13 kPa, different critical outcomes were observed, where fully quenched detonations re-initiated after the first shock reflection from the bottom boundary. Figure 6c shows a typical critical detonation re-initiation outcome. We draw attention to this particular simulated critical outcome, as the sustained transverse detonation feature was never observed in past numerical attempts at modeling this scenario [4,6,8,9]. We attribute this success to the adoption of the thermally perfect four-step combustion model, which was calibrated to reproduce the correct ignition delays at different temperatures and pressures when compared to the GRI-3.0 mechanism [22]. It is worth noting that while past numerical simulations have been successful in capturing transverse detonation waves in critical diameter problems involving hydrogen [35,36] and highly irregular near-limit detonation propagation in methane-oxygen [16], such simulations have generally needed to use detailed elementary combustion mechanisms. Exceptions to this are highly irregular critical detonation propagation [37] and critical detonation diffraction [38], where the transverse detonations were in fact observed using one-step models. However, the combustion models in these cases required calibration of the heat release to force the desired unstable detonation behavior, and did not necessarily reproduce all of the combustion characteristics of a particular reactive mixture.
Critical detonation re-initiation without transverse detonation
At p_0 = 10 and 11 kPa, critical cases were observed where detonation re-initiation occurred only along the Mach stem. This is shown for the p_0 = 11 kPa case in Fig. 20, discussed below. It is important to note here that in past numerical investigations based on calorically perfect one- or two-step model approaches [4,6,8,9], this mechanism of detonation re-initiation was almost always observed. In this numerical investigation, however, and also in the experiments [5], this outcome was observed less frequently. In fact, most critical detonation re-initiation cases, regardless of the specific scenario considered, have long been speculated to occur with the presence of a transverse detonation [2,4,11,39].
Thus, the physical occurrence of this specific outcome, CDR without transverse detonation, is not typical and also appears to be stochastic. When the past experiment was repeated at the same initial pressure, different results were obtained, and the observed CDR without transverse detonation outcome was not reproducible [5]. Likewise, when this simulation was conducted again (at p_0 = 10 kPa), different behavior was observed (i.e., critical transmission), which reveals how sensitive the regimes are to the state of the cellular structure in the pore prior to the diffraction process.
Critical transmission
Critical transmission is a regime that was not observed in the past experiments.
This could be attributed to the lack of soot foils and a sufficient number of diagnostic schlieren images in the experiments. In the experiments, the observation window was limited to a single location, so the broader perspective was likely missed. This case is comparable to the critical diameter problem [11], where the wave can quench locally but re-initiate on its own before interacting with a shock reflection from the wall boundary. Critical transmission is therefore a newly observed category for this specific scenario. This regime is characterized by partial quenching of the diffracted wave front, as shown in Fig. 9a. Figure 9b shows that a transverse detonation did form, and traveled through the quenched segment. This largely resembles the process by which detonations survive the critical diameter problem [40,41], and the origins of the transverse detonations are likely strongly related to this case as well. As the wave front propagated forward, more quenching and re-initiation events took place, as shown in Fig. 9c, but there were still portions of the wave that maintained the detonation structure. This is further supported by the soot foil image from Fig. 4e. Eventually, the wave settled into a fully established detonation wave (Fig. 9d) and maintained that state as it propagated through the remaining length of the channel.
Unattenuated detonation transmission
Finally, at sufficiently high pressures, i.e., for p_0 ≥ 10 kPa at the 78 µm resolution, unattenuated detonation transmission was observed, without any local quenching during the diffraction phase. After encountering the half-cylinder obstacle, the detonation structure was minimally affected, resulting only in some variation of cell size, i.e., an increase in average size compared to the structure before the obstacle interaction (refer to Fig. 4f).
Effects of grid resolution
In order to fully interpret the simulation results obtained in this study, it was necessary to perform a grid resolution study to understand the influence of changes in grid resolution. This was especially important since Euler simulations involving detonations are well known to give different solutions with changes in resolution [4,42].
In Euler simulations, deflagrative burning at the interface of the burned and unburned gas can only occur through numerical diffusion. Since a finer resolution results in decreased numerical diffusion [43], the laminar burning rates also decrease. At the same time, turbulent motions are damped at coarser resolutions due to increased numerical diffusion. In general, multiple grid points per detonation induction and reaction length are required, so that the details of the reactive hydrodynamic structures can be captured.
In this study, simulations were conducted at resolutions as coarse as 625 µm (∼ 3.8 to 15.6 grids per induction length) and as fine as 39 µm (∼ 50.3 to 95.6 grids per induction length). A visual summary of the outcomes at each resolution and initial pressure is shown in Fig. 10 (top). In total, more than one hundred simulations were conducted.
It was observed early on that as the resolution becomes finer, the range of pressures encompassing the six categories of behavior shifts upward to higher pressures. For example, the range of pressures that encompasses the different regimes between detonation quenching and transmission is 4 < p_0 < 7.5 kPa for the coarsest resolution (625 µm), and 8 < p_0 < 14 kPa for the finest resolution (39 µm). As a result, different regimes can be observed at the same pressure across resolutions. Also, while critical outcomes were observed for the coarsest resolutions (625 µm, 312 µm, and 156 µm), the range of pressures for these regimes is much lower than what was observed experimentally, where CDR was only observed for 10.4 ≤ p_0 ≤ 16.8 kPa [5]. Even though the range of critical pressures was higher for the 78 µm and 39 µm resolutions, where CDR was observed up to p_0 = 13 kPa at both resolutions, the departure from the experimental limit can be attributed to losses in the experiments due to boundary layers and heat conduction through the shock-tube walls, which were not accounted for in the simulations. Simulations of the critical diameter problem also exhibit this behavior: it is generally observed that the critical pressures are lower than in the corresponding experiments [36]. The principal result of the resolution study was that only the 78 µm resolution captured all of the possible critical outcomes, and it was the only resolution to capture critical ignition without detonation re-initiation. Furthermore, we note that at the finest resolution of 39 µm, critical ignition and CDR without transverse detonation were not observed. It can therefore be assumed that the amount of numerical diffusion was too low at the finest resolution to allow for an adequate representation of the turbulent deflagrative burning that would normally occur in these regimes (critical ignition and CDR without transverse detonation).
The 78 µm resolution was therefore deemed sufficient for observing the mechanisms involved in the detonation re-initiation phenomenon, and was also coarse enough that computational efficiency was ensured. The range of pressures for the different critical outcomes (8.5 ≤ p_0 ≤ 13 kPa) was also comparable to the past experiments [5]. Also shown in Fig. 10 (bottom) are the outcomes at each resolution quantified by the ratio of the gap size (throat) to the measured mixture cell width (d_H/λ). Here, the mixture cell width (λ) was determined for each simulation by applying the autocorrelation procedure [33] to a numerical soot foil window where 0.45 ≤ x ≤ 0.50. We first note that the cell size did not converge with increasing resolution; however, this was expected, since cell sizes obtained in past numerical Euler simulations of methane-oxygen mixtures also did not converge with resolution [43]. In fact, it has been demonstrated that for highly irregular mixtures, closure of subgrid-scale turbulent mixing and combustion is required to resolve the correct cellular structure [15]. Despite this, the results in Fig. 10 (bottom) reveal that the outcomes remained grid-insensitive at leading order for the resolutions considered. At the 78 µm resolution, the critical range was found to be 3.6 ≤ (d_H/λ)_crit ≤ 5.7, while at 39 µm this range was shifted slightly lower, to 3.2 ≤ (d_H/λ)_crit ≤ 4.9. For all resolutions considered, 3 ≤ (d_H/λ)_crit ≤ 6 to leading order. In fact, this outcome resembles that of the detonation diffraction problem in rectangular channels, where critical outcomes of detonation survival are observed following the abrupt area expansion for channel width to cell size ratios (W/λ) in the range of 3 to 10 [44,45,46]. In this problem, however, the presence of the cylinder confinement should permit critical transmission of the detonation at larger characteristic mixture cell sizes compared to cases of abrupt expansion. For example, the converging side of the obstacle experiences a reflected shock, which acts to decrease the effective cell size across the throat, much like detonation propagation into a converging wedge [47,48,49], and so the actual effective (d_H/λ)_crit ratio may in fact be larger. However, the cell size at the throat is difficult to measure, as the reflected wave may not have reached the top wall by the time the detonation reaches the throat. Also, on the diverging part, a weaker expansion wave is initially felt by the detonation front. Eventually, however, the expansion of the full 90° turn is felt by the detonation front. Thus, the diverging obstacle effect, in this case, may be predominantly to delay the quenching or critical initiation distance. Much like the critical diameter problem, there clearly exists a relationship between the mixture cell size and gap size, and most likely also the cylinder geometry itself.
Despite the differences in the regimes observed at different resolutions, and the pressures at which they are observed, the occurrence of the CDR regime is qualitatively similar across resolutions. Figure 11 shows a comparison of the density gradient evolution obtained for CDR outcomes at the five different resolutions tested: p_0 = 4.5 kPa at 625 µm resolution (frames a-c), p_0 = 7 kPa at 312 µm (frames d-f), p_0 = 9.15 kPa at 156 µm (frames g-i), p_0 = 10.25 kPa at 78 µm (frames j-l), and p_0 = 10.5 kPa at 39 µm (frames m-o). As the resolution becomes finer, more details of the various features present become visible. All resolutions include the key features of a reflected transverse shock, incident shock, triple point, transverse detonation wave, and extended transverse wave. This extended wave is an oblique shock wave and reacting slip line that connects the triple point to the transverse detonation wave. It is a feature that was not explicitly discussed in Bhattacharjee's thesis due to a lack of available resolution in the experimental schlieren photographs [5]. This feature, however, has been captured numerically before using skeletal detailed elementary reaction mechanisms for Mach stem detonation re-initiation in critical detonation propagation of stoichiometric methane-oxygen [16] and detonation initiation arising from a double Mach shock reflection in propane-oxygen [50]. At all resolutions, a shock reflection or local explosion drove local pressure waves outward. In most cases, the coupling of these pressure waves to the rapid energy release due to chemical reaction led to the initiation of the transverse detonation wave first, and then the detonation along the Mach stem. Sometimes these reaction waves originated from a localized explosion event (see Fig. 11k), but in other cases the pressure waves formed directly due to auto-ignition behind the shock reflection itself. The consistency of features observed across resolutions validates the strategy adopted to investigate detonation re-initiation when a transverse detonation is present. The main difference between resolutions was the pressure at which the CDR outcome occurred. Also, there was a prominent pocket of unburned gas present at the finest resolution, seen in Fig. 11n, which was not present at the coarsest resolutions. This can likely be explained by the presence of higher numerical diffusion at coarser resolutions, which leads to quicker burning of shocked, unburned gas.
Origins and the role of the transverse detonations during re-initiation
Critical detonation re-initiation cases involving a transverse detonation were observed for initial pressures ranging from 8.5 ≤ p_0 ≤ 13 kPa at the 78 µm resolution. All other possible cases were also observed, with some random occurrence, in this pressure range; this behavior is consistent with the experimental observations of Bhattacharjee [5], who noted the stochastic nature of outcomes at critical pressures.
Although the exact locations and timings of each detonation re-initiation event differed from simulation to simulation, we found that in most cases detonation re-initiation occurred through a local explosion event that was triggered by the passing of a transverse pressure wave over the interface that separated burned from unburned gases. For example, a detailed sequence of events where detonation re-initiation occurred for p_0 = 9.5 kPa is shown in the density gradient evolution of Fig. 12. In Fig. 12a, two triple points have formed due to the propagation of reflected waves from both the top and bottom boundaries of the simulation. These triple points traveled toward each other and eventually collided, as shown in Fig. 12b. This caused the formation of new reflected waves with increased temperature and pressure behind them (Fig. 12c). At the same time, a pocket of unburned gas formed behind the various shock dynamics (Fig. 12d). The reflected waves propagated through areas of both shocked and unburned gas as well as the burned gas, and passed through the latter more quickly due to its lower acoustic impedance (Fig. 12e). To gain more clarity on the formation of the detonation waves observed in Fig. 12, detailed temperature, pressure, and local ignition delay time profiles are shown in Figs. 13 and 14. We recall that the four-step combustion model has demonstrated its ability to at least mimic a wide spectrum of detailed ignition delay times and steady detonation profiles [20]. Although gradients in τ_ig were shallow in this case, it is important to point out that non-uniformities did exist in the ignition delay time profiles of Figs. 13 and 14, and that such gradients may have promoted the propagation of the reaction wave until a sustained detonation formed [53]. In this case, the detonation front (d2) eventually propagated outward in every direction and re-established the cellular structure of the detonation prior to quenching. Moreover, it is well known that the detonation cellular structure has a strong dependence on pressure [54]. A second case is shown in Figs. 15 and 16. Figure 15a shows the initial formation of the reflected transverse shock, Mach shock, and triple point. In this case, a flow field that resembled the critical ignition case had formed, where a significant amount of gas had been ignited behind the Mach shock, yet the Mach shock velocity was measured to remain ~12% below the CJ velocity prior to detonation re-initiation, as shown previously in Fig. 7. An explosion event occurred on the surface of the pocket of unburned reactive gas as a result of the passing of the reflected shock wave. This explosion event triggered a transverse detonation wave and also generated local pressure waves that propagated outward toward the Mach shock (Fig. 15b). The explosion event also directly consumed the pocket of unburned gas. The supported Mach shock transitioned into a self-sustaining detonation wave, as shown in Fig. 15c. At the same time, the transverse detonation wave continued to consume shocked but unburned gas behind the incident shock wave (Fig. 15d). The details of the detonation initiation in this case are revealed in Fig. 16. Here, much like the 9.5 kPa case discussed previously, the reflected transverse shock (sw1) passed over the burned/unburned gas interface. As a result, the reaction rate of the hot spot (hs) was enhanced by Richtmyer-Meshkov instabilities, which generated a local pressure rise, as observed in the pressure plot of Fig. 16b. This led to the direct rapid coupling of the transverse shock and reaction zone, which thus initiated the transverse detonation (d1). In Fig.
16c, a shock wave (sw2), generated from the local explosion of the hot spot (hs), propagated towards the Mach shock. At first this shock travelled in the burned gas, but then initiated the detonation along the Mach shock (d2) through pressure amplification in a short region of shocked and unburned gas that contained gradients in ignition delay times. This can be seen in Figs. 16d and e.
At elevated pressures, the transverse detonation was observed to form shortly after the initial shock reflection on the bottom wall. This is shown for p_0 = 13 kPa in the density gradient evolution of Fig. 17. This rapid initiation of the transverse detonation wave appears to be similar to the formation previously shown by Lau-Chapdelaine [8], who observed a transverse detonation initiated directly from the shock reflection using a two-step model at p_0 = 13.9 kPa. However, in this past work, the transverse detonation was not self-sustained as it was in the current study. A main difference in this case, compared to the other two cases discussed above, is the split of the decoupled shock front and reaction zone, i.e., a pocket that formed from the shock reflection off the top wall. This created two zones of shocked yet unreacted gas (Fig. 17c). Because of this, two transverse detonations formed, originating from the bottom of the channel (Fig. 17d). The second transverse detonation served to consume the second decoupled reaction zone (the pocket), and disappeared after that. The details of the detonation initiation in this case are shown in Fig. 18. Much like the previous two cases, the reflected transverse shock (sw) passed over a burned/unburned gas interface, enhancing its combustion and increasing the reaction rate through Richtmyer-Meshkov instabilities. This led to the rapid growth of the shocked hot spot (h1) into gas with a favorable ignition delay time, as shown in Fig. 18a. Much like the 9.5 kPa case, a hot spot (h2) formed spontaneously in the region of lowest ignition delay, as shown in Fig. 18b. In this case, however, the rapid ignition of the newly formed hot spot (h2) was sufficient to trigger a local pressure increase at the location shown (i1). In fact, this hot spot was found to transition to detonation (d1) through the same pressure amplification mechanism previously shown for the 9.5 kPa case. This is shown in Fig. 19, which presents the temperature, pressure, and log(ignition delay) profiles at different times along the horizontal dashed line shown in Fig. 18b, at y = 0.002 m.
Since (∇τ_ig)^(-1) > U_CJ, a spontaneous wave was able to form, which eventually developed into the detonation (d1). In Fig. 18c, another region of increased pressure was generated where the transverse shock (sw) met an unburned/burned gas interface.
This ignition spot (i2) led to the direct coupling of shock and reaction zone, which thus initiated detonations (d2) and (d3), shown in Fig. 18d. In fact, while (d2) was initiated directly by the rapid compression and energy deposition, (d3) was found to also develop through the pressure amplification mechanism into the gas containing mild gradients of ignition delay times. Eventually, it was detonation (d3) that first reached the Mach shock to initiate the self-sustained detonation (d4). Although the detonation on the Mach shock, in all of the cases presented above, originated from the passing of a transverse shock over a burned/unburned gas interface, we note that it is also possible for a detonation to be initiated by the spontaneous formation of a hot spot through the Zeldovich gradient mechanism [53,55], such as the formation of detonation (d1) discussed here. In fact, this mechanism of spontaneous wave formation leading to detonation initiation on the Mach shock was found to be the case for p_0 = 10.5 kPa at the 39 µm resolution (not shown). In all situations, however, pressure amplification of reactive waves was found to be a common feature in the re-establishment of the detonation wave in the CDR regime.
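The spontaneous-wave criterion invoked above is easy to state operationally: a reaction front sweeping along an ignition-delay gradient moves at u_sp = |∇τ_ig|^(-1), and where this exceeds U_CJ a detonation can emerge. The sketch below evaluates the criterion for an invented, shallow τ_ig(x) profile; both the profile and the U_CJ value are illustrative assumptions, not model output.

```python
import numpy as np

# Spontaneous-wave criterion: u_sp = |d tau_ig / dx|^(-1) compared to U_CJ.
# The tau_ig profile and U_CJ below are made-up, illustrative numbers.

U_CJ = 2335.0                      # m/s, assumed CJ speed
x = np.linspace(0.0, 0.01, 200)    # m, distance through the shocked gas
tau_ig = 5e-6 + 2e-4 * x           # s, shallow linear ignition-delay profile

u_sp = 1.0 / np.abs(np.gradient(tau_ig, x))
print(f"spontaneous wave speed ~ {u_sp.min():.0f} m/s "
      f"(exceeds U_CJ = {U_CJ:.0f} m/s? {bool(u_sp.min() > U_CJ)})")
```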
In all of these examples, the transverse detonation wave was an important and very common feature in detonation re-initiation. For the onset of detonation, there appears to be a shock compression event that leads to an explosion event. While the sequence in which the transverse detonation forms varies, it is almost always the avenue by which the re-established detonation front extends to the entire domain height, creating a fully established and self-sustained detonation wave. Finally, we note that transverse detonations also appear to be the main feature through which detonations survive complete quenching in the critical transmission regime, as shown previously in Fig. 9.
Detonation re-initiation without the transverse detonation
As mentioned earlier, CDR without a transverse detonation was an outcome that was also observed. Much like the past experiments [5], this outcome was not as common as the CDR regime with transverse detonations. Figure 20 shows the numerical details, for p_0 = 11 kPa, of how the detonation front re-initiated directly on the Mach shock as the result of a triple point collision of two transverse waves; however, no transverse detonation formed. This could be attributed to the lack of an explosion event further away from the Mach shock, which was normally found to be triggered on a decoupled burned/unburned gas surface behind the Mach shock. We do note, however, that for this case a localized explosion did eventually occur near the top boundary of the channel, around x = 1.15 m (S = 1.8) and y = 0.16 m, as shown in Figs. 20e and f. This explosion is similar to the early events of CDR with a transverse detonation; however, in this case it was not the mechanism that re-initiated the detonation front.
We also note that when CDR without transverse detonation was observed at a lower pressure (p_0 = 10 kPa), such a localized explosion did not occur. In this case, whose numerical soot foil is shown in Fig. 21, the wave quenched globally after the initial re-initiation and was later re-initiated once more, this time with a transverse wave. We thus find that CDR without transverse detonation is a regime that is sensitive to the formation of local explosion events, or to quenching of the wave front.
Based on both the numerical evidence from this study and the past experiments [5], even though CDR without a transverse wave is possible, a much more likely CDR outcome is that a transverse detonation is triggered, either by a local explosion on a burned/unburned gas surface, or directly by a shock reflection. This is in contrast to the re-initiation mechanisms observed using past one- and two-step combustion modeling approaches [6,8]. Thus, it is very likely that in order to capture, numerically, the event that triggers the transverse detonation, the ignition response of the combustion model must adapt appropriately to changes in the thermodynamic states behind shock waves and reflected shocks. Since the past one- and two-step combustion modeling approaches [6,8] applied calorically perfect gas assumptions, with tuning parameters calibrated only to recover the ignition delay behind a specific predetermined thermodynamic state, errors in temperature, pressure, and also the ignition delay would be expected behind multiple shock dynamics of varying strength. While Lau-Chapdelaine attributed the lack of a self-sustained transverse detonation to the absence of proper resolution of Richtmyer-Meshkov instabilities [8], we instead propose that this shortcoming arises from the inability of simple combustion models to accurately model ignition delays; such models therefore cannot accurately capture local explosion events behind transverse shock waves at burned/unburned gas surfaces and their subsequent propagation into gases with non-uniform ignition delay times. We believe this is likely why past numerical attempts have almost always led to detonation re-initiation along the Mach shock, without the self-sustained transverse detonation wave.
Triple point speeds and the transverse detonation strengths
In order to investigate further the quantitative details of the transverse waves, the speed of the triple point was measured for a few different cases. For the CDR cases, the measured mean speeds at the 78 µm and 39 µm resolutions were overdriven, with U_tp/U_CJ = 1.19 obtained at 39 µm and a value within 2% of this at 78 µm. Finally, CDR with no transverse detonation shows a triple point speed that oscillates in magnitude above and below the CJ velocity through the entire domain height, with a mean speed of U_tp/U_CJ = 1.05. As a reference, experimental measurements of the Mach shock speeds for CDR at three different pressures from Bhattacharjee [5] have been included, whose estimated speeds were found to be in the range of 1.05 U_CJ to 1.17 U_CJ. The degree of overdrive experienced by the wave front when a transverse detonation wave is present is consistent between the simulations and the past experiments. Since the triple point of CDR without a transverse detonation travels close to the CJ speed, while CDR is always overdriven, we can attribute the overdriven state of the Mach shock and triple point to the presence of the transverse detonation. It is very likely that the rapid energy release and subsequent expansion of the products behind the transverse detonation act as a piston to overdrive the Mach stem through the re-initiation process. This mechanism also acts to explain the overdriven speeds estimated experimentally. To quantify the strength of the transverse detonations, mass-weighted (Favre) ensemble averages of the shocked gas state were evaluated in sample windows, φ̃ = ⟨ρφ⟩/⟨ρ⟩, where φ represents the scalar of interest that was averaged and ⟨·⟩ denotes an ensemble average. Once the mass-weighted velocity of the transverse wave was determined relative to the shocked gas, its strength was obtained by normalizing this velocity by the mass-weighted average speed of sound in the shocked gas, i.e., M_T = ũ_T/c̃.
Then, from the ensemble-averaged density and pressure, ρ̄ and p̄, the Mach number of the CJ solution (M_CJ) associated with the shocked and unburned state was determined. We thus present M_T/M_CJ vs. the normalized distance from the center of the cylinder (S) for several different initial pressures in Fig. 23. In all CDR cases simulated, including those not shown, we found that the transverse detonation was in fact a CJ detonation (within 1%). Also shown for comparison are estimated transverse detonation strengths from Bhattacharjee's experiments [5] for three different pressures.
Although Bhattacharjee had estimated the transverse detonation strength to vary from 0.6 to 1.2M CJ , we note that significant errors likely existed in the experimental estimation of the sound speed from schlieren images. Moreover, Bhattacharjee's estimate did not consider the relative difference in the velocity vectors of the triple point and the gas behind the incident shock.
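To make the normalization procedure concrete, the short Python sketch below applies a mass-weighted (Favre) ensemble average to synthetic sample-window data and forms M_T/M_CJ. All numbers (wave speed, gas state, M_CJ) are invented for illustration; this is our reading of the procedure, not the authors' post-processing code.

```python
import numpy as np

def favre(phi, rho):
    # Mass-weighted (Favre) ensemble average: phi_tilde = <rho phi> / <rho>
    return np.mean(rho * phi) / np.mean(rho)

def transverse_strength(u_wave, rho, u_gas, c, M_CJ):
    # Wave speed relative to the Favre-averaged shocked-gas velocity,
    # normalized by the Favre-averaged sound speed and then by M_CJ.
    M_T = abs(u_wave - favre(u_gas, rho)) / favre(c, rho)
    return M_T / M_CJ

rng = np.random.default_rng(0)
rho = 1.0 + 0.05 * rng.standard_normal(1000)      # kg/m^3 (illustrative)
u_gas = 400.0 + 20.0 * rng.standard_normal(1000)  # m/s, shocked-gas velocity
c = 550.0 + 10.0 * rng.standard_normal(1000)      # m/s, sound-speed samples

ratio = transverse_strength(3000.0, rho, u_gas, c, M_CJ=4.8)
print(f"M_T / M_CJ ~ {ratio:.2f}")  # ~1 would indicate a CJ detonation
```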
Finally, the ignition delay times were calculated for the same sample windows used to estimate the transverse detonation strengths. The resulting profiles reveal a gradient of ignition delay time in the shocked gas, which appears to sustain the transverse detonation wave through the global detonation re-initiation process; this should be investigated in more detail to confirm whether such an ignition delay time gradient is in fact required to sustain the transverse detonation. We do note here, however, that such a gradient in the ignition delay time is expected, since the incident shock of the quenched detonation weakens with time, lengthening the induction lengths of the unreacted gas as more time passes.
Conclusion
In this investigation we applied a thermochemically derived four-step global combustion model [17] to investigate critical detonation attenuation and the role of transverse detonations during detonation re-establishment following interaction with obstacles [6]. Our simulations have demonstrated that this minimal global combustion model is able to capture the sustained transverse detonation feature in this scenario, unlike past applications of simple one- and two-step combustion schemes [6,8]. We attribute this to the fact that the relatively simple four-step model used contains an adequate description to permit the correct ignition and thermodynamic state response when changes in temperature and pressure occur [18,19,20], i.e., behind shocks and reflected shocks. This appears to be required not only to capture the transverse detonation, but also to capture the less frequent situations where detonation re-initiation occurs without a transverse detonation. In both of these cases, accurate treatment of the ignition delay time behind shock compression is important. Perhaps the past applications of one- and two-step combustion models to this scenario did not contain sufficiently steep gradients in ignition delay times to trigger or sustain transverse detonations. Since the four-step model applied gives rise to ignition delay times that respond appropriately to changes in the thermodynamic state, when compared to detailed chemistry, detonations can likely form in shallower ignition delay time gradients compared to the past one- and two-step modeling approaches. In addition, we acknowledge that closure of turbulent mixing is equally important for capturing critical ignition associated with the lower pressure limits of the critical regime. In future work, we recommend coupling the four-step combustion model to the compressible linear eddy model for large eddy simulation approach [15]. For now, however, we draw our conclusions from Euler simulations where turbulent mixing is implicitly controlled through the resolution of the numerical scheme.
In this work, we have found that there exists a relationship between the outcome and the cell size and geometry involved. In this investigation, we found that a range of critical outcomes was possible when 3 ≤ (d_H/λ)_crit ≤ 6, where d_H/λ is the gap size to mixture cell size ratio. In future work, the influence of gap size and cylinder radius should be explored in a parametric investigation. For the critical detonation re-initiation outcome, we have clarified that one principal mechanism through which transverse detonations and detonations along the Mach shock can form is pressure amplification of reaction zones at burned and unburned gas interfaces behind Mach shocks, in the presence of ignition delay time gradients. In this mechanism, the passing of the transverse shock wave over the burned and unburned gas interface leads to enhanced combustion rates through Richtmyer-Meshkov instabilities, which generates the pressure necessary to amplify into a coupled shock and reaction zone, or detonation. These detonations can also form through spontaneous ignition of the gas, i.e., from a hot spot formed by the passing of transverse shocks in regions of lowest ignition delay times, which can ultimately develop through the Zeldovich gradient mechanism [55].
When transverse detonations do not form, it is possible for detonation re-initiation to occur on a Mach shock directly through a triple point collision. However, this outcome is not as common as the former, and was found to be sensitive to local explosions or quenching. Also, since at higher pressures more triple points survive the expansion from the obstacle, and since direct initiation of a detonation on the Mach shock following a triple point collision does not produce transverse detonations, this likely explains why transverse detonations are only observed at the lower, critical pressures.
In addition to all of this, it was confirmed that transverse detonations are indeed CJ detonations, whose presence allows the detonation along the Mach stem to be overdriven. Finally, our simulations have revealed that while pockets of unburned gas may exist when transverse detonations occur, it is not the direct burn-up of these pockets that gives rise to transverse detonations, as previously suspected. Instead, the pockets of unburned gas are consumed by their own deflagrative burning, or by the passing of such transverse detonation waves.
Appendix A. Effect of resolution and internal boundary conditions on the inert gasdynamic evolution
In this investigation, to handle the presence of the internal cylinder geometry, a straightforward staircase-type boundary was constructed within the computational domain. Specifically, cells are marked as either fluid or solid depending on their location. While simple to implement, this method is known to introduce artificial roughness to the flow and also introduces nonphysical waves which originate from the surface [57]. As a result, local errors of O(1) may also appear near the surface [58]. A common alternative in a Cartesian grid-based framework would have been to adopt an embedded boundary technique [59]. However, this method is not necessarily conservative, and can result in different Mach, transverse, and incident shock configurations when compared to shock-wedge simulations where boundaries are aligned with the grid itself [60,61]. The cut-cell approach [62] is another popular method for Cartesian grids, but modified cells near the boundary may become too small, which can lead to numerical instability. In the end, we chose to use conventional staircase boundaries for two main reasons: (1) numerical stability of the scheme was ensured, and (2) conservation was satisfied, such that undesirable flow leakage was avoided. Also, since errors originating from staircase boundaries are local in nature, it has been suggested that such errors may be neglected in applications where flow fields along the boundary are not the main focus [58]. In this section, we examine the influence of the discrete internal boundary conditions on the evolved unsteady flow fields of inviscid and inert shock-cylinder interactions at different resolutions. Here, we considered the same domain previously shown in Fig. 3, except that the solution for a shock travelling at 2317.84 m/s was imposed at x = 0.4 m, which corresponded to the CJ-shock speed solution with p_0 = 10 kPa, and the left boundary was instead prescribed a zero-gradient boundary condition. The resulting shock configurations were found to agree across resolutions, with differences confined to the vicinity of the staircase surface; moreover, the Euler scheme does not explicitly account for diffusion terms, and is therefore sensitive to changes in resolution in any case. Based on these observations, we believe that it is unlikely that the internal boundary conditions applied would have significantly influenced the spectrum of outcomes observed, beyond the typical resolution-related errors associated with the Euler scheme applied. | 2022-11-02T01:15:52.008Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "04f075d4211231e8a13262a4b24d4a5abd753e0f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04f075d4211231e8a13262a4b24d4a5abd753e0f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
119077684 | pes2o/s2orc | v3-fos-license | Computing Masses and Surface Tension from Effective Transfer Matrices
We propose an effective transfer-matrix method that allows a measurement of tunnelling correlation lengths that are orders of magnitude larger than the lattice extension. Combining this method with a particularly efficient implementation of the multimagnetical algorithm we were able to determine the interface tension of the 3D Ising model close to criticality with a relative error of less than 1 per cent.
Introduction
During the last two years there has been considerable progress in the Monte Carlo simulation of interfaces separating two phases of a spin model or finite-temperature QCD. There are three major methods to determine the interface tension: 1. Following Binder, by comparing the height of the maximum and minimum in the order parameter distribution [1]. 2. By measuring the tunnelling correlation length ξ_tunnel of a cylindrical system [2]. In 3D, ξ_tunnel is related to the surface tension σ via ξ_tunnel ∝ exp(σ L²), (1) where L is the extension of the lattice in the spatial direction. 3. By forcing an interface into the system by applying suitable boundary conditions [3]. A major drawback of method (2.) is the rapid increase of ξ_tunnel with the area L². Grossmann and Laursen [4] came to the conclusion that the range of L values accessible with standard techniques is not sufficient to control systematic errors due to sub-leading corrections to eq. (1).
In this talk we present a new method that allows overcoming this severe problem. Using the new method one can accurately measure tunnelling correlation lengths ξ_tunnel that are several orders of magnitude larger than the extension of the lattice in the time direction.
More than a decade ago several authors proposed to improve the measurement of glueball masses by considering a large number N_op of operators and their cross-correlations [5]. Lüscher and Wolff [6] showed that in the limit of infinite separation the eigenvalues of the correlation matrix give the exact masses of N_op - 1 states. However, these results can only be applied when the lattice extension is much smaller than the largest correlation length.
Motivated by studies that describe a system with cylindrical geometry as an effective 1D model [7], we consider the order parameter on a single time-slice as an effective spin. Assuming that one can neglect couplings of effective spins at distances larger than 1, we arrive at an expression for the effective transfer matrix, whose elements T_eff(m_1, m_2) we estimate from the joint distribution of the magnetizations of two time slices. Here we reduced the whole lattice to two effective sites. The distance of these sites is half of the extension of the lattice in the time direction, t. The eigenvalues λ_i of the effective transfer matrix then yield effective correlation lengths ξ_eff,i = -(t/2)/log(λ_i/λ_0), where the factor t/2 is due to the fact that the lattice spacing of the effective model is t/2 while that of the original model is 1.
In general the transformation to the effective model will lead to an action that has more than nearest-neighbour couplings. Hence one can only expect that ξ_eff,i converges to ξ_i in the limit t → ∞. An analysis of an effective two-state system shows that for ξ_bulk ≪ t ≪ ξ_tunnel, (ξ_tunnel - ξ_eff,tunnel)/ξ_tunnel ∝ t^(-1). (4) We obtain the expectation values of δ(m_i, M) from a Monte Carlo simulation of the original model. In order to properly measure the effective transfer matrix, we need a good statistical coverage of all the relevant magnetizations. In order to fight the supercritical slowing down due to exponentially suppressed tunnelling rates, we employed a multimagnetical algorithm: instead of using the canonical probability distribution, we simulated a hand-tuned distribution which explicitly enhances the probability of the states with interfaces: p(σ) ∝ e^(-H(σ)) G(M(σ)), (5) where M is the magnetization of the configuration σ. The function G is tuned so that the magnetization distribution becomes approximately flat. The update was implemented with a demon algorithm, which enabled us to use very efficient multi-spin coding. For details, we refer to [8-10].
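As a cartoon of the measurement chain, the Python sketch below builds an effective transfer matrix from the joint histogram of pairs of "slice magnetizations" (here a toy two-state time series rather than actual Ising data, with the multimagnetical reweighting omitted) and extracts an effective correlation length from the two leading eigenvalues. This reflects our reading of the method, under the stated assumptions, not the authors' code.

```python
import numpy as np

def xi_eff_from_slices(m1, m2, bins, t_extent):
    # Estimate T_eff(m, m') from the joint histogram of two slice
    # magnetizations a distance t/2 apart, then convert the eigenvalue
    # ratio into xi_eff = -(t/2) / log(lambda_1 / lambda_0).
    T, _, _ = np.histogram2d(m1, m2, bins=bins)
    T = 0.5 * (T + T.T)                        # symmetrize the estimate
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]
    return -(t_extent / 2.0) / np.log(lam[1] / lam[0])

# toy "slice magnetizations": a two-state chain with rare tunnelling events
rng = np.random.default_rng(1)
n, q = 200_000, 0.01                           # q = tunnelling rate per step
s = np.where(np.cumsum(rng.random(n) < q) % 2 == 0, 1.0, -1.0)
m = s + 0.1 * rng.standard_normal(n)

xi = xi_eff_from_slices(m[:-1], m[1:], bins=21, t_extent=2)
print(f"xi_eff,tunnel ~ {xi:.1f} (exact two-state: {-1/np.log(1-2*q):.1f})")
```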
Monte Carlo Results for the 2D Ising Model
We did simulations in the broken phase, at β = 0.47. This value is low enough for the tunnelling correlation length to become very large, even with modest L, but is close enough to β_c for the bulk correlation length (4.349… when L = ∞) to still be substantially larger than 1. We performed simulations for lattices with spatial extensions L = 16, 32 and 64, and time-like extensions t = L/2, L and 2L. The statistics of the runs was typically 5 × 10⁷ sweeps. The results are summarized in table 1. We find an impressive reproduction of the tunnelling correlation length, which gets as large as 44014 on the L = 64 lattice. The convergence of ξ_eff,tunnel towards the exact result is consistent with eq. (4).
We also tried to reproduce the large tunnelling correlation length on the L = 64 lattice using the standard technique of fitting the correlation functions of time-slice magnetizations. However, this did not lead to any sensible result.
Monte Carlo Results for the 3D Ising Model
We simulated the 3D Ising model at β = 0.225. The results for the tunnelling correlation length are summarized in table 2. The typical statistics is again 5 × 10⁷ sweeps. In fig. 1 we show the interface tension, obtained from the correlation length via σ = log(2 ξ_eff,tunnel)/L², and, using the same data, from the histogram analysis. The extrapolation 1/L² → 0 gives consistent results, provided that we discard the cubical volumes from the histogram analysis. The correlation length measurement seems to suffer less severe finite-size effects than the histogram method. Our result for the infinite-volume limit of the surface tension at β = 0.225 is σ = 0.00744(3).
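The infinite-volume extrapolation behind fig. 1 amounts to a linear fit of σ(L) = log(2 ξ_tunnel)/L² in 1/L². The sketch below demonstrates this fit on synthetic ξ values (built from σ_∞ = 0.00744 plus a 1/L² correction, purely for illustration; they are not the table 2 data).

```python
import numpy as np

# sigma(L) = log(2 * xi_tunnel) / L**2, fitted linearly in 1/L**2.
# The xi values are invented placeholders, not the paper's table 2.

L = np.array([16.0, 24.0, 32.0])
xi = np.array([25.0, 268.0, 7.5e3])            # hypothetical xi_eff,tunnel

sigma_L = np.log(2.0 * xi) / L**2
slope, sigma_inf = np.polyfit(1.0 / L**2, sigma_L, 1)
print(f"sigma(L) = {np.round(sigma_L, 5)}")
print(f"extrapolated sigma(1/L^2 -> 0) ~ {sigma_inf:.5f}")
```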
Summary and Conclusion
Using the effective transfer-matrix method we have measured tunnelling correlation lengths up to 2.29 × 10⁸ for the 3D Ising model. We showed that the systematic errors of the method are under control. The application of a multimagnetical algorithm combined with an efficient demon implementation allowed us to determine the interface tension with a relative error of less than 1%.
"year": 1993,
"sha1": "58a2e5e26502efaa35c78718ca7278fa02ac9d9c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9312078",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "119503b380de307cabafb2ffeae469516754b49a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118360384 | pes2o/s2orc | v3-fos-license | Cosmological Particle Creation in the Lab
One of the most striking examples for the production of particles out of the quantum vacuum due to external conditions is cosmological particle creation, which is caused by the expansion or contraction of the Universe. Already in 1939, Schrödinger understood that the cosmic evolution could lead to a mixing of positive and negative frequencies and that this "would mean production or annihilation of matter, merely by the expansion". Later this phenomenon was derived via more modern techniques of quantum field theory in curved space-times by Parker (who apparently was not aware of Schrödinger's work) and subsequently has been studied in numerous publications. Even though cosmological particle creation typically occurs on extremely large length scales, it is one of the very few examples for such fundamental effects where we actually may have observational evidence: According to the inflationary model of cosmology, the seeds for the anisotropies in the cosmic microwave background (CMB) and basically all large scale structures stem from this effect. In this Chapter, we shall provide a brief discussion of this phenomenon and sketch a possibility for an experimental realization via an analogue in the laboratory.
Introduction
One of the most striking examples for the production of particles out of the quantum vacuum due to external conditions is cosmological particle creation, which is caused by the expansion or contraction of the Universe. Already in 1939, Schrödinger understood that the cosmic evolution could lead to a mixing of positive and negative frequencies and that this "would mean production or annihilation of matter, merely by the expansion" [Schrödinger, 1939]. Later this phenomenon was derived via more modern techniques of quantum field theory in curved space-times by Parker [Parker, 1968] (who apparently was not aware of Schrödinger's work) and subsequently has been studied in numerous publications, see, e.g., [Birrell & Davies, 1982;Fulling, 1989;Wald, 1994]. Even though cosmological particle creation typically occurs on extremely large length scales, it is one of the very few examples for such fundamental effects where we actually may have observational evidence: According to the inflationary model of cosmology, the seeds for the anisotropies in the cosmic microwave background (CMB) and basically all large scale structures stem from this effect, see Section 5. In this Chapter, we shall provide a brief discussion of this phenomenon and sketch a possibility for an experimental realization via an analogue in the laboratory.
Scattering analogy
For simplicity, let us consider a massive scalar field Φ in the 1+1 dimensional Friedmann-Robertson-Walker metric with scale factor a(τ),
ds² = dτ² - a²(τ) dx² = a²(η) (dη² - dx²), (1)
where τ is the proper (co-moving) time and η the conformal time. The latter co-ordinate is more convenient for our purpose since the wave equation simplifies to
(∂²_η - ∂²_x + a²(η) m²) Φ = 0. (2)
In the massless case m = 0, the scalar field is conformally invariant (in 1+1 dimensions) and thus the expansion only creates particles for m > 0. After a spatial Fourier transform, we find that each mode φ_k(η) behaves like a harmonic oscillator with a time-dependent potential,
d²φ_k/dt² + Ω²(t) φ_k = 0, (3)
with k² + a²(η) m² → Ω²(t) and η → t. There is yet another analogy which might be interesting to notice. If we compare the above equation to a Schrödinger scattering problem in one spatial dimension,
-(1/2m) d²Ψ/dx² + V(x) Ψ = E Ψ, (4)
we find that it has precisely the same form after identifying t ↔ x, φ(t) ↔ Ψ(x), and Ω²(t) ↔ 2m[E - V(x)]. Note that Ω² is always greater than zero in our case, which corresponds to propagation over the barrier, E > V(x). If Ω² were less than zero over some region in time, one would have a barrier penetration (i.e., tunnelling) problem, E < V(x).
With the condition that in the past the field has the form e^(iΩ_in t), in the future the solution would be α e^(iΩ_out t) + β e^(-iΩ_out t) due to scattering from the region where Ω² < 0. This would correspond to particle creation with probability proportional to |β|². However, even if Ω² > 0 everywhere, there will still be some scattering (above the barrier).
In order to derive the cosmological particle creation, we can study a positive pseudo-norm solution of Eq. (3) which initially behaves as e^(-iΩ_in t) and finally evolves into a mixture of positive and negative pseudo-norm solutions, which is in this case equivalent to positive and negative frequencies, α e^(-iΩ_out t) + β e^(+iΩ_out t) (assuming that Ω is constant asymptotically).
In the Schrödinger scattering problem, the initial solution e^(-iΩt) could be identified with a left-moving wave on the left-hand side of the potential "barrier", while the final solution α e^(-iΩt) + β e^(+iΩt) would then correspond to a mixture of left-moving α e^(-iΩt) and right-moving β e^(+iΩt) waves on the right-hand side. As a consequence, the Bogoliubov coefficients α and β are related to the reflection R and transmission T coefficients via α = 1/T and β = R/T. In this way, the Bogoliubov relation |α|² - |β|² = 1 is equivalent to the conservation law |R|² + |T|² = 1 for the Schrödinger scattering problem. The probability for particle creation can be inferred from the expectation value of the number of final particles in the initial vacuum state, which reads ⟨0_in| n̂_out |0_in⟩ = |β|².
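The scattering picture can be checked numerically: solve the mode equation through a smooth toy "expansion" and project onto the asymptotic out-modes. In the Python sketch below, the tanh profile for Ω²(t) is an assumption chosen purely for convenience; the code verifies the Bogoliubov relation |α|² - |β|² = 1 alongside the particle number |β|².

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: solve phi'' + Omega^2(t) phi = 0 and read off alpha, beta by
# projecting onto the out-modes e^{-i Omega_out t}/sqrt(2 Omega_out).
# The tanh interpolation between Om_in and Om_out is a toy assumption.

Om_in, Om_out, T = 1.0, 2.0, 30.0
Om2 = lambda t: 0.5 * (Om_in**2 + Om_out**2) \
    + 0.5 * (Om_out**2 - Om_in**2) * np.tanh(t)

def rhs(t, y):
    return [y[1], -Om2(t) * y[0]]

# in-mode initial data at t = -T: phi = e^{-i Om_in t}/sqrt(2 Om_in)
phi0 = np.exp(1j * Om_in * T) / np.sqrt(2 * Om_in)
sol = solve_ivp(rhs, [-T, T], [phi0, -1j * Om_in * phi0],
                rtol=1e-10, atol=1e-12)

phi, dphi = sol.y[0, -1], sol.y[1, -1]
alpha = np.sqrt(Om_out / 2) * (phi + 1j * dphi / Om_out) * np.exp(1j * Om_out * T)
beta = np.sqrt(Om_out / 2) * (phi - 1j * dphi / Om_out) * np.exp(-1j * Om_out * T)
print(f"|beta|^2 = {abs(beta)**2:.4e}, "
      f"|alpha|^2 - |beta|^2 = {abs(alpha)**2 - abs(beta)**2:.6f}")
```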
WKB analysis
In order to actually calculate or estimate the Bogoliubov coefficients, let us re-write Eq. (3) in first-order form by introducing the phase-space vector u and the matrix M,
du/dt = M · u, u = (φ, φ̇)ᵀ, M = ((0, 1), (-Ω², 0)). (5)
If we define an inner product via
(u|u′) = i (u₂* u₁′ - u₁* u₂′), (6)
we find that the inner product of two solutions u and u′ of Eq. (5) is conserved,
d(u|u′)/dt = 0. (7)
The split of a solution into positive and negative frequencies (i.e., positive and negative pseudo-norm) corresponds to a decomposition in the instantaneous eigen-basis of the matrix M,
M · u_± = ±iΩ u_±. (8)
Choosing the usual normalization u_± = (1, ±iΩ)ᵀ/√(2Ω), we find
(u_+|u_+) = 1 = -(u_-|u_-), (u_+|u_-) = 0. (9)
At each time t, we may expand a given solution u(t) of Eq. (5) into the instantaneous eigen-vectors,
u(t) = α(t) e^(iϕ(t)) u_+(t) + β(t) e^(-iϕ(t)) u_-(t), (10)
where the pre-factors are now defined as time-dependent Bogoliubov coefficients α(t) and β(t). Here it is useful to separate out the oscillatory part with the WKB phase
ϕ(t) = ∫ᵗ Ω(t′) dt′. (11)
Now we may insert the expansion (10) into the equation of motion (5) and project it with the inner product (6) onto the eigen-vectors u_±, which gives
dα/dt = (Ω̇/2Ω) e^(-2iϕ) β, dβ/dt = (Ω̇/2Ω) e^(+2iϕ) α. (12)
This equation (12) is still exact and very hard to solve analytically, except in very special cases. It can be solved formally by an iterative integral equation,
β_(n+1)(t) = ∫_{-∞}^{t} dt′ (Ω̇/2Ω) e^(+2iϕ(t′)) α_n(t′), (13)
together with the analogous expression for α_(n+1)(t), starting from α₀ = 1 and β₀ = 0. It can be shown that this iteration converges to the exact solution for well-behaved Ω(t) [Braid, 1970]. Standard perturbation theory would then correspond to cutting off this iteration at a finite order, which can be justified if Ω(t) changes only very little. For the scalar field in Eq. (2) this perturbative treatment should be applicable in the ultrarelativistic limit, i.e., as long as the mass is much smaller than the wave-number. In many cases, however, another approximation, the WKB method, is more useful. This method can be applied if the rate of change of Ω(t), e.g., the expansion of the universe, is much slower than the internal frequency Ω(t) itself. Writing
Ω(t) = Ω₀ f(ωt) (14)
with some dimensionless function f of order one, the WKB limit corresponds to Ω₀ ≫ ω.
In terms of the reflection coefficient R = β/α mentioned earlier, we get
dR/dt = (Ω̇/2Ω) (e^(+2iϕ) - R² e^(-2iϕ)), (15)
which is known as a Riccati equation. Again, this equation is still exact but unfortunately non-linear. Neglecting the quadratic term R² would bring us back to perturbation theory.
In the WKB limit, the phase factors e^(±2iϕ) are rapidly oscillating and the magnitude of R can be estimated by going to the complex plane. Re-writing the Riccati equation (15) as
dR/dϕ = (1/2) (d ln Ω/dϕ) (e^(+2iϕ) - R² e^(-2iϕ)), (16)
we may use an analytic continuation ϕ → ϕ + iχ to see that R becomes exponentially suppressed, R ∼ e^(-2χ). How strongly it is suppressed depends on the point where the analytic continuation breaks down. Since e^(±2iϕ) is analytic everywhere, this will be determined by the term ln Ω. Typically, the first non-analytic points t_* encountered are the zeros of Ω, i.e., where Ω(t_*) = 0. In the case of barrier reflection, these points where Ω = 0, i.e., where V = E, lie on the real axis and correspond to the classical turning points in WKB. In our case, we have scattering above the barrier and thus these points become complex, but are still analogous to the classical turning points in WKB. Consequently, we find
|β| ∼ e^(-2χ_*), χ_* = Im ∫^(t_*) Ω(t′) dt′, (17)
where t_* is the (complex) turning point. If there is more than one turning point, the one with the smallest χ_* > 0, i.e., closest to the real axis (in the complex ϕ-plane), dominates. If these multiple turning points have similar χ_* > 0, there can be interference effects between the different contributions, see, e.g., [Dumlu & Dunne, 2010].
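The exponential suppression can be seen directly by integrating the exact first-order system (12) (equivalently, the Riccati equation for R = β/α) for a concrete profile. In the sketch below, the smooth f(s) is a toy choice, not a cosmological scale factor; |R| drops roughly exponentially as Ω₀/ω grows.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate system (12) plus the WKB phase for Omega(t) = Omega0 * f(omega t)
# and watch |R| = |beta/alpha| fall as Omega0/omega grows (WKB suppression).

f = lambda s: np.sqrt(1.0 + 0.5 * np.tanh(s))
df = lambda s: 0.25 / (np.cosh(s)**2 * np.sqrt(1.0 + 0.5 * np.tanh(s)))

def final_R(Om0, om, S=40.0):
    def rhs(t, y):
        a, b, phi = y
        kappa = 0.5 * om * df(om * t) / f(om * t)    # Omega'/(2 Omega)
        return [kappa * np.exp(-2j * phi.real) * b,
                kappa * np.exp(+2j * phi.real) * a,
                Om0 * f(om * t)]                      # dphi/dt = Omega
    sol = solve_ivp(rhs, [-S / om, S / om], [1 + 0j, 0j, 0j],
                    rtol=1e-10, atol=1e-13)
    return abs(sol.y[1, -1] / sol.y[0, -1])

for ratio in (2, 5, 10, 20):
    print(f"Omega0/omega = {ratio:3d}:  |R| = {final_R(ratio, 1.0):.3e}")
```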
Adiabatic expansion and its breakdown
Note that we could repeat steps (5) till (12) and expand the solution u(t) into the first-order adiabatic eigen-states instead of the instantaneous eigen-vectors u_±. To this end, let us re-write (12) as
d/dt (α e^(iϕ), β e^(-iϕ))ᵀ = N · (α e^(iϕ), β e^(-iϕ))ᵀ, N = ((iΩ, Ω̇/2Ω), (Ω̇/2Ω, -iΩ)). (18)
The eigen-vectors of the matrix N are the first-order adiabatic eigen-states w_± and the eigen-frequencies N · w_± = ±iΩ_ad w_± are renormalized to
Ω_ad = √(Ω² - Ω̇²/(4Ω²)). (19)
Assuming α_in = 1 and β_in = 0, the system stays in the adiabatic eigen-state w_+ to lowest order in ω/Ω and we get
β(t) ≈ -i (Ω̇/4Ω²) e^(2iϕ(t)) α(t) = O(ω/Ω). (20)
This adiabatic expansion into powers of ω/Ω can be continued and gives terms like Ω̇²/Ω⁴ and Ω̈/Ω³ to the next order in ω/Ω (see below). One should stress that this expansion is not the same as in (13), since it is local, i.e., only contains time-derivatives, while (13) is global, i.e., contains time-integrals. Since all terms of the adiabatic expansion (20) are local, they cannot describe particle creation, which depends on the whole history of Ω(t). In terms of the adiabatic expansion into powers of ω/Ω, particle creation is a non-perturbative effect, i.e., it is exponentially suppressed, see Eq. (17), and thus cannot be found by a Taylor expansion into powers of ω/Ω. For any finite ratio of ω/Ω, this also means that the adiabatic expansion (into powers of ω/Ω) must break down at some point. To make this argument more precise, let us re-write Eq. (18) in yet another form,
d/dt w = Λ ((i cosh 2ξ, sinh 2ξ), (sinh 2ξ, -i cosh 2ξ)) · w, (22)
with tanh(2ξ) = Ω̇/(2Ω²) and Λ = Ω_ad. In this representation, the eigen-values of N are given by ±iΛ and the eigen-vectors read
w_+ = (cosh ξ, -i sinh ξ)ᵀ, w_- = (sinh ξ, -i cosh ξ)ᵀ.
Decomposing the solution w(t) into these eigen-vectors and using ẇ_+ = ξ̇ w_- as well as ẇ_- = ξ̇ w_+, we find an evolution equation for the new coefficients of the same form as Eq. (22), with Λ and ξ changed accordingly. Thus, by repeating this procedure, we get the iteration law
tanh(2ξ_(n+1)) = ξ̇_n/Λ_n, Λ_(n+1) = √(Λ_n² - ξ̇_n²), with ξ₀ = (1/2) ln Ω, Λ₀ = Ω.
By this iteration, we go higher and higher up in the adiabatic expansion, since ξ_n always acquires an additional factor of ω/Ω. Thus, for ω ≪ Ω, the values of ξ_n quickly decay with a power law, ξ_n = O([ω/Ω]ⁿ), initially. As we go up in this expansion, however, the effective rate of change of ξ_n increases. For example, if Ω(t) has one global maximum (or minimum) and otherwise no structure, the time-derivative Ω̇/(2Ω²) = tanh(2ξ₁) has two extremal points and a zero in between. By taking higher and higher time derivatives, more and more extremal points and zeros arise, and thus the effective frequency ω_n^eff of ξ_n(t) increases roughly linearly with the number n of iterations, ω_n^eff = O(nω). Furthermore, the adiabatically renormalized eigen-values Λ_n decrease with each iteration. Thus, after approximately n = O(Ω/ω) iterations, the effective frequency ω_n^eff becomes comparable to the internal frequency Λ_n. At that point, the adiabatic expansion starts to break down. Estimating the order of magnitude of ξ_n at that order gives
ξ_n = O([nω/Ω]ⁿ) = O(exp{-O(Ω/ω)}).
Since the effective external ω_n^eff and internal Λ_n frequencies are comparable and ξ_n is very small, we may just use perturbation theory to estimate β, and we get β = O(ξ_n), i.e., the same exponential suppression as in Eq. (17). If we were to continue the iteration beyond that order, the ξ_n would start to increase again, which is the usual situation in an asymptotic expansion, see Figure 1. Carrying on the iteration too far beyond this point, the ξ̇_n² exceed the Λ_n², and thus we have barrier penetration instead of propagation over the barrier (as occurs for all orders below this value of n). In this procedure, it is this barrier penetration which gives the mixing of positive and negative pseudo-norm, and the creation of particles.
Were the system to remain as propagation over the barrier for all orders n in this adiabatic expansion, one would have no particle creation.

Figure 1: Sketch of the effective external frequencies ω_n^eff (crosses) and amplitudes ξ_n (solid line) depending on the iteration number n, obtained numerically for a concrete example. One can observe that ω_n^eff grows approximately linearly with n while ξ_n first decreases but later (for n > 5) increases again.
Example: inflation
As an illustrative example, let us consider a minimally coupled massive scalar field in 3+1 dimensions, which could be the inflaton field (according to our standard model of cosmology). Again, we start with the Friedmann-Robertson-Walker metric (1) with a scale factor a(τ) and obtain the equation of motion φ̈ + 3(ȧ/a)φ̇ − a^{−2}∇²φ + m²φ = 0. Rescaling the field, Φ(τ, r) = ℧(τ) φ(τ, r) with ℧(τ) = a^{3/2}(τ), and applying a spatial Fourier transform, we obtain the same form as in Eq. (3), i.e., Φ̈ₖ + Ω²ₖ(τ)Φₖ = 0. In the standard scenario of inflation, the space-time can be described by the de Sitter metric a(τ) = exp{Hτ} to a very good approximation, where H is the Hubble parameter. In this case, the effective potential ℧̈/℧ just becomes a constant (3H/2)² and the frequency reads Ω²ₖ(τ) = k² e^{−2Hτ} + m² − (3H/2)². Inserting a(τ) = exp{Hτ}, we see that modes with different k-values follow the same evolution, just translated in time. (This fact is related to the scale invariance of the created k spectrum.) Initially, this frequency is dominated by the k² term and we have Ω̇/Ω = −H, which means that we are in the WKB regime Ω̇/Ω ≪ Ω. However, due to the cosmological red-shift, this k² term decreases with time until the other terms become relevant. Then the behavior of the modes depends on the ratio m/H. For m ≫ H, the modes remain adiabatic (i.e., stay in the WKB regime) and thus particle creation is exponentially suppressed. If m and H are not very different, but still m > 3H/2 holds, the modes are adiabatic again for large times, but for intermediate times the WKB expansion breaks down, leading to a moderate particle creation. For m < 3H/2, on the other hand, which is (or was) supposed to be the case during inflation, the frequency Ω(τ) goes to zero at some time and becomes imaginary afterwards. This means that we get a barrier penetration (tunneling) problem where the modes Φₖ(τ) do not oscillate but evolve exponentially in time, Φₖ(τ) ∝ exp{±τ √(9H²/4 − m²)}. Here one should remember that the original field does not grow exponentially, due to the re-scaling with the additional factor ℧(τ) = a^{3/2}(τ). This behavior persists until the barrier vanishes, i.e., the expansion slows down (at the end of the inflationary period) and thus the effective potential ℧̈/℧ drops below the mass term. After that, the modes start oscillating again. However, in view of the barrier penetration (tunneling) over a relatively long time (distance), we get reflection coefficients R which are not small but extremely close to unity, R ≈ 1. This means that the Bogoliubov coefficients α and β are huge, i.e., we have created a tremendous amount of particles out of the initial vacuum fluctuations. According to our understanding, precisely this effect is responsible for the creation of the seeds for all structures in our Universe. Perhaps the most direct signatures of this effect are still visible today in the anisotropies of the cosmic microwave background radiation.
An alternative picture of the mode evolution in terms of a damped harmonic oscillator can be obtained from the original field in Eq. (28), φ̈ₖ + 3Hφ̇ₖ + (k² e^{−2Hτ} + m²)φₖ = 0. Initially, the term e^{−2Hτ}k² dominates and the modes oscillate. Assuming m ≪ H (which is related to the slow-roll condition of inflation), the damping term dominates for late times and we get a strongly over-damped oscillator, whose dynamics is basically frozen (like a pendulum in a very sticky liquid). The transition happens when H ∼ k e^{−Hτ}, i.e., when the physical wavelength λ = 2π e^{Hτ}/k exceeds the de Sitter horizon ∝ 1/H due to the cosmological expansion e^{Hτ}. After that, crest and trough of a wave lose causal contact and cannot exchange energy any more, which is why the oscillations effectively stop. As a final remark, we stress that this enormous particle creation effect is facilitated by the rapid (here: exponential) expansion and the resulting stretching of wavelengths over many orders of magnitude (i.e., the extremely large red-shift). Therefore, a final mode with a moderate wavelength originated from waves with extremely short wavelengths initially. Formally, these initial wavelengths could easily be far shorter than the Planck length. However, on these scales one would expect deviations from the theory of quantum fields in classical space-times that we used to derive these effects. On the other hand, this problem is not only negative: it might open up the possibility to actually see signatures of new (Planckian) physics in high-precision measurements of the cosmic microwave background radiation, for example.
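To make the freezing explicit, here is a small numerical sketch (our own illustration with hypothetical parameters): integrating the damped equation above with m ≪ H shows the mode oscillating with decaying amplitude inside the horizon and then locking to a nearly constant value after horizon crossing at τ* = ln(k/H)/H.

```python
# Hedged sketch: super-horizon "freezing" of a de Sitter mode, from the
# damped-oscillator form of Eq. (28).  Parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

H, m, k = 1.0, 0.01, 100.0                 # units with H = 1; m << H (slow roll)

def rhs(tau, y):
    phi, dphi = y
    return [dphi, -3*H*dphi - (k**2*np.exp(-2*H*tau) + m**2)*phi]

tau_star = np.log(k / H) / H               # horizon crossing: k e^{-H tau} = H
sol = solve_ivp(rhs, [0.0, 12.0], [1.0, 0.0], dense_output=True, rtol=1e-9)
for tau in (0.0, 2.0, 4.0, tau_star, 6.0, 9.0, 12.0):
    print(f"tau = {tau:5.2f}   |phi_k| = {abs(sol.sol(tau)[0]):.4e}")
# the amplitude decays while oscillating, then stays essentially constant
# for tau > tau_star ~ 4.6 -- the "pendulum in a sticky liquid" regime.
```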
Laboratory analogues
Apart from the observational evidence in the anisotropies of the cosmic microwave background radiation mentioned above, one may study the phenomenon of cosmological particle creation experimentally by means of suitable laboratory analogues, see, e.g., [Unruh, 1981; Barceló, Liberati, & Visser, 2011]. There are two major possibilities to mimic the expansion or contraction of the Universe: a medium at rest with time-dependent properties (such as the propagation speed of the quasi-particles), or an expanding medium. Let us start with the former option and consider linearized and scalar quasi-particles (e.g., sound waves) with low energies and momenta propagating in a spatially homogeneous and isotropic medium. Under these conditions, their dynamics is governed by a low-energy effective action of the form A_eff = (1/2) ∫ dt d³r [a²(t) φ̇² − c²(t) (∇φ)² − b²(t) φ²]. Here we assume positive a² and non-negative b² and c² for stability. The factor a²(t) can be eliminated by a suitable re-scaling of the time co-ordinate. Then, after a spatial Fourier transform, we obtain the same form as in Eq. (3). The quasi-particle excitations φ in such a medium behave in the same way as a scalar field in an expanding or contracting Universe with a possibly time-dependent potential (mass) term ∝ b²(t)φ². In order to avoid this additional time-dependence of the potential (mass) term, the factors b and c must obey special conditions. For example, Goldstone modes with b = 0 correspond to a massless scalar field in 3+1 dimensions, whereas the case of constant c is analogous to a massive scalar field in 1+1 dimensions. As one would intuitively expect, the expansion or contraction of the Universe can also be mimicked by an expanding or contracting medium. Due to local Galilei invariance, such a medium can also be effectively spatially homogeneous and isotropic as in Eq. (32) when described in terms of co-moving co-ordinates. For a quite detailed list of references, see [Barceló, Liberati, & Visser, 2011]. There are basically three major experimental challenges for observing the analogue of cosmological particle creation in the laboratory. First, the initial temperature should be low enough that the particles are produced by quantum rather than thermal fluctuations. Second, one must be able to generate a time-dependence (e.g., expansion of the medium) during which the effective action in Eq. (32) remains valid (in some sense) but which is also sufficiently rapid to create particles. Third, one must be able to detect the created particles and to distinguish them from the radiation stemming from other sources. For trapped ions, for example (see, e.g., [Schützhold et al., 2007]), the first and third points (i.e., cooling and detection) are experimental state of the art, while a sufficiently rapid but still controlled expansion/contraction of the ion trap presents difficulties. For Bose-Einstein condensates (see, e.g., [Barceló, Liberati, & Visser, 2011] and references therein), on the other hand, the first and third points are the main obstacles. | 2012-03-06T11:38:41.000Z | 2012-03-06T00:00:00.000 | {
"year": 2013,
"sha1": "ea6d6ce3e9d6c5300cf56fd39df04ad1495d85c6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1203.1173",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ea6d6ce3e9d6c5300cf56fd39df04ad1495d85c6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210617128 | pes2o/s2orc | v3-fos-license | Effect of pesticides on soil microorganisms
In light of the rapidly growing human population, pesticides have been used extensively to maximize crop production. This has become a major environmental concern. To assess the influence of commonly used pesticides on soil microorganism counts and on microbial activity in the form of CO2 production, a factorial experiment was conducted. The herbicide Glyset (I.P.A., Glyphosate 48%) and the insecticides Miraj (Alphacypermethrin 10%) and Malathion (50% WP) were separately added to the soil at 0, 50, 100 and 200 ppm and incubated in the laboratory at 30 °C. The counts of bacteria, fungi and actinomycetes and CO2 production were examined weekly for 7 consecutive weeks. The results demonstrated that the addition of the three pesticides significantly decreased the microbial activity and the counts of soil bacteria, fungi and actinomycetes. The observed effect depended upon the type and amount of pesticide as well as the length of the incubation period. The microbial activity and the numbers of bacteria, fungi and actinomycetes were inversely proportional to the concentration of pesticide added to the soil. In most treatments, soil samples treated with 200 ppm of Malathion showed the lowest microbial activity and counts of bacteria, fungi and actinomycetes. This study suggests that the investigated pesticides negatively affect microbial counts and activity in soil, which confirms and reinforces previously reported environmental concerns.
Introduction
Due to the rapidly growing human population, pesticides have been used extensively to maximize crop production. The extensive use of pesticides on cultivated soils leads to pollution of the soil with harmful materials (Muñoz-Leoz et al., 2013). About 3 million tons of pesticides, costing about US$40 billion, are applied in world agriculture annually (PAN UK, 2003). About 99.9% of applied pesticide never reaches the target organisms; instead it accumulates as residues that pollute the soil environment, with only 0.1% reaching the target organisms (Carriger et al., 2006; Pimental, 1995). Pesticide residue accumulation and microbial activity are usually concentrated in the same region, the top layer of the soil (Harris and Sans, 1969; Alexander, 1961). The impact of different pesticides on the growth and activity of soil microorganisms is difficult to predict. Even when applied at low concentrations, pesticides affect the chemical and biological properties of soil, its biochemical activity and its microorganisms (Cycon et al., 2006; Singh et al., 2008; Cycon et al., 2010). Pesticides in the soil impact non-target and beneficial microorganisms (Singh and Prasad, 1991; Bhuyan et al., 1992) and their activities (Schuster and Schroder, 1990). Beneficial soil microorganisms play essential roles in soil fertility and productivity, such as organic matter biodegradation, nutrient recycling, humus formation, soil structural stability, nitrogen fixation, plant growth promotion, disease biocontrol, and other biochemical transformations such as ammonification, nitrification and phosphorus solubilization (Prasad Reddy et al., 1984; Husain et al., 2003). The effect of pesticides on soil microorganisms and their activity depends upon the type of pesticide used, its quantity and the soil conditions (Subhani et al., 2000). The objective of this study is to assess the influence of three commonly used pesticides on soil microorganism counts and on microbial activity in the form of CO2 production.
Soil sampling and analysis
The soil samples were taken from the Alrashedia area, 5 km north of Mosul city, from the surface layer (0-20 cm). To remove debris, the soil was sieved through a 2 mm sieve. The physical and chemical characteristics of the soil were determined as follows: soil texture by the hydrometer method; soil reaction (pH) by the glass electrode method (1:2.5 soil:water suspension); soluble salts by electrical conductivity; organic matter by the rapid titration method (Black et al., 1965); available phosphorus by Olsen's method (Olsen et al., 1954); Ca and Mg by the Graham method (Graham et al., 1962); and potassium and sodium by flame photometer (Jackson, 1973). Selected soil physical and chemical properties are recorded in Table 1.
Soil sample preparation and Experimental design
200 g of sieved soil was placed in each 250 mL flask. The treatments involved three pesticides: the herbicide Glyset (I.P.A., Glyphosate 48%) and two insecticides, Miraj (Alphacypermethrin 10%) and Malathion (50% WP). The pesticides were applied at 0, 50, 100 and 200 ppm, giving twelve treatments in a completely randomized design replicated three times (see the sketch below). All components were gently mixed with the soil. The moisture content of the soil was brought to 60% of water holding capacity (WHC) and maintained at this level with distilled water. For measuring CO2 production, 10 mL of 2 M sodium hydroxide solution was placed in a glass tube, which was set gently on the soil surface in each flask. The flasks were closed with rubber stoppers to prevent gaseous exchange between the flasks and the outside atmosphere. A blank, in three replicates, was also included to account for the quantity of CO2 already present in the flask's atmosphere. The flasks were incubated at a temperature of 30 °C. Soil sampling was conducted at weekly intervals for 7 weeks.
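For concreteness, the layout of the factorial design can be sketched as follows (our illustration; flask numbering and randomization seed are hypothetical):

```python
# Hedged sketch of the completely randomized factorial design described
# above: 3 pesticides x 4 concentrations = 12 treatments, 3 replicates each.
from itertools import product
import random

pesticides = ["Glyset", "Miraj", "Malathion"]
levels_ppm = [0, 50, 100, 200]
replicates = 3

flasks = [(p, c, r) for (p, c), r in
          product(product(pesticides, levels_ppm), range(1, replicates + 1))]
random.seed(1)
random.shuffle(flasks)                     # random allocation to positions
print(len(flasks), "flasks in total; first allocation:", flasks[0])
```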
Measurement of microbial activity
Activities of microorganisms were determined in the form of carbon dioxide production according to Anderson et al. (1982). Each week, the glass tube was gently removed from the flask and the sodium hydroxide solution transferred to a clean flask. For the following incubation, fresh sodium hydroxide solution was placed in a clean glass tube in the same flask, which was then returned to the incubator. The process was repeated at the end of each incubation period. After addition of 10 mL of 1 M barium chloride solution and a few drops of phenolphthalein to the recovered sodium hydroxide solution, it was titrated against 1 M hydrochloric acid solution until the pink color disappeared. In the trapping reaction, one mole of carbon dioxide neutralizes two moles of sodium hydroxide. The quantity of released CO2 was expressed as mg CO2/100 g soil. 2NaOH + CO2 → Na2CO3; Na2CO3 + BaCl2 → 2NaCl + BaCO3
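As a worked illustration of this titration arithmetic (a hedged sketch; the titre volumes below are hypothetical, not data from this study):

```python
# Back-titration arithmetic for the NaOH trap: HCl measures the NaOH left
# after CO2 trapping, and 2 mol NaOH bind 1 mol CO2.  Example values only.
MW_CO2 = 44.01   # g/mol

def co2_mg_per_100g(v_blank_ml, v_sample_ml, hcl_molarity=1.0, soil_g=200.0):
    naoh_consumed_mmol = (v_blank_ml - v_sample_ml) * hcl_molarity
    co2_mg = (naoh_consumed_mmol / 2.0) * MW_CO2
    return co2_mg * (100.0 / soil_g)

# e.g., blank titre 9.6 mL vs. sample titre 7.2 mL of 1 M HCl, 200 g soil:
print(f"{co2_mg_per_100g(9.6, 7.2):.1f} mg CO2 / 100 g soil")   # ~26.4
```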
Microbial analysis
Microbial populations were assessed using standard plate count methods: nutrient agar (NA) for bacteria, potato dextrose agar (PDA) for fungi, and starch casein nitrate (SCN) agar for actinomycetes. One gram of each soil sample was placed into a test tube containing 9 mL of sterile distilled water and serially diluted to a dilution factor of 10^5; 1 mL of the appropriate dilution was pipetted onto a sterile plate with the appropriate medium and incubated at 30 °C. All plates were incubated inverted. Microbial counts were made at 48 hours for NA, 72 hours for PDA, and 6 days for SCN (Stanley, 2015; Adesina and Adelasoye, 2014).
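The plate-count arithmetic implied here is standard; a minimal sketch with a hypothetical colony count:

```python
# CFU per gram of soil from a colony count at a given serial dilution.
def cfu_per_gram(colonies, dilution_exponent, plated_volume_ml=1.0):
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

# e.g., 137 colonies on a plate from the 10^5 dilution (hypothetical):
print(f"{cfu_per_gram(137, 5):.2e} CFU/g")   # 1.37e+07
```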
Statistical analysis
ANOVA was carried out, and the means were compared using the Least Significant Difference (LSD) test at p < 0.05.

Effect of pesticides on bacteria count

Table 2 shows that the addition of pesticides decreased the count of bacteria for all pesticide types and at all incubation periods. In the first week of incubation, the addition of Glyset (Glyphosate 48%) at 50 ppm, 100 ppm and 200 ppm decreased the count of bacteria by 4%, 11% and 13%, respectively, while at the 7th week of incubation the reduction in bacteria count was 6%, 9% and 9%, respectively. The addition of Miraj at 50 ppm, 100 ppm and 200 ppm decreased the count of bacteria by 18%, 24% and 32%, respectively, while at the 7th week of incubation the reduction was 9%, 17% and 45%, respectively. Similar results were observed by Goswami et al. (2013) and Wesley et al. (2017), who reported that the decrease in soil microbial count and biomass can be associated with the toxic effect of Cypermethrin on soil microorganisms. The presence of Cypermethrin and thiamethoxam inhibited metabolic processes and significantly decreased ammonifying, nitrifying and denitrifying bacteria compared to the untreated sample (Nicoleta et al., 2015).
The results in Table 2 show that the presence of the malathion insecticide decreased bacteria numbers. In the first week of the incubation period, the addition of malathion at 50 ppm, 100 ppm and 200 ppm decreased the number of bacteria by 40%, 42% and 59%, respectively, while at the 7th week of incubation the reduction was 32%, 38% and 41%, respectively. Table 3 shows that the presence of the glyset herbicide decreased fungi counts at all glyset concentrations; however, the reduction in fungi count was significant only at 200 ppm. The addition of glyset at 200 ppm decreased the fungi count by 20% and 13% at the first and 7th weeks of incubation, respectively. Tanney and Hutchison (2010) reported that the addition of glyphosate depressed the growth of 21 of 22 fungal species.
Our results in Table 3 show that the presence of the Miraj insecticide reduced the fungi count at all concentrations and incubation periods. The addition of Miraj at 50 ppm, 100 ppm and 200 ppm decreased the fungi count in the first week of incubation by 60%, 61% and 63%, respectively, while the reduction at the 7th week was 48%, 50% and 62%, respectively. Goswami et al. (2013) concluded that cypermethrin application had toxic effects on soil microorganisms. Table 3 also shows that the addition of malathion decreased the fungi counts at all concentrations and periods. During the first week of incubation, the addition of malathion at 50 ppm, 100 ppm and 200 ppm decreased the fungal population by 56%, 62% and 66%, respectively, while during the 7th week the reduction was 58%, 64% and 65%, respectively. Similarly, other studies have shown that the presence of the malathion insecticide decreased fungal populations (Walia et al., 2018; Smith et al., 2000). Our results show that the most adverse effect was seen in soil treated with malathion, especially at 200 ppm. The effect of pesticides on the actinomycetes population is shown in Table 4. The presence of the glyset herbicide inhibited the actinomycetes population. During the first week of incubation, the addition of glyset at 50 ppm, 100 ppm and 200 ppm reduced the actinomycetes population by 5%, 7% and 22%, respectively, while during the 7th week the reduction was 7%, 9% and 10%, respectively. The reduction of the actinomycetes population as a result of glyset addition was significant only at 200 ppm. Table 4 also shows that the actinomycetes population decreased as a result of Miraj insecticide addition. During the first week of incubation, the depression in the actinomycetes population was 54%, 56% and 60% for Miraj at 50 ppm, 100 ppm and 200 ppm, respectively, and during the 7th week the depression was 63%, 64% and 69%, respectively.
The actinomycetes population was also depressed in the soil treated with malathion (Table 4). During the first week of the incubation period, the addition of malathion at 50 ppm, 100 ppm and 200 ppm decreased the actinomycetes population by 34%, 36% and 40%, respectively, while during the 7th week the reduction was 37%, 42% and 50%, respectively. Our results show that the most adverse effect was seen in soil treated with malathion, especially at 200 ppm. These results are consistent with several studies (Walia et al., 2018; Haleem et al., 2013) that reported that the actinomycetes population decreased with malathion treatment.
Microbial activity
The pesticide treatments used in the current study had an adverse impact on microbial activity in the form of CO2 production (Table 5). The decreases in CO2 production were significant for all pesticide types and concentrations used, except Glyset at 50 and 100 ppm and Miraj at 50 ppm. The addition of 200 ppm glyset decreased CO2 production during the first and 7th weeks of incubation by 18% and 26%, respectively. During the first week, the addition of Miraj insecticide at 100 ppm and 200 ppm decreased CO2 production by 25% and 29%, respectively, while the reduction during the 7th week was 32% and 36%, respectively. The most adverse effect was seen in soil treated with the malathion insecticide. During the first week of incubation, the addition of malathion at 50 ppm, 100 ppm and 200 ppm depressed CO2 production by 31%, 37% and 43%, respectively, while the depression in CO2 production during the 7th week was 42%, 45% and 52%, respectively. Similar results were shown by Goswami et al. (2013), who reported that application of the cypermethrin insecticide to soil at high concentration has a toxic impact on soil biomass, respiration and FDHA activity. Yousaf et al. (2013) concluded that pesticides were very toxic to soil microbes, as shown by the decrease in CO2 production. | 2019-10-31T08:58:02.312Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "cb88eb98583be62d298f29019360d26e8e6be4a1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1294/7/072007",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "aeb76b8e7ee248d47d6be64f8bb65bbc013c9ba9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
247788002 | pes2o/s2orc | v3-fos-license | Exfoliation Syndrome in Baja Verapaz Guatemala: A Cross-Sectional Study and Review of the Literature
There are little epidemiologic data on exfoliation syndrome (XFS) or exfoliation glaucoma (XFG) in Guatemala, especially in the underserved Baja Verapaz region. This observational study assessing XFS/XFG and demographic factors of this region aims to better understand unique exogenous and endogenous risk factors associated with XFS/XFG in Guatemala. During Moran Eye Center’s global outreach medical eye camps from 2016–2017, 181 patients age 15 years and older presented for complete eye exams. These individuals were screened for eye disease and evaluated for possible surgical interventions that could occur during the camps to improve eyesight. During the dilated exams, XFS was noted as missing or present. Of those 181, 10 had insufficient data and 18 lacked a definitive diagnosis of XFS or XFG, resulting in 153 evaluable patients; 46 XFS and 9 XFG were identified. Age, gender, hometown, ancestry (languages spoken by parents and grandparents), past medical history, family medical history, and occupational data (only 2017 trip) were obtained for each patient. The most common occupations of these individuals were farming and housekeeping. Higher rates of XFS/XFG were noted in individuals of rural compared to urban settings and Mayan speaking people compared with Spanish speakers. Based on this subset of patients, with various ocular pathologies being evaluated during medical eye outreach camps, the prevalence of XFS/XFG appeared to be 36%, a high prevalence compared to other world populations. Location and higher altitude, along with a farming occupation, may contribute to XFS development and subsequent progression to XFG. To our knowledge, this is the largest study looking at the epidemiology of XFS/XFG in the Baja Verapaz region of Guatemala for those over the age of 15 years seeking eye exams and interventions.
Introduction
Guatemala is one of the most underserved countries in Central America, and close to half of Guatemalans live in rural areas [1]. In addition to inadequate access to housing, clean water, food, and education, many of Guatemala's citizens also lack access to health care and especially eye care [2]. An estimated 80,000 Guatemalans are blind due to cataracts and thousands more are functionally blind from lack of access to eyeglasses [3,4]. In the mountainous region of Baja Verapaz, located several hours north of Guatemala City, lies the city of Salamá; people of diverse languages and communities call this region home, including the majority of Mayan Guatemalans that live in the highlands [5]. The vast majority of people residing in Salamá and nearby regions are medically underserved, as there is only one ophthalmologist for roughly 800,000 inhabitants. Obtaining population-based data on eye diseases that affect the underserved population in Guatemala will help with the development of interventions for preventable blindness in this population, thereby having a profound impact on quality of life and socioeconomic benefits [6,7].
Exfoliation syndrome (XFS), first discovered in 1917 in a Finnish population, is a complex, inherited systemic disorder characterized by abnormal accumulation of extracellular matrix material (ECM) in the eye, heart, brain, lungs, and skin [8][9][10][11][12][13][14][15]. Deposition of fibrillar ECM debris, or exfoliation material (XFM), within anterior segment structures of the eye is the manifestation of clinically diagnosed XFS (see Figure 1), which is the most common recognizable cause of open-angle glaucoma worldwide [16]. Patients with XFS are at high risk of developing exfoliation glaucoma (XFG), a particularly aggressive form of glaucoma, as well as more advanced and rapid cataract formation [17]. It is important to diagnose XFS given that cataract surgery in XFS can carry a higher risk of intraoperative and postoperative complications [18,19].
Based on the literature, XFS is rarely observed below age 50, and the prevalence increases significantly with age [30]. In a study of subjects over 60 years across various ethnicities, the prevalence rates ranged between 0% in Greenland Eskimos to 21% in Icelanders [31]. Other XFS prevalence rates include 0.4% in China [32], 13% in Spain [33], 7.8-12% for Gurung and 0.3% for Tamang peoples in Nepal [34], and 2.4% in women over the age Although originally believed to be an inherited disease found primarily in Scandinavian descendants, the XFS phenotype has been reported in multiple populations and appears to be highly associated with genetic variants in the lysyl oxidase like-1 (LOXL1) locus, a key enzyme in ECM deposition and repair [20][21][22][23]. Several additional hypotheses exist as to why this XFM collects and deposits in certain patients based on epigenetic underpinnings, possibly due to DNA methylation [24], UV exposure [25][26][27], and the latitude of a population [27]. The Reykjavik Eye Study [28] found no relation between time outdoors and risk of XFS but, recently, cumulative solar exposure and outdoor occupation have been linked to XFS development in USA and Israeli patients [26,29].
Based on the literature, XFS is rarely observed below age 50, and the prevalence increases significantly with age [30]. In a study of subjects over 60 years across various ethnicities, the prevalence rates ranged between 0% in Greenland Eskimos to 21% in Icelanders [31]. Other XFS prevalence rates include 0.4% in China [32], 13% in Spain [33], 7.8-12% for Gurung and 0.3% for Tamang peoples in Nepal [34], and 2.4% in women over the age of 65 in our Utah population [35]. One study found an XFS prevalence of 15% in western Guatemala and another found 24% in Baja Verapaz (22% had XFS and cataracts), but there are little epidemiologic data on XFS and XFG prevalence or incidence in Guatemala, especially in the Baja Verapaz region [36][37][38]. This observational study assessing XFS, XFG, and possible demographic factors of this region aims to better understand unique exogenous and endogenous risk factors that may be associated with XFS in Guatemalans.
Materials and Methods
During Moran Eye Center's global outreach trips from 2016-2017, the study team, working in conjunction with the Salama Lions Eye Hospital staff, traveled to the Baja Verapaz region to offer eye care and assess the prevalence of XFS/XFG in an underserved population. In Salama and several surrounding communities, outreach eye clinics were established to offer complete eye exams, refractions, and screening of patients for eye diseases that would benefit from surgery. People from all over the Baja Verapaz and the surrounding region traveled to these clinics, some more than 50 miles, to receive comprehensive dilated eye exams. All community members who presented to outreach clinics were seen and evaluated. In total, 181 patients were recruited for the study and consented. Access to care was not denied to any community member who came to the clinics. Study participants ranged in age from 15 to 94 years of age. Participants who could benefit from surgery received surgery, i.e., cataract, glaucoma, or pterygium, during those trips. Those with XFS/XFG were identified on dilated slit lamp exam. Glasses were made for those patients with refractive errors and all patients were referred back to the local ophthalmologist in Salamá for continued care.
At these on-site medical camps in the field, Moran surgeons and teams provide free eye exams as well as surgical interventions that can restore sight to hundreds of patients in a week, while also helping local trainees gain experience. The Moran outreach team made such trips to Guatemala from 2014-2017. The Moran Outreach team, consisting of ophthalmologists, nurses, medical and surgical technicians, and researchers, is an investment in training more ophthalmologists around the world.
Under informed consent, which was translated into either Spanish or Mayan, eye exams were performed, questionnaires were completed, and blood samples for future genetics work from XFS patients and their family members were collected. Age, gender, hometown, ancestry (languages spoken by parents and grandparents), past medical history (PMH), family medical history (FH), and occupational (only 2017 trip) data were obtained for each patient. Location/hometown was further divided into rural (less than 2000 inhabitants) and urban (>2000 inhabitants). Patients were allowed multiple responses for language, PMH and FH. This research was conducted under Utah IRB (00081512). All data were de-identified, HIPAA compliant, and adhered to the tenets of the Declaration of Helsinki.
In 2016, anterior lens capsules were collected from patients undergoing cataract surgery. These capsules were processed under a collaborative agreement with the Singapore Research Center and Duke University to measure LOXL1 mRNA.
Demographic variables were examined using descriptive statistics. To determine if there were any significant differences between patients with and without an XFS/XFG diagnosis, a chi-squared test was used for all discrete variables and the Kruskal-Wallis test was used for age. To account for multiple testing, the Bonferroni correction was applied. Demographic variables that were significantly associated with an XFS/XFG diagnosis after adjustment for multiple testing were considered for inclusion in multivariable modeling. The resulting multivariable logistic regression model was adjusted for age and no family medical history. p-values ≤ 0.05 were considered statistically significant. All analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, USA).
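A minimal sketch of this analysis pipeline on simulated data (our illustration in Python rather than SAS; all values are hypothetical, not the study data):

```python
# Chi-squared for a discrete factor, Kruskal-Wallis for age, Bonferroni
# adjustment, then logistic regression reported as odds ratios with 95% CIs.
import numpy as np
from scipy.stats import chi2_contingency, kruskal
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 153
xfs = rng.integers(0, 2, n)                      # 1 = XFS/XFG diagnosis
age = rng.normal(64, 12, n) + 8 * xfs            # older in the XFS/XFG group
rural = rng.integers(0, 2, n)

table = np.array([[np.sum((rural == r) & (xfs == d)) for d in (0, 1)]
                  for r in (0, 1)])
chi2, p_chi2, _, _ = chi2_contingency(table)
_, p_kw = kruskal(age[xfs == 0], age[xfs == 1])
p_adj = [min(p * 2, 1.0) for p in (p_chi2, p_kw)]   # Bonferroni over 2 tests

X = sm.add_constant(np.column_stack([age, rural]))
fit = sm.Logit(xfs, X).fit(disp=0)
or_age = np.exp(fit.params[1])                   # OR per year of age
lo, hi = np.exp(fit.conf_int()[1])               # 95% CI for that OR
print(f"chi2 p={p_chi2:.3f}, KW p={p_kw:.3f}, Bonferroni-adjusted={p_adj}")
print(f"OR per year of age = {or_age:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```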
Results
Out of 181 total patients evaluated by the Moran team who provided consent to join the study, 10 patients had insufficient data to be included and 18 lacked a clear diagnosis of either XFS or XFG, resulting in 153 evaluable patients (Figure 2). The overall prevalence of XFS/XFG in our studied population appeared to be 36%. Of these 153 patients, 66 were male (45%), 81 were female (55%), and 6 did not indicate gender. The XFS/XFG rate was 52% in males and 48% in females, and XFS/XFG patients were 48 years of age or older. Demographic details are included in Table 1. The average age of all patients examined was 64 years, with a range of 15 to 94 years. The 55 patients with XFS/XFG were, on average, 72 years of age. The median age of XFS/XFG patients was significantly older than that of non-XFS/XFG patients (74 vs. 63 years; p = 0.0008). A significantly higher proportion of patients with an XFS/XFG diagnosis (32 (58%)) compared to those without an XFS/XFG diagnosis (30 (31%)) had no reported family medical history (p = 0.02).
Lack of any reported family history significantly increased the odds of an XFS/XFG diagnosis in both the unadjusted (OR 3.15, 95% CI 1.59, 6.27, p-value 0.001) and adjusted analysis (OR 2.34, 95% CI 1.13, 4.83, p-value 0.02). Details of univariate and multivariate analysis can be found in Table 2.
The most common occupations were farming and housekeeping. Higher rates of XFS/XFG were noted in individuals from rural (76%) compared to urban settings (24%). Self-proclaimed ethnicity was Mayan in 40% and Spanish in 51% of XFS/XFG patients. Blood analysis is underway. Capsular analysis for LOXL1 mRNA was negative.
XFS/XFG rates by parental language were 64% for Spanish and 44% for languages other than Spanish. Rates by grandparent language were similar, at 56% for Spanish and 45% for Mayan. XFS/XFG patients had three salient conditions in their PMH and FH: cardiovascular (CV) disease, hypertension (HTN), and diabetes (DM). PMH rates were 6% for DM and 19% for HTN; FH rates were 7% for DM and 11% each for CV disease and HTN in the XFS/XFG cohort. Further details on occupation, PMH, and FH can be found in Table 1.
Discussion

In this population of patients presenting to outreach eye clinics, XFS was found in higher proportions of patients with advanced age, males, rural locations, outdoor occupations, non-Spanish languages spoken by parents and grandparents, and a PMH or FH of HTN, DM, and CV disease. CV disease included arrhythmia, heart attack, heart disorder, dyslipidemia, and stroke, which are chronic conditions generally associated with underlying atherosclerosis [48]. Recently, CV disease was determined to be an associated risk factor influencing early glaucoma development in XFS patients, perhaps due to vascular dysfunction [49,50].
While age was statistically significantly associated with XFS/XFG, the increase in the odds of an XFS/XFG diagnosis was very modest, at 6%. The association of XFS/XFG with lack of family history of CV disease, asthma, and diabetes is very strong. Two studies suggest that XFS might be associated with longer survival [13,14]. One possibility is that, in patients with XFS/XFG, the XFS favorably alters the relation between other systemic diseases and survival. This possible interaction is not currently well understood, and research to clarify this relationship should be conducted.
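For intuition on the size of this effect (our own arithmetic; since Table 2 is not reproduced here, we assume the 6% increase is per year of age, a common reporting convention):

```python
# Compounding a per-year odds ratio over longer age spans.
or_per_year = 1.06
for years in (1, 5, 10, 20):
    print(f"{years:2d} year(s) -> cumulative odds ratio {or_per_year**years:.2f}")
# a "modest" 6%/year compounds to roughly 1.8-fold per decade of age.
```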
In this XFS/XFG cohort, approximately equal numbers of XFS were seen in those who reported Mayan versus Spanish descent (40% and 51%, respectively). This could reflect a stronger environmental influence rather than genetic variability in this region. Further, the difference in XFS rates between sexes, 52% in males and 48% in females, is consistent with previously published literature [51,52] and, in this setting, could be attributed to cultural norms of men having outdoor occupations like farming, thus increasing their UV exposure.
Various environmental and epigenetic factors have been hypothesized to increase the risk of XFS, such as ocular exposure to UV light and increasing latitude and altitude leading to epigenetic modifications [26,27]. UV exposure is an interesting risk factor to consider in this population given their ample UV exposure due to the predominant occupations of rural regions like Baja Verapaz. It has been hypothesized that UV could be related to XFS as a cause of tissue insult (oxidative stress) and an epigenetic risk factor, triggering molecular pathways that in turn result in abnormal extracellular matrix accumulation [53].
The environmental risk factor of increased sun exposure could contribute to elevated XFS rates in rural communities and among those with outdoor occupations. Similarly, XFS was found in 110 of 480 fishermen or agriculturalists and none of 60 urban residents in a Northern Adriatic Sea population [54]. A study in southern India found that subjects who worked outdoors had a significantly higher odds ratio of XFS [25,55]. It is important to consider the low XFS prevalence among the Inuit of Greenland and Peruvian residents of Lake Titicaca (4000 m, mostly sunny, low humidity) [52,56]. Pasquale et al. speculate that these groups have relatively thick irides that may ameliorate uveal tract damage caused by the expected high degree of reflected UVR [26]. In our Guatemalan cohort, higher rates of XFS were noted in rural patients, at 76% compared to 24% in urban patients. We hypothesize that the increased sun exposure associated with the outdoor farming occupations that dominate rural communities in Guatemala may contribute to this increased development of XFS.
Latitude and altitude have also been reported as possible contributors to XFS development. In 1973, Forsius found no XFS in Eskimos but did find XFS in 20% of Lapps living at the same latitude [57,58]. In 2011, Stein et al. determined that living at more northern latitudes within the United States and solar exposure may be environmental risk factors for XFS [27]. Kang et al., in 2012, similarly found that northern latitudes in the US may contribute to XFS, but Scandinavian heritage was not significantly associated with the disease [59]. Moreover, XFS was found to be more prevalent in people living at high altitudes in two series [60,61], but not in a third [31]. Faulkner found a 38% XFS prevalence in Navajo Indians over age 60 residing at 1500 m and 36° N in Arizona [45], but this has been debated based on a recent pending study at our institution, which found a rate of only 0.6% [44]. Stein found, in the US Midwest, that increasing elevation with increased sunny days had a decreased hazard ratio for XFS [27]. Another consideration is Barger et al., who reported a 15% XFS prevalence in a western Guatemala population, raising suspicion that environmental factors such as climatic factors, altitude, and UV exposure may contribute to XFS development and XFG progression [36].
These results do not completely explain the possibility of increased XFS in Baja Verapaz, a mountainous region at 960 m and 15° N latitude with ample sunny days; thus, other factors are likely at play. Further studies are warranted to understand possible environmental factors in various geographic locations that may be contributing to XFS and subsequent progression to XFG.
Other Considerations
A high prevalence of XFS/XFG of 36% was found in these patients seeking eye care at outreach clinics from 2016-2017. It is well accepted that XFS can lead to more advanced cataracts and vision-threatening glaucoma; thus, this cohort may have been more visually challenged and therefore more likely to seek eye care. XFS/XFG creates a substantial burden within families, as a visually impaired family member likely cannot work and provide the income needed to sustain the household. This may cause people outside and inside the home to become reliant on other family members for resources. If there is a genetic or familial component, this could disproportionately affect the economics of families who currently have an XFS/XFG family member.
Barriers to accessible healthcare can be significant for this population and may include distance from medical providers, socioeconomic status, lack of transportation, and infrastructure. A single ophthalmologist serves the roughly 800,000 inhabitants near Salamá, and those treated are the inhabitants who can afford the care and who have the means and ability to travel. Some participants reported difficulty traveling from their hometown to Salamá due to scarce daily bus trips. Taxis provide personalized timing and destinations, but are expensive compared to buses. Factoring in 1-2 days of travel for eye care means loss of income for those days, which is usually compounded by the need for a family member to accompany the patient. This is only a glimpse of the difficulties Guatemalans and other members of developing countries face in accessing healthcare.
Further, the patients who completed the trip to Salamá and the nearby community eye camps were able to travel, which suggests healthier, more able-bodied individuals. This may not represent the population in greatest need of eye care: those in poor health, and those possibly lacking support, may have been unable to travel to Salamá or the nearby communities.
The Baja Verapaz region has a high concentration of indigenous Guatemalans who have historically been treated as lesser than their ladino (mixed Latino/European descent) counterparts [62]. This is an important consideration when analyzing the barriers to medical access in this region and how it may differ from that of other more urban or similarly rural regions of the country and of South America. Bolivia has similarly dense indigenous regions where traditional medicine is employed due to limited access, and where the state of medical services for indigenous people falls short [63]. Another cultural consideration surrounding access to medical care in rural Guatemala is the history of the civil war. Guatemalans are still distrustful of authority and contemporary medical care, and this affects the decision to seek medical care, on top of transportation and financial concerns [62,64].
As part of the Moran Eye Center's mission to improve access to ophthalmic eye care around the world, Dr. Orlando Gonzalez received hands-on training from Moran physicians and specialized training in utilizing phacoemulsification for cataract surgery.
Study Strengths
This is the largest study to date assessing the prevalence of XFS/XFG in the Guatemalan population of Baja Verapaz over the age of 15 years seeking eye exams and interventions.
Longitudinal follow up by Dr. Gonzalez continues to occur in managing ocular pathologies in the residents of Baja Verapaz. We hope to update this study in the future with ongoing results from Dr. Gonzalez.
Study Limitations
As this study was limited to self-selected patients who attended outreach eye camps seeking eye care, the sample size was modest. This study included community members from across the Baja Verapaz region, who were able to travel, and who were found to have XFS, XFG, and/or other eye pathologies. Furthermore, given that patients were being evaluated for surgery, and XFS is known to increase the likelihood of cataracts, we may be biasing the sample. Thus, our findings may not be representative of the general population of Baja Verapaz, but are likely representative of communities within and surrounding Salamá and of those with visual complaints as well as concomitant cataracts.
Another limitation is communication across an array of languages. The ability to obtain information depends on the quality of the questions asked, the interpretation, and the understanding of the patients. No professional interpreters were available; clinic staff, family members, or outreach team members served as interpreters for these encounters.
Further, this is a small population that was analyzed at a snapshot in time. This study, with a relatively small sample size, may not have adequate power to detect small effects between groups, and we acknowledge that false-negative type II errors may have occurred. Future studies should enroll larger populations and follow patients prospectively and longitudinally to better assess how various factors could affect XFS development and progression to XFG. Limitations of comparison across studies also exist due to population differences, study methodologies, the lack of large population-based prospective studies, and differing environmental factors [65]. More data are needed to confirm what appears to be a high prevalence of XFS. It is important to acknowledge the significant impact XFS can have on vision; hence, identifying this disease early could produce favorably impactful outcomes for a large portion of the population in the Baja Verapaz region.
Conclusions
This cross-sectional observational study demonstrates that the prevalence of XFS/XFG in Guatemalans in the Baja Verapaz region seeking eye care and interventions during a Moran Outreach trip was upwards of 36%. This rate is on the upper end compared to other populations globally and elevated compared to 15% in a western Guatemalan cataract population [36] and 24% in a Mayan cohort (22% also had cataracts) [38]. Larger epidemiology studies are warranted to better understand the prevalence and impact of XFS/XFG. Further, a next step could be to combine demographic and social factors from other papers as a meta-analysis to better understand risk for XFS and XFG. | 2022-03-30T15:18:15.290Z | 2022-03-24T00:00:00.000 | {
"year": 2022,
"sha1": "2056f05f7ca877183ac9e032d607fee3429e3913",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/7/1795/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81c102f64a70c2f88a38d62a3323438bca57ed8b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3439791 | pes2o/s2orc | v3-fos-license | Extensive Degradation and Low Bioavailability of Orally Consumed Corn miRNAs in Mice
The current study seeks to resolve the discrepancy in the literature regarding the cross-kingdom transfer of plant microRNAs (miRNAs) into mammals using an improved miRNA processing and detection method. Two studies utilizing C57BL/6 mice were performed. In the first study, mice were fed an AIN-93M diet and gavaged with water, random deoxynucleotide triphosphates (dNTP) or isolated corn miRNAs for two weeks (n = 10 per group). In the second study, mice were fed an AIN-93M diet, or the diet supplemented with 3% fresh or autoclaved corn powder for two weeks (n = 10 per group). Corn miRNA levels were analyzed in blood and tissue samples by real-time PCR (RT-PCR) following periodate oxidation and β elimination treatments to eliminate artifacts. After removing false positive detections, there were no differences in corn miRNA levels between control and treated groups in cecal, fecal, liver and blood samples. Using an in vitro digestion system, corn miRNAs in AIN-93M diet or in the extracts were found to be extensively degraded. Less than 1% was recovered in the gastrointestinal tract after oral and gastric phases. In conclusion, no evidence of increased levels of corn miRNAs in whole blood or tissues after supplementation of corn miRNAs in the diet was observed in a mouse model.
Introduction
MicroRNAs (miRNAs) are a group of small, functional, non-protein-coding RNA oligonucleotides that were discovered two decades ago and are universally found in microorganisms, plants, and animals [1,2]. miRNAs have been shown to mediate 30% of the post-transcriptional silencing in mammals and modulate a wide range of critical biological processes, including neuronal development, cell differentiation, apoptosis, proliferation, and immune response [1][2][3][4][5]. In 2012, Zhang and colleagues reported a novel cross-kingdom uptake of intact rice miRNAs (miR156a and miR168a) via dietary consumption into the circulation and organs of humans and mice [6]. More importantly, the rice miR168a was reported to directly down-regulate the expression of a cholesterol regulation-related gene, LDLRAP1, in the liver [6]. This result suggests that ingestion of plant miRNA may influence physiology and health in humans. The importance of this finding was signified by the number and breadth of follow-up studies [7][8][9][10][11][12][13][14][15][16][17][18]. However, various follow-up bioinformatic, in vitro and in vivo studies seeking to identify the cross-kingdom transfer of plant miRNAs and the presence of exogenous miRNAs in human or mammalian circulation or organs have led to mixed results [7][8][9][10][11][12][13][14][15][16][17][18]. At present, no definitive conclusion has been reached regarding the extent and prevalence of dietary plant miRNAs entering circulation.
The detection of plant miRNAs in human or animal samples can be challenging and confounded by potential artifacts introduced in sample preparation and sequence detection, and by difficulties in delineating the origin of the miRNAs [15,16]. These inherent problems related to plant miRNA analysis may contribute in part to the difficulty in validating/confirming cross-kingdom regulation by plant miRNAs. A good marker of the plant origin of miRNAs, for specific detection of plant miRNAs in biological samples, is the distinct methylation pattern on the 3′ ends of plant versus mammalian miRNAs [19]. Taking advantage of this 3′ protective property of plant miRNAs, we previously reported that periodate oxidation and β elimination treatment specifically targets the unmethylated 3′ ends of miRNAs to eliminate artifacts in the PCR detection of plant miRNAs in biological samples [20]. Hence, we believe that analysis using this method may help us validate the presence of plant miRNA in biological samples and elucidate the bioavailability of plant miRNA in vivo.
The present study seeks to test the hypothesis that dietary plant-derived miRNA can survive the gastrointestinal (GI) tract and be taken up into circulation and tissues in vivo. This study utilized corn as the source of dietary plant miRNAs. Common plant miRNAs, such as miR156a, miR164a, and miR167a, are detected at high levels in corn [21]. More importantly, corn and its processed products are among the most widely used and consumed food ingredients [22,23]; therefore, the cross-kingdom transfer of these miRNAs could bear significant biological, health and economic impacts. Corn or corn miRNA extract was incorporated into rodent diet. An autoclaving-based method was developed to degrade corn miRNAs for use as a control treatment, eliminating the matrix effect as a variable. Corn miRNAs in the gastrointestinal tract, liver, and blood from animals fed a corn-supplemented diet or gavaged with miRNA extracts from corn were determined using the periodate oxidation/β-elimination method.
Our results indicate that corn miRNA was extensively degraded in the GI tract and that the uptake into circulation and tissues was minimal.
Corn small RNA isolation

Fresh corn used in this study was purchased from a local market (Beltsville, MD, USA). Corn small RNAs were isolated according to a previously published protocol [24]. Briefly, 0.1 g of pulverized plant sample was added to a 1.5 mL microcentrifuge tube with 500 µL of LiCl extraction buffer and 500 µL of phenol, pH 8.0. The extraction mixture was vortexed for 1 min and incubated for 5 min at 60 °C. The mixture was then centrifuged at 20,000× g at 4 °C for 10 min. The upper phase was transferred to a new microcentrifuge tube and 600 µL of chloroform-isoamyl alcohol (24:1; v/v) was added. The mixture was vortexed and centrifuged, and the upper phase was transferred to a new microcentrifuge tube and incubated at 65 °C for 15 min. Then, 50 µL of 5 M NaCl and 63 µL of 40% polyethylene glycol 8000 (w/v) were added, followed by incubation on ice for at least 30 min. The low-molecular-weight RNA was separated by centrifugation from the pellet, which consisted of high-molecular-weight RNA and DNA. The supernatant was mixed with 500 µL of phenol-chloroform-isoamyl alcohol (25:24:1; v/v/v). The mixture was centrifuged, and the supernatant was transferred to a new microcentrifuge tube with 50 µL of 3 M sodium acetate, pH 5.2, and 1200 µL of absolute ethanol. RNA samples were incubated at −20 °C overnight. The small RNA was precipitated, washed twice, and dried. Isolated RNA was resuspended in RNase-free water and kept at −80 °C. RNA concentration and purity were determined using a Nanodrop 8000 Spectrophotometer (Thermo Fisher Scientific, St. Louis, MO, USA).
Animals, Diets, and Study Design
Male C57BL/6 mice (5 weeks old, Charles River, Wilmington, MA, USA) were acclimated for 1 week and given free access to water and the AIN-93M diet (D10012M, Research Diet, Inc., New Brunswick, NJ, USA). The animals were then randomly assigned to two subsequent studies (30 animals each). For study 1, all animals were fed a control diet (AIN-93M, 10% of calories from fat). The animals were divided into three treatments (n = 10/treatment): (1) control group (water); (2) random nucleotides (25 µg in 100 µL distilled water) (dNTP group); (3) purified small RNA isolated from corn kernel (25 µg in 100 µL distilled water) (Corn sRNA group). Random nucleotides and corn small RNA were administered by gavage using water as a vehicle, and the control group was given 100 µL of water. For study 2, the animals were assigned to the following three groups: (1) control diet (AIN-93M, 10% of calories from fat) (Control group); (2) AIN-93M + 3% autoclaved corn kernel powder (Autoclaved corn group); (3) AIN-93M + 3% fresh corn kernel powder (Fresh corn group) (Supplementary Materials Table S1). Corn kernel was incorporated into the diets as a ground powder. The amounts of corn small RNA or corn kernel were calculated based on 1 serving/day of 166 g corn kernel for human consumption (National Nutrient Database for Standard Reference, Release 28, Agricultural Research Service-USDA) and a food intake of 5 g per day for the mouse. The powder was blended into the formulated diet by Research Diets, Inc. (New Brunswick, NJ, USA). Animals were singly housed in ventilated racks for the duration of the experiment. Fecal samples were collected daily. At the end of the two-week feeding period, blood was collected by cardiac puncture with syringes previously rinsed with potassium EDTA solution (15% w/v) and kept on ice. Contents of the cecum and colon, and liver samples, were collected and immediately frozen in liquid nitrogen. All samples were kept at −80 °C before analysis. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the U.S. Department of Agriculture (USDA) Agricultural Research Service (ARS) Beltsville Area Institutional Animal Care and Use Committee (IACUC) (Protocol # 16-017).
Plant miRNA Isolation and Detection
miRNA isolation from the blood and tissue samples was performed using the mirVana™ miRNA Isolation Kit (with phenol) from Thermo Fisher Scientific (St. Louis, MO, USA) according to the manufacturer's protocol. Plant miRNAs were detected and quantitated using quantitative real-time PCR (qRT-PCR) as previously described [20]. Specific plant miRNA primers were purchased from Thermo Fisher Scientific (St. Louis, MO, USA) (TaqMan MicroRNA Assay: 000333, 000344, 000348, 241641_mat) and used for microRNA reverse transcription and detection. The TaqMan MicroRNA Reverse Transcription Kit (Thermo Fisher Scientific, St. Louis, MO, USA) and the small RNA-specific RT primer from the TaqMan MicroRNA Assays were used to synthesize complementary DNA. Five µL of 2 ng/µL RNA was used in reverse transcription and 1 µL of reverse transcription product was used in quantitative PCR. PCR was performed on a ViiA7 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) using TaqMan Universal PCR Master Mix (Cat No.: 4304437) and the miRNA-specific TaqMan primer from the TaqMan MicroRNA Assays. The following amplification parameters were used for PCR: 50 °C for 2 min, 95 °C for 10 min, and 40 cycles of amplification at 95 °C for 15 s and 60 °C for 1 min.
In Vitro Digestion of Plant miRNAs
An in vitro digestion model was used to mimic oral, gastric, and intestinal digestion of plant miRNAs, and the procedure was adapted from a previously published protocol [25]. Rodent fecal samples were collected at the moment of excretion and immediately frozen in liquid nitrogen. Fecal pellets were re-suspended in LB broth under anaerobic conditions and vortexed to homogenize and release the fecal bacteria into the supernatant. The fecal suspension was then centrifuged at 800× g for 3 min to separate the fecal debris. Fecal bacteria were expanded in LB broth under anaerobic conditions at 37 °C and cryopreserved in 25% glycerol at −80 °C before treatment. Briefly, the digestion fluids were as follows. Oral digestion phase: an electrolyte buffer (composed of KCl, KH₂PO₄, NaHCO₃, NaCl, MgCl₂(H₂O)₆, (NH₄)₂CO₃, CaCl₂(H₂O)₂) and 75 U/mL α-amylase at pH 7; gastric digestion phase: an electrolyte buffer and 2000 U/mL pepsin at pH 3; intestinal digestion phase: an electrolyte buffer, 100 U/mL pancreatin, and 10 mM bile at pH 7; colonic phase: fecal bacteria culture under anaerobic conditions (seeded at 1 × 10⁷ per mL) at 37 °C. Oral phase: 1 g/mL of AIN-93M diet with fresh corn, 25 µg/mL of corn miRNA extract, or 25 µg/mL of methylated or unmethylated miR171j was mixed with the oral phase digestion buffer at 1:1 (v/v) and incubated at 37 °C for 2 min. Gastric phase: the oral mix, after incubation, was mixed with the gastric phase digestion buffer at 1:1 (v/v) and incubated at 37 °C for 2 h. Intestinal phase: the gastric mix, after incubation, was then mixed with the intestinal phase digestion buffer at 1:1 (v/v) and incubated at 37 °C for 2 h. Colonic phase: the intestinal mix, after incubation, was mixed with the colonic phase digestion fluid at 1:1 (v/v) and incubated at 37 °C under anaerobic conditions overnight. Digestion samples were collected at each phase, flash frozen in liquid nitrogen, and stored at −80 °C until analysis. MiRNAs were isolated using the mirVana™ miRNA Isolation Kit (with phenol), and plant miRNAs were detected using the RT-PCR procedure described above. Recovery of miRNAs was calculated as the percentage of miRNAs detected after each digestion phase relative to the amount of miRNAs added at the beginning of the assay.
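A minimal sketch of the recovery calculation defined above follows. Because each phase mixes the previous digest 1:1 (v/v) with fresh fluid, the input is diluted two-fold per phase; whether the published protocol corrected for this dilution is not stated, so both an uncorrected and a dilution-corrected recovery are shown. All detected concentrations below are invented for illustration.

```python
# Recovery per phase: detected amount as a percentage of the starting input.
input_ug_per_ml = 25.0  # miRNA concentration added at the start of the assay

# Hypothetical detected concentrations (ug/mL) after each phase:
detected = {"oral": 8.0, "gastric": 0.4, "intestinal": 0.1, "colonic": 0.02}

dilution = 1.0
for phase, conc in detected.items():
    dilution *= 2.0  # one more 1:1 (v/v) mix with digestion fluid
    raw = 100.0 * conc / input_ug_per_ml
    corrected = 100.0 * conc * dilution / input_ug_per_ml
    print(f"{phase:>10}: recovery {raw:5.2f}% (dilution-corrected {corrected:5.2f}%)")
```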
Statistical Analysis
All experiments were conducted in triplicate, and data are reported as mean ± standard deviation. Synthetic miRNAs spanning a range of 0.005 amol to 500 fmol were used to construct a standard curve in qRT-PCR. Linear regression and statistical analysis were performed using GraphPad Prism 6 (2015, GraphPad Software, La Jolla, CA, USA). Significant differences between means of treated groups and controls were determined using one-way analysis of variance (ANOVA) and Tukey's Honestly Significant Difference (HSD) test. Statistical significance was defined at p ≤ 0.05.
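The standard-curve quantitation described above can be sketched as follows: Ct is approximately linear in log10(input amount), so a fitted line can be inverted to convert a sample Ct into an absolute quantity. Only the 0.005 amol to 500 fmol standard range comes from the text; the standard points and Ct values below are hypothetical.

```python
import numpy as np

# Hypothetical standards within the stated 0.005 amol-500 fmol range (in fmol).
amounts_fmol = np.array([5e-6, 5e-5, 5e-4, 0.05, 5.0, 500.0])
ct = np.array([38.1, 34.8, 31.4, 24.9, 18.3, 11.8])  # invented Ct values

# Fit Ct = slope * log10(amount) + intercept.
slope, intercept = np.polyfit(np.log10(amounts_fmol), ct, 1)

def ct_to_fmol(ct_value: float) -> float:
    """Invert the standard curve to estimate input amount from a Ct value."""
    return 10 ** ((ct_value - intercept) / slope)

sample_ct = 29.0
print(f"Estimated input: {ct_to_fmol(sample_ct):.4g} fmol "
      f"(slope {slope:.2f}; ~ -3.32 per decade at 100% PCR efficiency)")
```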
Food Intake, Body Weight and Levels of Corn miRNA Consumption
Two animal studies were conducted to test our hypothesis. In study 1, animals received a daily gavage of (1) 100 µL ultrapure water, (2) 100 µL of 250 ng/µL random dNTP, or (3) 100 µL of 250 ng/µL corn sRNA isolates. The isolates were pre-determined to contain 56.03 pg of miR156a, 13.2 pg of miR164a, and 1215.63 pg of miR167a per 25 µg of corn sRNA isolates. In study 2, animals were fed (1) the AIN-93M base diet, (2) AIN-93M + 3% fresh corn powder, or (3) AIN-93M + 3% autoclaved corn powder. The diet supplemented with the fresh corn powder was determined to contain 173 pg of miR156a, 65 pg of miR164a, and 515 pg of miR167a per gram of diet. No significant reduction of corn miRNA in the diet was detected between the start and end of the study (Supplementary Materials, Figure S1). Autoclaving corn powder at 121 °C for 30 min degraded 99%, 98%, and 97% of miR156a, miR164a, and miR167a, respectively (Supplementary Materials, Figure S2). The autoclaved corn powder was used as a matrix control. For the corn sRNA extract study, no difference was observed in body weight or food intake throughout the two-week feeding period (Figure 1A,B). For the corn powder supplementation study, no difference was observed in final body weight or food intake. Interestingly, total body weight gain over the 2-week feeding period was significantly lower in the fresh-corn fed animals (1.03 ± 0.89 g) compared with that of the autoclaved-corn fed animals (2.03 ± 0.68 g) (p < 0.05) (Figure 1C,D).
Analysis of Corn miRNAs in Blood and Liver
Liver and whole blood samples from both studies were analyzed for plant miRNAs (miR156a, miR164a, and miR167a) as determined by the Ct values for specific miRNAs from RT-PCR, where higher Ct values indicate lower expression levels. In study 1, using corn small RNA extracts, we found no differences in miRNA Ct values between the control, dNTP, and corn miRNA groups in the liver and whole blood samples (Figure 2A, upper panel). In the whole blood samples, the Ct values of the three experimental groups were the same as those of the no-template control (Figure 2A, upper panel). By contrast, the Ct values of the liver samples were significantly lower than those of the no-template controls (Figure 2A, upper panel). However, after periodate oxidation followed by β elimination, no differences were detected between the no-template control group and the experimental groups in the liver and blood samples (Figure 2A, lower panel, Table 1).
Similar observations were made in the liver and whole blood samples from the second study using the fresh corn powder. No differences were detected in miRNA Ct values among the control, autoclaved corn, and fresh corn groups in the liver and whole blood samples (Figure 2B, upper panel). No differences in Ct values were observed between the no-template control group and the three experimental groups in whole blood samples (Figure 2B). Likewise, after periodate oxidation followed by β elimination, no differences were detected between the no-template control group and the experimental groups in the liver and blood samples (Figure 2B, Table 2).
Analysis of Corn miRNAs in Cecal and Fecal Samples
Corn miRNA was also determined in cecal and fecal samples. In the corn sRNA study, no differences were detected in miRNA Ct values between the control, dNTP, and corn sRNA groups in the cecal or fecal (from Days 2 and 14) samples. The Ct values of the no-template controls for the cecal and fecal (from Days 2 and 14) samples were significantly higher than those of the experimental groups (Figure 2A). Cecal and fecal (Day 14) samples were then treated with periodate oxidation followed by β elimination. After periodate treatment, no differences were detected between the no-template control group and the experimental groups in cecal samples. However, a significant difference persisted in the fecal samples, although the differences in Ct values were smaller (Figure 2A, Table 1).
A similar observation was made in the cecal and fecal (from Days 2 and 14) samples from the fresh corn powder study. Small but statistically significant differences were detected in the miR167a Ct values in cecal and Day 2 fecal samples, and in the miR164a Ct values in Day 14 fecal samples. The no-template control Ct values of cecal and fecal (from Days 2 and 14) samples were significantly higher than those of the experimental groups (Figure 2B). After periodate treatment, differences between the Ct values of the no-template control group and the experimental groups were much smaller than those before the periodate treatment in the cecal and fecal samples (Figure 2B, Table 2).
Based on the standard curves constructed for each miRNA, the recovery rate of miRNAs from fecal samples was calculated according to the detection levels in qRT-PCR (Tables 3 and 4). Less than 0.1% of the total miRNA (the sum of miR156a, miR164a, and miR167a) tested in both studies was recovered from the fecal samples.
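A minimal sketch of this recovery calculation: the amount detected via the standard curve divided by the amount ingested. The per-gram diet miRNA contents and the 5 g/day intake come from the text (study 2); the detected fecal totals are hypothetical placeholders.

```python
# Ingested amounts: diet content (pg/g, from the text) x 5 g/day intake.
ingested_pg = {"miR156a": 173 * 5, "miR164a": 65 * 5, "miR167a": 515 * 5}
# Hypothetical daily fecal totals estimated from the qRT-PCR standard curve:
detected_pg = {"miR156a": 0.4, "miR164a": 0.2, "miR167a": 1.5}

total_in = sum(ingested_pg.values())
total_out = sum(detected_pg.values())
print(f"Total miRNA recovery: {100 * total_out / total_in:.3f}%")  # < 0.1% as reported
```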
Fate of Corn miRNA in Mouse GI Tract
To assess the effect of digestion on corn miRNA, corn miRNA was determined in the contents collected from different parts of the mouse GI tract in our animal studies. Compared with the amount of corn miR167a administered via gavage or food intake, the recovered miRNAs in each section of the GI tract accounted for less than 0.3% of the originally ingested amount in the stomach, less than 0.1% in the intestine and feces, and less than 0.01% in the colon and cecum (Figure 3).
In Vitro Analyses of miRNA Recovery in GI Tract
Given that minimal corn miRNA was accounted for in the samples collected, we used an in vitro digestion system to determine whether degradation was responsible for these observations. Corn miRNAs supplemented in the AIN-93M diet or as miRNA extract, as well as methylated and unmethylated synthetic miR171j, were treated following an established in vitro digestion model [25]. After the oral phase, 53-66% of the original corn miR167a was detected in the digestion fluid. Synthetic miR171j with or without 3′-end methylation was used as positive and negative controls: 33% of the methylated miR171j was recovered, while only 4% of the unmethylated miR171j was recovered after the oral phase (Figure 4). After the gastric phase, over 97% of the corn miR167a in the AIN diet or in the miRNA extract was degraded. Finally, after the intestinal phase, less than 1% of miR167a was detected in the samples (Figure 4).
Discussion
This study investigated the cross-kingdom transfer and bioavailability of plant miRNAs using corn miRNAs administered in a mouse model, and the presence or absence of the ingested miRNAs was analyzed in the diet, cecum, feces, liver, and whole blood. An updated method using the periodate oxidation reaction was employed to ensure that the miRNA(s) detected in the biological samples were of plant origin. No corn miRNAs were detected in the cecum, feces, liver, or blood following supplementation of corn miRNAs in the animal diet or gavage to the animals. In conjunction with our in vitro digestion study, we concluded that the corn miRNAs are extensively degraded during the digestive process and are not taken up into the circulation or tissues in our mouse model.
Recent studies reported conflicting results on the detection of exogenous miRNAs in human or animal models [7][8][9][10][11][12][13][14][15][16][17][18]. The major issues appear to be (1) the reliability of detection and (2) the biological significance of miRNA at the detected level. The reliability issue may arise from false detection and the authenticity of the plant origin of the miRNAs. Oversampling in database analyses, potential contamination in sequencing studies, and false positive detection in PCR assays were pointed out as possible mechanisms of false detection in previous reports [13,14,20]. In this study, we tried to overcome the detection reliability issue by employing a detection method combining sequence specificity (PCR) and the characteristic 3′-end methylation of plant miRNAs (which is resistant to periodate oxidation). It is apparent from Figure 2 that potential false positive detections can occur in PCR-based assays, especially at high cycle numbers (>30 cycles). These differences were eliminated by periodate treatments that oxidize unmethylated miRNAs. It is possible that the confounding factor in cecal and fecal samples derives from mammalian miRNAs with similar sequences or from immature plant miRNAs that lack the 3′ methylation that protects miRNA from oxidation. A previous study by Luo et al. reported the detection of corn miRNAs in a pig model upon maize and chow consumption [26]. Similar levels of corn miRNAs in fresh corn were reported by Luo and in this study (Supplementary Materials, Figure S1). Luo and colleagues treated the blood and tissue samples with periodate, which led to reductions in expression levels while not completely eliminating detection [26]. The discrepancy between the two studies may stem from the different periodate treatment methods used. Compared with the periodate treatment used in Luo's studies [26], which was the same as that used by Zhang and colleagues in the 2012 study [6], the periodate oxidation followed by alkaline β elimination used in this study degrades miRNAs without 3′-end methylation more efficiently [20]. Differences in the model animals and food intake may also contribute to the different results observed in the two studies.
A critical factor for dietary exogenous miRNAs to exert biologically significant effects is the amount of intact and functional miRNAs that actually reaches the human or animal circulation and organs. In this study, no detectable levels of corn miRNAs were found in the circulation or the liver (Tables 1 and 2, Figure 2). In the GI tract, less than 0.1% of corn miRNAs were accounted for, and in vitro analysis showed that the dietary miRNAs were quickly and almost completely digested before arriving at the small intestine (Figure 4). Upon losing the intact sequence and the 3′-end methylation, the corn miRNAs may undergo further degradation in the GI tract to individual nucleotides. These nucleotides may be used by gut bacteria or taken up during absorption. However, the amount of free nucleotides from miRNAs accounts for a small fraction of the total nucleotides ingested from dietary sources, e.g., genomic DNA and RNA in the food, and is therefore not expected to play an important role as free nucleotides. Therefore, at least for miRNAs derived from corn, they are not likely to be available for systemic absorption. Previous studies reported that the food matrix, such as exosome packaging of bovine milk miRNAs, can improve the stability of miRNAs against digestion [27,28]. However, no differences were observed in this study between the corn miRNA extract and fresh corn powder supplements. Conflicting evidence on whether certain genetic materials can survive the GI tract was reported in previous studies [29]. Using a different in vitro digestion model consisting of a simulated gastric phase, Philip and colleagues reported that when soybean seeds were subjected to up to 75 min of in vitro digestion, soy miRNAs could still be detected in the digestion fluid [29]. In this study, a substantial amount of the corn miRNA was degraded in the oral phase, which was missing from Philip's model. The α-amylase used in this study was prepared from crude human saliva and may therefore contain additional enzymes or substances in trace amounts that break down miRNAs. Such reactions may mimic the actual conditions of the oral phase. The extent of degradation in the oral phase may be affected by a number of factors, such as the protective effect of the matrix (AIN-93M diet vs. miRNA extract), the type and amount of miRNAs (corn miRNAs vs. miR171j), and the methylation status. In other studies, plant miRNAs were observed to be significantly or completely degraded in the GI tract [13,15,17,18]. Considering the absence of detectable corn miRNAs in the circulation and liver in this study and the minimal recovery in the GI tract, the significance and biological relevance of exogenous miRNA transfer may be limited to selected plant foods.
Conclusions
Significant degradation of corn miRNAs occurred during digestion, which resulted in minimal uptake of corn miRNAs after oral consumption. No corn miRNAs could be detected in the whole blood, cecum, or liver of the animals. Moreover, degradation of corn miRNAs in the GI tract occurred relatively early; therefore, cross-kingdom transfer of exogenous miRNAs appears to be insignificant and not biologically relevant.
Supplementary Materials: The following are available online at www.mdpi.com/2072-6643/10/2/215/s1, Figure S1: Stability of corn miRNAs in the AIN-93M diet at the start and end of the feeding period, Figure S2: Degradation of corn miRNAs after autoclaving, Table S1: Diet composition.
Systemic scleroderma-related interstitial pneumonia associated with borderline pulmonary arterial hypertension
A 65-year-old woman with a 35-year history of limited cutaneous systemic scleroderma was admitted to our hospital complaining of a 3-month history of progressive dyspnoea on exertion. High-resolution CT images of the chest revealed diffuse reticular opacities and traction bronchiectasis predominantly in the bilateral lower lobes of the lung. Specimens obtained during video-assisted thoracic surgery were consistent with fibrocellular non-specific interstitial pneumonia and accompanied by accumulation of lymph follicles within areas of fibrosis. Although the patient received combination therapy with prednisolone and intravenous cyclophosphamide at a dosage of 500 mg/m2 monthly for 5 months, her clinical condition deteriorated gradually. In addition, right heart catheterisation revealed borderline pulmonary arterial hypertension with mean pulmonary artery pressure of 24 mm Hg. Therefore, we initiated a combination therapy of an antifibrotic agent, pirfenidone for 12 months, and the dual endothelin receptor antagonist, macitentan, with prednisolone. As a result, her clinical condition improved dramatically.
Background
Systemic scleroderma (SSc)-related interstitial pneumonia (IP) and pulmonary artery hypertension (PAH) are leading causes of morbidity and mortality in patients with SSc.
The efficacy and safety of novel antifibrotic agents such as pirfenidone and nintedanib in patients with SSc-IP are currently being evaluated in controlled clinical trials. 1 However, with the emergence of evidence-based treatments, including a novel agent such as macitentan, and the focus on early detection of SSc-PAH, there has been considerable improvement in patient survival. Moreover, patients with SSc may develop borderline pulmonary artery pressure (PAP), which may represent the early stage of PAH.
Herein, we describe a case demonstrating the long-term efficacy and safety of combination therapy with macitentan and pirfenidone in a patient with SSc-IP associated with borderline PAH.
Case presentation
A 65-year-old woman with a 35-year history of limited cutaneous SSc was admitted to our hospital for progressive dyspnoea on exertion with onset 3 months before admission. Physical examination revealed scleroderma in both arms and legs, with puffy fingers, Raynaud phenomenon and ankyloglossia. Lung auscultation revealed fine crackles in both lung bases. Laboratory tests of serum showed high levels of Krebs von den Lungen-6 (1223 U/mL), surfactant protein D (128 ng/mL), brain natriuretic peptide (BNP, 39.5 U/mL), antinuclear antibody titre (320-fold) and antiribonucleoprotein antibody titre (64-fold). Arterial blood gas analysis showed a pH of 7.41, arterial carbon dioxide tension of 41.5 mm Hg and arterial oxygen tension of 80.6 mm Hg at room air. The pulmonary function test revealed restrictive impairment (forced vital capacity (FVC) of 1.81 L, 77.7% of predicted) with decreased diffusion capacity for carbon monoxide (8.78 mL/min/mm Hg, 49.4% of predicted). Diffuse reticular and widespread ground-glass opacity (GGO) shadows and interlobular septal thickening in both middle and lower lobes, without honeycombing, were evident on chest high-resolution CT (HRCT) scan (figure 1A and B). The lung biopsy specimens of the left lung segments, S8 and S10, obtained by video-assisted thoracic surgery (VATS), revealed fibrocellular non-specific IP (NSIP) accompanied by accumulation of lymph follicles (figure 2A and B). Additionally, small pulmonary arteries with intimal fibrosis and medial hypertrophy within fibrotic lesions were widespread, resulting in marked luminal narrowing (figure 2C and D). Consequently, the patient was diagnosed with fibrocellular NSIP associated with SSc. Abnormalities on chest HRCT images, especially GGO, were initially ameliorated with methylprednisolone pulse therapy (figure 3A and B). Although she received a combination therapy of prednisolone and intravenous cyclophosphamide (CY) at a dose of 500 mg/m² monthly for 5 months, her clinical symptoms and pulmonary function deteriorated gradually, as deduced from pulmonary function tests and chest HRCT (figure 3C). Right heart catheterisation demonstrated borderline PAH with a mean PAP (mPAP) of 24 mm Hg, mean pulmonary capillary wedge pressure of 9 mm Hg, pulmonary vascular resistance of 185.5 dyne/s/cm⁵ and cardiac output of 6.47 L/min. The serum level of BNP increased from 39.5 U/mL to 67.4 U/mL. Moreover, the FVC (%) decreased from 77.7% to 65.8%.
Treatment

We initiated a combination therapy of pirfenidone and macitentan, with prednisolone. After 6 months of receiving this combination therapy, her clinical condition gradually improved, and mPAP decreased from 24 mm Hg to 15 mm Hg. The serum level of BNP also decreased to 30.1 U/mL. In addition, FVC increased to 74.3%. The 6 min walking distance (6MWD) and patchy GGO on chest HRCT scans showed a trend towards improvement (figure 3D). After receiving this combination therapy for 12 months, her clinical condition was stable with an mPAP of 17 mm Hg (table 1). She did not experience any serious adverse effects.
Discussion
To the best of our knowledge, this is the first report of a case where the combination therapy of pirfenidone and macitentan demonstrated efficacy in a patient with limited cutaneous SSc, fibrocellular NSIP and borderline PAH, which may translate into a favourable prognosis.
However, our results suggest that an improvement in pulmonary function with pirfenidone treatment does not necessarily correlate with an improvement in oxygenation.
SSc is a systemic idiopathic autoimmune disease characterised by exaggerated extracellular matrix deposition in skin and various internal organs, severe fibroproliferative vasculopathy and cellular and humoral immune response abnormalities. SSc-IP and SSc-PAH are both leading causes of morbidity and mortality in patients with SSc. In particular, patients with an mPAP between 21 and 24 mm Hg are considered to represent so-called borderline PAH. Additionally, their condition is thought to represent the early stages of pulmonary arterial vasculopathy, which is also an intermediate stage between normal PAP and the manifestation of PAH. In fact, the rate of progression to PAH in patients with SSc and associated borderline PAH was 19% after 3 years and 27% after 5 years. 2 Kovacs et al 3 recently reported that the borderline elevation of PAP was associated with cardiac and pulmonary comorbidities, decreased exercise capacity and a poor prognosis. In the present report, the patient only experienced improvement in mPAP and oxygenation after macitentan administration. Therefore, we believe that early interventions, such as administration of dual endothelin receptor antagonists, may play an important role in improving the treatment and outcome of patients with SSc and borderline PAH.
According to the LOTUSS trial, 1 pirfenidone had an acceptable tolerability profile in patients with SSc-IP. Moreover, preliminary studies evaluating the safety and efficacy of pirfenidone and nintedanib in patients with SSc-IP are currently underway. Two recent case series reported that patients with SSc-IP treated with pirfenidone showed improvement of dyspnoea, increased vital capacity and fewer GGOs. 4 5 More recently, Tashkin et al 6 reported that the efficacy of treatment with mycophenolate mofetil for SSc-interstitial lung disease (ILD) was similar to that of CY therapy and better tolerated. In addition, patients with IP related to connective tissue diseases do not usually require biopsy under VATS in the clinical setting. However, we believe this approach is useful for deciding whether antifibrotic agents or anti-inflammatory agents such as corticosteroids or immunosuppressants should be introduced. Indeed, this patient was diagnosed histopathologically with fibrocellular NSIP. The long-term efficacy of combination therapy of pirfenidone and macitentan in patients with SSc-IP associated with secondary PAH has not yet been proven. However, improvements in the modified Medical Research Council dyspnoea scale, 6MWD and pulmonary haemodynamic parameters in the present case indicated an increased exercise capacity. Nevertheless, the patient consistently presented low values of the lowest peripheral capillary oxygen saturation (SpO₂) during the 6 min walking test (6MWT). We presume that an improvement in pulmonary function with pirfenidone treatment does not always correlate with an improvement in oxygenation because, in previous clinical trials of pirfenidone, no statistically significant differences were detected in changes in SpO₂ during the 6MWT, which remained low. Our patient remained stable throughout the 1-year follow-up period. Of note, our patient did not die or experience serious drug-related adverse events during treatment.
In conclusion, long-term (12-month) combination therapy of pirfenidone and macitentan can provide a clinical and radiological improvement for patients with SSc-IP and borderline PAH when conventional treatments, such as prednisolone and/or CY, are ineffective.
Contributors KSu and SH: study design, data analysis, manuscript preparation and guarantor of the paper. KSu, TK and KSh: data collection and data analysis. KSu, KSh and SH: manuscript preparation and review. All authors had full access to all of the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Learning points
► The long-term (12-month) combination therapy of pirfenidone and macitentan may provide a clinical and radiological improvement for patients with systemic scleroderma-related interstitial pneumonia and borderline pulmonary artery hypertension when conventional treatments, such as prednisolone and/or cyclophosphamide, are ineffective.
"year": 2018,
"sha1": "16f707b04453211ba80ee51c1ea1ebcc1c98f72a",
"oa_license": "CCBYNC",
"oa_url": "https://casereports.bmj.com/content/casereports/2018/bcr-2017-221755.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "16f707b04453211ba80ee51c1ea1ebcc1c98f72a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Impact of percutaneous coronary intervention on renal function in patients with coronary heart disease
The relationship between cardiac and renal function is complicated. The impact of percutaneous coronary intervention (PCI) on renal function in patients with coronary artery disease is still unclear. The current study sought to assess renal function change, including the time course of renal function, after elective PCI in patients with improved renal function and to identify renal function predictors of major adverse cardiovascular events. We examined data from 1572 CHD patients who had coronary angiography
Introduction
Coronary angiography (CAG) and percutaneous coronary intervention (PCI) are valuable diagnostic and therapeutic tools in cardiovascular medicine. With expanded use in intermediate- and low-risk patients and widening access, PCI is increasingly performed for patients with coronary heart disease (CHD). Previous studies have shown that baseline renal dysfunction, unstable hemodynamics and the use of large amounts of contrast media are associated with deterioration of renal function [1,2]. Contrast-induced nephropathy (CIN) is characterized by a decline in renal function within the first 48-72 h following contrast administration, in the absence of alternative etiologies [3].
CIN occurs in up to 10% of cardiac catheterizations and coronary interventions, resulting in increased morbidity, mortality, and cost. Furthermore, it is clearly established that baseline and post-operative renal function are risk factors for in-hospital and short- and intermediate-term mortality following PCI. Renal dysfunction may be related to coronary microvascular dysfunction and obstruction [4]. Different clinical trials have confirmed that renal dysfunction, including reduced glomerular filtration rate (GFR) and albuminuria, is associated with increased risk for cardiovascular (CV) outcomes. However, improved renal function (IRF) after PCI has also been reported, even in patients with renal dysfunction at baseline [5,6]. IRF was associated with favorable renal outcomes. Hemodynamic stabilization may be important for improving the short-term and long-term renal outcomes of high-risk patients [7]. Clearly, data on renal function changes in patients with CHD after elective PCI are inconsistent. This led to the hypothesis that, although several clinical and procedural variables contribute to change in renal function, some patients may have a cardiorenal syndrome that is alleviated by improved hemodynamics post-PCI.
In this study, the prognostic significance of renal function is evaluated among patients with CHD who were treated with elective PCI. The current study was therefore designed to generate evidence regarding the effect of PCI on renal function, as a guide to clinicians.
Study design and patient population
Between January 2013 and December 2018, 1,872 consecutive patients with suspected coronary heart disease who underwent coronary angiography in Shanghai Dongfang and Tongji Hospitals were enrolled in this study, of whom 1,731 completed follow-up after coronary angiography. One hundred and fifty-nine patients were excluded from analysis as per the predefined exclusion criteria. The exclusion criteria included chronic peritoneal or hemodialysis treatment, malignant tumors or malignant hematological diseases, refractory heart failure, exposure to radiographic contrast within the previous two days, any allergies to radiographic contrast medium and/or coronary anatomy not suitable for PCI. There were 87 patients with GFRs of <15 mL/(min·1.73 m²), 26 with urinary system tumors, 32 with other malignant tumors and malignant hematological diseases, and 14 with non-ischemic cardiomyopathy and refractory heart failure (EF ≤25%). The study included 1,572 patients (Figure 1).
The Ethics Committee of our institution approved the present study, and all patients provided their written informed consent.
According to the results of coronary angiography, patients were divided into four groups. Group A: coronary artery stenosis ≤50% (including normal coronary blood flow and previously implanted stents without restenosis); Group B: 50% < coronary artery stenosis ≤70% (no PCI treatment; previously implanted stents without restenosis); Group C: coronary artery stenosis >70% undergoing PCI treatment, but with residual stenosis >50%; Group D: coronary artery stenosis >70% undergoing PCI treatment with residual stenosis ≤50% (all analyzed vessels had diameters ≥2.0 mm). Groups A and B were regarded as the CAG group, while groups C and D were regarded as the PCI group. A nonionic, low-osmolality contrast agent, iopamiron (755 mg iopamidol per milliliter, SINE, Shanghai, China), was used exclusively. The selection of the arterial access site, guide catheters, balloons and stents, contrast dose and supportive pharmacological therapies applied during the procedure was left to the discretion of the interventional cardiologist. The decision to perform PCI was made at the discretion of the operating cardiologist based on the patient's clinical profile, lesion characteristics, and patient preference. If a patient underwent multiple planned PCI procedures during the time frame, only the last procedure was included in the analyses. Each center's medical records were reviewed, and patients' demographic, clinical, and procedural data were obtained. In all enrolled patients undergoing PCI, the following drugs were either not initiated or, if already taken, discontinued: angiotensin converting enzyme inhibitor (ACEI), angiotensin receptor blocker (ARB), diuretic, and statins.
All patients underwent an echocardiographic evaluation at hospital admission. The SCr concentration was routinely measured, and the eGFR level was calculated before coronary angiography or PCI, 24 hours afterwards, and at follow-up. Relevant baseline and follow-up laboratory data (plasma glucose, hemoglobin A1c (HbA1c), LDL, uric acid) and all adverse clinical events were recorded during hospitalization. New-onset diabetes was defined as a fasting plasma glucose (FPG) of ≥7.0 mmol/L and a 2-h post-oral glucose load plasma glucose (2-h PG) of ≥11.1 mmol/L with no history of diabetes [10]. Uncontrolled LDL was defined as a follow-up LDL ≥70 mg/dL, and uncontrolled blood glucose was defined as a follow-up HbA1c ≥7%.
Endpoints and definitions
The primary endpoint of this study was the change in renal function from baseline to follow-up after CAG or PCI. Renal function was evaluated at three time points: baseline, 24 hours after PCI, and the latest follow-up (>6 months).
The eGFR was calculated using the four-variable MDRD study equation: eGFR = 175 × (plasma creatinine)^(−1.154) × (age)^(−0.203) (× 0.742 if the patient is female) [11].
According to Uemura et al., improved renal function (IRF) after PCI was defined as a 20% increase in eGFR at 7 or 30 days after baseline [7], since prior studies have demonstrated that a 20% increase in eGFR is associated with favorable renal outcomes, especially in patients with renal dysfunction at baseline [12]. Changes in renal function are easier to understand in terms of eGFR rather than creatinine levels. Similarly, this study defined improvement in renal function as a 20% increase in follow-up eGFR relative to baseline.
According to the 2016 ESC Heart Failure Guidelines, worsening renal function (WRF) was defined as a decrease in eGFR of ≥20% from baseline to follow-up [13]. Stable renal function was defined as neither a 20% increase nor a 20% decrease from baseline to follow-up. IRF and WRF were calculated as follows: IRF: (follow-up eGFR − baseline eGFR) / baseline eGFR > 20%; WRF: (baseline eGFR − follow-up eGFR) / baseline eGFR ≥ 20%.
Contrast-induced nephropathy was defined as either a 25% increase in baseline creatinine levels or a 0.5 mg/dL (44 μmol/L) increase in absolute serum creatinine levels within 72 h after the PCI according to the criteria of the main study [3].
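The renal-function endpoints above can be expressed as a short sketch: the four-variable MDRD eGFR, the 20% IRF/WRF thresholds, and the CIN criterion. This is an illustration of the stated definitions, not the study's analysis code; the example values are invented.

```python
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool) -> float:
    """Four-variable MDRD eGFR in mL/min/1.73 m^2."""
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    return egfr * 0.742 if female else egfr

def classify_renal_change(baseline_egfr: float, followup_egfr: float) -> str:
    """Apply the 20% thresholds for improved/worsening renal function."""
    change = (followup_egfr - baseline_egfr) / baseline_egfr
    if change > 0.20:
        return "IRF"
    if change <= -0.20:
        return "WRF"
    return "stable"

def is_cin(baseline_scr: float, scr_72h: float) -> bool:
    """CIN: >=25% relative or >=0.5 mg/dL absolute creatinine rise within 72 h."""
    return (scr_72h >= 1.25 * baseline_scr) or (scr_72h - baseline_scr >= 0.5)

# Invented example patient:
base = egfr_mdrd(1.1, 68, female=True)
follow = egfr_mdrd(0.8, 68, female=True)
print(f"baseline {base:.1f}, follow-up {follow:.1f} -> {classify_renal_change(base, follow)}")
print("CIN:", is_cin(1.1, 1.7))
```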
Secondary endpoints were major cardiovascular adverse events including heart failure readmission, recurrent myocardial infarction and in-stent restenosis.
Statistical analysis
Demographic data were described across the four groups as mean ± SD for continuous variables and number (%) for categorical variables. The Student's t-test and Mann-Whitney U test were used to compare continuous variables, as appropriate, and the chi-squared and Fisher exact tests were used to compare categorical variables, as appropriate. Profile plots were drawn for pictorial comparison of pre- and post-procedural creatinine change. A p-value of <0.05 was considered statistically significant.
Kaplan-Meier analysis with the log-rank test was used to assess the cumulative incidences of heart failure and myocardial infarction. Multiple imputation and survival analyses were performed in R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). Statistical analyses were performed using IBM SPSS® statistical software (SPSS version 23.0, Chicago, IL, USA) and R software (http://www.R-project.org/).
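As a rough illustration of the survival analysis described above, the sketch below fits a Kaplan-Meier curve and runs a log-rank test using the Python lifelines package rather than the R and SPSS tooling the study used. The follow-up times (months) and event flags are synthetic.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic follow-up data for two renal-function groups:
t_wrf = rng.exponential(24, 50); e_wrf = rng.random(50) < 0.5  # worsening eGFR
t_irf = rng.exponential(48, 50); e_irf = rng.random(50) < 0.2  # improved eGFR

kmf = KaplanMeierFitter()
kmf.fit(t_wrf, event_observed=e_wrf, label="WRF")
print("Median event-free time (WRF):", kmf.median_survival_time_)

result = logrank_test(t_wrf, t_irf, event_observed_A=e_wrf, event_observed_B=e_irf)
print(f"log-rank p-value: {result.p_value:.4f}")
```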
Baseline characteristics
The baseline and procedural characteristics of the 1,572 patients are summarized in Table 1. Between January 2013 and December 2018, a total of 1,572 patients were enrolled in the trial. Of these, 151 patients (group A) had coronary angiography only, including those with a previously implanted stent without restenosis, while 181 (group B) had borderline lesions without stent implantation; 634 patients (group C) underwent PCI but had residual stenosis of more than 50%, and 606 patients (group D) underwent PCI with residual stenosis of 50% or less. Most risk factor profiles and the sex distribution were similar between patients with CAG and patients with PCI. The prevalence of diabetes tended to be higher in patients with elective PCI. Patients undergoing elective PCI were mostly male, had a higher prevalence of current smoking, and had higher blood glucose, uric acid, BNP and LDL levels than patients with CAG (p<0.05).
Change in eGFR variation
The more severe the coronary artery disease, the greater the amount of contrast medium used during the procedure, whereas the proportion of contrast-medium nephropathy after the procedure was not significantly different between the groups. The incidence of CIN was 0.6% in patients with CAG and 2.4% in patients with PCI; this difference was not significant. Patients with PCI had a lower rate of renal function deterioration, and the proportion of patients with improved renal function was higher. In the study cohort as a whole, following coronary angiography, there was a nonsignificant decreasing trend in mean eGFR (Figure 2A). Figure 2B-D shows the eGFR values pre- and post-PCI in the individual patients of the four study groups separately.
During the follow-up period, Figure 2 shows the numbers of patients with worsening, stable or improved eGFR between baseline and follow-up in the four groups. Follow-up renal function was unchanged or improved in 88.7% of patients with PCI.
Variables associated with improved or worsening renal function
Follow-up events, including heart failure readmission, recurrent myocardial infarction, in-stent restenosis and renal function, were recorded post-procedure for all 1,572 CHD patients. Among them, 220 patients were in the worsening eGFR group, 201 patients in the improved eGFR group, and 1,151 patients in the stable eGFR group, in which we compared the cardiovascular adverse events including heart failure and myocardial infarction. During the follow-up period, the incidence of cardiovascular adverse events gradually increased in the worsening eGFR group.
Cardiovascular adverse events during follow-up were significantly reduced in patients with improved renal function (Figure 3). Figures 4 and 5 compare patients with improved or worsening eGFR in terms of their baseline and procedural data. In our study, numerous other clinical variables were associated with improved eGFR; we followed up low-density lipoprotein, blood glucose, and the rate of new-onset diabetes in patients with improved or worsening eGFR and found no correlation with renal function prognosis. The incidence of contrast-medium nephropathy had no significant effect on long-term renal function.
Significantly more patients with WRF experienced major adverse cardiovascular events, including heart failure readmission, recurrent myocardial infarction and in-stent restenosis, during follow-up than those with IRF (p<0.01). In addition, older age and major adverse cardiovascular events were associated with worsening eGFR. However, on multivariate analysis, the treatment of PCI was found to independently correlate with improved renal function [OR 4.561 (95% CI: 2.556-8.139); p<0.001]. The AUC-ROC was 0.763 (95% CI: 0.637-0.757) in the model with the treatment of PCI (Figure 6).
Discussion
The findings of the study can be summarised as follows: (1) follow-up renal function was unchanged or improved in 88.7% of patients undergoing PCI and 75.9% of patients undergoing CAG, with a low risk of progression in CKD stage or to dialysis; (2) patients with improved renal function had significantly fewer long-term major adverse cardiovascular events; (3) among patients with PCI, the proportion of patients with improved renal function was higher.
Previous studies have shown that the deterioration of renal function following cardiac catheterization is closely related to the development or progression of renal dysfunction and dialysis initiation [18,19]. Tsai et al. studied consecutive patients undergoing PCI and found that at least 7% of all patients undergoing PCI develop CIN [20]. CIN is associated with a high in-hospital mortality burden, with 1 potential death avoided for every 9 cases of AKI that are prevented [21]. The current analysis focuses on the effect of PCI on renal function and only secondarily, as with other studies, on the effect of renal function on outcomes. The eGFR is the clinical standard for the assessment of risks and complications related to renal function and provides risk assessment in diagnostic or therapeutic procedures such as contrast agent administration [14]. As GFR declines, the prevalence of clinical manifestations of CHD increases, in parallel with the prevalence of large-vessel coronary disease, arteriosclerosis, microvascular disease, LVH, and myocardial fibrosis.
Vascular calcification also increases as GFR declines and is associated with mortality in ESKD; calcification of both the subintima and the media of large vessels is associated with all-cause and cardiovascular mortality [15,20]. Although patients undergoing PCI are also at high risk for subsequent renal damage, their long-term renal outcomes have not been fully elucidated.
In our study, across all included patients, there was a nonsignificant decreasing trend in early eGFR, and no more than 3% of patients developed CIN. For patients with more severe coronary artery disease, a larger volume of contrast medium was used during the procedure, while the proportion of CIN after the procedure was not significantly different between the groups. There might be a more complex situation or a longer procedure for the achievement of complete revascularization in patients receiving more contrast media compared with those receiving less. Worsening renal function was seen in only 14.0% of the study cohort. Renal function is sometimes improved or unchanged after PCI. It is noteworthy from our analysis not only that an eGFR ≥60 mL/(min·1.73 m²) was remarkably prevalent (>80%) among patients with CHD, but, perhaps most notably, that a large number of patients, including those with severe coronary artery disease, exhibited improvement in renal function after PCI. As shown in our analysis, nearly 86% of patients showed unchanged or improved renal stage following CAG or PCI.
Although several factors may explain this association between renal and cardiovascular disease, there is growing evidence that hyperlipidemia and diabetes contribute not only to cardiovascular disease but also to renal disease progression. Good control of blood glucose levels is critical in diabetic patients to delay the progression of the underlying metabolic dysfunction and to reduce the risk of renal dysfunction and cardiovascular disease [22]. The CREDENCE trial [23] showed that cardiovascular and renal protection was observed independently of glycaemic control (as in the EMPA-REG OUTCOME trial) [24]. Studies in a variety of animal models have shown that hypercholesterolemia accelerates the rate of progression of kidney disease [25]. A high-fat diet causes macrophage infiltration and foam cell formation in rats, leading to glomerulosclerosis [26]. Therefore, it is crucial for cardiac patients to control risk factors, including lipids and glycaemia. It is likely that improved lipid and glycaemic control is associated with improved renal function, but in our study we found that improved renal function was independent of these factors and was closely associated with PCI. In our study, many other clinical variables were associated with improvement in eGFR; we followed up LDL, glucose, and the rate of new-onset diabetes in patients with improved or worsening eGFR, and multivariate logistic regression analysis showed no correlation between them.
In this regard, changes in renal function would be an important factor for predicting outcomes. The presence and severity of renal dysfunction at baseline are well-known prognostic predictors for patients with ACS [16]. Renal dysfunction is an established predictor of adverse outcomes in patients with ACS, and its negative effect has been reported to increase with the decline in renal function [17]. Previously, Uemura et al. reported that non-dialysis patients with ACS and advanced renal dysfunction have poor prognoses, even after undergoing contemporary PCI [7]. That study was inconsistent with the results of our study, mainly because Uemura et al.'s study investigated cardiovascular outcomes after PCI in non-dialysis patients with ACS and eGFR <30 mL/(min·1.73 m²). This retrospective observational cohort study is a sub-analysis of those 194 patients with a focus on changes in renal function. The current study aimed to provide a prediction of post-PCI renal function to allow for a more informative patient-physician consultation. Statistical analyses demonstrated that both STEMI and cardiogenic shock were independent predictors of IRF, and on follow-up, these patients had a lower incidence of initiation of permanent dialysis [20].
In other studies, eGFR was reduced in patients with advanced heart failure (HF), and renal function was a powerful independent predictor of prognosis. A reduction in baseline GFR may be associated with a higher risk of death in HF patients [20,21,27,28]. This may, however, represent a clinically appropriate tendency to use less contrast volume in patients with worse baseline eGFR, as well as other preventive measures to avert renal injury. In addition, the model evaluating factors associated with worsening eGFR at follow-up included many expected baseline clinical variables, such as in-stent restenosis, myocardial infarction and in-hospital heart failure. Indeed, cardiac and renal functions are closely linked, and the concept of CRS has been introduced in recent years to characterize this interaction. Patients with HF may develop different degrees of impaired renal function. The term "CRS" covers a broad spectrum of diseases in which the heart and kidneys are both involved. CRS encompasses a spectrum of disorders involving both the heart and kidneys in which acute or chronic dysfunction in one organ may induce acute or chronic dysfunction in the other. It represents the confluence of heart-kidney interactions across several interfaces [8]. Similar to other studies, we found a powerful relationship between worsening eGFR during follow-up and higher MACE. It is well established that variability in eGFR is greater in patients with HF and is associated with mortality [29].
The effect of impaired renal function on outcomes of CHD patients undergoing PCI has been well described, with a worse prognosis. However, in our contemporary analysis, worsening renal function was observed in only 14% of patients. A recent study, the ISCHEMIA-CKD trial, showed that intervention in stable CHD patients with advanced renal disease did not increase the need for initiation of dialysis compared with medical management, which in a way suggests that not all patients with renal dysfunction would be at higher risk of future renal worsening because of the intervention [30]. In our study, cardiovascular adverse events during follow-up were significantly reduced in patients with improved renal function. Therefore, we demonstrated that the treatment of PCI is a predictor of IRF. PCI may alleviate the cardiorenal syndrome (CRS) and thus improve renal function.
Currently, there are only limited reports addressing the timing of renal insult and its relation to clinical outcomes. In the FAME 2 trial, PCI guided by the fractional flow reserve was associated with a lower risk of the primary composite outcome than medical therapy alone, a difference that was driven by a reduction in urgent revascularization [31]. We similarly observed a lower incidence of MACE for CHD with coronary angiography or an invasive strategy, although the event rates were low. During follow-up, major cardiovascular adverse events in patients with improved renal function were significantly reduced. The event rates were lower than projected, and together with the low incidence of MACE in the improved renal function group, the trial had less power than anticipated to show a benefit for the invasive strategy. The current study provides practical guidance and reassurance to physicians of patients with CHD who are reluctant to undergo PCI due to concerns about worsening renal function during the procedure. Finally, although provocative, we propose that in patients with PCI, the proportion of patients with improved renal function was higher. Our data suggest that, from the point of view of IRF, there should not necessarily be hesitation to perform PCI based on the level of eGFR at the time of PCI or concern about causing CIN.
The interactions and feedback mechanisms involved in heart and renal failure are more complex than previously thought. Future studies will be required to understand the proposed entity of cardiorenal syndrome in patients with CHD. Although our study fails to clarify the intrinsic pathophysiological mechanisms, the hemodynamic surrogate of right atrial pressure supports the concept of renal congestion as a contributing factor to cardiorenal syndrome in patients with chronic congestive heart failure [32,33]. Recent advances in basic science and in the clinical understanding of organ crosstalk, including the validation of novel preclinical biomarkers of the cardiorenal syndrome, may provide insights into the pathophysiology, diagnosis, and management of this disease over time. In terms of clinical implications, PCI in patients with coronary artery disease should not be postponed because of concerns about worsening renal function. It may be more helpful for cardiovascular physicians to perform revascularisation in patients with coronary artery disease.
Conclusions
In patients with CHD undergoing PCI, renal function is more likely to stay the same or improve than to worsen. IRF was relatively common in non-dialysis patients with CHD and advanced renal dysfunction who underwent PCI. Further, IRF was associated with favorable cardiovascular outcomes.
Study limitations
This study has several limitations related to the database used for the research. This is a retrospective analysis of prospectively collected trial data. To mitigate an untenable assumption, we incorporated baseline eGFR as a covariate in the analysis of post-PCI eGFR rather than analyzing the change in eGFR. Baseline eGFR was defined simply as pre-PCI eGFR; thus, IRF may simply represent the return to the patient's true baseline eGFR before admission. A further limitation in evaluating renal function was that the adjudicated occurrence of post-PCI dialysis was not recorded. Thirdly, data regarding the completeness of revascularization in the study patients were not available for analysis. The strategies and timing of PCI, as well as peri-procedural management, were left to the discretion of the centers and treating physicians. Therefore, larger multicenter studies with prospective randomized designs are needed to test the hypothesis generated here on a larger scale.
Figure 1. Enrollment criteria and trial flow.
Figure 3. Kaplan-Meier estimates of the incidence of heart failure and myocardial infarction in worsening, stable or improved eGFR.
Figure 6. Model for PCI to predict improvement in renal function.
Table 3. eGFR changes from baseline to follow-up.
Table 4. Proportion of eGFR variation from baseline to follow-up.
"year": 2023,
"sha1": "bda1abe5416bf7e1b69d8d4f0370136236d51d30",
"oa_license": "CCBYNC",
"oa_url": "https://www.monaldi-archives.org/index.php/macd/article/download/2766/1806",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4a5931d0bef8d0efbc7a5100c760531e998d3d2b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Analysis of the Accuracy Batch Training Method in Viewing Indonesian Fisheries Cultivation Company Development
Careful analysis is essential to make research precise and well directed. Likewise, studying the development of aquaculture companies in Indonesia requires appropriate methods in order to obtain optimal results. This research is expected to be widely useful, both for the Government of Indonesia and for the private sector, as study material for business development in the fisheries sector. The data used in this study are the numbers of aquaculture companies by type of cultivation, obtained from Statistics Indonesia (BPS) for 2000 to 2016. The study uses the Batch training method with weight and bias learning rules and five architectural models: 8-5-1, 8-10-1, 8-5-10-1, 8-10-20-1 and 8-20-40-1. Of these five models, the best is 8-5-1, with an accuracy rate of 75%, an MSE of 0.0445464533, and an error rate of 0.001-0.07.
Introduction
Aquaculture is the business of raising and breeding fish or other aquatic organisms. The term covers not only fish species but also other aquatic organisms such as shellfish, shrimp and aquatic plants. An aquaculture company, in turn, is a legal entity that farms fish, other aquatic animals or aquatic plants with the aim of selling some or all of the harvest. Aquaculture companies in Indonesia are divided into four types: fishpond, hatchery, freshwater and marine. The globalization of the fisheries value chain continues to reshape production and catch data [1], especially in Indonesia. Viewed globally, trade in fish and fisheries-related products has grown considerably, owing to significant growth in the aquaculture sector [2]. This impressive industrial growth has important implications for decision makers, both for capturing greater economic benefits and for ensuring responsible and sustainable fishing practices. The position of aquaculture companies along the value chain therefore shapes their ability to extract lasting value and greater economic benefit from fishery products [3].
Indonesia is known as a country with vast fishery potential, so appropriate planning of the fishing industry is a significant concern for the government and the private sector [4]. One planning step is to monitor the development of fishery companies, especially aquaculture companies, since these companies maintain biological resources on areas of cultivated land in order to harvest them. Such monitoring is needed so that Indonesia's fisheries production does not decline; accordingly, the number of aquaculture companies needs to increase every year.
In this study, the technique used to assess the development of fisheries companies in Indonesia is the Batch training neural network method with weight and bias learning rules. The accuracy of this method is analyzed using several predetermined network architecture models [5][6]. The Batch training method with weight and bias learning rules (trainb) trains a network with batch updates: the weights and biases are updated at the end of an entire pass through the input data. One reason for using this method is that many artificial neural network methods have been applied by previous researchers to similar cases [7]-[11], even with different methods. The results of this study are expected to serve the government and the private sector as a reference for more selective licensing of new aquaculture companies, and to give academics a basis for developing this research further in the future.
Data Collection
Data on the number of aquaculture companies according to the type of cultivation (Table 1) were obtained from Statistics Indonesia for 2000 to 2016. The dataset is preprocessed by dividing it into two parts: a dataset for training and a dataset for testing. The next stage is the selection of a network architecture to process the training and testing data so that the best results are obtained.
Data Normalization
The normalization formula follows [12]-[14]. As Figure 2 illustrates, after the training data are input into Matlab, the next step is to create a new network (for example, using the 8-5-10-1 model). The input data do not appear in the formula when building a new network, because the inputs (P and T) have already been entered and normalized (Table 2). The transfer functions used are "tansig" and "logsig", and the training method is Batch training with weight and bias learning rules. As Table 5 and Figure 3 show, the best of the five architectural models is 8-5-1, which produces 75% accuracy and an MSE of 0.0445464533.
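To make the workflow concrete outside Matlab, here is a minimal NumPy sketch of the two steps described above: min-max normalization and batch training of an 8-5-1 network with tansig/logsig transfer functions, updating weights and biases once per full pass as trainb does. The scaling range, learning rate, epoch count, and toy data are illustrative assumptions rather than the paper's actual values.

```python
import numpy as np

def normalize(x, lo=0.1, hi=0.9):
    """Min-max scaling to [lo, hi]; the paper's exact formula is not
    shown, so this common variant is an assumption."""
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(0)
# Toy stand-in for the 8-input data (hypothetical values, 17 years).
X = normalize(rng.random((17, 8)))
t = normalize(rng.random((17, 1)))   # target: next-year company count

def tansig(x):   # MATLAB's tansig is tanh
    return np.tanh(x)

def logsig(x):   # MATLAB's logsig is the logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

# 8-5-1 architecture; one weight/bias update per full pass (batch mode).
W1 = rng.normal(scale=0.5, size=(8, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(5000):
    h = tansig(X @ W1 + b1)          # hidden layer
    y = logsig(h @ W2 + b2)          # output layer
    err = y - t
    # Backpropagation, accumulated over the whole batch before updating.
    dy = err * y * (1 - y)
    dh = (dy @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

print("MSE:", float(np.mean((y - t) ** 2)))
```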
Conclusion
The conclusions of this study are: (a) the 8-5-1 architectural model using the Batch training with weight and bias learning rules method produces 75% accuracy, the best of the five models tested; (b) although the MSE of the 8-5-1 architectural model is not the smallest, its accuracy is higher and its training time faster than those of the other four models.
Disrupt of Intra-Limb APA Pattern in Parkinsonian Patients Performing Index-Finger Flexion
Voluntary movements induce postural perturbations which are counteracted by anticipatory postural adjustments (APAs). These actions are known to build up long fixation chains toward available support points (inter-limb APAs), so as to grant whole-body equilibrium. Moreover, recent studies highlighted that APAs also build up short fixation chains within the same limb where a distal segment is moved (intra-limb APAs), aimed at stabilizing the proximal segments. The neural structures generating intra-limb APAs still need investigation; the present study aims to compare focal movement kinematics and intra-limb APA latencies and pattern between healthy subjects and parkinsonian patients, assuming the latter as a model of basal ganglia dysfunction. Intra-limb APAs that stabilize the arm when the index-finger is briskly flexed were recorded in 13 parkinsonian patients and in 10 age-matched healthy subjects. Index-finger movement was smaller in parkinsonian patients vs. healthy subjects (p = 0.01) and more delayed with respect to the onset of the prime mover flexor digitorum superficialis (FDS, p < 0.0001). In agreement with the literature, in all healthy subjects the FDS activation was preceded by an inhibitory intra-limb APA in biceps brachii (BB) and anterior deltoid (AD), and almost simultaneous to an excitatory intra-limb APA in triceps brachii (TB). In parkinsonian patients, no significant differences were found for TB and AD intra-limb APA timings; however, only four patients showed an inhibitory intra-limb APA in BB, while another four did not show any BB intra-limb APA and five actually developed a BB excitation. The frequency of occurrence of normal-sign, lacking, and inverted BB APAs was different in healthy vs. parkinsonian participants (p = 0.0016). The observed alterations in index-finger kinematics and intra-limb APA pattern in parkinsonian patients suggest that basal ganglia, in addition to shaping the focal movement, may also contribute to intra-limb APA control.
INTRODUCTION
Anticipatory postural adjustments (APAs) represent a crucial aspect of voluntary movement organization. Through their feed-forward control, APAs are able to limit the displacement of the center of mass (CoM) caused by the interaction forces induced by the voluntary movement. Indeed, such activities build up fixation chains toward the available support point, where the interaction forces produced by the voluntary movement are discharged, in this way granting whole-body equilibrium (Massion, 1992; Bouisset and Do, 2008). Since these activities usually involve several trunk and limb muscles, they may also be referred to as inter-limb APAs (see Cavallari et al., 2016 for a review). However, it has been demonstrated that movements involving very tiny masses, like an index-finger flexion, are also accompanied by APAs (Caronni and Cavallari, 2009). In this case, indeed, specific APAs were observed in arm and shoulder muscles that stabilize the segmental equilibrium of the upper limb and optimize the movement performance. Because of their localization with respect to the moving segment, these postural activities were named intra-limb APAs (see also Aoki, 1991; Caronni and Cavallari, 2009).
Intra- and inter-limb APAs share not only their principal behavioral features, like the flexibility to adapt to the available support points (Cordo and Nashner, 1982; Bruttini et al., 2014) as well as to the direction and speed of the focal movement (Horak et al., 1984; Aruin and Latash, 1995; Caronni and Cavallari, 2009; Esposti et al., 2015), but also many of the neural structures involved in their control, including the primary motor cortex, supplementary motor area, and sensorimotor areas (Viallet et al., 1992; Schmitz et al., 2005; Petersen et al., 2009; Ng et al., 2013; Bolzoni et al., 2015). In this regard, some studies correlated neurological diseases with APA modifications. These experiments not only deepened the knowledge of these pathologies but also elucidated the structures involved in APA control. So far, the majority of those studies investigated the effects of pathologies of the central nervous system, like stroke and cerebellar lesions, on inter-limb APAs and on whole-body postural control (Diener et al., 1992; Rajachandrakumar et al., 2016), but the effects of cerebellar lesions were also documented on intra-limb APAs by Bruttini et al. (2015), who reported a disruption of the temporal organization of such postural adjustments. The basal ganglia are another subcortical structure that plays a role in movement control, and also in this case some studies showed that basal ganglia pathologies correlate with impairments in inter-limb APA control (Viallet et al., 1987; Lee et al., 1995). Since a linkage between basal ganglia and intra-limb APAs is still missing, the present study aims to compare the kinematic parameters of index-finger flexion and the intra-limb APA latencies and pattern between healthy subjects and patients affected by Parkinson disease (PD), assuming the latter as a model of basal ganglia dysfunction.
Considering the well-known role of basal ganglia in shaping the pattern of motor activities driving voluntary movement, one would mainly expect a pattern disruption (i.e., changes in intra-limb APAs sign, excitatory or inhibitory), possibly even associated to a timing alteration.
MATERIALS AND METHODS
Thirteen patients affected by PD (PARKINSON group, mean age 60.8 years ± 9.3 SD, four females) and 10 age-matched healthy subjects (HEALTHY group, mean age 61.4 years ± 6.7 SD, six females) were enrolled in this study. Healthy subjects had no history of orthopedic or neurological disorders.
Individual demographic and clinical parameters of the PD patients are reported in Table 1. They had no history of orthopedic disorders and followed pharmacological treatments; however, at the time of the experiment, they had been in pharmacological wash-out for at least 36 h.
All participants gave written consent to the procedure, after being informed about the nature of the experiment. The experiments were conducted in conformance with the policies and principles contained in the Declaration of Helsinki and were approved by the Ethical Committee of the University of Milan (counsel 5/16 -15.02.16).
Experimental Design
Participants were tested on the dominant limb; the assessment of handedness was performed according to Oldfield (1971). Participants were seated and explicitly asked to keep their back supported, the upper limb still and both feet on the ground. The non-dominant arm was supported by an armrest while the dominant arm was kept along the body, with the elbow flexed at 90°. The hand was prone, in axis with the forearm, with the index-finger pointing forward (i.e., 180° at the metacarpophalangeal joint) while all the other fingers were hanging freely. Subjects kept the back leaning against the seatback and the feet on the ground. The body position was visually checked by the investigator throughout the experiment.
After an acoustic signal, delivered every 7 s, subjects had to perform a self-paced brisk flexion of the index-finger at the metacarpophalangeal joint. Subjects were specifically instructed to perform the movement at will, so as to exclude any reaction-time effect. Each subject performed 45 movements, divided in three sessions of 15 movements with 5-7 min interval in between, in order to avoid fatigue.
Movement and EMG Recordings
The excursion of the metacarpophalangeal joint was recorded by a strain-gauge goniometer (model F35, Biometrics Ltd, Newport, United Kingdom), fixed with surgical tape. Angular displacement was amplified by a bridge amplifier (model P122, Grass Technologies, West Warwick, RI, United States), whose gain was calibrated before each experiment.
Electromyographic (EMG) signals were recorded from the prime mover flexor digitorum superficialis (FDS) and from the biceps brachii (BB), triceps brachii (TB), and anterior deltoid (AD) muscles, involved in the upper-limb postural stabilization (Caronni and Cavallari, 2009). After scrubbing the skin with cotton and alcohol, two pre-gelled surface electrodes (model H124SG, Kendall ARBO, Tyco Healthcare, Neustadt/Donau, Germany) were placed on each muscle, 24 mm apart. Electrode placement for BB, TB, and AD muscles followed the SENIAM guidelines (Freriks and Hermens, 1999). For FDS, SENIAM did not provide specific guidelines; however, the same general approach was adopted: the subject kept the arm and forearm in the experimental position and was asked to repeatedly strongly flex one finger at a time, at the metacarpophalangeal joint. Meanwhile, the experimenter palpated his forearm, so as to isolate the belly of the FDS from that of the surrounding muscles. Electrodes were then placed on the FDS belly, at about 1/3 of the distance of the wrist from the cubital fossa. The selectivity of the EMG recordings was verified by checking that activity from the recorded muscle, during its phasic contraction, was not contaminated by signals from other sources. The EMG signals were amplified (gain 2-10 k) and band-pass filtered (30-1,000 Hz, to minimize both movement artifacts and high-frequency noise) by four differential amplifiers (model IP511, Grass Technologies, West Warwick, RI, United States). Conditioned goniometric and EMG analog signals were then sampled at 2 kHz with 12-bit resolution by an A/D board (model PCI-6024E, National Instruments, Austin, TX, United States), visualized online and stored for further analysis.
Data Analysis
Each EMG recording was digitally rectified and integrated (time constant: 10 ms). The onsets of FDS EMG were extracted by running a 1-s mobile-window algorithm over the recording, searching for those positions in which the samples in the 50 ms following the window were all above the mean value +2 SD of the samples within the window. Whenever this criterion was met, the end of the window was considered an onset; all onsets were visually validated. In each muscle, all the 45 EMG recordings were then time aligned to the FDS onset and averaged, so as to obtain an average trace extending from −2000 to +300 ms from the FDS onset, which was then considered time 0; the same was done for the 45 goniometric traces. All subsequent measurements were taken on the averaged traces.
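The moving-window onset detector described above translates directly into code. The sketch below is a minimal NumPy rendition, assuming the stated parameters (1-s window, mean ± 2 SD threshold sustained for 50 ms, 2 kHz sampling); the function name, the sign argument used to search for inhibitory onsets, and the demo signal are our own illustrative choices.

```python
import numpy as np

def detect_onset(sig, fs=2000, win_s=1.0, hold_s=0.05, k=2.0, sign=+1):
    """Slide a 1-s window over the rectified, integrated signal and flag
    the first position where all samples in the following 50 ms exceed
    (sign=+1) or fall below (sign=-1) the window mean +/- k*SD."""
    win, hold = int(win_s * fs), int(hold_s * fs)
    for i in range(len(sig) - win - hold):
        w = sig[i:i + win]
        thr = w.mean() + sign * k * w.std()
        nxt = sig[i + win:i + win + hold]
        if np.all(sign * nxt > sign * thr):
            return i + win      # sample index of the onset
    return None                 # no onset found: APA lacking

# Hypothetical demo: flat baseline with an excitatory burst at 2.2 s.
fs = 2000
t = np.arange(0, 3, 1 / fs)
emg = 0.1 + 0.01 * np.random.default_rng(1).standard_normal(t.size)
emg[int(2.2 * fs):] += 0.3
# sign=-1 would search for inhibitory onsets (samples below mean - k*SD).
print(detect_onset(emg, fs) / fs, "s")   # ~2.2
```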
The onset of index-finger flexion was identified on the averaged goniometric trace by applying the same mobile-window algorithm used for the FDS onset, but searching for the window position in which the samples in the 50 ms following the window were all below the mean value −2 SD of the samples within the window. Movement amplitude and duration were then measured, respectively, as the amplitude and timing difference between peak index-finger flexion and movement onset. The mean values and variability of the movement latency, amplitude and duration were compared between the PARKINSON and HEALTHY groups by means of unpaired t-tests and Levene's tests, respectively. Whenever Levene's test was significant, the t-test for the corresponding variable was corrected for unequal variance estimates.
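The comparison scheme in this paragraph — an unpaired t-test paired with Levene's test, switching to unequal-variance (Welch) estimates when Levene's test is significant — can be sketched in a few lines of SciPy. The sample values below are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical latency samples (ms) standing in for the two groups.
healthy = np.array([22, 25, 28, 24, 26, 23, 27, 25, 24, 26], float)
parkins = np.array([35, 70, 42, 55, 48, 66, 39, 58, 44, 61, 52, 47, 63], float)

# Levene's test compares between-subject variability.
lev = stats.levene(healthy, parkins)
# If variances differ, fall back on Welch's correction for the mean
# comparison, mirroring the "corrected for unequal variances" step.
tt = stats.ttest_ind(healthy, parkins, equal_var=(lev.pvalue >= 0.05))
print(f"Levene F={lev.statistic:.2f} p={lev.pvalue:.4f}")
print(f"t={tt.statistic:.2f} p={tt.pvalue:.4f}")
```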
The onset of an excitatory or inhibitory APA in each postural muscle was searched for on the averaged trace by applying the same moving-window algorithm used for FDS onset; however, the search was stopped at the movement onset, in order to avoid any effect due to re-afferentation triggered by the focal movement. In case an onset was found, if the samples in the 50 ms following the window were all above the mean value + 2 SD of those within the window, the APA was recognized as excitatory, while if the samples in the 50 ms were all below the mean value −2 SD the APA was recognized as inhibitory. If the above criteria failed to identify any onset, it was concluded that the APA was lacking for that muscle. The mean values and variability of the APA latencies, for each postural muscle, were compared between PARKINSON and HEALTHY groups by means of unpaired t-tests and Levene's tests, respectively. Data from patients in which the APA was lacking or had an inverted sign (e.g., excitatory instead of inhibitory) with respect to that observed in healthy subjects (in which APAs always have the same sign, see Results), were excluded from the comparisons.
The pattern of APAs for each postural muscle was assessed in each group by counting the number of participants that showed an inhibitory, excitatory, or lacking intra-limb APA. The frequency of occurrence of the three above outcomes was then compared in the PARKINSON vs. HEALTHY group by the Freeman-Halton extension (2 groups × 3 categories) of the non-parametric Fisher Exact test.
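SciPy's fisher_exact handles only 2x2 tables, so the Freeman-Halton extension used here can be obtained by enumerating every 2x3 table with the observed margins and summing the multivariate hypergeometric probabilities of tables no more probable than the observed one — the standard definition of this exact test. The function below is our own sketch of that enumeration; with the BB counts reported in the Results (10/0/0 vs. 4/5/4) it returns approximately 0.0016, matching the paper.

```python
from math import comb

def freeman_halton_2x3(table):
    """Exact Fisher test for a 2x3 contingency table (Freeman-Halton
    extension): sum the probabilities of all tables with the same
    margins whose probability does not exceed that of the observed one."""
    r1 = sum(table[0])
    cols = [table[0][j] + table[1][j] for j in range(3)]
    n = r1 + sum(table[1])

    def prob(a, b, c):  # first-row cells; hypergeometric probability
        return (comb(cols[0], a) * comb(cols[1], b) * comb(cols[2], c)
                / comb(n, r1))

    p_obs = prob(*table[0])
    p = 0.0
    for a in range(min(r1, cols[0]) + 1):
        for b in range(min(r1 - a, cols[1]) + 1):
            c = r1 - a - b
            if 0 <= c <= cols[2]:
                pt = prob(a, b, c)
                if pt <= p_obs + 1e-12:
                    p += pt
    return p

# Observed BB APA pattern (normal/inverted/lacking): HEALTHY vs PARKINSON.
print(freeman_halton_2x3([[10, 0, 0], [4, 5, 4]]))  # ~0.0016
```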
For the sake of completeness, as secondary measurements, linear correlations were tested between the intra-limb APA latencies in the PARKINSON group and each of the following demographic and clinical parameters: patient's age, disease duration, Levodopa Equivalent Daily Dose of the pharmacological treatment (Tomlinson et al., 2010) and Unified PD Rating Scale motor part (UPDRS-III, cfr. Movement Disorder Society Task Force on Rating Scales for Parkinson's Disease, 2003; both total score and upper-limb sub-score). Data from patients in which APA was lacking or inverted were excluded also from these analyses. Non-parametric Spearman's R correlation was evaluated between the sign of intra-limb APAs (−1 when inhibitory, +1 when excitatory, and 0 when lacking) and those same parameters.
For all tests, statistical significance was set at p < 0.05. All relevant data for the statistical analyses drawn in this study are included in the manuscript, either in Figure 2 or in Tables 1, 2.

RESULTS

Figure 1 illustrates the EMG and kinematics recordings obtained in one representative healthy participant (HEALTHY) and two PD patients (PARKINSON A and B), who were representative of a normal APA pattern in the three recorded postural muscles and of an altered pattern in BB, respectively. Taking the onset of the prime mover FDS EMG as time reference, the index-finger flexion occurred with a lower delay in the healthy participant (∼25 ms) than in both patients (∼70 ms in A and ∼40 ms in B). With regard to postural muscles, in the healthy participant the activation of FDS was preceded by an inhibitory intra-limb APA in BB and AD, whose activity was reduced with respect to the mean reference level, and almost simultaneous to an excitatory intra-limb APA in TB. Postural activities of similar sign, even if slightly delayed, could be observed also in PD patient A, while patient B showed a change in pattern, as the intra-limb APA in the BB muscle was reversed (excitatory instead of inhibitory).
Kinematics Parameters
The individual latencies of index-finger flexion in HEALTHY and PARKINSON participants are illustrated in the upper panels of Figure 2, while inferential statistics are plotted in the lowermost panel. Compared to healthy participants, the latency of movement onset was larger in PD patients and showed a greater between-subjects variability. Statistical analysis confirmed such results, both with regard to mean values (t-test, t(16.27) = 2.911, p = 0.010) and to variability (Levene's F(1,21) = 8.677, p = 0.0077). Movement amplitude and duration are reported in Table 2, both as individual values and as mean ± SE. Mean amplitude was also significantly lower in PARKINSON vs. HEALTHY participants (t(21) = 5.030, p < 0.0001), but with no significant differences in between-subjects variability (F(1,21) = 3.118, p = 0.0919). Movement duration, instead, was comparable both in mean value (t(21) = 0.686, p = 0.5004) and variability (F(1,21) = 1.326, p = 0.2624).
Intra-Limb APA Latency
The individual latencies of intra-limb APAs in HEALTHY and PARKINSON subjects are illustrated in the central panels of Figure 2. The fact that in all participants the identified APAs had a lower latency with respect to the index-finger movement witnesses the anticipatory nature of the postural muscle recruitment. Inferential statistics are plotted in the lowermost panel, showing that the latencies of intra-limb APAs in TB and AD muscles were comparable, both in mean value and variability, between the PARKINSON and HEALTHY groups. This was statistically confirmed by t-tests (for TB, t(20) = 0.510, p = 0.6154; for AD, t(19) = 0.372, p = 0.7137) and Levene's tests (for TB, F(1,20) = 1.036, p = 0.3208; for AD, F(1,19) = 2.857, p = 0.1073). Note that the lowermost panel and the related statistical comparisons regarded data from all HEALTHY subjects (gray solid bars) vs. data from those PARKINSON patients who had an intra-limb APA of the same sign normally observed in HEALTHY subjects (black solid bars). Data from those patients in which the intra-limb APA was lacking (marked by an "X" in the central panels) or inverted (white bars) were excluded from the analyses. Because only four PD patients had an intra-limb APA of normal sign in BB, resulting in an insufficient sample size, latency data from that muscle were excluded from statistics. For the same reason, it was not feasible to subdivide the latency comparison into different subgroups.

FIGURE 1 | EMG and kinematics recordings in the representative participants; significant differences between the two groups are marked by *. For illustration purposes, the mean reference signal level, recorded from 750 to 250 ms before FDS onset, has been subtracted from each EMG trace. FDS amplitude has been down-scaled by a factor of 2 in patient B and AD amplitude has been down-scaled by a factor of 2 in all participants. Note that in the healthy participant the FDS activation was shortly followed by index-finger flexion but preceded by inhibitory intra-limb APAs in BB and AD and almost synchronous with an excitatory intra-limb APA in TB. In both PD patients, finger flexion delay was larger than in the healthy participant, and the APA timing also showed a delay. However, while patient A produced the same intra-limb APA pattern observed in healthy individuals, patient B had an inverted intra-limb APA in BB, which underwent excitation instead of inhibition.
Intra-Limb APA Pattern
The central panels of Figure 2 also illustrate that while all HEALTHY subjects presented intra-limb APAs of the same sign (excitatory in TB and inhibitory in BB and AD, gray bars), in some PD patients the intra-limb APA was lacking ("X") or had the opposite sign with respect to what normally observed in HEALTHY subjects (white bars). While this seldom occurred for TB and AD (one patient lacked APAs in both muscles while another had an inverted APA in AD), it was not the case for BB. For this muscle, indeed, only 4 out of 13 patients had an inhibitory intra-limb APA, while 5 had an inverted APA and in 4 the APA was lacking. The Freeman-Halton extension of the non-parametric Fisher Exact test proved that the frequency of occurrence of normal sign, inverted sign, and lacking intra-limb BB APAs was significantly different (p = 0.0016) in the PARKINSON group (4, 5, and 4, respectively) vs. the HEALTHY one (10, 0, and 0).
Secondary Measurements
For the sake of completeness, linear correlations were drawn between the latency of intra-limb APAs in TB or AD and the demographic and clinical parameters of PD patients, which are illustrated in Table 1. Non-parametric correlations were also drawn between the sign of intra-limb APAs in BB and those same parameters. Such correlations never reached significance (in all cases p > 0.28).
DISCUSSION
Present results show that the pattern of intra-limb APAs that stabilize the arm when briskly flexing the index-finger (excitatory APA in TB and inhibitory in BB and AD) may be disrupted in PD patients, indirectly suggesting that basal ganglia could participate also in intra-limb postural control. The pattern disruption, in particular the presence of an inverted intra-limb APA, mainly regarded the BB muscle, with sporadic occurrence in AD. One possible explanation is that the PD patients enrolled in this study, despite having received the first diagnosis 3 to 12 years earlier (Table 1), were in an initial stage of the disease, as witnessed by the moderate UPDRS-III scores. This could also justify the lack of significant correlations we observed between the intra-limb APA latencies or sign and the demographic and clinical parameters of PD patients. Another possible reason for the APA sign reversal occurring more frequently in BB than in TB and AD could be the fact that, if one approximates the whole arm as a rigid body, the reactive torque induced by the index-finger flexion should be the same on the elbow and the shoulder, because the lever-arm between the metacarpophalangeal joint and each of those two joints was identical (recall that the upper arm was vertical and the hand prone, in axis with the horizontal forearm). However, much more mass should be moved in order to flex the arm at the shoulder rather than at the elbow, so it could be argued that the TB-BB co-contraction strategy adopted by some of the patients mainly aimed at increasing the elbow stiffness, so as to discharge the perturbation on a larger sprung mass and, consequently, attenuate the unwanted displacement at the shoulder level. Moreover, with regard to focal movement kinematics, present data confirmed that PD patients were slower than age-matched healthy subjects, not only as regards average speed but also in terms of prime mover recruitment, as witnessed by the longer delay between FDS activation and movement onset. However, such a result should not have biased the observed intra-limb APA alteration, because a previous study demonstrated that (i) intra-limb APAs are affected by the intended movement speed, not the actual one, and in this regard both healthy subjects and PD patients had to move at their fastest speed; and (ii) even when moving at 50% of their fastest speed, healthy subjects never showed any reversal of the APA sign. Finally, the control experiments involved a cohort of healthy subjects of comparable age. This was chosen considering that APA programming is affected by age both in self-initiated movements (Man'kovskii et al., 1980; Inglin and Woollacott, 1988; Rogers et al., 1992; Woollacott and Manchester, 1993) and when APAs are produced in order to respond to an external postural perturbation (Kanekar and Aruin, 2014). The finding that the pattern of intra-limb APAs may be disrupted in some PD patients adds to the observations carried out in ataxic patients, which showed an altered intra-limb APA timing in the absence of significant pattern disruptions. These results suggest a pathophysiological frame that fits well with the known roles of basal ganglia and cerebellum in selecting the correct motor program and temporizing the motor output, respectively (Grillner et al., 2005; Diedrichsen et al., 2007). In this regard, two recent results are also worth noting: first, the literature reports evidence that basal ganglia and cerebellum are reciprocally interconnected through the pedunculopontine tegmental nucleus (see Wu and Hallett, 2013 for a review; Mori et al., 2016). The information exchange through these connections could justify the partial overlap observed between the symptomatic frameworks of cerebellar and basal-ganglia pathologies (Bostan and Strick, 2018). This has also been observed in intra-limb APAs: indeed, Bruttini et al. (2015) reported cases of lacking intra-limb APAs in cerebellar ataxic patients, while signs of altered intra-limb APA timing in parkinsonian patients are reported in the present paper (see Figure 1). Second, it has been reported that Parkinson's disease, especially in its later phase, may also affect the cerebellum (Wu and Hallett, 2013). The same review also indicates that cerebellar activation is abnormally high in PD patients performing various upper-limb movements, and hypothesizes that at the initial stage of the disease the cerebello-thalamo-cortical loop may act so as to compensate for the progressive impairment of the striato-thalamo-cortical circuit (see also Blesa et al., 2007). Consequently, once the parkinsonian degeneration has affected the cerebellum, its compensation would fade out, leading to a quicker development of the motor impairments.

FIGURE 2 | (caption, continued) For PD patients, latencies of intra-limb APAs which had the same sign observed in all healthy individuals (excitatory in TB and inhibitory in BB and AD) are plotted by black bars, latencies of intra-limb APAs of inverted sign are plotted by white bars, and lack of intra-limb APAs is marked by "X." Note that in several patients intra-limb APAs in BB were lacking or even inverted. The lowermost panel shows, for both groups, the mean latency (±SE) of the onset of finger flexion and of the intra-limb APAs in TB and AD, excluding data from those PD patients in which the APA was lacking or inverted. The asterisk marks the significant difference. The BB muscle was excluded from the latency comparison because only four patients had an APA of the same sign normally observed in healthy subjects.
While many observations confirmed that intra- and inter-limb APAs share so many behavioral properties that they are seemingly parts of the same phenomenon, it should be noted that our results only partially fit with data on APAs during gait initiation in PD patients. In the latter framework, indeed, some studies reported an altered APA pattern (e.g., Crenna and Frigo, 1991) while other studies reported delayed APAs in the absence of pattern disruptions (Delval et al., 2014). Such discrepancy could stem from the different mechanical context characterizing the two cases: (i) when the voluntary movement is limited to only part of the body, e.g., one or both arms, APAs counteract the interaction forces so as to grant that the rest of the body stands still; (ii) when the voluntary movement involves the whole body, like in gait, what are commonly called APAs are instead those actions that produce the de-stabilizing forces leading to the movement of the CoM (Jian et al., 1993; Elble et al., 1994; Lepers and Brenière, 1995; see Yiou et al., 2017 for a review). In particular, many studies about gait initiation considered as APAs those co-ordinated activities in muscles acting on both ankles (tibialis anterior and gastrocnemius/soleus) that preceded the heel-off, taking the latter as the onset of the focal movement (Crenna et al., 2006; Honeine et al., 2016). However, those actions actually moved the Center of Pressure backward and toward the "future" leading foot, directly producing a shift of the CoM forward and toward the "future" trailing foot. Therefore, it might even be proposed that the forward shift of the CoM "is" the correct onset of gait, so that APAs should be searched for not before heel-off but before CoM displacement. If so, situations (i) and (ii), described above, appear to be conceptually different, so that a direct comparison of what is classically called APA in the two cases is not feasible.
AUTHOR CONTRIBUTIONS
PC and II conceived the study. II, NP, and UR-P recruited the patients and provided their clinical evaluation. FB, RE, and SM conducted the experiments and analyzed the results. PC, FB, and RE drafted the paper. All authors contributed to and approved the final version.
Profound Rhabdomyolysis and Viral Myositis Due to SARS-CoV-2: A Case Report
The novel SARS-CoV-2 introduced several new inflammatory conditions, including SARS-CoV-2-associated rhabdomyolysis and viral myositis. We present a 22-year-old man who noted a week of cough followed by myalgias, dark-colored urine, and decreased oral intake. He was found to have acute nontraumatic rhabdomyolysis after a positive SARS-CoV-2 test. The initial creatine kinase (CK) level was above the reference range, as were liver enzymes reflective of muscle breakdown. Treatment involved fluid resuscitation and pain control, with close monitoring of kidney, liver, and skeletal markers over five days of hospitalization until there was clinical and symptomatic improvement.
Introduction
In the context of the SARS-CoV-2 era, it is necessary to recognize both the respiratory and extrapulmonary complications of the virus. As many as 2.2% of patients affected with SARS-CoV-2 may have rhabdomyolysis, a condition characterized by the breakdown of skeletal muscle [1], which may be a specific subtype of viral myositis [2]. It is important to recognize this condition due to mortality rates as high as 30% [3]. In the setting of acute kidney injury (AKI), fatality can be as high as 40% [3]. Certainly, rhabdomyolysis seems to be associated with higher risks of decompensation, such as intensive care unit (ICU) admission (90.9%) compared to medical and stepdown comparators (p < 0.001), and mechanical ventilation (86.4%) (p < 0.001) [1].
Risk factors for SARS-CoV-2 rhabdomyolysis include pre-existing health conditions such as advanced age, hypertension, diabetes, cardiovascular disease, and underlying kidney disease [4]. Underlying musculoskeletal disorders and prescription medications such as statins, especially lipophilic varieties, and antipsychotics are thought to increase risk [5]. There may be a direct underlying interaction between the drug and the infection, causing oxidative stress and mitochondrial dysfunction, which impair energy supply, increase demand, and reduce reserves [5]. Dehydration, fever, and respiratory distress can further heighten the body's immune response, leading to muscle inflammation and damage [6,7]. As research ensues, an understanding of individual cases and their unique risk profiles is essential to effectively assess and manage the risk of SARS-CoV-2 rhabdomyolysis.
We present a case of rhabdomyolysis in a young patient affected with SARS-CoV-2 and his course of management and recovery. Our goal is to contribute to the broader discussion surrounding the incidence, risk factors or absence thereof, and adverse outcomes associated with the novel virus.
Case Presentation
A 22-year-old African American man with no past medical history, not on any home prescriptions or supplements, initially presented to the emergency department (ED) with complaints of cough, fever, chills, nausea, and neck pain. Initial vitals were blood pressure of 128/70 mmHg, oxygen saturation (SpO2) of 95%, pulse rate of 97 beats per minute (bpm), temperature of 100.9 degrees Fahrenheit (°F), and respiratory rate (RR) of 18 breaths per minute. The physical exam was unremarkable, including for neurologic or musculoskeletal findings. A SARS-CoV-2 reverse transcriptase polymerase chain reaction (RT-PCR) test was positive with a cycle threshold (CT) value of 21.5. A chest X-ray (CXR) showed no infiltrates, effusions, or focality. Due to the benign assessment, he was deemed safe for discharge to home with return precautions and symptomatic treatment consisting of Tessalon Perles, naproxen, and ondansetron. He was advised to make an appointment with a primary care provider. It does not appear that a creatine kinase (CK) level had been drawn at that time.
A week later, he returned to the ED with worsening proximal myalgias despite improvement in fever, chills, cough, and nausea. The patient described constant generalized muscle aches and soreness, most pronounced in the thighs and shoulders. Myalgias did not improve with rest or over-the-counter therapies, including acetaminophen and nonsteroidal anti-inflammatories. He was also concerned about dark-colored urine, but had not been eating or drinking as much as normal due to symptoms in the preceding week. He denied any trauma or specific muscle strain and had taken a hiatus from his resistance training. There was no personal or family history of autoimmune, inflammatory, neurologic, or rheumatologic disease. Vitals were stable with a blood pressure of 128/64 mmHg, pulse rate of 77 beats per minute, temperature of 98.5°F, respiratory rate of 18 breaths per minute, and SpO2 of 96% on room air (RA). Physical exam at this time was notable for proximal upper extremity muscle tenderness and hip flexion weakness. He had appropriate range of motion in all major joints, and there were no rashes or skin lesions. Significant lab results included elevated liver enzymes, with aspartate aminotransferase (AST) 1553 U/L (reference: 0-46 U/L) and alanine transaminase (ALT) 327 U/L (reference: 0-60 U/L). CK was 1658 U/L (reference: 0-250 U/L). Urinalysis was brown with turbid clarity, 200 mg/dL of protein, large blood, and five granular casts, suggestive of dehydration (Table 1). A urine drug screen was positive for delta-9-tetrahydrocannabinol, and he admitted to occasional use upon further questioning, as all contributors were under consideration. An acute hepatitis panel was negative for hepatitis A IgM, B core IgM, B surface antigen, and hepatitis C antibody. Chest imaging with CXR still showed no cardiopulmonary manifestation of illness. Given stable renal function, he was administered intravenous (IV) ketorolac every eight hours as needed for pain, limited to five doses, and bolused 2 L of IV crystalloids as normal saline for rhabdomyolysis and dehydration. He was subsequently admitted to the hospital for further care. CK levels, as a direct measure of skeletal muscle protein leak, were monitored every eight hours. Liver enzymes, especially AST and ALT, were considered an indirect measure of skeletal muscle breakdown and were observed daily. Crystalloids, as IV lactated Ringer's solution, were administered at an initial rate of 100 cc/hr, which we suspect was the reason for an uptrend in serum markers on days 1 and 2. On day 3, an appropriate resuscitation rate of 250 cc/hr was assumed. CK levels remained elevated for five days before finally down-trending (Figure 1), and there was an associated improvement in proximal extremity myalgias. AST and ALT were also noted to be responsive to the resuscitation rate, as they finally down-trended after the increase in fluid rate (Figure 2). After symptoms and lab markers had markedly improved, he was discharged after a week of hospitalization, in stable condition, to home. He was instructed to make an appointment with his primary care office in 1-2 weeks with a plan for repeat CK and a comprehensive metabolic panel. He was instructed to limit nonsteroidal anti-inflammatory drugs (NSAIDs) to 4 g a day to prevent renal injury. As a precaution, and so as not to influence liver labs or potential hepatic clearance, he was advised not to take any acetaminophen products in the meantime. He did not require or request any new prescriptions.
At a two-week follow-up on day 21, the CK level had improved to 792 U/L, and liver markers had normalized (Figures 1-2). Importantly, he maintained full strength and had resolution of all symptoms.
Discussion
Acute viral myositis, rhabdomyolysis, dermatomyositis, paraspinal myositis, and myasthenia are the most common musculoskeletal presentations of SARS-CoV-2 [8]. More rarely, axonal neuropathy or cachexia may occur. The manifestation of proximal myalgias in our patient suggests a viral myositis [2]. If rashes had been present, for example, a dermatomyositis would be suspected [4]. Rhabdomyolysis with SARS-CoV-2 is associated with higher rates of ICU admission, mechanical ventilation, and mortality [1,3]. Persistent fatigue and myalgias have been reported as long-term debilitating symptoms of the condition [8], which our patient fortunately did not endure; he had a good clinical response, likely due to his young age and the absence of comorbid risk factors.
It is postulated that SARS-CoV-2 gains direct entry into the muscle tissue via the angiotensin-converting enzyme 2 (ACE-2) receptor, which is present in vascular endothelium, bowel, synovium, and smooth and skeletal muscle. There is an ongoing need for research to better understand the heterogeneity of this complication across different patient populations and geographies [1,3].
Several cases of SARS-CoV-2 viral myositis have been documented in the literature. Our case is added to the 22 cases compiled by Saud et al. [8], as well as to those of Jin [6], Chedid [9], and Mukherjee [10] (Table 2). These cases contribute to our understanding of an important causative viral myositis. They show varied demographic risk factors, symptomatology, interventions applied, and responses to therapy, especially during the peak of SARS-CoV-2-positive hospitalizations from December 2020 to January 2021 [20].
Best practices for managing SARS-CoV-2 myositis and associated rhabdomyolysis involve a comprehensive history from the onset and a workup including a CK level, both of which were lacking from our patient's primary ED presentation. Early recognition of signs such as persistent myalgias, dark-colored urine, and elevated CK levels is essential [21]. These form the traditional triad of rhabdomyolysis, present together in only 10% of patients, although up to 50% of patients present asymptomatically [22]. Regular monitoring of CK levels should definitely be obtained in symptomatic patients with persistent myalgias who have tested positive for SARS-CoV-2, as this can help identify early elevations indicative of impending rhabdomyolysis [22].
Aggressive IV fluid resuscitation, with several liter boluses followed by resuscitative fluids, is a cornerstone of management to prevent AKI by promoting myoglobin clearance and maintaining renal perfusion [23]. Symptomatic management for pain involves conservative therapies such as muscle relaxants, with general avoidance of nephrotoxins and of acetaminophen so as not to influence important lab trends. Our patient did not present with AKI and maintained adequate urine output throughout his heavy fluid resuscitation, especially during the five days at 250 cc/hr, so that it was reasonable to consider an NSAID for conservative management. Alternatives such as acetaminophen affect levels of AST and ALT, which were being monitored, and opiates have also been seen to contribute to rhabdomyolysis. As seen, the management of viral myositis and rhabdomyolysis can be intricate, an art based on the patient's unique presentation, background factors, and concerns. Involving nephrology early in the patient's course for co-management can improve patient outcomes [22]. Additionally, after the patient has improved objectively and clinically and been discharged, follow-up care with labs and clinical evaluation during the post-acute phase is important to assess for lingering effects of rhabdomyolysis and to monitor kidney function [23].
Further research in the realm of SARS-CoV-2 myositis and rhabdomyolysis should explore the cellular mechanisms of illness and the nuances of risk factors, investigate the impact of different treatment modalities, and elucidate the long-term consequences of rhabdomyolysis in SARS-CoV-2 survivors and nonsurvivors [1,24,25,26]. Establishing best practices for early detection and guidelines for therapy can lead to early interventions and prevent downstream complications and mortality.
Conclusions
SARS-CoV-2 myositis and rhabdomyolysis are important sequelae of the novel virus. They are associated diagnoses that should be recognized early, which requires a careful history at initial presentation to reveal the risk factors for severity and decompensation. Prompt, aggressive IV crystalloid resuscitation should ensue, with strict return precautions for worsening symptoms and consideration of hospitalization. Recognizing the similarities and differences among published cases improves our understanding of how to improve medical care and patient outcomes. Ongoing research should involve early detection methods, refined guidelines, and nuanced alternative options for complex presentations.
FIGURE 1: Patient's chronologic CK trend. CK: creatine kinase. The X-axis indicates the day from index hospital admission; the Y-axis indicates the CK level in U/L, where the normal range is 0-250 U/L.
FIGURE 2: Patient's chronologic AST and ALT trend.
Axial Couplings on the World-Line
We construct a world-line representation for the fermionic one-loop effective action with axial and also vector, scalar, and pseudo-scalar couplings. We use this expression to compute a few selected scattering amplitudes. These allow us to verify that our method yields the same results as standard field theory. In particular, we are able to reproduce the chiral anomaly. Our starting point is the second-order formulation for the Dirac fermion. We translate the second order expressions into a world-line action.
Recently it has been argued [1,2,3,4,5,6] that conventional quantum field theory, at least in some problems, can be profitably substituted by a first quantized formalism dealing with relativistic particles on a world-line. This is a very old idea [7,8,9,10], but for a long time it was only applied to the description of one-loop determinants and propagators, whereas in ref. [3,4] a unified description for whole classes of higher loop Feynman graphs was proposed. The Bern-Kosower formalism [11,12,13], which triggered much of the recent research in this subject, was derived from string theory. It beautifully simplifies tree and one-loop calculations in field theory but, unfortunately, becomes very difficult to develop for higher loops. This is one of the reasons why Strassler's observation [1] that their one-loop rules can already be derived in the first quantized relativistic particle approach, i. e., in a one-dimensional QFT on the world-line, is extremely useful. Indeed, already in the calculation of one-loop effective actions this leads to impressive simplifications [2,14] compared to a conventional heat kernel approach. Higher loop-calculations [3,4], even though still at a preliminary stage, look very promising.
In order to perform the analog of conventional QFT calculations it is necessary to know the proper world-line Lagrangian. Whereas for scalar φ 4 or φ 3 potentials and vector gauge coupling to scalar and Dirac particles this is well known, only very recently we constructed a world-line Lagrangian [15] for the Yukawa couplings of scalars and pseudo-scalars to Dirac particles.
In this letter we present a general formalism for the simultaneous coupling of abelian vector, axial vector, scalar, and pseudo-scalar background fields to Dirac particles in loops in the world-line formalism. Our starting point is the second order description of the fermionic one-loop effective action. The expression for the effective action in the second order formulation allows us to justify the expressions for the effective action in the world-line formalism which we propose later on. Whereas in the examples of ref. [15] only even numbers of external pseudo-scalars were allowed, here we do not have that restriction. Among the examples we present here, we also compute the chiral anomaly.
Let us begin with the second order description for the one-loop effective action of a fermion in an arbitrary background. The main difference between the usual (first order) description of a Dirac particle and the second oder description is in the form of the propagator. The first order propagator for a Dirac particle in Euclidean space is i/( p + m). In the second order description, one studies basically the square of the Dirac equation. Therefore, the propagator is now of Klein-Gordon type 1/(p 2 + m 2 ). The space-time prop-agator generated in the world-line formalism (and string theory) is usually of the second type, independent of the nature of the propagating field. For this reason, the second order formalism [16,13,17] is an important link in establishing the connection between expressions in the world-line formalism and ordinary Feynman diagram results [13,15].
In Euclidean space, the Lagrangian for a Dirac particle is given by eq. (1), L = ψ̄ O ψ, where the Dirac operator O of eq. (2) collects the kinetic term together with the vector, axial vector, scalar, and pseudo-scalar couplings, with the convention that {γ_µ, γ_ν} = −2δ_µν and γ_µ† = −γ_µ. The corresponding one-loop effective action is given by eq. (3), which writes Γ = −ln det O as a sum of two terms. To arrive at a second order formulation, one can pick an arbitrary operator O′ with the restriction that the free part in OO′ is quadratic. However, a good choice of O′ is important for an efficient perturbation theory derived from (3). In particular, it is convenient if we can choose O′ such that the second term vanishes. It is also convenient if it can be arranged such that in OO′ no covariant derivative acts to the right, except for those in the kinetic term. For a general O as given in (2), there is no choice for the operator O′ which satisfies both criteria. For the usual choice O′ = γ_5 O γ_5, the second term vanishes since det O = det O′. However, in the presence of axial and pseudo-scalar couplings, the second criterion mentioned above cannot be satisfied at the same time. This makes the translation of the expressions in the second order formalism into expressions in the world-line formalism difficult. Instead we choose, in eq. (4), O′ = O†. As we will see later on, this choice makes it straightforward to translate (3) into a world-line expression.
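For orientation, the block below spells out in LaTeX the decomposition that this paragraph describes. The primed equation labels are ours, the explicit form of O in eq. (2) is not reproduced, and the identification O′ = O† is inferred from the way the first term is rewritten as log OO†; treat this as a sketch of the structure rather than a verbatim transcription.

```latex
% Sketch of the second-order decomposition implied by the text; the
% primed equation labels are ours, not the paper's numbering.
\begin{align}
  \mathcal{L} &= \bar{\psi}\, O\, \psi , \tag{1'}\\
  \Gamma &= -\ln\det O
          = -\tfrac{1}{2}\ln\det\!\big(O O'\big)
            -\tfrac{1}{2}\ln\det\!\big(O O'^{-1}\big) , \tag{3'}\\
  O' &= O^\dagger \;\Longrightarrow\;
    \operatorname{Re}\Gamma = -\tfrac{1}{2}\operatorname{Tr}\ln\!\big(O O^\dagger\big),
    \quad
    \operatorname{Im}\Gamma \ \text{from}\
    -\tfrac{1}{2}\operatorname{Tr}\ln\!\big(O\,(O^\dagger)^{-1}\big) . \tag{4'}
\end{align}
```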
Rewriting the first term in (3) as log OO†, we see that the operator which generates the real part of the effective action in the second order formalism is the one displayed in eq. (5), where D_µ = ∂_µ + igV_µ + ig_5 γ_5 A_µ. We use V_µν and A_µν to denote the field-strength tensors.
The term which is a bit more difficult to handle is the second term in (3), corresponding to the imaginary part of the effective action Γ. The imaginary part of Γ is generated by processes involving at least one axial vector or one pseudo-scalar. This follows immediately from the fact that for O′ = γ_5 O γ_5 the second term vanishes while the first term gives the same result as for O′ = O. If we view the effective action primarily as the generating functional for one-particle irreducible Green's functions, the next step is clear: instead of looking at the original term, we look at the term after one functional differentiation. Using eq. (6) and the cyclicity of the trace, we can rewrite the derivative of the second term of (3) in the form of eq. (7), where U is either A_µ or φ′. This way, the expression we have to study is of the form of some operator times a second order propagator. From equations (5, 7) we can construct Feynman rules for their perturbative evaluation (tables 1, 2). The perturbative evaluation of (5) using the rules from table 1 is the standard procedure. To evaluate the extra terms generated by (7), one has to take one vertex from table 2 (corresponding to the term in braces) and all others from table 1. All propagators are of Klein-Gordon type. Using this set of rules it is straightforward to compute the one-particle irreducible one-loop Green's functions.

Table 1. Feynman rules derived from eq. (5). We use p (= i∂) to denote the momentum of the incoming boson, and q for the momentum of the incoming fermion. A global factor of 1/2 and a negative sign for the fermion loop are required.

For the calculations we present here, we use dimensional regularisation. We verified, however, that the same results are obtained using Pauli-Villars regularisation. Since in this paper we only deal with one-loop processes, we use a naïve scheme for the treatment of γ_5 in dimensional regularisation [18].
To warm up, let us compute the axial-axial two-point function. In principle, we have to compute four terms: the regular term (eq. (5)) and the extra term (eq. (7)) for diagrams (a) and (b) of figure 1. As it turns out, only the regular terms contribute, while the extra terms vanish. The result of this calculation, valid in arbitrary even dimension, is given in eq. (8), where n_F = tr(1) is the number of fermionic degrees of freedom. This is related to the standard first order result by integration by parts and by Γ-function identities. A more interesting example is given by the axial-vector and vector pseudo-scalar two-point functions in two dimensions. With those two processes we can study the axial current Ward identity and the chiral anomaly in two dimensions. It turns out that for this case the contribution from the regular second order expression vanishes. Instead, the process is described using the extra Feynman rules (table 2). Here, we chose the vertex to be a special vertex for the coupling to the axial current; the other vertex, for the vector current, is an ordinary vertex. Besides this, the calculation uses standard Feynman techniques. Again, the resulting expressions are equivalent to the results of an ordinary, first order calculation. If we check the axial current Ward identity we find, as expected, an extra anomalous term on the right-hand side; a first order field-theory calculation verifies that this is indeed the axial anomaly in two dimensions.
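For reference, the two-dimensional axial anomaly that this Ward-identity check reproduces has the standard form below; the overall normalization is convention-dependent and is quoted here as an assumption, not as the paper's exact expression.

```latex
% Standard form of the axial anomaly in two dimensions; the
% normalization is convention-dependent and quoted as an assumption.
\begin{equation}
  \partial_\mu \langle j_5^{\mu} \rangle
  \;=\; \frac{g}{\pi}\,\epsilon^{\mu\nu}\,\partial_\mu V_\nu
  \;=\; \frac{g}{2\pi}\,\epsilon^{\mu\nu}\,V_{\mu\nu} .
\end{equation}
```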
Our goal now is to translate these results into a world-line formulation. Since we study the couplings to an internal fermion, we start from the usual description of a spinning particle by a supersymmetric world-line Lagrangian [19,20,21] using a curved superspace description. The particle is described by a superfield X_µ(τ, θ) = x_µ(τ) + θ √e ψ_µ(τ), where x is a normal commuting number and θ is a Grassmann variable. To keep the manifest reparametrisation invariance of the super world-line, we introduce the super-einbein Λ = e + θ √e χ. Furthermore, to couple the spinning particle to scalar and pseudo-scalar fields, we need the two auxiliary fields X̄, X′ = √e ψ_{5,6} + θ x_{5,6} [15]. Since we deal with a curved superspace on the world-line, we also have to distinguish the two derivatives D_θ = Λ^{−1/2}(∂/∂θ − θ ∂/∂τ) and D_τ = Λ^{−1} ∂/∂τ. Translating the second order action (5) into world-line form, we suggest the world-line action given in eq. (14). Before we can do any calculations using this action, we have to fix the reparametrisation invariance by fixing Λ. For calculations on the circle, a valid and convenient gauge is e = 2, χ = 0. In this gauge, the world-line action takes the component form of eq. (15). A mass term for the fermion is generated by shifting φ → φ + m/λ [15].
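For orientation, the block below recalls the standard gauge-fixed (e = 2, χ = 0) spinning-particle Lagrangian with a vector background — the familiar sector on top of which eq. (15) adds the axial, scalar, and pseudo-scalar couplings. It is quoted as a standard reference form, not as the paper's full action.

```latex
% Standard Euclidean worldline Lagrangian of a spinning particle in a
% vector background, in the gauge e = 2, chi = 0; the axial, scalar,
% and pseudo-scalar terms of eq. (15) are additions to this core.
\begin{equation}
  L_0 \;=\; \frac{\dot{x}^{2}}{4}
        \;+\; \frac{1}{2}\,\psi_\mu \dot{\psi}_\mu
        \;+\; i g\,\dot{x}^{\mu} V_\mu
        \;-\; i g\,\psi^{\mu} V_{\mu\nu}\,\psi^{\nu} .
\end{equation}
```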
Having this world-line action is not all. For example, one can check immediately that it is impossible to generate diagrams with an odd number of pseudo-scalars. This corresponds to the situation in the second order formalism, so we also need to describe the terms corresponding to (6). They can all be generated from eq. (16), where we have U = A_µ or φ′. We can translate the terms in table 2 in a suggestive manner; the inserted operators are then those of eq. (17). The quartic vertices in table 2 are generated by δ-functions which appear in some of the world-line Green's functions. The explicit p_µ in the terms is really a derivative acting on U.
The operator (−1) F anti-commutes with all fermionic fields and this way implements the usual γ 5 in the Dirac algebra. The presence of (−1) F changes the boundary conditions for world-line fermions from anti-periodic to periodic and is known to play a rôle in the computation of anomalies [22,23]. This and the introduction of the auxiliary fields X ′ andX are the only new ingredients in the action.
All these terms are of a surprisingly simple form, which is almost a product of the kinetic part of the Lagrangian in (15) with the interaction part of the field under consideration, all projected to the lower component of the superfield. This is not manifestly supersymmetric.
Before we can perform any calculations in this world-line formalism, we have to expand the action in components and perform the integral over the auxiliary fields. This step is necessary to resolve some ambiguities in the evaluation of products of Green's functions on the world-line (similar ambiguities appeared in the world-line calculations of [24]). These ambiguities arise since G̈_B and ⟨x_5 x_5⟩ contain a δ-function while Ġ_B and G_F contain a step-function. After the removal of the auxiliary fields from the world-line action, we can always use G_F² = 1 in all calculations, even for identical arguments (relevant if multiplied by a δ-function). An example of this ambiguity occurs in the calculation of the axial current two-point function. We see that both the ψ_5 ψ_6 ẋ^µ A_µ term and the (ψ_5 x_6 − ψ_6 x_5) ψ^µ A_µ term in (15) generate G_F²(u) δ(u). However, to reproduce the second order calculation, only the first term is allowed to contribute while the contribution from the second term has to vanish.
Integrating out the auxiliary fields removes the second term in the axial coupling in (15) and introduces quartic couplings instead. The resulting world-line action bears a much closer relation to the second order expression for the effective action (5). Some of the subtleties mentioned here will be further discussed in a forthcoming publication [25]. Now let us look at our examples again. The evaluation of the axial-vector axial-vector two-point function is fairly simple: the calculation follows the usual world-line rules [1,2,15] and leads to expression (18), where G_B is the bosonic Green's function on the circle. Replacing G_B = u(1−u), Ġ_B = 1−2u, and G̈_B = 2δ(u)−2, it is easy to check that this is indeed the result from eq. (8). In the Green's function, the δ-function is understood to be on the circle. As in the second order calculation, the extra terms do not play a rôle in this process.
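The explicit two-point expression (18) is not reproduced above, but the quoted Green's-function identities can be checked independently. Below is a small self-contained numerical sketch — our own illustration, not code from the paper — verifying that G_B(u) = u(1−u) on the unit circle agrees with its zero-mode-subtracted Fourier series, whose second derivative is 2δ(u) − 2:

```python
import numpy as np

# Worldline bosonic Green's function on the unit circle: G_B(u) = u(1 - u).
# Its zero-mode-subtracted Fourier representation is
#   G_B(u) = 1/6 - (1/pi^2) * sum_{n>=1} cos(2*pi*n*u) / n^2,
# consistent with Gdot_B = 1 - 2u and Gddot_B = 2*delta(u) - 2 on the circle.
u = np.linspace(0.0, 1.0, 201)
closed_form = u * (1.0 - u)

n = np.arange(1, 20001)  # truncated Fourier sum
fourier = 1.0 / 6.0 - (np.cos(2.0 * np.pi * np.outer(u, n)) / n**2).sum(axis=1) / np.pi**2
print("max |difference|:", np.max(np.abs(closed_form - fourier)))  # ~5e-6 (truncation error)

# Derivative check: d/du [u(1-u)] = 1 - 2u, matching Gdot_B above.
assert np.allclose(np.gradient(closed_form, u)[1:-1], (1.0 - 2.0 * u)[1:-1], atol=1e-6)
```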
More interesting is the computation of the vector–axial vector and pseudo-scalar–vector two-point functions. We can see immediately that the direct term generated by (18) vanishes, so we have to look at the terms generated by (16). Here the (−1)^F appears, which corresponds to the presence of a γ_5 in the field-theory calculation. This means that the boundary conditions for the fermions get changed and both fermions and bosons have periodic boundary conditions. In this case there are zero modes for the fermions which we have to take into account. Furthermore, the fermionic Green's function is changed; i.e., the world-line supersymmetry is no longer broken by the boundary conditions. In the resulting expression, the extra subscripts 1, 2 denote the vertex insertion to which the fields belong. Evaluating this term we notice that we have to include one zero mode for each fermion present. For ψ_5 and ψ_6 this is already done; for the ψ^µ, this is indicated by the modified contraction ⟨···⟩′′. In n dimensions this automatically produces a factor ∏_{i=1}^{n} ψ^{µ_i}, which is nothing but the n-dimensional ε-tensor. In the result we used the identity ε^{µν} p² = ε^{µα} p_α p^ν − ε^{νβ} p^µ p_β. We use this identity again and substitute the world-line Green's functions to recover the second order result (10). For the pseudo-scalar vector coupling we find the analogous expression. Both expressions agree with the second order results obtained above. This implies that also in the world-line formalism we were able to compute both the axial current Ward identity and the axial anomaly. It is possible, even though slightly more cumbersome, to do the same calculation using a Pauli-Villars regulator instead of dimensional regularisation. The final result is not affected by this choice.
In four dimensions, we can do the same for the triangle graphs with one or three axial currents [25]. Again, the ε-tensors are produced by the zero modes of the fermion fields.
We have seen how useful the second order formalism is as a tool for constructing world-line actions for a spinning particle in a loop. By constructing an appropriate second order representation for a theory with Dirac fermions, we were able to find a world-line representation which reproduces exactly the second order expressions we started with. This indicates strongly that the world-line formalism suggested here indeed reproduces the usual field theory results. Important new ingredients are the auxiliary fields X′ and X̄, and the inclusion of (−1)^F in the representation of γ_5. This allowed us to treat processes with an odd number of vertices with γ_5-couplings, especially those connected to the chiral anomaly. A computerized higher order effective action calculation to verify the agreement between the standard approach and the world-line formalism is in preparation [25]. Furthermore, it is interesting to see whether this kind of world-line formulation of axial couplings allows for the generalization to multi-loop processes and to processes with open fermion lines.
In a recent preprint [26], which we received while finishing this letter, D'Hoker and Gagné also introduce the operator (−1)^F for the calculation of the imaginary part of the effective action as a means to generate the ε-tensor from the fermionic zero modes. They do not include axial vector couplings, though. Furthermore, they derive an elegant integral representation for the imaginary part of the effective action. This representation has the advantage of providing a closed form for the imaginary part, which is still missing in our approach. The resulting perturbation theory introduces an additional Feynman parameter-like integration. It seems to us that the construction of the world-line representation as we present it here, including the superfield formulation, can also be done starting from their integral representation.
An alternative construction of the Yukawa coupling of a spinning particle and a bosonic representation of γ_5 is given in [27]. | 2014-10-01T00:00:00.000Z | 1995-10-08T00:00:00.000 | {
"year": 1995,
"sha1": "f0dceadb47de1ee42321f797295666e96bd00e4b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9510036",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e5973b6e0a879a1236cb0116a80dd220d98ba6ff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218665236 | pes2o/s2orc | v3-fos-license | Recent results on cold-QCD from RHIC
Polarized proton-proton collisions at the Relativistic Heavy Ion Collider (RHIC) provide unique opportunities to study the spin structure of the nucleon. We will highlight recent results on the nucleon spin structure from the STAR and PHENIX experiments at RHIC: (1) a sizable gluon polarization in the proton is measured with longitudinal double-spin asymmetries of jet and hadron production; (2) longitudinal single-spin asymmetries in W boson production improve constraints on the sea quark polarization, and the new spin asymmetry results for the W boson confirm the SU(2) flavor asymmetry of the light sea quark polarization in the proton; (3) transverse spin effects in hadronic systems offer new insights into parton distribution functions in the collinear and transverse-momentum-dependent frameworks. We will also discuss near-term plans for the STAR forward detector upgrade and prospects for proton-proton and proton-ion collisions in the years beyond 2021 at STAR.
Introduction to RHIC spin program
In addition to colliding heavy ions, RHIC at Brookhaven National Laboratory is the world's only polarized proton-proton (pp) collider. By colliding high-energy beams of polarized protons, RHIC provides unique opportunities for exploring the spin structure of the proton [1]. In 1988 the EMC experimental results showed that the contribution from quark and antiquark spin to the nucleon spin is surprisingly small, much smaller than expected, leading to the so-called "spin puzzle". The results from polarized Deep-Inelastic-Scattering (DIS) experiments in the past 30 years have shown that the spins of the quarks and antiquarks account for only about 30% of the proton's spin in the measured x-range.
Since 2001, RHIC has been providing pp collisions at center of mass energies of 200 GeV and 500 GeV with beams longitudinally or transversely polarized. There have been two main detectors, PHENIX and STAR, at RHIC to perform spin experiments as well as heavy ion ones. PHENIX completed its data taking in 2016. STAR is currently the only experiment running at RHIC.
In the following, we will highlight recent results on nucleon spin structure from the STAR and PHENIX experiments. RHIC spin experiments have been providing information on how much gluon spin contributes to the spin of the proton, which is observed to be sizable. Significant constraints on the sea quark polarization are also obtained beyond the semi-inclusive DIS measurements; in particular, the SU(2) flavor asymmetry of the light sea quark polarization is confirmed through W boson production. Information on how quarks move around inside the proton can be gained by studying transverse spin effects at RHIC, which offer new insights into parton distribution functions in the collinear and transverse momentum dependent (TMD) frameworks. Finally, we will discuss future plans for the STAR forward detector upgrade and prospects for pp and proton-ion (pA) collisions in the years beyond 2021 at RHIC.
Gluon polarization determination with jet/hadron production
The gluon polarization was once expected to account for most of the "missing" part of the proton spin. In lepton-nucleon DIS processes it can be determined through scaling violations and the gluon splitting process, but such determinations were limited by kinematic range and statistics. In pp collisions at RHIC, the gluon helicity distribution ∆g(x) can be accessed with strongly interacting probes, in jet or hadron production, by measuring the longitudinal double-spin asymmetry A_LL, defined in terms of the cross sections σ^{++} and σ^{+−} for jet/hadron production with equal and opposite helicities of the two proton beams. Experimentally, P_1 and P_2 are the beam polarizations, R = (L^{++}+L^{−−})/(L^{+−}+L^{−+}) is the relative luminosity ratio, and the N's are the corresponding jet/hadron yields. In QCD, assuming factorization, A_LL is proportional to the convolution of the parton helicity distributions, the double-spin asymmetry for the hard partonic scattering, and the fragmentation function for hadron production. A significant fraction of the partonic scatterings at RHIC is gluon-involved, such as gluon-gluon or quark-gluon scattering for jet and hadron production, which thus provides sensitivity to the gluon helicity distribution function [1]. The first evidence of non-zero gluon polarization was provided by A_LL measurements for inclusive jets at STAR [4] and for π0 at PHENIX [5] in pp collisions at 200 GeV. The global analyses from the DSSV and NNPDF groups indicated that the integral of the gluon helicity distribution within the kinematic range of 0.05 < x < 0.2 could be as large as about 0.2 [2,3]. To probe the gluon polarization in the lower-x region, STAR recently published A_LL results for inclusive jets at 510 GeV [6], and PHENIX also published A_LL results for inclusive π0 at 510 GeV [7]. Both results are shown in Fig. 1 and can provide constraints on the gluon polarization with x down to 0.015. In addition, di-jet correlation measurements can provide information on the x-dependence of the gluon helicity distribution. Figure 2 shows the A_LL results for di-jet measurements with STAR at 200 GeV and 510 GeV with different detector coverages [6,8,9]. The recent STAR di-jet results have been included in a global analysis and demonstrated their significance in constraining the shape of ∆g(x) [10].
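The displayed definition of A_LL is not reproduced above. In the notation already introduced (σ^{±±} for the cross sections at the indicated beam helicities, with yields N, polarizations P_1, P_2 and relative luminosity R), the conventional RHIC expressions — quoted from common usage, not recovered from this paper's equation — are

```latex
A_{LL} \;=\; \frac{\sigma^{++}-\sigma^{+-}}{\sigma^{++}+\sigma^{+-}}
\;\simeq\; \frac{1}{P_1 P_2}\,\frac{N^{++}-R\,N^{+-}}{N^{++}+R\,N^{+-}},
\qquad
R=\frac{L^{++}+L^{--}}{L^{+-}+L^{-+}}\,,
```

where N^{++} (N^{+−}) denotes the same-helicity (opposite-helicity) yields.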
Probing sea quark polarization via W boson production
The spin contribution of the sea quarks is also an important piece of the complete understanding of the nucleon spin structure. The production of W± bosons in pp collisions with one beam longitudinally polarized provides a uniquely clean probe of the sea quark helicity distributions, without the complication of hadron fragmentation present in semi-inclusive DIS [1]. At RHIC, the longitudinal single-spin asymmetry A_L for W boson production in pp collisions is defined in terms of σ^{+/−}, the W cross section with a positive/negative longitudinal spin orientation of the proton beam. At RHIC, the W's are detected via their leptonic decays, which are characterized by an isolated e± or µ± with a sizable transverse energy peaking near half the W mass. The final A_L results for W± bosons versus lepton pseudorapidity from the STAR 2013 data sample [13] are shown in the left panel of Fig. 3, in comparison with STAR results from the 2011+2012 data [12] and the final PHENIX results of A_L for leptons from W/Z decay [14,15]. The 2013 data sample corresponds to an integrated luminosity of about 250 pb−1 with an average beam polarization of about 56%. This is about 3 times the previous data sample taken in 2011 and 2012 [12], and thus the most precise measurement of the W A_L at RHIC. The combined STAR data from the years 2011–2013 are shown in the right panel of Fig. 3, in comparison with theoretical expectations based on different inputs of polarized parton densities. Figure 3. (left) A_L for W± versus lepton pseudorapidity from STAR [13] and PHENIX [14,15] pp data at 510 GeV. (right) Combined results on A_L for W bosons from STAR data 2011–2013 [13], compared with theoretical calculations.
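The displayed definition of A_L is likewise not reproduced. With σ^{+} and σ^{−} as defined above, the standard form — quoted from common usage, not from this paper's displayed equation — is

```latex
A_{L} \;=\; \frac{\sigma^{+}-\sigma^{-}}{\sigma^{+}+\sigma^{-}}\,.
```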
To assess their impact in constraining the sea quark helicity distributions, the STAR 2013 A_L data were used in the reweighting procedure of the NNPDF global fit based on the NNPDFpol1.1 parton densities [3]. The results from the reweighting are shown in Fig. 3 (right plot, blue hatched bands), and the uncertainties are significantly reduced compared to before the reweighting. The helicity distributions of ū and d̄ quarks in the proton and their difference ∆ū(x) − ∆d̄(x) from the reweighting are shown in Fig. 4 [13]. The new results confirm the existence of a flavor asymmetry in the polarized quark sea, ∆ū(x) > 0 > ∆d̄(x), in the range of 0.05 < x < 0.25 at a scale of Q² = 10 (GeV/c)². This is opposite to the flavor asymmetry observed in the unpolarized quark distributions, where d̄(x) > ū(x) over a wide x range has been observed in the Drell-Yan process [16].
Transverse spin physics results
Significant transverse single spin asymmetries (A N ) have been observed for different hadron production in hadron-hadron collisions over a wide range of colliding energies since the 1970's. STAR measurements have demonstrated the persistence of sizeable A N for forward π 0 production at RHIC energies [17], where different mechanisms including the higher twist effect, and TMD effects like the Sivers or Collins fragmentation effect could all contribute. Therefore, it is important to study different effects separately for a full understanding of the underlying mechanism.
The hadron production within a jet in pp collisions provides direct access to the Collins fragmentation function, and the corresponding A_N is also connected to the proton transversity. Figure 5 shows the results of the mid-rapidity Collins asymmetries as a function of z for charged pions in pp collisions at 500 GeV at STAR [18], where z is the momentum fraction of the jet carried by the pion. Figure 5. Collins asymmetries as a function of pion z for jets reconstructed with ⟨p_T⟩ = 31 GeV and 0 < η < 1 in pp collisions at 500 GeV at STAR [18]. The asymmetries are shown in comparison with model calculations (KPRY and KPRY-NLL).
STAR also made measurements of transverse single spin asymmetries in di-hadron production at mid-rapidity [19], which provide another channel to access the proton transversity through the analyzing power of the interference fragmentation functions (IFF). Figure 6 shows the results for the IFF asymmetry A_UT^{sin φ} at mid-rapidity as a function of the di-hadron invariant mass at 200 GeV and 500 GeV. The transverse single spin asymmetry for W and Z production in pp collisions provides an excellent opportunity to test the sign change of the Sivers function, in comparison to semi-inclusive DIS. STAR published the first results on the W boson A_N [20], which indicated a preference for the sign change. The analysis of a 14 times larger data sample taken in 2017 at STAR is underway and will provide an important test of the sign change of the Sivers function.
The possible nucleus dependence of single spin asymmetry has been predicted and also measured at RHIC. PHENIX measurements of the single spin asymmetry of forward neutron in proton-nucleus collisions indicated a possible A-dependence [21], while the STAR measurements of forward π 0 do not show this trend [22]. In addition, STAR made the first measurement of the transverse spin transfer for Λ andΛ hyperons in transversely polarized pp collisions at 200 GeV [23], which may provide insights into the transversity distribution of the strange quark. Figure 6. The azimuthal asymmetry A UT for π + π − pairs as a function of invariant mass at 200 GeV and 500 GeV [19], and compared with predictions.
Summary and future plan for cold-QCD physics at RHIC
RHIC continues its efforts to deepen our understanding of the nucleon spin structure. Sizable gluon polarization in the proton has been determined from RHIC measurements with polarized proton beams. New results on double spin asymmetries A LL for inclusive jets, di-jets and π 0 production from the STAR and PHENIX experiments in pp collisions at 200 and 510 GeV, are providing further constraints on the gluon helicity distribution. Both PHENIX and STAR experiments also published their final results on the single-spin asymmetry A L for W boson production from the largest data sample taken in 2013. The new A L results from STAR further confirmed the SU(2) flavor asymmetry in sea quark polarization, i.e., ∆ū(x) > 0 > ∆d(x) in the range 0.05 < x < 0.25 at Q 2 ∼ 10 (GeV/c) 2 . On the transverse spin program, new results on single spin asymmetries of the Collins effect for pion production within a jet at mid-rapidity are reported from STAR. STAR also made measurements of di-hadron spin asymmetries (IFF), which provide insights into the proton transversity distribution. The transverse single spin asymmetry for W and Z production with a large data sample taken in 2017 at STAR will provide a great opportunity to test the sign-change of the Sivers function.
STAR is currently performing a detector upgrade in the forward rapidity region 2.5<η<4, which includes a Forward Tracking System (FTS) and a Forward Calorimeter System (FCS). The FTS consists of 3 layers of silicon mini-strip disks and 4 layers of small-strip Thin Gap Chamber, and the FCS includes both electro-magnetic and hadron calorimeters. The upgrade will be completed in late 2021 and will provide improved calorimetry, tracking, and charge identification and photon hadron separation in the forward region, which will enable the measurements of full jets, Drell-Yan, and prompt photons in pp and pA collisions. Dedicated polarized pp running is expected at RHIC-STAR in the year of 2021/2022, and then STAR will continue to run pp/pA/AA in parallel with sPHENIX beyond 2021. In particular, the proposed pp/pA measurements will be essential to fully realize the scientific promise of the EIC collider [24].
The author is supported partially by the National Natural Science Foundation of China (No. 11520101004). | 2020-05-18T01:00:17.885Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "32cdf1ea863991884f6ba62a7a8b6bcfd6fa808d",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2020/11/epjconf_ismd2019_03002.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "32cdf1ea863991884f6ba62a7a8b6bcfd6fa808d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
85513777 | pes2o/s2orc | v3-fos-license | Effect of Salmeterol-Fluticasone Combination and Tiotropium on Clinical and Physiological Improvement of Bronchial Anthracofibrosis: a Double Blind Randomized, Cross Over, Placebo Controlled, Clinical Trial.
Background: Bronchial anthracosis is the black discoloration of the bronchial mucosa that exhibits manifestations similar to chronic obstructive pulmonary disease (COPD). The etiology of this obstructive lung disease has not been elucidated, and a standard therapy for it has not been introduced in the literature. The objective of this study is to determine the efficacy of the salmeterol-fluticasone inhaler and tiotropium, two safe treatments for obstructive lung disease, in symptomatic subjects with anthracofibrosis of the lung. Materials and Methods: Twenty anthracofibrosis subjects who suffered from dyspnea were enrolled in this three-phase, cross-over, placebo-controlled clinical trial. The primary outcome variable was quality of life (evaluated with the CAT questionnaire); clinical findings and spirometry were the secondary outcome variables. Both drugs were delivered by inhaler, with matching placebos made identically by the reference manufacturer. Salmeterol-fluticasone was prescribed with a spacer and tiotropium with its special device, and the method of use was taught to the patients. Results: Twenty anthracofibrosis subjects were enrolled in this three-phase, five-month course of treatment with either salmeterol-fluticasone or tiotropium inhalers. The response to therapy was poor, both for salmeterol-fluticasone and for tiotropium, over a short course of treatment. However, the overall results of 5 months of therapy with both drugs showed improvement in 57% of the subjects. The most prominent results were found in the CAT score [25.1±5.54 before the trial, which decreased to 19.2±5.14 (Z score=2.7, P=0.007)] and in the clinical findings, especially sputum, chest pain, and wheezing (81%, 94% and 92% before the trial and 50%, 56% and 54% after the trial, respectively). Neither clinical findings nor spirometry was able to predict a good response to salmeterol-fluticasone or tiotropium. Conclusion: The combination of the salmeterol-fluticasone and tiotropium inhalers was able to improve the clinical findings of symptomatic anthracofibrosis patients.
INTRODUCTION
Bronchial anthracosis is the black discoloration of bronchial mucosa and is an old disease that is being increasingly reported in Asia, especially in rural areas (1,2).
Sometimes this is an accidental finding during bronchoscopy, but it also occurs in a more severe form called bronchial anthracofibrosis.

Randomization and random allocation
Placebos for the salmeterol-fluticasone inhaler and tiotropium were obtained from the producer of the drugs (Cipla Co., Goa, India) and were completely identical in appearance to the originals. The drugs were coded before the study, and a pharmacist blinded to the status of the subjects then distributed them. Neither the physician nor any of the patients knew which drugs they were started on. The drugs were randomly allocated to the two groups using a computational random number generator (SPSS software). Anthracofibrosis subjects were divided into two groups randomly according to the study protocol (Figure 1).
First phase
For the first group (10 subjects), the salmeterol-fluticasone combination was prescribed along with a placebo in place of tiotropium for one month. The second group (10 subjects) was prescribed tiotropium and a placebo for the salmeterol-fluticasone combination. After one month, subjects were reassessed with the three tools used to measure the outcome variables mentioned above, and then all of the subjects were administered a placebo for a drug washout.
Second phase
Next, the subjects were crossed over to the other group and administered the other drug and placebo for one month. Afterward, they were evaluated again for the outcome variables and were administered the placebo for drug washout.
Third phase
The subjects in both groups were combined (20 subjects) and received salmeterol-fluticasone and tiotropium together for one month. At the end of this phase, the outcome variables were evaluated for the fourth time and the subjects were questioned regarding the side effects of the drugs. At the end of this phase, the study was completed.
Statistical analysis
According to the frequency of anthracofibrosis in our
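The sample-size description above is cut short in this copy. For the paired pre/post comparisons reported in the Results (e.g., the CAT total score change with Z = 2.7, P = 0.007), a Wilcoxon signed-rank test is the natural non-parametric choice. A minimal illustration with hypothetical CAT scores — not the study's actual data — is:

```python
from scipy.stats import wilcoxon

# Hypothetical paired CAT total scores for 20 subjects (NOT the study's data):
# one measurement before the trial and one after the five-month course.
cat_before = [28, 24, 31, 22, 26, 19, 30, 25, 27, 21, 33, 23, 29, 20, 26, 24, 32, 25, 22, 27]
cat_after  = [21, 20, 26, 18, 19, 17, 25, 19, 22, 18, 27, 19, 23, 17, 20, 21, 26, 20, 19, 22]

# Wilcoxon signed-rank test for paired, ordinal / non-normal data.
stat, p_value = wilcoxon(cat_before, cat_after)
print(f"W = {stat}, P = {p_value:.4f}")  # a small P indicates a significant change
```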
RESULTS
Twenty anthracofibrosis subjects, whose diagnoses were proven by bronchoscopy and who suffered from respiratory symptoms, were enrolled in this study.
Bronchoscopic findings showed that anthracofibrosis was localized in some lobar bronchi in five (25%) subjects and it was diffuse in 15 (75%) subjects. One subject showed associated lung fibrosis in addition to anthracofibrosis.
These 20 subjects completed one-month courses of salmeterol-fluticasone combination/placebo and tiotropium/placebo therapy sequentially. From these two groups, two subjects improved and did not continue with the third phase of the salmeterol-fluticasone combination/tiotropium therapy.
Demographic data
Computed tomography was performed in eleven subjects. Lymph node high attenuation (calcified-like lesions) was observed in 5/11 (45%) and a mass with or without high attenuation in 8/10 (80%). Table 1 shows the clinical findings of the anthracofibrosis subjects who received salmeterol-fluticasone and a placebo of tiotropium. Among these five major clinical findings, dyspnea was the most frequent, and wheezing on physical examination showed the best improvement, although none of the changes after the treatment were statistically significant. Evaluation of the CAT questionnaire showed improvement in some parameters, especially step tolerance (P=0.02), disability during work, and sleep quality (Table 2).
Effects of the salmeterol-fluticasone inhaler on anthracofibrosis
Total CAT score as the cumulative result of the CAT questionnaire showed severe disability and low quality of life, which improved non-significantly with treatment (Table 2).
Spirometry parameters showed moderate mixed restrictive and obstructive type impairment (Table 3).
Treatment with salmeterol-fluticasone was able to improve the small airway obstruction as shown by significant improvement of FEF25-75 and FEF25-75/FVC (Table 3).
Restrictive pattern was the predominant pattern of the spirometry before and after the trial (Figure 3). Although some improvement could be found in some parameters such as cough, sputum, and sleep quality, the statistical analysis did not show significant differences.
Effects of tiotropium on anthracofibrosis
Sputum was mostly whitish in appearance, but after treatment with tiotropium it shifted toward the no-sputum category in a number of subjects.
Effects of the combination of tiotropium and the salmeterol-fluticasone inhaler on anthracofibrosis
Improvement in some clinical findings was evident when using the tiotropium inhaler together with salmeterol-fluticasone in our anthracofibrosis subjects, especially in those with sputum and wheezing (41% and 31% improvement, respectively) (Table 1), but the statistical analysis did not show significant changes. Figure 2 shows the different types of sputum in these subjects, with a non-significant decrease of whitish and mucoid sputum in favor of subjects without sputum (χ²=12, P=0.21). On the other hand, analysis of the CAT questionnaire results showed significant improvement in dyspnea and outside activity (P-values 0.01 and 0.05, respectively).
Other parameters of the CAT score, including the total score, showed some improvement, although not significant (Table 2). Spirometry analysis also did not show significant changes before and after treatment with the tiotropium and salmeterol-fluticasone inhalers (Figure 3, Table 1).
After the complete course of the study, sputum disappeared in five subjects (31%), and the color of the sputum changed significantly: it became whitish in six (75%) subjects and yellow in two (25%) subjects (Figure 2) (χ²=20, P=0.01).
Assessment of quality of life by the CAT
questionnaire also showed improvement of quality of life.
The changes during the trial were not significant for cough, sleep quality, and weakness, but the remaining parameters were improved significantly ( Table 2).
Predicting factors for good response
Demographic data, clinical findings, CAT score, and spirometry results were compared between the two groups of responders and non-responders to determine the best predicting factors for good response to treatment. Tables 4 and 5 show the most suitable demographic, occupational, clinical, radiological, and physiological (spirometry) findings that could help determine a predictive factor for good response to long term bronchodilator with or without inhaled corticosteroid.
Comparison of these risk factors did not show any significance for any of them, but the odds ratio and 95% confidence interval showed a role for baking, cough, wheezing, and chest pain for predicting a good response.
Furthermore, the type of sputum in improved subjects was equally whitish and yellowish in color in both groups; therefore, it was not a predicting factor for improvement. (7).
In Western countries, as biomass use has decreased considerably, occupational exposure to coal has remained the most important risk of acquiring native anthracosis (8).
However, as the origin of the anthracotic nodule in the vesicles of bronchial macrophages is undefined, none of these studies were able to introduce an effective treatment targeting the pathophysiology of the disease. Most comprehensive reviews recommend avoiding exposure to organic and inorganic dusts (9), but this strategy will not relieve the clinical symptoms and illness of anthracofibrosis patients.
Treatment of associated tuberculosis usually makes a considerable clinical and radiological improvement (10,11).
Decisions about starting anti-tuberculosis therapy are quite easy, as the diagnosis of anthracofibrosis requires bronchoscopy, during which sufficient samples for the diagnosis of tuberculosis can be collected (12).
We should assume that not all subjects with anthracofibrosis are associated with tuberculosis (13). Since there is presently no method to remove an anthracotic nodule from the bronchial mucosa, the authors of this research and many other physicians in our region prescribe long-acting bronchodilators for the treatment of anthracofibrosis. The rationale for selecting this treatment is adapted from COPD: we should remember the influence of biomass in the formation of both COPD and anthracofibrosis, so choosing long-acting bronchodilators might be as prudent in anthracofibrosis as it is in COPD. In this study, a long-acting beta2-agonist (salmeterol) and a long-acting anti-muscarinic agent (tiotropium) were chosen for the treatment and were compared with each other. Salmeterol was combined with inhaled corticosteroids to better cover asthma-like bronchial disease. All of these drugs are safe and are available in low-income countries. The results of this research have shown that the beneficial effects of these drugs cannot be demonstrated in a short course of therapy, but long-term usage is helpful. Due to the low mortality and morbidity rate of this disease (1,5) in comparison to COPD, we recommend using these drugs for the long term, usually more than three months, especially during the winter season, and to start with one drug and add another if an appropriate response is not seen. We also recommend further research to compare the effect of LABA alone with the LABA-ICS combination. In conclusion, long-acting bronchodilators, including long-acting beta2-agonists and long-acting muscarinic antagonists, with or without inhaled corticosteroids, are safe and effective drugs for the treatment of symptomatic bronchial anthracofibrosis.
Conflict of interest
The authors of this article did not receive any grant from any drug company; the provider of the placebos delivered them free of charge and was not involved in any part of the design or analysis of this study. | 2019-03-28T13:02:31.384Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "d5c0a9c9769c314cb46c04fefb3405a3f69fa263",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d5c0a9c9769c314cb46c04fefb3405a3f69fa263",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244773164 | pes2o/s2orc | v3-fos-license | Using Reconfigurable Intelligent Surfaces for UE Positioning in mmWave MIMO Systems
A reconfigurable intelligent surface (RIS) consists of massive meta elements, which creates a reflection path between a base station (BS) and user equipment (UE). In wireless localization, this reflection path aids positioning accuracy, especially when the line-of-sight (LOS) path is subject to severe blockage and fading. We develop a RIS-aided positioning framework to locate a UE in environments where the LOS path may or may not be available. We first estimate the RIS-aided channel parameters from the received signals at the UE. To reduce algorithmic complexity, we propose a linear combination of the estimated UE positions from the direct and reflection paths, which is shown to be approximately the maximum likelihood estimator in the large-sample regime when the estimates from different paths are independent. We optimize the RIS phase shifts to improve the positioning accuracy, and extend the proposed approach to the case with multiple BSs and UEs. We derive the Cramér–Rao bound (CRB) and demonstrate numerically that our proposed method approaches the CRB.
be applied in a non-LOS (NLOS) environment. In [21], the received signal measurements are structured as a tensor, based on which the channel parameters such as ToAs, AoAs and AoDs are extracted. In [22], a tensor-based channel estimation method for positioning and mapping was proposed for diffuse multipaths.
Since a RIS creates a reflection path between a BS and UE, the UE can utilize the measurements from this reflection path as additional information for positioning. Some works have shown that the positioning accuracy improves with the size of the RIS. The Cramér-Rao lower bound (CRB) of the positioning accuracy is analyzed in [26]- [30]. However, few existing literature have developed practical positioning algorithms for a RIS-aided system. Indoor positioning using the RSS is investigated by [31], [32], which estimates the position of a UE using the probability distribution of the RSS. In [33], the authors consider channel estimation and geometric mapping for positioning under the twin-RIS scenario.
In this paper, we develop a novel positioning and inference framework for RIS-aided systems using channel estimation techniques. Our approach is not limited to using RSS measurements. Different from the existing works [31], [32], we formulate our problem for the general case where there may be more than one RIS. In contrast with existing RIS-aided channel estimation methods [34]–[36], which estimate the cascaded channel by assuming that the direct channel is estimated in advance, we estimate the channel parameters such as the ToAs, AoAs and AoDs of the direct and reflection paths jointly. In addition, different from the geometric mapping in [33], our proposed inference model accounts for the estimation accuracy of the channel parameters, which yields a UE position estimation error close to the theoretical CRB.
The main contributions of this paper are summarized as follows: • We consider the down-link MIMO-OFDM setup in this work. Direct estimation of the UE position from the received signals is computationally expensive as it involves a nonlinear and non-convex optimization. Therefore, we propose a two-step positioning framework. In the first step, we estimate the channel parameters of the direct and reflection paths. In the second step, we obtain an estimate of the UE position from the channel parameters of each path. We derive the CRB of the UE position estimate under our positioning framework.
• To infer the UE position from the different estimates corresponding to the direct and reflection paths, we perform a linear combination of these estimates. The linear combination weights depend only on the covariance of the UE position estimates. We show that when the estimates from different paths are independent, the proposed linear combination is approximately the MLE of the UE position in the large-sample regime.
• To optimize the positioning framework, we propose an approach for designing the RIS phase shifts.
Specifically, the phase-shift design problem is to maximize the expectation of the reflection path gain, which can then be solved using the singular value decomposition.
One challenge is to distinguish the direct and reflection paths. In this work, different from the existing works where the path with the smallest delay is assigned as the direct path, we distinguish the direct and the reflection paths by ranking a path quantity related to its power level. This method is more robust if the SNR is low. Our proposed RIS-aided positioning framework is also readily extended to the multi-UE and multi-BS scenarios.
The rest of this paper is organized as follows. In Section II, the signal and channel model, and our system assumptions are introduced. In Section III, we derive the CRB of the UE positioning error under the signal and channel model. The proposed RIS-aided channel parameter estimation approach is discussed in Section IV. In Section V, we propose the fusion method to infer the UE position from the estimated channel parameters. In Section VI, we propose the method to optimize the RIS phase shifts and discuss the extension of our positioning framework to the multi-UE and multi-BS scenarios. We present numerical results in Section VII. Finally, we conclude in Section VIII.
Notations: A bold lower case letter a is a vector and a bold capital letter A represents a matrix.
A. Channel Model
We assume that the BS has a uniform rectangular array (URA) with N antennas. There are Q RISs and each is equipped with a URA of M elements. The UE has D antennas. In this work, we assume that the position of every RIS is known by the BS and UE. Without loss of generality, we adopt a coordinate system with the BS at its origin and the URA of the BS in y − z plane (see Fig. 1 for an illustration).
Each RIS' URA is assumed to be contained in a x − z plane perpendicular to the y − z plane of the BS URA.
We also assume that the UE's antennas are contained in a horizontal plane parallel to the BS URA, but with a possibly different orientation. Let M_R ∈ R^{3×3} be the rotation matrix associated with the UE, parameterized by the Euler angles α_1, α_2, α_3 with respect to (w.r.t.) the UE. For convenience, we define M̄_R = [M_R]_{2:3,:}. In this work, we assume that M_R is known a priori by the UE.
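The explicit entries of M_R are not shown above, and the paper's Euler-angle convention is therefore not fixed here. As an illustration only — the Z-Y-X (yaw-pitch-roll) order is our assumption — the rotation matrix and the row sub-block M̄_R can be formed as:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Euler angles of the UE in radians; the intrinsic Z-Y-X order is assumed,
# since the paper's displayed definition of M_R is not available here.
alpha1, alpha2, alpha3 = 0.3, -0.1, 0.7
M_R = Rotation.from_euler("ZYX", [alpha1, alpha2, alpha3]).as_matrix()  # 3x3 rotation

M_R_bar = M_R[1:3, :]                        # rows 2-3, matching the sub-block in the text
print(np.allclose(M_R @ M_R.T, np.eye(3)))   # True: orthogonality check
```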
We suppose that the communication system uses OFDM with K subcarriers. For the kth subcarrier, the channel from the BS to the qth RIS is denoted as G_{k,q} ∈ C^{M×N}, the channel from the qth RIS to the UE is H_{r,k,q} ∈ C^{D×M}, and the channel from the BS to the UE is H_{d,k} ∈ C^{D×N}.
1) BS-RIS links:
In this work, we model the BS-RIS channel as a mmWave channel. We assume that each RIS is placed at a sufficient height (e.g., on a tall building) so that there is a LOS path between the BS and the RIS.
From the OFDM assumption, the kth subcarrier of the qth BS-RIS channel is a rank-one matrix of the form given in [37], [38], where i = √−1 and h_{R1,q} = α_{R1,q} β_{R1,q} M N, with β_{R1,q} being the large-scale path gain and α_{R1,q} a complex-valued channel coefficient. W is the transmission bandwidth, and τ_{r1,q} is the propagation delay of the signal from the BS to the qth RIS. In particular, a_R(f_{R1,q}, v_{R1,q}) ∈ C^{M×1} and a_B(g_{Br,q}, v_{Br,q}) ∈ C^{N×1} are, respectively, the URA response vectors of the RIS and the BS, where g_{Br,q} = sin θ_{Br,q} sin φ_{Br,q} and v_{Br,q} = cos θ_{Br,q}, with θ_{R1,q} (or θ_{Br,q}) and φ_{R1,q} (or φ_{Br,q}) being the elevation and azimuth AoAs (or AoDs) associated with the BS-RIS link, respectively. To be more precise, the URA response vectors a_R(f, g) and a_B(f, g) in (2) are built from the one-dimensional factors āR(f) = (1/M^{1/4}) [1, exp(iπf), …, exp(iπf(M^{1/2}−1))] and āB(g) = (1/N^{1/4}) [1, exp(iπg), …, exp(iπg(N^{1/2}−1))].

2) RIS-UE link:
For the channel between the qth RIS and the UE, we again assume that a LOS path exists between the RIS and the UE. The kth subcarrier channel of the RIS-UE link has the analogous form, with h_{R2,q} = α_{R2,q} β_{R2,q} M D, where β_{R2,q} is the large-scale path gain, α_{R2,q} is a complex-valued channel coefficient, and τ_{r2,q} is the delay. The URA response vector a_R(f_{R2,q}, v_{R2,q}) is given in (5), and a_U(g_{Ur,q}, v_{Ur,q}) ∈ C^{D×1} is the URA response vector of the UE, where f_{R2,q} = sin θ_{R2,q} sin φ_{R2,q} and v_{R2,q} = cos θ_{R2,q}, with θ_{R2,q} and φ_{R2,q} being the elevation and azimuth AoDs associated with the RIS-UE link. Abusing terminology, we refer to (f_{R2,q}, v_{R2,q}) as the AoD of the qth RIS, and (g_{Ur,q}, v_{Ur,q}) as the AoA of the UE on the reflection path.
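The displayed equations (2)–(6) are not reproduced above, but the quoted one-dimensional factors āR(·), āB(·) suggest the usual construction of a square-URA response vector as a Kronecker product of two uniform-linear-array factors. A sketch under that assumption (the Kronecker ordering is ours, not confirmed by the paper):

```python
import numpy as np

def ula_factor(f: float, m_side: int, m_total: int) -> np.ndarray:
    # One-dimensional factor: (1/M^{1/4}) * [1, e^{i*pi*f}, ..., e^{i*pi*f*(sqrt(M)-1)}]
    return np.exp(1j * np.pi * f * np.arange(m_side)) / m_total**0.25

def ura_response(f: float, v: float, m_total: int) -> np.ndarray:
    # Square-URA response assembled as a Kronecker product of the two 1-D
    # factors; the ordering (f first, v second) is an assumption here.
    m_side = int(round(m_total**0.5))
    return np.kron(ula_factor(f, m_side, m_total), ula_factor(v, m_side, m_total))

M = 64
theta, phi = np.deg2rad(40.0), np.deg2rad(25.0)     # elevation, azimuth (illustrative)
f, v = np.sin(theta) * np.sin(phi), np.cos(theta)   # directional cosines as in the text
a = ura_response(f, v, M)
print(a.shape, np.isclose(np.linalg.norm(a), 1.0))  # (64,) True — unit-norm steering vector
```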
3) BS-UE link:
We model the BS-UE link channel using the Rician fading model, in which K_d is the Rician factor, H̄_{d,k} is the deterministic component (the LOS path), and Z_{d,k} denotes the small-scale fading, whose entries are independent and identically distributed (i.i.d.) according to CN(0, β_d), with β_d being the large-scale path gain. In the expression for H̄_{d,k}, we let h_d = (K_d/(1+K_d)) β_d N D α_d, with α_d a complex-valued channel coefficient; the URA response vectors of the UE and the BS, a_U(g_{Ud}, v_{Ud}) and a_B(g_{Bd}, v_{Bd}), are defined in (10) and (6), respectively. Here θ_{Bd} and φ_{Bd} are the elevation and azimuth AoDs associated with the BS-UE link. Abusing terminology, we refer to (g_{Bd}, v_{Bd}) as the AoD of the BS, and (g_{Ud}, v_{Ud}) as the AoA of the UE on the LOS path.
In summary, using the channel models of the BS-RIS link in (2), the RIS-UE link in (7), and the BS-UE link in (11), the effective channel between the BS and the UE on the kth subcarrier can be written as the sum of the direct channel and the cascaded RIS channels, with Θ_q = diag(e^{iθ_{q,1}}, …, e^{iθ_{q,M}}) denoting the phase shifts of the qth RIS and h_{r,q} the complex gain of the qth reflection path. Here, we define the channel parameters as a vector η collecting the delays, angles and complex gains of the direct and reflection paths. We denote the position of the UE as p_U = [x_U, y_U, z_U]^T, and the position of the qth RIS as p_{R,q} = [x_{R,q}, y_{R,q}, z_{R,q}]^T. To relate the channel parameters η to the UE position, let ξ = [p_U^T, Re{h_d}, Im{h_d}, Re{h_{r,1}}, Im{h_{r,1}}, …, Re{h_{r,Q}}, Im{h_{r,Q}}]^T.
Then, we can define a function F(ξ) = η from the relations in (3), (4), (8), (9), (13), (14), together with the received-signal model (19), in which x_k(t) ∈ C^{N×1} is the transmitted signal from the BS at time t, and n_k(t) ∈ C^{D×1} is a noise vector with entries i.i.d. according to the complex Gaussian distribution CN(0, σ²) and independent across time.
We assume that the transmitted signals are orthogonal, i.e., XX^H = (T/D) I, where I is the identity matrix. Moreover, the transmit power is assumed to be unity, i.e., ‖x(t)‖²₂ = 1, for t = 1, …, T. Writing the received signal in (19) in the compact form (20) and right-multiplying (20) by (D/T) X^H gives (21). The entries of (D/T) N_k X^H are i.i.d. Gaussian CN(0, σ² D/T) random variables. Here, we define R̃_k = (D/T) R_k X^H and, recalling the definition of H_k in (15), we obtain the observation model (22), R̃_k = H_k + Ñ_k, where Ñ_k = Z_{d,k} + (D/T) N_k X^H and its entries follow the corresponding Gaussian law. Our objective is to infer the position of the UE by using the observations {R̃_k}_{k=1}^K in (22). Because directly estimating the UE position from (22) is challenging, we first estimate the channel parameters, from which the UE position is then inferred.
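To make the signal model concrete, here is a small synthetic sketch of assembling the effective channel H_k = H_{d,k} + Σ_q H_{r,k,q} Θ_q G_{k,q} and the noisy observation R̃_k = H_k + Ñ_k. All array sizes, the random placeholder channels, and the noise level are made-up illustrations, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D, Q, K = 16, 64, 4, 2, 8          # BS/RIS/UE antennas, #RIS, #subcarriers
sigma2 = 1e-3                            # effective noise variance (illustrative)

# Random placeholder channels; in the paper these follow the LOS/Rician models.
H_d = rng.standard_normal((K, D, N)) + 1j * rng.standard_normal((K, D, N))
G   = rng.standard_normal((K, Q, M, N)) + 1j * rng.standard_normal((K, Q, M, N))
H_r = rng.standard_normal((K, Q, D, M)) + 1j * rng.standard_normal((K, Q, D, M))
Theta = [np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M))) for _ in range(Q)]

R_tilde = np.empty((K, D, N), dtype=complex)
for k in range(K):
    H_k = H_d[k] + sum(H_r[k, q] @ Theta[q] @ G[k, q] for q in range(Q))
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N)))
    R_tilde[k] = H_k + noise             # observation model: R~_k = H_k + N~_k
```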
III. CRB FOR UE POSITION ESTIMATION
In this section, we derive the CRB for the UE position estimation based on the observations in (22).
We will compare the performance of the proposed method against this bound in the numerical results in Section VII.
A. FIM of the channel parameters η
Recall that our observations are R̃_k = H_k + Ñ_k in (22). We perform two steps to obtain the Fisher information matrix (FIM). In the first step, we compute the FIM w.r.t. η of (16): for any unbiased estimator η̂, the error covariance is lower-bounded by the inverse FIM, where F is the FIM of η based on the observations from the kth subcarrier.
Accordingly, the FIM of η based on the observations from all K subcarriers is the sum of the per-subcarrier FIMs. Because the noise in (22) is Gaussian, the likelihood of the observations takes the standard Gaussian form, where C is a normalization constant.
The FIM of η is then obtained from this likelihood; after simplifications, we arrive at closed-form expressions for its entries. Appendix A provides the detailed derivation of the terms in the FIM.
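The displayed FIM formulas are not reproduced above. For a deterministic mean H_k(η) observed in circular white Gaussian noise with per-entry variance σ̃², the generic (Slepian–Bangs-type) entry — quoted as the standard result for this model class, not as this paper's exact equation — is

```latex
[\mathbf{F}_\eta]_{i,j}
  \;=\; \frac{2}{\tilde{\sigma}^{2}}\sum_{k=1}^{K}
  \operatorname{Re}\!\left\{
     \operatorname{tr}\!\left(
        \left(\frac{\partial \mathbf{H}_k}{\partial \eta_i}\right)^{\!H}
        \frac{\partial \mathbf{H}_k}{\partial \eta_j}
     \right)\right\}.
```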
B. FIM for the UE position parameters ξ
To derive the FIM for the UE position parameters ξ, we use the relation F(·) in (18) together with the Jacobian J = ∂η/∂ξ. The FIM for ξ is then F_ξ = J^T F_η J, and a lower bound for the MSE of the UE position follows from its inverse. When we only utilize parameters associated with the direct path for the UE positioning task, the error covariance satisfies an analogous bound. When we only utilize parameters associated with the qth RIS path for the UE positioning task, the error covariance matrix satisfies the corresponding bound with C_{ξr,q} = (J_{r,q}^T C_{ηr,q}^{−1} J_{r,q})^{−1}, where C_{ηr,q} = [F_η^{−1}]_{5q+3:5q+7, 5q+3:5q+7}, J_{r,q} = ∂η_{r,q}/∂ξ_{r,q} ∈ R^{5×5}, and F_{ηr,q} ∈ C^{5×5} is the FIM of η_{r,q}.
Proof. See Appendix C.
IV. ESTIMATION OF CHANNEL PARAMETERS
In this section, we formulate optimization problems to estimate the AoDs from the BS (g Bd , v Bd ), and propagation delays τ d and {τ r2,q } Q q=1 along the LOS path from the BS to the UE and the reflection paths from each RIS to the UE, respectively. We also estimate the AoAs at the UE (g Ud , v Ud ) and (g Ur,q , v Ur,q ) along the LOS path and reflection paths, respectively.
Because the noise Ñ in (22) is Gaussian, the MLE of η in (16) is the nonlinear least-squares fit of the channel model to the observations. However, directly solving this problem is challenging because it is nonlinear and nonconvex in η. We note, however, that the rank of H_k in (22) is Q + 1, and we can leverage this low-rank property to estimate the channel parameters.
The AoD (g_Bd, v_Bd) for the BS-UE link is given in (13). We discuss the estimation of g_Bd; the estimation of v_Bd follows analogously. Since g_{Br,q}, for all q, in (4) is known a priori — as we assume that the position of the qth RIS is known — we only need to estimate g_Bd from (34), by solving a least-squares problem over A_B = [A_{Br}, āB(g_Bd)] with A_{Br} = [āB(g_{Br,1}), …, āB(g_{Br,Q})]. We assume g_{Br,q} is distinct for each q, which can be achieved by carefully deploying the RISs. Thus A_{Br}^H A_{Br} is invertible and we have the following result.
Here ãB(g_Bd) denotes the normalized residual of āB(g_Bd) after projection onto the columns of A_{Br}. Proof. Note that P_r āB(g_Bd) is the orthogonal projection of āB(g_Bd) onto the column space of A_{Br}, and ãB(g_Bd) is the residual vector of the projection, with normalization. Therefore, [A_{Br}, ãB(g_Bd)] spans the same subspace as [A_{Br}, āB(g_Bd)]. For convenience, we define Ã_{Br} as the Gram-Schmidt orthogonalization of the columns of A_{Br}. With à = [Ã_{Br}, ãB(g_Bd)], one can check that Ã^H à = I. Therefore, from the equivalence of the subspaces, the residual of R_B w.r.t. Col(A) is the same as that w.r.t. Col(Ã). The objective function in (35) can then be rewritten accordingly,
and the last inequality comes from the projection structure, which yields exactly the problem provided in (36). This concludes the proof.
The optimization variable g_Bd in problem (36) is scalar, and various standard optimization techniques can be applied to find the optimal solution. Suppose ĝ_Bd is the optimal solution found. From it we form, in (38), a quantity related to the power level of each path; this path order is utilized to distinguish the direct and reflection paths in the following subsections.
We define and reshape the observations so that Q_D ∈ C^{2×DN} and N_D ∈ C^{K×DN}. We use the multiple signal classification (MUSIC) method to estimate the delays from the observations in (40). Intuitively, when the noise level is low, the covariance matrix C_D in (41) separates into signal and noise subspaces, from which the delays can be read off. Suppose the estimated delays are {τ̂_i}_{i=1}^{Q+1}. A heuristic way to identify the delay of the direct path is to take the minimum estimated delay. However, this approach may result in errors when the SNR is low, as our simulation in Section VII shows. Therefore, we use (38) instead to assign the delays to the direct and reflection paths; specifically, after estimating the delays, we use the path ordering s_t obtained from the sorting in (38). For the AoAs, we present only the method to estimate g_{Ur,q} and g_{Ud}; the same approach can be applied to the estimation of v_{Ur,q} and v_{Ud}. To this end, we reshape {R̃_k}_{k=0}^{K−1} over the dimension of ãU(g_{Ud}) and proceed analogously.
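The displayed equations (40)–(44) are not reproduced above. As an illustration of the MUSIC step just described, here is a minimal delay-domain MUSIC sketch on synthetic data; the steering model across subcarriers, the grid, and all sizes are our assumptions for the illustration, not the paper's exact expressions:

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, W = 64, 3, 100e6                     # subcarriers, number of paths (Q+1), bandwidth [Hz]
taus_true = np.array([50e-9, 120e-9, 210e-9])

def steer(tau):
    # Assumed frequency-domain delay signature across the K subcarriers.
    return np.exp(-2j * np.pi * np.arange(K) * tau * W / K)

# Synthetic multi-snapshot observations (K x snapshots).
A = np.stack([steer(t) for t in taus_true], axis=1)
S = rng.standard_normal((L, 200)) + 1j * rng.standard_normal((L, 200))
Y = A @ S + 0.05 * (rng.standard_normal((K, 200)) + 1j * rng.standard_normal((K, 200)))

C = (Y @ Y.conj().T) / Y.shape[1]          # sample covariance
eigval, eigvec = np.linalg.eigh(C)         # ascending eigenvalues
En = eigvec[:, : K - L]                    # noise subspace

grid = np.linspace(0, 400e-9, 4001)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t))**2 for t in grid])
peaks = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
best = peaks[np.argsort(pseudo[peaks])[-L:]]
print(np.sort(grid[best]) * 1e9, "ns")     # close to 50, 120, 210 ns
```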
V. UE POSITION ESTIMATION
In this section, we present a fusion method to infer the position of the UE from the estimated channel parameters.
A. Fusion via Linear Combination
Recall that the error covariance matrices of the UE position estimates obtained from the direct path and from the Q reflection paths are given by (29) and (31), respectively.
The following lemma presents the proposed fusion method based on the error covariance matrices.
Lemma 2. Assume that the estimates of p_U from the direct path and from the reflection paths are unbiased and independent, with error covariances as in (29) and (31). Then the optimal linear combination of these estimates is obtained with inverse-covariance weights. Proof. To obtain a linear combination of the estimates, write the combined estimator with weight matrices A_d ∈ C^{3×3} and B_q ∈ C^{3×3}, ∀q. In order to obtain an unbiased estimator, we must have A_d + Σ_{q=1}^Q B_q = I. To minimize the MSE of p̂_U, we need to solve the constrained problem (47) of minimizing the trace of the combined error covariance. Substituting the expression (46) and setting the first-order derivative of the objective function in (47) to zero gives the claimed weights. This concludes the proof.
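The weight formulas stated in the lemma are not displayed above, but the construction described is the familiar inverse-covariance (BLUE-style) fusion: each weight is the normalized inverse of that path's error covariance. A small sketch of this rule — our rendering of the standard result, consistent with but not copied from the paper's equations:

```python
import numpy as np

def fuse(estimates, covariances):
    # Inverse-covariance fusion of independent unbiased estimates:
    #   p_hat = (sum_i C_i^{-1})^{-1} * sum_i C_i^{-1} @ p_hat_i
    # The weights A_i = (sum_j C_j^{-1})^{-1} C_i^{-1} sum to the identity.
    inv_covs = [np.linalg.inv(C) for C in covariances]
    total = np.linalg.inv(sum(inv_covs))
    fused = total @ sum(Ci @ p for Ci, p in zip(inv_covs, estimates))
    return fused, total                     # fused estimate and its error covariance

# Illustrative 3-D position estimates from a direct path and one reflection path.
p_d  = np.array([10.2, -3.9, 1.6]);  C_d  = np.diag([0.25, 0.25, 0.50])
p_r1 = np.array([ 9.8, -4.1, 1.4]);  C_r1 = np.diag([0.10, 0.40, 0.20])
p_hat, C_hat = fuse([p_d, p_r1], [C_d, C_r1])
print(p_hat, np.diag(C_hat))                # fused covariance is smaller than either input
```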
Remark 1. Since we assume independence among the UE position estimates from these paths, Proposition 1 gives the bounds in (48) and (49), which we denote by C̄. Therefore, when the exact error covariances in (45) are not available, we can employ the lower bounds in (48) and (49) in the linear combination instead.
B. Asymptotic MLE
We now show that the proposed linear combination in (50) is approximately the MLE in the asymptotic regime of large sample size. We first introduce the extended invariance principle (EXIP), whose re-estimate is asymptotically equivalent to the MLE; we then show that the proposed linear combination is approximately the optimal solution of the EXIP problem.
The EXIP re-estimate is asymptotically equivalent to ξ̂ as Z → ∞, for an appropriate weight matrix W. In (18), the UE position parameters are related to the channel parameters η via the function F(·), so we can apply the EXIP approach to obtain the UE position estimate from the channel parameter estimates.
Specifically, from the estimator η̂ in Section IV, applying Theorem 1, we can solve the weighted least-squares problem (52) for ξ, where the weight matrix W is given in (33). Note that the inference model in [33], [40] with geometric mapping is equivalent to letting W = I in (52). A gradient-based method can be utilized to find the optimum of (52), which is, however, sensitive to the initialization. In what follows, we show that (50) is an approximate solution of the problem (52).
Let F_d and F_{r,q}, q = 1, …, Q, be functions such that η_d = F_d(ξ_d) and η_{r,q} = F_{r,q}(ξ_{r,q}). The objective function in (52) can be approximated as in (53), where ξ̂_d is inferred from η̂_d and ξ̂_{r,q} is inferred from η̂_{r,q}; further details are given in Section V-C. The first approximation follows from F_d(ξ̂_d) ≈ η̂_d and F_{r,q}(ξ̂_{r,q}) ≈ η̂_{r,q}. The second approximation in (53) holds from the Taylor series expansion. Setting the first-order derivative of (53) to zero gives (54). Therefore, the solution in (54) is an approximate solution of (52), which is asymptotically the MLE in the large-sample regime.
The following proposition shows that the optimal linear combination in (50) is equivalent to the solution in (54) when the paths are independent.
Proposition 2. Suppose the paths are independent; in other words, F_η is block-diagonal across the paths. Then the solution in (54) is equivalent to (50); that is, [ξ̂]_{1:3} coincides with the fused position estimate. Proof. See Appendix D.
In summary, we have shown that the optimal linear combination in (50) approximates the optimal solution of the EXIP method when the paths are independent. Therefore, it is approximately equivalent to the MLE in the large-sample regime.

C. Estimation of p̂_U

Here f_{B,d} = sin θ_{Bd} cos φ_{Bd}. From (13) and (14), we have f = [M̄_R; 0, I] z = Ã z. However, the estimate f̂ = [ĝ_Ud, v̂_Ud, ĝ_Bd, v̂_Bd]^T may not satisfy this relation exactly due to corruption by noise.
We employ the weighted least squares method by solving (56), where C_f̂ = [F_η^{−1}]_{2:5,2:5}. If we ignore the constraint, the solution ẑ is available in closed form; we then project this solution onto the feasible region of the problem in (56). From ẑ we obtain the estimates (d̂, θ̂_Bd, φ̂_Bd), and the estimated UE position p̂_U from the direct path then follows from them via the LOS geometry.
VI. DISCUSSIONS
In this section, we propose methods to optimize the phase shifts of a RIS for the purpose of positioning a UE. We also discuss the extension of our proposed framework to the multi-BS and multi-UE scenarios.
A. Design of RIS Phase Shifts
We consider the RIS-aided positioning scenario in Fig. 2, where the RIS aims to serve multiple UEs.
Specifically, the phase shifts of a RIS are designed to serve UEs with elevation angles in a given range. Recall that the gain of the reflection path is proportional to |a_R^H(f_{R2,q}, v_{R2,q}) Θ_q a_R(f_{R1,q}, v_{R1,q})|. Since the UE-side quantities f_{R2,q} and v_{R2,q} are unknown a priori, we make an unbiased design of Θ_q based on the served UEs in the following.
For convenience, we combine Θ_q a_R(f_{R1,q}, v_{R1,q}) into one variable θ̄_q ∈ C^{M×1}. If the UEs are uniformly distributed in the elevation range [θ_l, θ_u] and azimuth range [φ_l, φ_u], then we consider the optimization problem (58) of maximizing the expected reflection gain over θ̄_q. Since directly solving (58) is challenging, we define a matrix D_A ∈ C^{M×Z} whose columns have the form āR(f) ⊗ āR(v), where (f, v) is chosen on a discretized grid over the served ranges, and reformulate the problem in (58) accordingly. If there is no constraint on θ̄_q, the solution is the dominant left singular vector of D_A. In order to satisfy the unit-modulus constraint imposed on θ̄_q, we let θ̄_q be the element-wise complex phase of the dominant left singular vector of D_A.
The phase-shift design of the qth RIS is then Θ_q = diag(θ̂_q), with θ̂_q obtained from θ̄_q and the known BS-side steering vector a_R(f_{R1,q}, v_{R1,q}).
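A compact numerical sketch of this design step follows. The grid construction, SVD, and phase-only projection mirror the description above; the element-wise division that strips off the known BS-side steering is our assumption for recovering θ_q from θ̄_q, and all numbers are illustrative:

```python
import numpy as np

def ula_factor(f, side):
    # Unit-modulus 1-D steering factor (normalization omitted; phases are all that matter here).
    return np.exp(1j * np.pi * f * np.arange(side))

M, side = 64, 8
# Discretized UE directions served by the RIS (illustrative ranges).
thetas = np.deg2rad(np.linspace(60, 90, 16))
phis = np.deg2rad(np.linspace(-30, 30, 16))
cols = []
for th in thetas:
    for ph in phis:
        f, v = np.sin(th) * np.sin(ph), np.cos(th)
        cols.append(np.kron(ula_factor(f, side), ula_factor(v, side)))
D_A = np.stack(cols, axis=1)                       # M x Z dictionary of steering vectors

u1 = np.linalg.svd(D_A)[0][:, 0]                   # dominant left singular vector
theta_bar = np.exp(1j * np.angle(u1))              # unit-modulus (phase-only) projection

# Strip the known BS-side steering (assumed element-wise) to get the RIS phases.
f1, v1 = 0.4, 0.3                                  # known BS->RIS direction cosines (made up)
a_bs_side = np.kron(ula_factor(f1, side), ula_factor(v1, side))
Theta_q = np.diag(theta_bar / a_bs_side)           # diag(theta_q), entries of unit modulus
print(np.allclose(np.abs(np.diag(Theta_q)), 1.0))  # True
```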
B. Extension to Multiple UEs
Assume there are L users. Let the true position of UE l be p_{U,l} = p_{U,1} + ∆_l, with ∆_l being the relative position w.r.t. the reference UE 1 at position p_{U,1}. We assume that the relative positions of the UEs are known through inter-UE measurements and message exchanges [41]–[43]. Assume each UE l first estimates its position independently as p̂_{U,l} with error covariance C_{pU,l}. To leverage the inherent correlations among the UEs, we employ the linear combination of estimates (62) as described in Section V-A, where A_l ∈ C^{3×3} is the combining matrix. Using arguments similar to Lemma 2, we can minimize the total MSE of the estimates, i.e., Σ_{l=1}^L tr(E[(p̂_{U,l} − p_{U,l})(p̂_{U,l} − p_{U,l})^H]), and obtain the expression of A_l in (63). Therefore, the estimated positions of the UEs are given by p̂_{U,l} = p̂_{U,1} + ∆_l.
After the fusion, the resulting error covariance of each UE is given by ( L l=1 C −1 pU,l ) −1 , ∀ l. This result can be utilized for the case where there are multiple BSs, which we discuss in the following subsection.
In particular, when the error covariance C pU,l in (63) is not available, we can employ the lower bound as an alternative, which can still achieve near optimal performance as we analyzed in Section V-A and Section V-B.
C. Extension to Multiple BSs
We now consider the case where there are P BSs. The received signal on the kth subcarrier at the UE is a superposition of the P transmissions, where X_i ∈ C^{N×T}. As in the case of a single BS, we assume X_i X_i^H = (T/D) I. Here, we further assume that X_i X_j^H = 0, ∀ i ≠ j. Thus, right-multiplying both sides of (64) by (D/T) X_i^H isolates the observation associated with the ith BS. Each BS then yields a position estimate, and the estimates are combined with matrices B_i ∈ C^{3×3}. Similarly, by minimizing the MSE of p̂_U in (66), the final estimate from the different BSs is obtained as an inverse-covariance combination, as in Section V-A.
VII. NUMERICAL RESULTS
In this section, we evaluate the proposed RIS-aided positioning method. We verify the achieved UE positioning accuracy by comparing it to the CRB under varying noise levels, and we also verify the channel parameter estimation accuracy. Numerical experiments are further conducted to provide insights into the system design. The path loss exponent for the reflection path is L_r = 2. We evaluate the positioning accuracy of the following methods:
• The proposed positioning method that distinguishes the direct path based on the estimated delay, labeled as "Direct+reflection paths, delay-based".
• The positioning method that utilizes only the direct path, labeled as "Direct path only".
We observe from Fig. 3 that our proposed method outperforms the benchmark approaches, with RMSE close to the CRB when the SNR is high. The result also verifies that distinguishing the paths based on the path energy provides better performance compared to the delay-based approach. We also observe that fusing the estimates from the direct and reflection paths achieves a better accuracy than using only the direct path, which validates the effectiveness of the RIS.
B. Channel Parameter Estimation Accuracy
In Figs. 4 and 5, we evaluate the RMSE of the channel parameters, i.e., $(\tau_d, \theta_{Bd}, \phi_{Bd})$ and $(\tau_{r2,1}, \theta_{R2,1}, \phi_{R2,1})$, obtained by using the proposed method. The CRBs of the estimators are also plotted as benchmarks. The simulation settings are the same as those in Fig. 3. As expected, the RMSEs of the estimated parameters in Figs. 4 and 5 are all close to their CRBs, which validates the effectiveness of the proposed method.
C. Direct Path Loss Exponent
In Fig. 6, we examine the impact of the direct-path loss exponent; the path loss exponents are set to $L_d = 4.5$ and $L_r = 2$. In Fig. 7, by using the techniques in Section VI, the proposed positioning method also achieves performance close to the theoretical bound in this scenario.
Fig. 7 also shows that when more than one BS and UE can cooperate and exchange information, the positioning accuracy can be further improved.
VIII. CONCLUSIONS
In this paper, we have developed a RIS-aided positioning framework. The framework consists of first estimating the RIS-aided channel parameters from the received signals and then using these estimates to infer the UE position. Through an optimal linear combination of the estimates from the direct and reflection paths, the proposed fusion method is shown via the EXIP framework to approximate the MLE asymptotically when the estimates are independent and the number of samples is large. We have also extended the proposed framework to scenarios with multiple UEs and multiple BSs.

APPENDIX A
DERIVATION OF THE JACOBIAN MATRIX

In this appendix, we derive the Jacobian matrix $\mathbf{J} \in \mathbb{R}^{(7+5Q) \times (5+2Q)}$ used in (27). Recalling $\boldsymbol{\eta}$ in (16) and $\boldsymbol{\xi}$ in (17), we write $\mathbf{J}$ in the form $\mathbf{J} = [\tilde{\mathbf{J}}_d^{\top}, \tilde{\mathbf{J}}_{r,1}^{\top}, \cdots, \tilde{\mathbf{J}}_{r,Q}^{\top}]^{\top}$, where $\tilde{\mathbf{J}}_d = \frac{\partial \boldsymbol{\eta}_d}{\partial \boldsymbol{\xi}} \in \mathbb{R}^{7 \times (5+2Q)}$ and $\tilde{\mathbf{J}}_{r,q} = \frac{\partial \boldsymbol{\eta}_{r,q}}{\partial \boldsymbol{\xi}} \in \mathbb{R}^{5 \times (5+2Q)}$. We first derive the Jacobian matrix of the direct path, $\tilde{\mathbf{J}}_d$, whose entries can be obtained through $\frac{\partial s_d}{\partial \mathbf{p}_U}$, with $s_d$ denoting any entry in $\boldsymbol{\eta}_d$. We next derive the Jacobian matrix of the reflection path, $\tilde{\mathbf{J}}_{r,q}$, whose entries can be obtained through
$$\frac{\partial \tau_{r2,q}}{\partial \mathbf{p}_U} = \frac{\mathbf{p}_U - \mathbf{p}_{R,q}}{c\,\lVert \mathbf{p}_U - \mathbf{p}_{R,q} \rVert_2}, \quad \frac{\partial \operatorname{Re}\{h_{r,q}\}}{\partial \operatorname{Re}\{h_{r,q}\}} = 1, \quad \frac{\partial \operatorname{Im}\{h_{r,q}\}}{\partial \operatorname{Im}\{h_{r,q}\}} = 1,$$
$$\frac{\partial s_r}{\partial \mathbf{p}_U} = \frac{\partial s_r}{\partial \theta_{R2,q}} \frac{\partial \theta_{R2,q}}{\partial \mathbf{p}_U} + \frac{\partial s_r}{\partial \phi_{R2,q}} \frac{\partial \phi_{R2,q}}{\partial \mathbf{p}_U},$$
with $s_r$ denoting any entry in $\boldsymbol{\eta}_r$. Here, we denote $\tilde{\mathbf{p}}_{U,q} = \mathbf{p}_U - \mathbf{p}_{R,q} = [\tilde{x}_{U,q}, \tilde{y}_{U,q}, \tilde{z}_{U,q}]^{\top}$. Therefore,
$$\frac{\partial \theta_{R2,q}}{\partial \mathbf{p}_U} = \frac{\left[\tilde{x}_{U,q}\tilde{z}_{U,q},\ \tilde{y}_{U,q}\tilde{z}_{U,q},\ -\tilde{x}_{U,q}^2 - \tilde{y}_{U,q}^2\right]^{\top}}{\lVert \tilde{\mathbf{p}}_{U,q} \rVert_2^3 \sqrt{1 - \tilde{z}_{U,q}^2 / \lVert \tilde{\mathbf{p}}_{U,q} \rVert_2^2}}.$$

APPENDIX B
PROOF OF (30)

When we utilize only the BS-UE link for UE positioning, the FIM is given by (68), where $\mathbf{C}_{\boldsymbol{\eta}_d} = [\mathbf{F}_{\boldsymbol{\eta}}^{-1}]_{1:7,1:7}$ and $\mathbf{J}_d = \frac{\partial \boldsymbol{\eta}_d}{\partial \boldsymbol{\xi}_d} \in \mathbb{R}^{7 \times 5}$. Thus, the error covariance matrix satisfies (69). Since $\mathbf{F}_{\boldsymbol{\eta}_d}^{-1} \preceq [\mathbf{F}_{\boldsymbol{\eta}}^{-1}]_{1:7,1:7}$, the corresponding inequality on the error covariance holds. Therefore, the result follows by combining (68) and (69).
This concludes the proof of (30).
Similarly, when only the $q$th RIS link is utilized for UE positioning, we can also obtain $\mathbf{C}^{(r,q)}_{\mathbf{p}_U} \succeq [\mathbf{C}_{\boldsymbol{\xi}_{r,q}}]_{1:3,1:3}$, with $\mathbf{C}_{\boldsymbol{\xi}_{r,q}} = (\mathbf{J}_{r,q}^{\top} \mathbf{F}_{\boldsymbol{\eta}_{r,q}} \mathbf{J}_{r,q})^{-1}$. The components for the reflection paths can be proved similarly; thus, it is sufficient to prove the two equalities in (70) and (71). Here, we partition $\mathbf{F}_{\boldsymbol{\eta}_{r,q}}$ into blocks, where $\mathbf{F}^{(h)}_{\boldsymbol{\eta}_{r,q}} \in \mathbb{C}^{3 \times 3}$ and the remaining matrices have matching dimensions. Then, for the first term in the product on the left-hand side (L.H.S.) of (70), we can calculate $\mathbf{J}^{\top} \mathbf{F}_{\boldsymbol{\eta}} \mathbf{J}$ as a block matrix with blocks $\mathbf{Z}_{1,1}$, $\mathbf{Z}_{1,2}$, $\mathbf{Z}_{2,1}$, and $\mathbf{Z}_{2,2}$, whose entries involve terms of the form $\mathbf{F}_{\boldsymbol{\eta}_{r,q}} \mathbf{J}^{(p)}_{r,q}$. From (73), we obtain (75); combining (75) and (76), we can verify the required identities. Thus, we have proved (70) and (71). This concludes the proof. | 2021-12-02T02:16:30.512Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "ac49675f9e57513a823090e81ce6f7101850711b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ac49675f9e57513a823090e81ce6f7101850711b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
236360288 | pes2o/s2orc | v3-fos-license | Quality of sleep among caregivers of Alzheimer disease patients: cross-sectional study from Saudi Arabia
Alzheimer’s disease (AD) is a chronic neurodegenerative disease that typically begins slowly and gradually worsens over time. It accounts for 60%–70% of dementia cases. The earliest presenting symptom is difficulty in remembering recent events. In the advanced stages of the disease, symptoms can include problems with language, disorientation (which may include getting lost easily), mood swings, loss of motivation, inability to care for oneself, and behavioral issues. As the patient’s condition declines, they often withdraw from family and society. Gradually, bodily functions are lost, ultimately leading to death. Although the speed of progression can vary, the typical life expectancy following diagnosis is three to nine years.
Much of the care for people with AD is provided by family members and friends. The majority (80%) of people with AD and related dementias receive care in their homes. 10 Each year, more than 16 million Americans provide more than 17 billion hours of unpaid care for family and friends with AD and related dementias. 11 In 2019 alone, these caregivers provided an estimated 18.5 billion hours of care. Approximately two-thirds of dementia caregivers are women, and about one in three caregivers (34%) is aged 65 years or older. Approximately one-quarter of dementia caregivers are "sandwich generation" caregivers; they care not only for an aging parent, but also for children under the age of 18 years. 12 Caregiving for AD patients is emotionally and cognitively exhausting. Many studies indicate that caregivers' overall health is adversely affected, [13][14][15] and their cognitive functioning may also decline. Among these deteriorations, sleep disturbances exacerbate the observed changes to mental, physical, and cognitive health. 16,17 The current study aimed to assess sleep quality among caregivers of patients with AD in the Aseer region, Saudi Arabia. Moreover, it attempted to identify the different predictors of sleep disturbance among the sampled caregivers.
METHODS
A descriptive cross-sectional study was conducted among 110 caregivers of AD patients at Abha Mental Health Hospital, Saudi Arabia, during the period from January to September 2018. Patient data were collected directly from the patients' medical records, while the caregivers were requested to complete a prestructured questionnaire. The questionnaire was developed by the authors with the help of a literature review and expert consultation. Informed consent was obtained from all participants in the study. The collected data included caregivers' demographic information, such as age, gender, education level, work data, and relationship with the patient. The duration/time of daily caregiving was calculated for each caregiver. Caregivers' quality of sleep was assessed using the Pittsburgh Sleep Quality Index (PSQI), a self-administered questionnaire that assesses sleep quality over a one-month time interval. 17 The measure consists of 19 individual items, creating 7 components that produce one global score. The component scores cover perceived sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. Each item is weighted on a 0-3 interval scale. The global PSQI score is then calculated by totaling the seven component scores, providing an overall score ranging from 0 to 21, where lower scores denote healthier sleep quality. The total score was categorized as good (score: 0-5), average (score: 6-13), or poor (score: 14-21) sleep quality.
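A minimal sketch of this scoring scheme, assuming the seven component scores have already been derived from the 19 items; the component names and the example values are illustrative, and the 0-5/6-13/14-21 cutoffs are applied exactly as categorized in the study:

```python
def psqi_global_score(components: dict) -> tuple:
    """Sum seven PSQI component scores (each 0-3) into a 0-21 global
    score and categorize it with the study's cutoffs."""
    expected = {"sleep_quality", "latency", "duration", "efficiency",
                "disturbances", "medication", "daytime_dysfunction"}
    assert set(components) == expected
    assert all(0 <= v <= 3 for v in components.values())
    total = sum(components.values())
    if total <= 5:
        category = "good"
    elif total <= 13:
        category = "average"
    else:
        category = "poor"
    return total, category

# Example caregiver with long sleep latency and poor sleep efficiency.
score, label = psqi_global_score({
    "sleep_quality": 2, "latency": 3, "duration": 1, "efficiency": 2,
    "disturbances": 1, "medication": 0, "daytime_dysfunction": 1,
})  # -> (10, "average")
```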
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the ethics and research committee of the College of Medicine, King Khalid University. After the data were extracted, they were revised, validated, coded, and statistically analyzed using the IBM Statistical Package for the Social Sciences (SPSS) version 22 (SPSS, Inc., Chicago, IL). All statistical analyses were conducted using two-tailed tests, and a P value of less than 0.05 was considered statistically significant. Descriptive analysis based on frequency and percent distribution was conducted for all variables, including the caregivers' demographic data and sleep quality items. Univariate relations between caregivers' bio-demographic data and their sleep quality were assessed using the Pearson chi-squared test.
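As an illustration of this univariate testing, the sketch below runs a Pearson chi-squared test on a made-up gender-by-sleep-quality contingency table; the counts are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: gender (female, male); columns: sleep quality (good, average, poor).
table = np.array([[10, 35, 14],
                  [12, 30,  9]])

chi2, p_value, dof, expected = chi2_contingency(table)
significant = p_value < 0.05  # two-tailed criterion used in the study
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, significant = {significant}")
```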
RESULTS
A sample of 110 caregivers of AD patients was considered in this study. Female caregivers constituted 53.6% of the sample, and 50% were below the age of 40 years. Moreover, 69.1% of the caregivers were married. Approximately 69% of the caregivers were the patients' siblings, and 11.8% were spouses. Only four of the caregivers were non-Saudi, and 40% were university graduates, while 30.9% had completed the intermediate education level. As for work status, 40% of the caregivers were working, and 44.5% reported a monthly income below SR3000. Moreover, 60% of the caregivers spent more than 10 hours daily with their patients, and 46.4% were anxious about contracting the same disease or beginning to present the same symptoms as their patients (Table 1).
DISCUSSION
Caring for patients with Alzheimer dementia is more burdensome and demanding than caregiving for patients with other disorders because of the progressive impairment it causes in cognitive and physical functioning. Sleep abnormalities are frequent in neuropsychiatric diseases such as AD and Parkinson disease (PD). 18 Sleep disturbances often affect the quality of life of these patients and their caregivers alike because of decreased daytime attention and altered circadian rhythms, which in turn disturb the sleep habits of caregivers and increase their burden. 19,20 This study aimed to assess the sleep quality of caregivers of patients with AD and to identify its determinants. The results revealed that while slightly more than a quarter of the caregivers had poor sleep quality, the majority had average sleep quality. The most affected sleep component was sleep latency, as nearly half of the caregivers required a long time to fall asleep more than three times a week. Moreover, sleep efficiency was poor among more than half of the caregivers, but sleep disturbances were recorded among very few of them. Further, daytime dysfunction was high among less than 16% of the caregivers, possibly due to the overall changes in sleep hygiene. About 40% of the caregivers subjectively rated their sleep quality as either fair or very poor. Regarding the determinants of sleep quality, poor quality was more common among non-Saudi caregivers, who are mostly employed for caregiving and spend more time with AD patients. Poor sleep quality was also evident among more than one-third of the caregivers who spent 10 hours or more with their patients. Caregivers who were spouses also experienced somewhat poorer sleep quality than their counterparts, which can also be explained by their spending a considerable amount of time with their affected partners.
Much of the literature on sleep hygiene among the caregivers of AD patients reports sleep disturbances in the caregivers and observes that several contributing factors are directly related to the disease, including nocturnal agitation and sundowning, insomnia, sleep-related movement disorders, obstructive sleep apnea, circadian rhythm disorders, and medication-induced sleep impairment. 18,21,22 Cupidi et al conducted a cross-sectional study on 40 patients with probable AD, 40 patients with PD without dementia, and their primary caregivers during their routine visits to outpatient clinics. 23 The researchers reported that in the AD group, 18 (45%) reported poor sleep quality; in the PD group, 22 (55%); and among controls, 45 (30%). The mean global PSQI score of the PD patients was 6.25 (maximum score: 21). Sleep disturbances in caregivers of persons with dementia were reviewed by McCurry et al, who reported sleep disturbances in 19% to 68% of caregivers. 24-26 Alhazzani et al conducted a cross-sectional study in Saudi Arabia and revealed that AD caregivers tended to be sons or daughters (69.1%) or spouses (11.8%) and that the majority of caregivers had poor quality of sleep. 27 The global PSQI score positively correlated with the duration of caregivers' daily stay with AD patients (r=0.272, p=0.004), but it did not correlate significantly with either the caregivers' or the patients' ages. Alshammari et al conducted a study in Saudi Arabia to discover the characteristics of informal caregivers of elderly patients; identify the socioeconomic, psychological, and physical consequences experienced by informal caregivers; and measure their burdens and needs. 28 The researchers concluded that most caregivers (78.1%) suffered from musculoskeletal problems. The mean Zarit Burden Interview score was 31.3, which indicated a moderate burden. More than half of these caregivers requested blood pressure- (55.6%) and blood sugar-measuring devices (53%). Three quarters (74.9%) of these caregivers wanted educational training to cope with emergencies. Most caregivers expressed a need for frequent healthcare for themselves (58.4%) and a home health visit service (72.9%) to support them in the care of their elderly relatives.
It is evident that sleep quality is inadequate among the caregivers of persons with AD. Many precipitating, predisposing, and perpetuating factors, including poor sleep routines and increased physical and psychological burdens on the caregivers, are frequently associated with sleep complaints. The findings of this study should be viewed in light of certain limitations, including the small sample size, the fact that it was conducted in one region only, and the lack of objective measures of sleep quality.
CONCLUSION
The burden on caregivers of AD patients is huge and often underrecognized. The sleep quality of AD patients' caregivers in this study was not adequate, particularly given that most caregivers were young and of working age. Poor sleep quality affected the caregivers' daily activities but typically remained undertreated. Continuous training and education for caregivers with regard to the nature of AD and their patients' needs will help improve caregivers' quality of life in general and sleep hygiene in particular. Furthermore, caregivers should learn how to cope with the stress and exhaustion that they face during caregiving. It is crucial to allocate resources to raise awareness about the burden on caregivers of patients with AD and to promptly identify, treat, and support them as much as the patients themselves. | 2021-07-27T00:05:43.880Z | 2021-05-25T00:00:00.000 | {
"year": 2021,
"sha1": "7e8c68193a316a25ffd001df57b94d1e7f80fafd",
"oa_license": null,
"oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/8020/5001",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f32d9506f5f92e576c6ed40d28dfff292b14b653",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
91384887 | pes2o/s2orc | v3-fos-license | Perspectives and Uses of Non-Saccharomyces Yeasts in Fermented Beverages
Fermented beverages such as wine, cider and beer are normally fermented with Saccharomyces yeasts due to their well-known fermentative behavior. These yeasts have been extensively investigated and are used in commercial processes. On the other hand, non-Saccharomyces yeasts were always considered contaminants in winemaking and brewing. Most researchers in the past argued that these yeasts produce several compounds that may alter the sensory quality of wines and beers. However, recent studies have demonstrated that their fermentative metabolism can be regulated and directed toward the production of compounds of sensory importance. Currently, some non-Saccharomyces yeasts belonging to the genera Kloeckera, Candida and Hanseniaspora are gaining importance due to their high potential for use in the production of fermented beverages such as special wines and craft beers. The emergence of new consumption patterns and market niches demanding products with new sensory characteristics has driven the exploitation of these yeasts.
Introduction
Fermentation of wines, beers and ciders is traditionally carried out with Saccharomyces cerevisiae strains, the most common and commercially available yeast. These strains are well known for their fermentative behavior and technological characteristics, which allow products of uniform and standard quality to be obtained. Saccharomyces cerevisiae is the most widely used yeast in fermentative processes. In wine fermentation, strains with specific characteristics are needed, for instance, high producers of ethanol able to reach values of 11-13% v/v, typically found in this beverage. On the other hand, beers and ciders contain lower amounts of ethanol with a balanced and distinctive sensory profile characteristic of each one. In recent years, new consumption trends and requirements for new and innovative products have emerged. This situation has prompted a rethinking of existing fermented beverages in order to meet the demands of consumers. Yeasts are largely responsible for the complexity and sensory quality of fermented beverages. Based on this, current studies are mainly focused on the search for new types of yeasts with technological applications. Non-Saccharomyces yeasts have always been considered contaminants in the manufacture of wine and beer. Therefore, procedures for eliminating them are routinely utilized, such as must pasteurization, addition of sulfite and sanitization of equipment and processing halls. In recent years, the negative perception of non-Saccharomyces yeasts has been changing because several studies have shown that, during spontaneous fermentations of wine, these yeasts play an important role in defining the sensory quality of the final product. Based on this evidence, the fermentative behavior of some non-Saccharomyces yeasts is being studied in depth with the purpose of finding the most adequate conditions and the most suitable strains to be utilized in the production of fermented beverages.
Yeasts
Yeasts are eukaryotic microorganisms that inhabit a variety of ecological niches such as water, soil, air and the surfaces of plants and fruits. Commonly, they are present during the decomposition of ripe fruits and participate in the fermentation process. In this natural environment, the yeasts find the nutrients and substrates necessary for their metabolism and fermentative activity [1,2]. Yeasts are not nutritionally demanding compared to other microorganisms such as lactic acid bacteria. To support their growth, they need common compounds such as fermentable sugars, amino acids, vitamins, minerals and also oxygen. Morphologically, yeasts are very diverse, with round, ellipsoidal and oval shapes predominating. During identification, microscopic evaluation is the first resource, followed by microbiological and biochemical tests; subsequently, assays of sugar fermentation and assimilation of amino acids are necessary [3]. The production of and tolerance to ethanol, organic acids and SO2 are also important tools to differentiate among species. The reproduction of yeasts is mainly by budding, which results in a new and genetically identical cell. Budding is the most common type of asexual reproduction, although cell fission is characteristic of yeasts belonging to the genus Schizosaccharomyces (Figure 1). Cultivation conditions leading to nutrient starvation, such as the lack of amino acids, induce sporulation, which is a mechanism used by yeasts to survive under unfavorable conditions. As a consequence of sporulation, yeast cells undergo genetic variability. In industrial fermentation processes, asexual reproduction of yeasts is preferable to ensure the conservation of the genotype and to maintain their fermentative behavior over time. Regarding their metabolism, yeasts are usually characterized by fermenting a broad spectrum of sugars, among them glucose, fructose, sucrose, maltose and maltotriose, which are found in ripe fruits and processed cereals. In addition, yeasts tolerate acidic environments with pH values around 3.5 or even less. According to technological convenience, yeasts are divided into two large groups, namely Saccharomyces and non-Saccharomyces. Morphologically, Saccharomyces yeasts can be round or ellipsoidal in shape depending on the growth phase and cultivation conditions. S. cerevisiae is the most studied species and the most utilized in the fermentation of wines and beers due to its excellent fermentative capacity, rapid growth and easy adaptation. These yeasts tolerate concentrations of SO2 that most non-Saccharomyces yeasts normally do not survive. However, despite these advantages, it is possible to find in nature representatives of S. cerevisiae that do not necessarily present these features.
Non-Saccharomyces yeasts
Non-Saccharomyces yeasts are a genetically diverse group of microorganisms with specific metabolic characteristics and high potential for use in fermentation processes. In the past, many of them were considered contaminants due to the production of compounds that alter the sensory quality of wines [4,5]. With the purpose of eliminating them and avoiding their fermentative activity, for instance in wine processing, disinfection of fermentation tanks and containers with sulfite is commonly performed. However, over time, the importance of non-Saccharomyces yeasts in spontaneous fermentation has been demonstrated, since they contribute positively to the definition of the sensory quality of wines. These yeasts predominate at the initial stage of spontaneous fermentation [6][7][8] until a certain concentration of ethanol is reached (usually between 4 and 5% v/v), at which point they are inhibited due to the effect of the ethanol and the depletion of dissolved oxygen [9,10]. At the end of the process, Saccharomyces yeasts, the most resistant to ethanol, predominate and complete the fermentation. It has been reported that some non-Saccharomyces yeasts are able to survive toward the end of spontaneous fermentation and exert their metabolic activity, thus contributing positively to the sensory quality of wines. Based on this evidence, in recent years many researchers have focused their studies on understanding the nature and fermentative activity of non-Saccharomyces yeasts [8,[11][12][13][14][15][16][17][18][19][20][21]. The findings demonstrated the enormous potential of these yeasts for use in the fermentation of traditional and nontraditional beverages. Although most non-Saccharomyces yeasts show some technological disadvantages compared to Saccharomyces cerevisiae, such as lower fermentative power and lower production of ethanol, they possess characteristics that are absent in S. cerevisiae, for instance the production of high levels of aromatic compounds such as esters, higher alcohols and fatty acids [22,23]. In addition, it has been reported that the fermentative activity of these yeasts is manifested in the presence of small amounts of oxygen, which leads to an increase in cell biomass and a decrease in ethanol yield, a strategy that can be used to reduce the ethanol content of wines produced in coculture with S. cerevisiae [24][25][26]. With the aim of exploiting the positive characteristics of non-Saccharomyces yeasts and reducing their negative impact, fermentations with mixed and sequential cultures with S. cerevisiae can be performed to produce fermented beverages with different sensory profiles [27][28][29]. The most important fact is related to their potential for producing a broad variety of compounds of sensory importance necessary to improve the organoleptic quality of wines and beers. The findings reported so far in the literature have prompted a reassessment of the role of these yeasts in fermentative processes and an evaluation of their use in the development of new products. The most studied non-Saccharomyces yeasts that have gained special importance for researchers include Candida, Kloeckera, Hanseniaspora, Brettanomyces, Pichia, Lachancea and Kluyveromyces, among others.
Fermentative metabolism of sugars
Both non-Saccharomyces and Saccharomyces yeasts share common pathways for the central metabolism of carbon; thus, both groups metabolize glucose through glycolysis. However, the mechanisms involved in the regulation of respiro-fermentative metabolism can differ significantly among them [30]. Glycolysis operates indistinctly under aerobic and anaerobic conditions, and through it, glucose is metabolized to pyruvate by means of a series of biochemical reactions (Figure 2). Under anaerobic or oxygen-limited conditions, pyruvate is converted to acetaldehyde and then to ethanol, and as a result two net moles of ATP are generated. Under fully aerobic conditions and in the absence of any repression effect, the generation of energy is greater since glucose undergoes complete oxidation, and as a result 36 net moles of ATP per mole of glucose are generated. The low energy yield obtained by yeasts under anaerobic conditions forces the cell to increase the flux of glucose consumption in order to obtain a higher amount of energy in the form of ATP. As a consequence, ethanol accumulates in the fermentation medium and exerts its inhibitory effect, eventually stopping the fermentative activity of the yeasts [31]. The low amount of energy generated under anaerobic conditions is used by the yeast cells for maintenance and growth requirements. Glucose is easily transported and metabolized inside the cell; however, disaccharides such as sucrose, maltose or lactose must first be hydrolyzed to their simple forms (hexoses), which are then catabolized in the glycolytic pathway. Sucrose is hydrolyzed to fructose and glucose, maltose to two glucose units and lactose to glucose and galactose. The disaccharides are preferentially hydrolyzed in the periplasmic space before entering the cytosol. Under anaerobic conditions, besides ethanol, glycerol is also produced, thus contributing to restore the redox balance inside the cell. The production of glycerol increases in fermentations with musts of high specific gravity as a response to osmotic stress [32]. It has been found that yeasts unable to metabolize dihydroxyacetone (Figure 2) are not capable of producing glycerol, and as a consequence dihydroxyacetone accumulates and inhibits the fermentation. Moreover, apart from being metabolized via glycolysis, glucose is also broken down by complementary pathways that are not necessarily related to the generation of energy. The hexose monophosphate pathway (HMP), also known as the pentose phosphate cycle, usually accompanies the glycolytic pathway [33]. In addition, during fermentation yeasts produce small amounts of acetic acid, either from acetaldehyde or from acetyl-CoA (Figure 2). Acetic acid is the main organic acid produced by yeasts during the fermentation of glucose, and it is responsible for the acidification and the decrease of pH of the medium. Ethanol is the most important fermentation by-product, and from the technological point of view, the production capacity of yeasts is an important parameter that determines their usability in fermentative processes. Gay-Lussac defined a theoretical stoichiometric relationship to explain the production of ethanol by Saccharomyces cerevisiae yeasts, which is:

C6H12O6 → 2 C2H5OH + 2 CO2 (1)

According to this relationship, from 180.0 grams of glucose, 92.0 grams of ethanol and 88.0 grams of carbon dioxide are produced, which results in a theoretical yield of 0.511 g ethanol/g glucose.
However, in practice, besides ethanol and CO2, the production of biomass, glycerol and other minority compounds also occurs, that is:

Glucose → Ethanol + CO2 + Biomass + Glycerol + minor compounds (2)

At industrial scale, a yield of 0.45 g ethanol/g glucose is acceptable [34]. In the case of fermentations with non-Saccharomyces yeasts, lower yields are commonly observed. Regarding glycerol, in the case of S. cerevisiae, its production represents approximately 3% of the utilized sugar. Minor compounds are represented by higher alcohols, esters, aldehydes and organic acids, among others.
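A quick numerical check of these figures, using rounded molar masses and the industrial yield quoted above (pure illustration):

```python
# Molar masses in g/mol (rounded).
GLUCOSE, ETHANOL, CO2 = 180.16, 46.07, 44.01

# Gay-Lussac stoichiometry: 1 glucose -> 2 ethanol + 2 CO2.
ethanol_yield = 2 * ETHANOL / GLUCOSE   # ~0.511 g ethanol / g glucose
co2_yield = 2 * CO2 / GLUCOSE           # ~0.489 g CO2 / g glucose

# Practical efficiency at the industrial yield of 0.45 g/g quoted above.
efficiency = 0.45 / ethanol_yield       # ~0.88 of the theoretical maximum
print(round(ethanol_yield, 3), round(co2_yield, 3), round(efficiency, 2))
```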
Importance of oxygen
Oxygen is an important element during the complete oxidation of glucose, since it serves as the final electron acceptor under aerobic conditions. It is also essential for other metabolic processes such as the synthesis of structural components of the cytoplasmic membrane of yeasts. During alcoholic fermentation, as ethanol accumulates, it exerts a detrimental effect on the integrity and stability of the cytoplasmic membrane [31]. Under this condition, the supply of small amounts of oxygen to the medium through aeration promotes the synthesis of unsaturated fatty acids and sterols (mainly ergosterol), which are important components of the yeast cell membrane. The compounds produced can thus be used to replace the membrane fraction damaged by ethanol, which acts as a solvent [35,36]. The replacement of unsaturated fatty acids and sterols is important to maintain cell viability and to allow the yeasts to complete the fermentation successfully. From the technological point of view, the supply of small amounts of oxygen is recommended in fermentations with musts of high specific gravity in order to avoid drawbacks such as sluggish fermentation. It is also necessary for promoting the fermentative metabolism of non-Saccharomyces yeasts that are unable to ferment under fully anaerobic conditions [37]. The optimization of the aeration rate is very important to ensure the predominance of the fermentative metabolism and to reach the highest ethanol yield. In Crabtree-negative yeasts, as the concentration of oxygen in the medium increases above a certain value, the metabolism may become predominantly oxidative; thus, the ethanol yield decreases and the production of biomass increases. The highest ethanol yield can be achieved by properly adjusting the aeration rate of the fermentation medium. Aeration also affects the production of glycerol by yeasts; as the concentration of oxygen increases, the production of glycerol decreases. From the technological point of view, aeration of the fermentation medium is an interesting tool to control the metabolic activity of non-Saccharomyces yeasts during the fermentation of, for instance, wines and beers [38,39]. In addition, aeration can also be used in winemaking to improve the quality of wines, since it promotes the transformation of phenols, which reduces astringency.
Production of higher alcohols
During alcoholic fermentation, both non-Saccharomyces and Saccharomyces yeasts produce diverse volatile compounds of sensory importance, such as higher alcohols, aldehydes, fatty acids and esters, in different concentrations depending on the yeast species and the fermentation conditions. The harmonic balance of these compounds determines the sensory quality of the fermented beverage. Higher alcohols are a group of compounds that mostly confer an unpleasant organoleptic character when present at high concentrations [40,41]. In adequate concentrations, they contribute positively to defining the organoleptic quality of alcoholic beverages such as wines, beers and ciders. They are produced in the cytosol and then exported outside the yeast cell, where they accumulate. Higher alcohols result from the decarboxylation of ketoacids, which leads to the formation of the respective aldehydes; these are then reduced to form the corresponding higher alcohols (Figure 3). The ketoacids can originate either from the metabolism of glucose or from the catabolism of amino acids [42,43], which the yeast cell takes up from the fermentation medium. The synthesis of higher alcohols involves the participation of at least three enzymes: a transaminase, a decarboxylase and an alcohol dehydrogenase. Factors that increase the metabolism of sugars and amino acids promote the synthesis of higher alcohols; these include the fermentation temperature, the amino acid concentration and the composition of the fermentation medium.
Production of esters
Esters are a group of compounds that mostly impart positive sensory characteristics to fermented beverages such as wines, beers and ciders. They are formed by the action of specific enzymes that catalyze the reaction between an alcohol and a volatile fatty acid (Figure 4). The synthesis of esters by yeast initially involves the energy-dependent activation of fatty acids to acyl coenzyme A and the subsequent condensation of the activated compound with an alcohol present in the medium to form the corresponding ester [44]. From the sensory point of view, acetate esters are the most important compounds present in fermented beverages; they include ethyl acetate, butyl acetate, propyl acetate, phenylethyl acetate and amyl acetate, among others. The esters produced by S. cerevisiae involve the activity of at least three acetyltransferases (AAT, EC 2.3.1.84): an alcohol acetyltransferase, an ethanol acetyltransferase and an isoamyl alcohol acetyltransferase [45,46]. Other enzymes, such as ester synthase, have also been reported to participate in the synthesis of esters; however, the relevance attributed to the activity of this enzyme is quite limited. Ethyl acetate is the most abundant ester present in wines and is largely responsible for their sensory character. Studies carried out with non-Saccharomyces yeasts on their ability to produce esters have allowed the selection of species of Hanseniaspora and Pichia able to promote the esterification of various alcohols, such as ethanol, isoamyl alcohol and 2-phenylethanol, to produce the corresponding esters [47].
Most important non-Saccharomyces yeasts

4.1 Candida yeasts
In recent years, the fermentative behavior of some Candida yeasts has been studied with respect to the production of wines and beers. The most studied species include Candida stellata, C. zemplinina and C. pulcherrima, among others [16,20,21,[48][49][50]. Representatives of Candida yeasts have been isolated from the early stages of spontaneous fermentation of different types of wines [8,19,51,52]. The isolated species were characterized by being round in shape and smaller than S. cerevisiae. These yeasts are able to sediment toward the end of fermentation in a similar manner to S. cerevisiae [20]. Currently, the most important characteristics reported include the production of considerable amounts of ethanol and glycerol and a balanced production of volatile compounds of sensory importance, for instance esters, fatty acids, aldehydes and higher alcohols. The production of ethanol is an important feature in defining the use of yeasts in the production of fermented beverages with high ethanol contents such as wines. It has been reported that C. zemplinina strains are capable of producing ethanol up to 11.0% v/v [53], an amount normally reached during the fermentation of sweet and semidry wines with S. cerevisiae. In addition, it has been demonstrated that Candida yeasts are capable of producing high amounts (up to 25.0 g/L) of glycerol [53][54][55][56], a compound that contributes positively to the sensory quality of wines, beers and other beverages. The fermentative behavior of these yeasts has also been evaluated in mixed cultures with S. cerevisiae [57]. The results were promising and interesting enough to be scaled up to pilot fermentations. For instance, fermentation experiments with mixed cultures of C. stellata and S. cerevisiae produced higher levels of esters and fatty acids than monocultures of S. cerevisiae [19,57]. Fermentations with mixed and even sequential cultures of yeasts are an interesting field of research to evaluate the potential use of non-Saccharomyces yeasts to produce sensorially differentiated beverages. In addition, individual fermentations with C. stellata and C. zemplinina strains using immobilized systems have also been performed [53,58]. The results showed improvements in some technological properties such as the fermentation rate, ethanol production and the reusability of the strains in successive fermentations. Recently, studies evaluating the usability of C. zemplinina strains in beer fermentation have been carried out using malt worts of 14 and 20°P, typically used in beer fermentation processes [21,22]. The yeast strains showed a fermentative behavior suitable for the production of lager and ale beers. One interesting feature is that Candida zemplinina is unable to ferment maltose, the main fermentable sugar of malt wort. This characteristic is of special importance since it would enable the production of beers with low ethanol content and particular sensory profiles.
Kloeckera yeasts
Yeast species belonging to this genus have recently become of interest for the production of fermented beverages. Species such as Kloeckera apiculata, K. javanica and K. corticis have been isolated from a variety of niches, including the spontaneous fermentation of grape must and ciders [6,8,49,59]. Most representatives present a lemon shape (apiculate yeasts) and asexual reproduction with bipolar budding. It has been reported that these yeasts participate positively in the early stage of the spontaneous fermentation of wine [59,60], with strains of Kloeckera apiculata being the most dominant [19,49,51,52]. During spontaneous fermentation, as the ethanol concentration increases, the fermentative activity of these yeasts slows down and stops toward the end of fermentation due to the effect of the ethanol [61]. These yeasts are characterized by producing amounts of ethanol around 4-5% v/v, values typically found in commercial beers. It has been reported that the control of aeration during fermentation has an effect on the production of ethanol and of compounds of sensory importance such as esters, higher alcohols and organic acids [14]. Based on the information available in the literature, these yeasts are promising for use in brewing; however, before defining an exploitation strategy, it is necessary to carry out more in-depth studies on the effects of temperature, wort composition and inoculation rate on the fermentative activity of these yeasts. It is also necessary to study the behavior of these yeasts in fermentations with mixed and sequential cultures with Saccharomyces cerevisiae and the associated production of compounds of sensory importance. Studies carried out with pure cultures of Kloeckera corticis showed that these yeasts are capable of producing acetic acid, acetaldehyde, ethyl acetate and acetoin at high concentrations [62]. In addition, it has been reported that strains of Kloeckera apiculata are capable of producing higher concentrations of ethyl and isoamyl acetate than other non-Saccharomyces yeasts [14,63]. From the technological point of view, cell immobilization techniques can be an additional strategy to improve the fermentative behavior and the production of compounds of sensory importance. The ability of these yeasts to produce a variety of aromatic compounds with a positive impact on sensory quality makes them attractive and potentially exploitable in fermentation processes.
Hanseniaspora yeasts
Few studies have been conducted on the potential use of yeasts belonging to the genus Hanseniaspora (apiculate yeasts) in the production of fermented beverages. The studied yeasts were isolated from the spontaneous fermentation of grape musts [6,8,59] and include species of Hanseniaspora uvarum, H. osmophila and H. guilliermondii, among others. It has been shown that these yeasts play an important role during the early stage of spontaneous fermentation of wine, and strains of Hanseniaspora uvarum (also called Kloeckera apiculata) are dominant [19,51,52]. They are characterized by tolerating and producing low amounts of ethanol that do not exceed values of 5.0% v/v [61]. This limitation explains why these yeasts do not participate actively toward the end of the spontaneous fermentation of wines, where the ethanol content reaches values even higher than 10% v/v. However, the fermentative capability of these yeasts is sufficient to produce beers with a standard ethanol content similar to those found on the market (4.5-5% v/v). In addition, they are able to ferment a wide range of sugars including maltose, which is an important feature needed for the production of beers. Regarding the production of compounds of sensory importance, studies have reported that strains of Hanseniaspora osmophila are characterized by producing high concentrations of acetic acid, acetaldehyde and ethyl acetate [62]. Additionally, it has been found that strains of Hanseniaspora uvarum are able to produce a variety of esters that confer fruitiness to fermented beverages [11,62,64]. However, other studies have reported that mixed cultures of H. uvarum with S. cerevisiae produce higher amounts of higher alcohols than monocultures of S. cerevisiae [4,19]. Regarding fermentation parameters, the control of aeration and temperature exerts an important effect on the dynamics and activity of Hanseniaspora yeasts. Both parameters are important to control the production of compounds of sensory importance, which influence the quality of fermented beverages [11,65]. However, in view of the scarce information on the fermentative behavior of Hanseniaspora yeasts, particularly regarding the production of fermented beverages, additional studies need to be performed in order to find adequate conditions for their use, for instance, in the production of beers with new sensory profiles.
Brettanomyces yeasts
Yeasts of this genus do not have a good reputation in fermentation processes such as winemaking. For instance, representatives of Brettanomyces bruxellensis are considered detrimental due to the production of compounds such as 4-ethylguaiacol, 4-ethylphenol and 4-ethylcatechol, which impart an unpleasant sensory character to wines known as "Bretty" [5,66]. These compounds result from the activity of a decarboxylase that acts on hydroxycinnamic acids, followed by a reduction reaction [67]. The hydroxycinnamic acids are phenolic compounds naturally present in the skin and seeds of grapes. The common representatives of this genus have been isolated from the spontaneous fermentation of wine, beer, cider and even kombucha [68][69][70]. They have also been isolated from equipment and utensils used in fermentation processes, which are difficult to sanitize. The commonly isolated species include Brettanomyces bruxellensis, B. lambicus, B. intermedius and B. anomalus, among others [68,69]. Notably, strains of B. bruxellensis are able to ferment a broad spectrum of sugars, and even maltooligosaccharides that are not fermentable by S. cerevisiae [71], but only in the presence of oxygen (a positive Crabtree effect). Under anaerobic conditions, these yeasts are unable to ferment and produce ethanol; thus, at low concentrations of sugar in the medium, the fermentation of glucose to ethanol is blocked. On the contrary, fermentation is stimulated in the presence of oxygen, an effect known as the Custers effect, or negative Pasteur effect [72]. Apart from producing ethanol in the presence of oxygen, Brettanomyces bruxellensis also produces high concentrations of acetic acid, which acidifies and lowers the pH of the medium. However, yeasts of this genus are not entirely undesirable; some representatives participate, for instance, during the fermentation of certain beers known as "Lambic" and "Gueuze", consumed commonly in Belgium, and "Coolship Ales" in North America. The fermentation of "Lambic" beer is a spontaneous process that goes through a complex succession of microorganisms, in which Brettanomyces bruxellensis participates during the final stage, acidifying the product [73]. The participation of these yeasts gives the beer its characteristic acidity and dryness and is additionally responsible for the production of compounds such as ethyl phenol, ethyl acetate, ethyl caprylate, ethyl decanoate and ethyl lactate, which synergistically confer its typical aroma character [18,74]. It has been shown that esters soften the sour taste and add fruity notes to this kind of beer [75]. Based on these findings, it has been demonstrated that these yeasts, and particularly B. bruxellensis, contribute positively to defining the floral and fruity character of "Lambic" beers [18]. Beyond the contribution of Brettanomyces yeasts to spontaneous fermentation processes, in recent years their use in controlled fermentations has been investigated, both in pure culture and in coculture with S. cerevisiae [15,17]. Interesting findings have been reported, indicating that the control of aeration during fermentation is a critical point for guiding the fermentative metabolism toward the production of important volatile compounds that may contribute to the organoleptic character of fermented beverages.
Production of special wines
It is commonly agreed that non-Saccharomyces yeasts contribute beneficially to the sensory quality of spontaneously fermented wines, evidence that served as a starting point for paying attention to particular yeast species that could be exploited in the fermentation of commercial and noncommercial fermented beverages. Non-Saccharomyces species are characterized by producing a greater diversity of compounds of sensory importance than S. cerevisiae yeasts. Although these yeasts show a low fermentative power, some species possess important fermentative features; for instance, representatives of Kloeckera and Hanseniaspora yeasts produce a variety of compounds of sensory impact, particularly esters, at concentrations even higher than S. cerevisiae. On the other hand, Candida zemplinina, a fructophilic yeast, has been shown to produce glycerol in higher concentrations than S. cerevisiae. It is also capable of producing ethanol in concentrations high enough to produce different types of wines. In view of the complementary characteristics of both groups of yeasts (Saccharomyces and non-Saccharomyces), the use of non-Saccharomyces yeasts can be proposed in fermentations with mixed or sequential cultures with S. cerevisiae as an important strategy to improve the sensory complexity and mouthfeel of wines [19,73]. The fermentative versatility of non-Saccharomyces yeasts would enable the production of special wines with different and innovative sensory characteristics. In addition, the techniques that can be implemented to enable their practical exploitation include the selection of new strains, the development of fermentation strategies (mixed or sequential cultures with two or more yeast strains), the ratio of both strains in the inoculum (non-Saccharomyces/Saccharomyces cerevisiae) and the inoculation rate at the beginning of fermentation [57,76]. Finally, some technological characteristics of non-Saccharomyces yeasts can also be modified by using cultivation techniques in bioreactors with the aim of improving, for instance, the fermentation rate. The possibility of commercializing these yeasts as starter cultures is an attractive opportunity for the production of different types of wines with special sensory qualities.
Production of craft beers
In the last 10 years, the market for craft beers has grown in the USA, Latin America and some countries of Europe [77,78]. This phenomenon is related to consumers' expectation of discovering in these beers sensory characteristics different from those routinely found in commercial beers [74]. Current consumers are curious and interested in sensing new flavors and aromas that can satisfy their preferences. As a consequence, new market segments have emerged in response to the broad possibility of offering new types of beers produced using different fermentation methods and techniques. The production of craft beer is generally carried out in small-scale breweries and involves the use of less industrialized processing methods. Craft beers are not usually filtered; because of this, their shelf life is relatively short, and therefore they must be consumed within a few days after bottling. There are a variety of innovative alternatives for producing different types of craft beers, which include the use of new types of adjuncts, either amylaceous (cereal grains) or non-amylaceous (fruit pulps or juices), and selected strains of non-Saccharomyces yeasts, which have an enormous exploitation potential. Although most non-Saccharomyces yeasts produce low concentrations of ethanol, the fermentative capacity of some representatives of Kloeckera and Hanseniaspora yeasts is adequate to produce beers with an ethanol content typically found on the market (4.5-5% v/v). Among the non-Saccharomyces yeasts considered important in beer fermentation, Brettanomyces lambicus is the most representative, being involved in the production of "Lambic" and "Gueuze" beers. Recently, some studies with Candida zemplinina strains were performed in fermentations with pure malt wort and with different adjuncts (grape or apple juice) at different temperatures and specific gravities. The findings were promising and showed the capability of these yeasts to ferment at low temperatures (14°C) and in media with high specific gravity (16°P), which demonstrates their potential for exploitation in the production of craft beers. In addition, it has also been proposed that these yeasts can be used for the production of beers with low ethanol content, since they are not able to ferment maltose, the main and most abundant sugar present in the wort [20,21]. Additionally, other non-Saccharomyces yeasts such as Dekkera anomala, Naumovozyma dairenensis and Debaryomyces spp. have also been reported to have high potential for use in the fermentation of beers. In view of the different fermentative behaviors of non-Saccharomyces yeasts and the variety of compounds of sensory importance that they can produce during fermentation, their use in controlled fermentations has aroused the interest of brewers for producing beers with distinctive sensory features [23,79].
Conclusion
Non-Saccharomyces yeasts show great potential for use in the production of fermented beverages, mainly wines and beers. These yeasts show a variety of fermentative patterns and, depending on the fermentation conditions, produce a wide range of volatile compounds of sensory importance. For their practical application in a particular fermentative process, it is necessary to know the parameters that directly influence the fermentative activity and the production of desirable volatile compounds. The non-Saccharomyces yeasts that have attracted the interest of researchers due to their fermentative qualities include strains of Candida stellata, C. zemplinina, Kloeckera apiculata and Hanseniaspora uvarum. In particular, strains of Candida stellata and C. zemplinina have become very attractive for use in fermentations of different types of wines and beers. These yeasts are capable of producing significant concentrations of glycerol, an important compound that has a positive impact on the sensory quality of wines and beers. Candida yeasts, especially C. zemplinina, also produce high concentrations of ethanol, high enough to drive the fermentation of wines. On the other hand, species of Kloeckera and Hanseniaspora yeasts are characterized by producing considerable amounts of acetate esters, valuable compounds that contribute positively to the sensory character of beers. Based on this, if a fermentation process involving the use of non-Saccharomyces yeasts is going to be implemented, it is necessary to select the best representatives and then define the appropriate fermentation conditions for the production of fermented beverages with the desired sensory qualities.
Conflict of interest
The author certifies that he has no affiliation with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Author details
Waldir Desiderio Estela Escalante Laboratory of Bioprocessing and Technology of Fermentation, Faculty of Chemistry and Chemical Engineering, Universidad Nacional Mayor de San Marcos, Lima, Peru *Address all correspondence to: waldir.estela@unmsm.edu.pe © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 2019-04-03T13:09:26.142Z | 2018-12-03T00:00:00.000 | {
"year": 2019,
"sha1": "ec86ab8e199a4392485706dbbc9de2da79b15878",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/64367",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c0168108ed4400a5ce600415d8018262ee93cfba",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
255475387 | pes2o/s2orc | v3-fos-license | Protocol for aerosolization challenge of mice with Bordetella pertussis
Summary Bordetella pertussis causes whooping cough and is transmitted via respiratory droplets. Here, we present a protocol to challenge mice with Bordetella pertussis. We describe bacterial preparation and long-term storage, followed by preparation of a challenge dose for use in a commercial exposure chamber with controlled nebulization of B. pertussis into aerosols. We then detail the aerosol challenge of mice, a more natural route of administration than intranasal instillation, and post-challenge data collection. This protocol allows for better comparisons between preclinical pertussis studies.
BEFORE YOU BEGIN
The human disease pertussis, also known as whooping cough, is caused by the bacterium Bordetella pertussis (Bp) and was identified in 1906 by Jules Bordet and Octave Gengou. 1 During the century after the identification of Bp, several different animal and challenge models were implemented to study pertussis pathogenesis and aid in vaccine development efforts. The only known reservoir for B. pertussis is humans, and it is transmitted through the spread of respiratory droplets. This is a highly contagious disease with an R value of 12-18 in unvaccinated populations and an R value of 5-6 in populations that are fully vaccinated. 2 In the baboon model of pertussis, animals are able to transmit bacteria through respiratory droplets to naive animals housed together and in separate cages up to 7 feet away. 3 The intranasal droplet instillation method (IN) was published in 1937 by Burnet and Timmins, in which 25 μL-50 μL of a bacterial solution was pipetted directly onto the external nares of mice to induce infections. 4,5 Although at the time there were some concerns about the efficiency of this method in establishing uniform infections in test mice, it has since been shown to produce reliable results in our lab and others. [6][7][8] In addition to our group, other groups have also examined Bp pathogenesis upon IN challenge with doses between 2 × 10² and 2 × 10⁷ CFU/mL and have shown differences in the timing of infection and the severity of bacterial burden, leukocytosis, and pulmonary proinflammatory cytokine levels 4,6 (publication in progress). However, given the challenge volume utilized in the IN method, a large quantity of bacteria is deposited in the lungs of challenged animals, which is not observed in a natural pertussis infection. Therefore, other groups took this "natural infection" a step further and developed an aerosol challenge model of pertussis. 4,[9][10][11][12][13] The current aerosol model typically utilizes a 1-3 × 10⁹ CFU/mL dose of bacteria aerosolized for 15-30 min for 5-100 mice at a time. 9,10,13 The aerosol challenge model allows multiple animals to be challenged at once. This decreases the chance of human error when instilling a challenge dose, as in the IN method. While infections established through the studies cited above are more uniform from study to study, most studies use custom aerosol chambers and commercial nebulizers. We sought to establish a standardized protocol using a commercial chamber, nebulizers, and a controller unit to create a consistent protocol that can be implemented across laboratories.
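For context, reproduction numbers like those quoted above translate into herd-immunity thresholds through the standard formula 1 − 1/R; a toy calculation (the formula is general epidemiology, not part of this protocol):

```python
def herd_immunity_threshold(r: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1.0 - 1.0 / r

# Reproduction numbers quoted above for pertussis.
for r in (12, 18, 5, 6):
    print(r, round(herd_immunity_threshold(r), 2))
# R of 12-18 implies ~92-94% immunity; R of 5-6 implies ~80-83%.
```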
The protocol below lists detailed steps for a standardized aerosol challenge model using the DSI Buxco® FinePointe™ mass dosing controller with a mass-dosing aerosol chamber. The controller allows the researcher to set the airflow, nebulizing time, and duty cycle (the percentage of time the nebulizer operates during a 6-s cycle). In this protocol, Bp is grown first on BG agar plates, transferred to SSM liquid media to obtain log-phase growth, and then diluted in SSM to a specific aerosol dose. Next, mice are placed in the chamber, and 20 mL of the challenge dose is administered through aerosolized droplets. The mice inhale the infectious droplets, and a respiratory infection is established. This protocol may be used to test bacterial pathogenesis, host immune response, and vaccine efficacy. This protocol can be adapted to examine the effects of different challenge doses and could be applied to other animal models not listed here, e.g., other murine models (data not shown), guinea pigs, and rabbits. 14
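To reason about chamber settings before a run, a back-of-the-envelope estimate of the dose inhaled per mouse can be assembled from the nebulizer output, duty cycle, chamber volume, and murine minute ventilation. Every number and the well-mixed-chamber assumption below are placeholders for illustration, not validated parameters of this protocol, and the estimate ignores settling, wall losses, and lung deposition fraction.

```python
def estimated_inhaled_cfu(dose_cfu_per_ml: float,
                          neb_rate_ml_per_min: float,
                          duty_cycle: float,
                          exposure_min: float,
                          chamber_volume_l: float,
                          minute_ventilation_ml: float) -> float:
    """Crude well-mixed-chamber estimate of CFU inhaled by one mouse."""
    aerosolized_ml = neb_rate_ml_per_min * duty_cycle * exposure_min
    cfu_per_l_air = dose_cfu_per_ml * aerosolized_ml / chamber_volume_l
    inhaled_air_l = minute_ventilation_ml / 1000.0 * exposure_min
    return cfu_per_l_air * inhaled_air_l

# Placeholder values: 1e9 CFU/mL dose, 0.5 mL/min nebulizer output,
# 50% duty cycle, 30 min exposure, 10 L chamber, 25 mL/min ventilation.
print(f"{estimated_inhaled_cfu(1e9, 0.5, 0.5, 30, 10, 25):.2e} CFU inhaled")
```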
Institutional permissions
All animal work done in this protocol was performed in strict accordance with recommendations outlined in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. West Virginia University's Institutional Animal Care and Use Committee (IACUC) approved these protocols under IACUC protocols #1602000797R1 and #1901021039. Any work done with Bordetella pertussis was completed in Biological Safety Level-2 (BSL-2) conditions under the approved Institutional Biosafety Committee (IBC) protocol #17-01-11. Please ensure that when attempting to follow this protocol, all care is given to following your own IACUC and IBC guidelines, and work must be done under approved protocols.
Preparation and storage of B. pertussis bacterium

Timing: 3 days

1. Prepare 8 plates (100 × 15 mm) with 15 mL of Bordet-Gengou (BG) agar supplemented with 15% defibrinated sheep's blood and an appropriate antibiotic (e.g., streptomycin 100 μg/mL for Bp strain UT25Sm1).
a. Solidify and dry the plates in a biosafety cabinet with the lids cracked open for 15 min.
2. Obtain a frozen stock of bacteria and spray the outside of the tube with 70% ethanol before moving it into the biosafety cabinet (Figure 1A).
a. Sanitize the working area inside the biosafety cabinet before opening the stock.
3. Scrape a frozen chunk of the bacterial stock (approximately 5 × 2 × 2 mm) using a sterile 20-gauge needle and transfer it onto a BG agar plate (Figure 1B).
a. Discard the needle into an approved sharps container and use a fresh needle for each plate.
b. Re-cap the bacterial stock tube to avoid introducing contaminants to the tube.
c. Close the plate and set it aside until all plates have received the bacterial stock.
CRITICAL: Once all plates have received an inoculum of the stock bacteria, place the bacterial stock back into −80°C so it does not thaw.
4. Next, spread the bacterial stock in a 3-phase streak on the plate using a cooled, flame-sterilized metal inoculating loop (Figures 1C-1F).
5. Invert the plates and place them in a 36°C incubator for 48-72 h.
a. Bordetella pertussis grows optimally at 36°C but will also grow at 37°C.
CRITICAL: To obtain the growth necessary for saving stocks of Bp, the temperature in the incubator needs to remain at 36°C (Figure 1G).
6. Following incubation, set 4 mL of defibrinated sheep's blood in the biosafety cabinet and allow it to reach room temperature (20°C-25°C).
a. Remove the plates from incubation and set them in a biosafety cabinet.
7. Swab all bacteria off the plate using a polyester swab and deposit them with a swirling motion into the blood.
a. Use a new polyester swab for each plate.
Note: 2 plates per 1 mL of blood will give an inoculum dose of ≈5.5 × 10⁹ CFU/mL.
CRITICAL: Be sure to use only polyester swabs for Bp because the fatty acids in cotton swabs will inhibit Bp survival.
8. Gently vortex the blood with the bacteria to mix, and aliquot 150 μL into new sterile 1.5 mL Eppendorf tubes.
9. Label the tubes with the corresponding bacteria and place them in a −80°C freezer to be used for challenge dose preparation.
Note: In our experience, saving aliquots at −80°C in defibrinated sheep's blood results in bacterial viability for at least 10 years.
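For planning purposes, the ratios above (2 plates per 1 mL of blood; 150 μL per aliquot) translate into a simple volume and aliquot count. The sketch below is a convenience calculation only, using the 8-plate batch from step 1 as the example.

```python
# Convenience sketch for Bp stock preparation, using the ratios above:
# 2 plates swabbed per 1 mL of defibrinated sheep's blood, 150 uL aliquots.
def stock_plan(n_plates: int, aliquot_ul: float = 150.0):
    blood_ml = n_plates / 2.0                      # 2 plates per mL of blood
    n_aliquots = int(blood_ml * 1000.0 // aliquot_ul)
    return blood_ml, n_aliquots

blood_ml, n_aliquots = stock_plan(n_plates=8)      # batch size from step 1
print(f"{blood_ml:.0f} mL blood -> ~{n_aliquots} aliquots of 150 uL")  # 4 mL -> 26
```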
Preparing the challenge dose
Timing: 4 days

11. Pipette 20 μL of a Bp aliquot (from the preparation described above) onto each of 6 individual BG agar plates.
a. Perform a three-phase streak on the BG agar using a sterilized metal inoculating loop (Figures 1C-1F).
b. Place the inoculated BG plates inverted in an incubator set to 36°C for 48-72 h.
CRITICAL: Avoid freeze/thaw cycles with Bp; aliquots should be disposed of after use.
12. Remove the BG plates from the incubator.
a. Swab the entire plate using sterile polyester swabs, taking care not to pull up chunks of agar with the swab.
Note: Use one polyester swab per plate.
b. Deposit the bacteria from the swabs into Stainer-Scholte Medium (SSM) with a swirling motion and discard the swab once completed.
i. The swab should be disposed of in a biohazard bag.
Note: 1 plate per 1 mL of media will give an inoculum dose of ≈10¹⁰ CFU/mL.

Note: Small-footprint shaking incubators dedicated to bacterial growth and rarely interrupted give the best growth rates (e.g., Benchmark Incu-Shaker Mini).

13. Once the bacterial solution reaches an OD600nm of ≈0.6, remove the flasks from the incubator.
a. Dilute the solution in SSM to an OD600nm of 0.240 ± 0.05 using a UV-VIS standard spectrophotometer (e.g., Beckman Coulter DU-530 with a 1 cm path-length cuvette).
Note: SSM is used as the blank.
CRITICAL: An OD600nm of 0.240 ± 0.05 gives us a reliable 10⁹ CFU/mL of viable bacteria, but a pilot study should be conducted with each spectrophotometer to determine the proper OD for 10⁹ CFU/mL.
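The dilution in step 13.a follows the usual C1V1 = C2V2 relation, so the required culture and SSM volumes can be computed directly. The helper below is a minimal sketch; the OD-to-CFU calibration is instrument-specific, as the CRITICAL note above stresses, and the 20 mL final volume is chosen here to match the aerosol dose used later in this protocol.

```python
# Minimal sketch: dilute a culture from its measured OD600 to the target
# OD600 using C1*V1 = C2*V2. The OD-to-CFU/mL calibration is instrument
# specific and must be established in a pilot study (see CRITICAL note).
def dilution_volumes(od_measured: float, od_target: float, final_vol_ml: float):
    """Return (mL of culture, mL of SSM diluent) for the desired final volume."""
    if od_measured <= od_target:
        raise ValueError("Culture OD must exceed the target OD to dilute.")
    culture_ml = final_vol_ml * od_target / od_measured
    return culture_ml, final_vol_ml - culture_ml

culture_ml, ssm_ml = dilution_volumes(0.60, 0.240, final_vol_ml=20.0)
print(f"Mix {culture_ml:.1f} mL culture with {ssm_ml:.1f} mL SSM")  # 8.0 + 12.0
```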
MATERIALS AND EQUIPMENT
Note: Wait until the agar cools enough to be handled with gloved hands (≈45°C) before blood and antibiotics are added.
Note: Swirl to mix in the blood and antibiotic and to minimize bubbles in the plate.
Note: Discard any leftover media.
Note: Plates may be poured up to 3 days before use and stored inverted in a container at 4°C.
Note: Basal SSM should not be heated to dissolve contents.
Adjust pH to 7.6 using 10 N NaOH, then bring the volume up to 980 mL.
Note: Basal SSM should be autoclaved on a liquid cycle for 15 min at 121°C and 15 psi.
Note: Media may be stored at 4°C for 3 months, or at −20°C for 6 months.
Note: Add L-cysteine to 1 N HCl and vortex to dissolve; once dissolved, add distilled H2O.
Note: Be sure to use L-cysteine and not L-cystine. Note: This solution cannot be stored.
Note: Mix solution by vortexing.
Note: This solution cannot be stored.
Note: Mix solution by vortexing.
Note: Filter-sterilize and aliquot 600 μL into sterile 1.5 mL Eppendorf tubes.
Note: Solution may be stored for up to 6 months at −20°C.
Note: Mix solution by vortexing.
Note: Filter-sterilize and aliquot 600 μL into sterile 1.5 mL Eppendorf tubes.
Note: Solution may be stored for up to 6 months at −20°C.
Note: Supplements should be added right before use.
Reagent — Final concentration (mM) — Amount
Step 2 solution — — 50 mL

CRITICAL: 1 N HCl is corrosive to the eyes, skin, and mucous membranes. Appropriate PPE (e.g., gloves, lab coat, and goggles) should be worn when handling this substance.
Alternatives:
We do not substitute any of these materials; therefore, we cannot comment on alternatives.
Note: 1× PBS should be autoclaved on a liquid cycle for 15 min at 121°C and 15 psi.
Note: 1× PBS solution may be stored at room temperature (20°C-25°C) for 6 months.
STEP-BY-STEP METHOD DETAILS
Aerosol challenge of mice
Timing: 30 min
Upon completion, the aerosol challenge of mice will accomplish the goal of instilling bacteria into their respiratory system. 10-week-old female CD-1 mice were used in this challenge model and afterward housed in filter-top cages, 5 mice per cage, with food and water ad libitum. Setup and usage of the chamber followed the manufacturer's instructions in their application manual: Mass Dosing System Application Manual (011283-001).pdf.
1. Move the mouse cages into the biosafety hood and remove the lids of the cage and dosing chamber.
2. Moving one mouse at a time, transfer mice from the cage into a compartment inside the mass dosing chamber.
a. Once an entire cage has been transferred, replace the lid and move the cage out of the biosafety hood.
b. Repeat this process for all mice being challenged.
3. Open the silicone stoppers on top of the nebulizer heads and carefully pipette 5 mL of the challenge dose into each nebulizer basin.
Note: 5 mL will be nebulized in each of the 4 nebulizers for a total of 20 mL of liquid challenge dose.
Note: This dose is approximately 10¹¹ CFU per aerosol chamber volume (36.6 L); a quick arithmetic check is sketched at the end of this section.

Note: A mist will start descending from the nebulizer heads once started.
7. Increase the fresh air flow rate to 5 LPM after the 10-min nebulizing time is complete.
Note: This increases the clearance of the dose from the chamber.
a. Set a 5-min timer to ensure the challenge dose has cleared the chamber or settled out of the air.
8. Remove the chamber's lid and use a grasping tool/protective sleeve to remove the mice from their compartments and return them to their original cage.
CRITICAL: Be sure to replace the chamber lid in between mice so that no animals can escape by climbing the dividers in the chamber.
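As flagged in the dose note above, the chamber loading is easy to sanity-check: total CFU = dose concentration × 20 mL nebulized, then divide by the 36.6 L chamber volume. The sketch below is illustrative; the 1 × 10⁹ CFU/mL input is the nominal value from the OD calibration, not a measured number.

```python
# Dose-check sketch for the mass-dosing chamber loading described above.
# The CFU/mL input is whatever your OD600 calibration gives; 1e9 CFU/mL
# is used here only as the nominal example value.
NEBULIZERS = 4
ML_PER_NEBULIZER = 5.0       # mL pipetted into each nebulizer basin
CHAMBER_VOLUME_L = 36.6      # stated chamber volume

def chamber_dose(cfu_per_ml: float):
    total_ml = NEBULIZERS * ML_PER_NEBULIZER       # 20 mL nebulized in total
    total_cfu = cfu_per_ml * total_ml
    return total_cfu, total_cfu / CHAMBER_VOLUME_L

total_cfu, cfu_per_l = chamber_dose(cfu_per_ml=1e9)
print(f"Total {total_cfu:.1e} CFU loaded; {cfu_per_l:.1e} CFU per L of chamber")
```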
Sanitizing the chamber and nebulizers
Timing: 20 min

This step ensures the chamber is sanitized of all BSL-2 agents before leaving the challenge location.
9. Spray the inside of the chamber down with a non-alcohol-based cleaner (e.g., peroxiguard) and wipe up debris with the solution.
10. Remove the waffle dividers and floorboard, spray them with a non-alcohol-based cleaner, and allow them to air dry.
11. Spray the inside of the chamber down again with a non-alcohol-based cleaner and wipe up any debris.
Note: Be sure to spray and wipe down the underside of the chamber lid as well.
12. Gently remove the nebulizer heads from the chamber lid and place them in a container of warm soapy water.
a. Gently swish the nebulizer heads around and transfer them to a container of clean distilled water.
b. Swish the nebulizer heads to remove any remaining detergent and move them to an autoclave sleeve.
c. Following the manufacturer's instructions, flash steam-sterilize the nebulizer heads.
i. We used the following settings: maximum temperature 121°C, time at temperature 3 min, drying time 2 min.
13. Remove the nebulizer heads from the autoclave sleeves and allow them to dry on their sides before storage.
Euthanasia and data collection
Timing: 1 h

This section provides humane euthanasia at the selected endpoint for the study (1 h post-challenge) and prompt organ collection and bacterial enumeration.
Note: We use a working concentration of 39 mg/mL and inject 10 μL/g of body weight.
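For reference, the dosing note above implies a fixed weight-normalized dose (39 mg/mL × 10 μL/g = 0.39 mg/g, i.e., 390 mg/kg). The arithmetic sketch below computes the injection volume and delivered dose for a given body weight and is illustrative only.

```python
# Illustrative arithmetic for the euthanasia dosing note above:
# working concentration 39 mg/mL, injected at 10 uL per gram body weight.
CONC_MG_PER_ML = 39.0
UL_PER_GRAM = 10.0

def injection(body_weight_g: float):
    volume_ul = UL_PER_GRAM * body_weight_g
    dose_mg = CONC_MG_PER_ML * volume_ul / 1000.0  # convert uL to mL
    return volume_ul, dose_mg

vol_ul, dose_mg = injection(body_weight_g=25.0)    # e.g., an adult CD-1 mouse
print(f"{vol_ul:.0f} uL injected, {dose_mg:.2f} mg delivered (390 mg/kg)")
```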
EXPECTED OUTCOMES
After successful completion of the aerosol challenge, the mice should have received a sufficient infectious dose of bacteria to establish a respiratory infection in the lung (Figure 3B), trachea (Figure 3C), and nasal lavage (Figure 3D). Bacteria are still detectable for at least 7 days post-challenge in all tissues and at all doses investigated (data not shown).
LIMITATIONS
One limitation of this protocol is that the mass dosing chamber can hold at most 25 mice simultaneously; a larger volume of challenge dose and multiple rounds of aerosolization are needed if more than 25 mice must be challenged. Of note, we challenged up to 15 mice simultaneously with reproducible results during our experiments; we did not challenge the maximum of 25 mice that the manufacturer states will fit in the chamber, so we cannot speak to the reproducibility of challenging the chamber's upper limit of mice. Although we challenged mice with a challenge dose as low as 10⁶ CFU/mL at a 20 mL volume and could enumerate CFUs 1 h post-challenge, there may be a lower challenge dose concentration limit below which not all mice will establish an infection. Further studies will be needed to define this limit.
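The batching implied by the 25-mouse chamber limit is easy to plan around. The sketch below assumes 20 mL of challenge dose per aerosolization round, as in this protocol, and simply counts rounds and total volume.

```python
# Planning sketch for studies larger than one chamber load: number of
# aerosolization rounds and total challenge-dose volume required,
# assuming 20 mL nebulized per round as in this protocol.
import math

def plan(n_mice: int, mice_per_round: int = 25, ml_per_round: float = 20.0):
    rounds = math.ceil(n_mice / mice_per_round)
    return rounds, rounds * ml_per_round

rounds, total_ml = plan(n_mice=60)
print(f"{rounds} rounds, {total_ml:.0f} mL of challenge dose")  # 3 rounds, 60 mL
```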
Problem 1
Low OD600nm when preparing the challenge dose in step 13.a of 'before you begin'.
Potential solution
If the OD is lower than desired, ensure that all bacterial growth is collected and deposited into the SSM media when swabbing the plate. The use of polyester/Dacron (not cotton) swabs is required because the fatty acids within cotton inhibit Bordetella spp. growth. New, freshly autoclaved Erlenmeyer flasks should be used to ensure no detergents that limit bacterial growth are present. Flasks can be incubated for another 2-6 h if needed to allow for more growth.
Problem 2
No aerosol vapors are emitted through the nebulizer head during challenge in step 6.d.
Potential solution
Thorough cleaning of the nebulizer heads is essential to their functionality; however, if a head is clogged, the following steps may be followed.
Gently remove the nebulizer head from the chamber lid. Empty the contents of the nebulizer into a waste container and set the nebulizer down with the top of the head resting on a table. Pipette 10-20 μL of dH2O onto the nebulizer membrane.
Ensure that you do not touch the membrane with the pipette tip.
Set the controller to a 100% duty cycle and start the program. After 30 s, vapor should rise from the nebulizer membrane. Repeat this process 3-5 times.
Return the nebulizer head to its place and ensure its proper function by pipetting 1 mL of dH2O into the nebulizer head and starting a cycle. Vapor should now be emitted from the nebulizer head.
RESOURCE AVAILABILITY
Lead contact

Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, F. Heath Damron (fdamron@hsc.wvu.edu).
Materials availability
This study did not generate any new reagents, animal/cell lines, or other such materials.
Data and code availability
This study did not generate or analyze any novel data outside of the bacterial burden per tissue that is shown in Figure 1. | 2023-01-06T22:11:31.157Z | 2023-01-03T00:00:00.000 | {
"year": 2023,
"sha1": "841c13d5f1dcafdc4e1b55138b1dcf73606fb01d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xpro.2022.101979",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a592838ecf19d42c71187177fa8c799d9d20ddd0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233699046 | pes2o/s2orc | v3-fos-license | A Concise Synthesis of a Methyl Ester 2-Resorcinarene: A Chair-Conformation Macrocycle
: Anions are important hydrogen bond acceptors in a range of biological, chemical, environmental and medical molecular recognition processes. These interactions have been exploited for the design and synthesis of ditopic resorcinarenes, as the hydrogen bond strength can be tuned through modification of the substituent at the 2-position. However, many potentially useful compounds, especially those incorporating electron-withdrawing functionalities, have not been prepared due to the challenge of their synthesis: their incorporation slows resorcinarene formation, which proceeds by electrophilic aromatic substitution. As part of our broader campaign to employ resorcinarenes as selective recognition elements, we need access to these specialized materials. In this article, we report a straightforward synthetic pathway for obtaining a 2-(carboxymethyl)-resorcinarene, and resorcinarene esters in general. We discuss the unusual conformation it adopts and propose that this arises from the electron-withdrawing nature of the ester substituents, which renders them better hydrogen bond acceptors than the phenols, ensuring that each of the latter acts as a donor only. Density Functional Theory (DFT) calculations show that this conformation arises as a consequence of the unusual configurational isomerism of this compound and interruption of the archetypal hydrogen bonding by the ester functionality.
Introduction
Resorcinarenes are (usually) bowl-shaped macrocyclic compounds stabilized by a circular network of intramolecular O···H−O hydrogen bonds [1,2]. These compounds represent a unique family of host compounds which have been extensively studied in supramolecular host-guest chemistry because they display several sites for non-covalent interactions, excellent pKa tunability and an electron-rich bowl-shaped cavity in the C4v symmetric conformation, among a myriad of other interesting properties [2][3][4]. Their cavities can accommodate a wide range of guest molecules through non-covalent interactions including (but not limited to) hydrogen bonding, halogen bonding, cation···π, C−H···π as well as π···π interactions, depending on both the size and charge distribution of the respective guest molecules and the functionalization of the resorcinarene. 4 In addition to their structural role enforcing the upper rim of the macrocycle, the hydroxyl groups at the 1 and 3 positions on the aromatic subunits can participate extensively in hydrogen bonding with hydrogen bond-accepting guest molecules [5][6][7][8][9]. As a direct result of these hydrogen bonded supramolecular networks, resorcinarenes have been extensively exploited as appropriate hosts to accommodate a myriad of guests ranging from alcohols [10][11][12][13][14] to sugars [15][16][17][18], steroids [19][20][21] and even heterocyclic five- and six-membered ring compounds as guest molecules [22][23][24][25][26].
On resorcinarenes themselves, reaction at C2 (see Figure 1 for numbering) is selective over the C4 and C6 positions, as these are blocked by the lower rim linkages of the resorcinarene ring. The hydrogen bonded network of hydroxyl groups enhances the acidity of the phenol while increasing π-basicity inside the cavity [27]. Attenuation or cleavage of the O−H bonds, exo to the upper rim, by bases results in increased electron density on the oxygen, effectively strengthening the hydrogen bonding [26,28-30].

Functionalization of resorcinarenes at the 2-position tunes the relative acidity of the phenolic hydrogens, allowing for selective reactions with certain guests. Deprotonation of the phenolic hydrogens with amine bases creates protonated ammonium cations which form interesting supramolecular complexes with the anionic resorcinarenes. These assemblies may have enhanced crystallinity that can then be studied both in the solid state and solution state by single crystal X-ray diffraction and 1H NMR, respectively, as well as in the gas phase by mass spectrometry. The challenge is to access a wide enough variety of resorcinarenes to take advantage of these potential specific interactions. As part of our campaign to access a greater variety of these molecules, we wish to report the synthesis of a simple ester resorcinarene, and its very un-resorcinarene-like conformation.
Results and Discussion
A resorcin[4]arene with an ester functionality in the 2-position has not been reported; this moiety would act as an electron-withdrawing functionality that would increase the acidity of the phenols.
The formation of resorcinarene macrocycles as crystalline solids with high melting points through the acid-catalyzed condensation of resorcinol (or functionalized resorcinols) with aldehydes is well established [1]. Högberg was one of the first to discover the synthesis of resorcinarenes using formaldehyde and resorcinol in acidic conditions [31]. This approach works extremely well for simple 2-haloresorcinarenes and we have found success employing it for other functionalities, so it was the starting point for our synthesis [32].
To obtain 2-substituted resorcinarenes, functionalization can take place either before or after cyclization. As macrocycle formation blocks the 4 and 6 positions, the post-cyclization strategy can have advantages in terms of regioselectivity, although as four functional group transformations must occur in every step, incomplete substitution can lead to complex mixtures, difficult purification and low yields. Pre-cyclization methods, by contrast, introduce regioselectivity issues, but the use of a purified monomer ensures uniform substitution in the macrocycle. In this case, we pursued a pre-cyclization derivatization protocol because of the ready availability of a suitable precursor; the monomer unit was readily obtainable via a slow Fischer esterification of commercially available 2,6-dihydroxybenzoic acid using sulfuric acid in methanol. Following removal of the solvent in vacuo, the residue was dissolved in dichloromethane and washed with saturated sodium bicarbonate, which removed any unreacted starting material along with the sulfuric acid catalyst. Pure methyl 2,6-dihydroxybenzoate was obtained as a pinkish solid in 51% yield (Scheme 1).
With the functionalized resorcinol in hand, several approaches toward macrocyclization were attempted using isovaleraldehyde, as the tetra-isobutyl resorcinarenes are typically highly crystalline in our experience. After some exploration of conditions (see SI for discussion), we were able to effect the desired macrocyclization by employing concentrated sulfuric acid in methanol, providing the desired resorcinarene as a white solid in a poor 1-4.4% yield over repeated trials, with the great majority of the lost mass balance attributed to the formation of polymer and oligomer (Scheme 1). Curiously, the NMR spectrum was not as we expected and initially gave us grave concern (Figure 2). Generally, resorcinarenes are, as we have emphasized, found in a C4v symmetric bowl-shaped conformation. In this form, the protons on each of the subunits are magnetically equivalent to congeners on the others. Consequently, one only observes a single aromatic signal, a single benzylic signal, and a single set of peaks for the lower rim alkyl chain. This is not what we found. Instead, our spectrum was consistent with a pair of isomers. We spent a significant amount of time attempting to separate these isomers, but the apparent mixture behaves as a single compound by TLC and HPLC. Seeking clarification on this issue, we attempted to recrystallize the material, but this also did not change the ratio of the signals or enrich our sample in either compound. However, it did provide us with material of sufficient quality for X-ray analysis.
Crystals suitable for single-crystal X-ray diffraction could be obtained from the white powder by slow evaporation of a chloroform solution (Figure 3). The crystal structure of the obtained compound revealed two unusual features. Firstly, the configuration of the isobutyl groups around the lower rim of the resorcinarene is reminiscent of C2 symmetry (this can be seen in the 2D representation in Figure 4a), in contrast to the more commonly observed C4v isomer. Secondly, in the majority of resorcinarene crystal structures, the observed conformation is the archetypal bowl shape. This crystal instead exhibited a "chair" conformer with a pseudo-C2 rotation axis where two of the resorcinol subunits (2 and 4, see Figure 2 inset for numbering) are coplanar with one another, while the other two (1 and 3) sit orthogonal to the plane and antiperiplanar to one another. This result also clearly contextualizes the doubling of the resonances in the NMR spectra: this conformation is not an artifact of crystallization but appears to persist in solution and not rapidly interconvert or "flip" the pseudochair, in which case we would observe a single set of peaks as the average of the two chemical environments. The 1H NMR spectrum can now be understood in terms of this conformational preference, where the two 6H singlets at 4.10 and 3.98 ppm correspond to the methyl esters in two different environments, and the two 2H singlets at 7.24 and 6.46 ppm correspond to the two aryl C−H environments. We were particularly surprised by the chemical shift of the singlet at 6.46 ppm, which is unusually shielded for an aromatic C−H bond para to an electron-withdrawing group; in the methyl 2,6-dihydroxybenzoate starting material, the peak for the para C−H is at 7.30 ppm. After careful consideration of the crystal structure, we noticed the close proximity of the C5 atoms of aromatic rings 1 and 3 to the aryl C−H hydrogens of rings 2 and 4; we thus propose that the ring currents of the 1 and 3 π-systems shield the ring 2 and 4 C−Hs. Another unusual feature of the 1H NMR spectrum is a peak at 4.78 ppm split into a doublet of doublets. This is curious, as the dibenzylic protons are expected to be split into a triplet by the CH2 of the isobutyl group. We hypothesize that the isobutyl groups are unable to rotate significantly in this conformation and, because of this, the adjacent methylene protons are in fact magnetically nonequivalent, leading to the observed doublet of doublets.
To further investigate the solution-phase conformation, a variable-temperature (VT) NMR experiment was performed. The high-temperature NMR was necessary to determine whether this species would, in fact, interconvert to the bowl conformation if enough heat was applied, or if the chair conformation "flips" fast enough that only an average signal is recorded. The experiment was done in increments of 25 °C up to a maximum temperature of 150 °C, at which point substantial decomposition was observed; however, up to 125 °C, there was no change in the two ester methyl peaks at 4.10 and 3.98 ppm, suggesting that the conformation of the ring remains fixed in the pseudochair conformation, where rings 1 and 3 are related by a C2 rotation axis (through the C2/C5 atoms of rings 2 and 4) and 2 and 4 are related by a mirror plane that bisects rings 1 and 3. Unlike the methyl ester peaks, there was a change observed in the dibenzylic peak at 4.78 ppm (see Figure 2, inset); at 125 °C, the doublet of doublets had converged to a triplet, suggesting the isobutyl groups were able to rotate fast enough for an averaged coupling constant to be observed on the NMR timescale. We are currently developing a model to explain this unusual conformation and stereoisomeric product, and are also preparing additional electron-poor members that might show similar behavior. However, we speculate that without the phenols working together to form the hydrogen bond network and template the forming resorcinarene, the typical C4v conformation might not be favored.

To investigate the unusual conformational preference of the resorcinarenes, DFT calculations were performed at the ωB97XD/6-311G(d,p) level of theory in the gas phase and using the polarized continuum solvation model (PCM) to consider solvent effects. Geometric optimizations of the "chair" conformation and a theoretical "bowl" geometry were performed. The initial geometry for the "chair" conformer was obtained from the solid-state molecular structure, whereas the "bowl" conformer was based on the solid-state molecular structures of known resorcinarenes. The energies and structures of the solvent-corrected optimized conformations are provided in Figure 4a; the optimized structures (as mol2 files), gas-phase energies and all thermodynamic parameters can be found in the Supplementary Information. These calculations showed that the classic resorcinarene bowl conformation was disfavored by 18.1 kcal/mol using the solvent correction (15.1 kcal/mol in the gas phase), an enormous preference for the observed conformer. This is consistent with our VT NMR data, where only signals corresponding to the pseudochair ring conformation were observed until the material decomposed at 150 °C. This large preference has two possible contributing factors: first, the ester functional groups are Lewis basic and therefore have the ability to act as hydrogen bond acceptors to the phenolic hydrogen bond donors when coplanar to the benzene ring. Second, most resorcinarenes are found as the C4v configurational isomer; the steric hindrance of the isobutyl groups in the isomer obtained in this case may also impact the conformational preference of the macrocycle.
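To convey how decisive an 18.1 kcal/mol gap is, it can be converted into an equilibrium conformer ratio via the Boltzmann relation N_bowl/N_chair = exp(−ΔE/RT). The sketch below, which treats the DFT energy difference as a free-energy difference (an approximation), shows that essentially no bowl conformer would be populated even at the VT NMR temperatures.

```python
# Boltzmann population ratio implied by a conformer energy difference,
# treating the DFT energy gap as a free-energy difference (approximate).
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def population_ratio(delta_e_kcal: float, temp_k: float) -> float:
    """Ratio of higher-energy to lower-energy conformer at equilibrium."""
    return math.exp(-delta_e_kcal / (R_KCAL * temp_k))

for t_k in (298.0, 398.0):  # room temperature and ~125 C from the VT NMR
    print(f"T = {t_k:.0f} K: bowl/chair = {population_ratio(18.1, t_k):.1e}")
# ~5e-14 at 298 K and ~1e-10 at 398 K: the bowl is effectively unpopulated.
```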
To investigate the impact of these factors, chair and bowl structures of a C4v resorcinarene were also calculated (Figure 4b). The chair conformer is still preferred in the solvated structures, although the preference is much reduced compared to the C2 isomer (0.9 kcal/mol using the chloroform solvent correction). Surprisingly, the C4v bowl is slightly favored (by 1.6 kcal/mol) in the gas-phase calculations. These energy differences are so small as to be considered within the error of DFT methods, and we conclude that for the C4v isomer there is no preference between the two conformations. This shows that the configuration at the carbons bridging the resorcinol subunits can have a significant effect on the conformational preference of the macrocycle; the steric effects of the C4v configuration thus work to reinforce the upper rim, whereas in the C2 configuration, the steric effects work to pull it apart. The upper rim of a resorcinarene bowl comprises a hydrogen bond network; we see here that this is interrupted by the presence of the esters as hydrogen bond acceptors. The esters in the crystal structure are all coplanar with the benzene rings; this maximizes the delocalization of electron density from the electron-rich ring into the carbonyl of the ester, which enhances its Lewis basicity. This likely has a synergistic effect with the hydrogen bond donor phenols, which will hold the ester coplanar. We can therefore conclude that the ester acting as a hydrogen bond acceptor has the most significant effect upon the conformational preference of this macrocycle. Investigations into the generality of this phenomenon are underway in our laboratory.
Conclusions
We have successfully synthesized a novel 2-methyl ester resorcin[4]arene under simple acid-catalyzed conditions, albeit in poor yield, and the structure and solid-state conformation were determined by single-crystal X-ray diffraction and NMR spectroscopy. Computational investigations of this system revealed a significant preference for the observed pseudochair conformation and shed light upon the interplay of configurational and hydrogen bonding effects in operation in resorcinarene structures. Our studies in this area will further investigate the conformational preference of this and related systems, and methods to rationalize and control this aspect of supramolecular architecture will be developed. | 2021-05-05T00:08:34.827Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "6d760310ffe13e0f1269097cf4eca0aaa3b7cb17",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/13/4/627/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "11268c0b4df3534d398eedb7e601d50a8c1068ac",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
49190228 | pes2o/s2orc | v3-fos-license | Mitochondrial Lon sequesters and stabilizes p53 in the matrix to restrain apoptosis under oxidative stress via its chaperone activity
Mitochondrial Lon is a multi-function matrix protease with chaperone activity. However, few studies have investigated in detail how Lon regulates apoptosis through its chaperone activity. Accumulating evidence indicates that various stresses induce transport of p53 to mitochondria and activate apoptosis in a transcription-independent manner. Here we found that increased Lon interacts with p53 in the mitochondrial matrix and restrains the apoptosis induced by p53 under oxidative stress by rescuing the loss of mitochondrial membrane potential (Δψm) and the release of cytochrome C and SMAC/Diablo. Increased chaperone Lon hampers the transcription-dependent apoptotic function of p53 by reducing the mRNA expression of p53 target genes. The ATPase mutant (K529R) of chaperone Lon shows decreased interaction with p53 and fails to inhibit apoptosis. Furthermore, the chaperone activity of Lon is important for mitochondrial p53 accumulation in an mtHsp70-dependent manner, which is also important to protect the cytosolic pool of p53 from proteasome-dependent degradation. These results indicate that the chaperone activity of Lon is important for binding mitochondrial p53, by which increased Lon suppresses the apoptotic function of p53 under oxidative stress. Furthermore, mitochondrial Lon-mtHsp70 increases the stability/level of p53 through trafficking and retaining p53 in the mitochondrial matrix and preventing the cytosolic pool of p53 from proteasome-dependent degradation, in vitro and in clinical samples.
Introduction
The tumor-suppressor gene p53 is a key regulator of cell cycle arrest, senescence, and cell death including apoptosis and necrosis [1][2][3] . Thus p53 acts as one of the most important barriers against malignant development of cancer cells by linking many stress response pathways such as DNA damage, hypoxia, and oxidative stress 4 . A well-characterized function of p53 in the apoptosis regulation is its role as a transcriptional regulator. In addition to the functions as a transcription factor, p53 acts directly upon the outer membrane of mitochondria via a transcription-independent pathway. Upon onset of apoptosis following DNA damage stress, a part of p53 translocates to mitochondria, where it interacts with Bcl-2 or Bak, resulting in cytochrome C release and caspase-3 activation 5 . In addition, p53 accumulates in the mitochondrial matrix and triggers mitochondrial permeability transition pore (MPTP) opening and necrosis by interaction with the MPTP regulator cyclophilin D under oxidative stress 2 . However, mechanisms of p53-mediated transcription-independent apoptotic pathways in mitochondrial matrix are still lacking.
Mitochondria control cell death and survival by regulating intrinsic apoptosis, autophagy, necrosis, and ferroptosis 2,6,7 . Mitochondrial Lon protease is located in the matrix and plays a crucial role in the maintenance of mitochondrial function, biogenesis, and homeostasis 8,9 . In addition to its ATP-dependent proteolytic activity, mitochondrial Lon has been found to show chaperone activity [10][11][12][13] . Mitochondrial Lon is a stress protein and is induced by a number of stresses, such as hypoxia, oxidative, and unfolded protein stress 10,12,14,15 . Molecular chaperones, including mitochondrial chaperones, have been associated with enhanced cell survival under stress through inhibition of apoptotic cell death and increased stability of survival effectors that promote tumor growth [16][17][18] . Indeed, Lon downregulation causes loss of mitochondrial function, early embryonic lethality, reduced cell proliferation, and apoptosis 12,[19][20][21] . Lon upregulation is required for cancer cell survival and tumorigenesis by regulating stress responses induced by oxidative conditions 12,20,22 . However, the molecular mechanism of how Lon regulates apoptosis remains largely unclear. We recently identified heat-shock protein 60 (Hsp60) and mitochondrial Hsp70 (mtHsp70) as chaperone Lon clients by utilizing a proteomic approach 17 . Interestingly, the ability of increased Lon to inhibit apoptosis is dependent on Hsp60, which binds p53 to inhibit apoptosis 16,17 . These findings allowed us to pursue the detailed mechanism of how chaperone Lon directly regulates apoptosis by interacting with p53.
To our knowledge, the present study demonstrates for the first time that p53 is bound by Lon in the mitochondrial matrix to control apoptosis. In this study, we demonstrated that Lon interacts with p53 in the mitochondrial matrix and restrains the apoptosis induced by p53 under oxidative stress by reducing the mRNA expression of p53 target genes and rescuing the loss of mitochondrial membrane potential (Δψm) and the release of cytochrome C. The ATPase mutant (K529R) of mitochondrial Lon decreased the interaction with p53, reduced the mitochondrial localization of p53, and failed to inhibit apoptosis, suggesting that the chaperone activity of Lon is important for the control of p53 protein level and apoptotic function by sequestering p53 in the mitochondrial matrix. In addition, the level of cytoplasmic p53 significantly correlates with that of mitochondrial Lon in oral cancer patients. Thus our findings suggest that targeting the chaperone activity of mitochondrial Lon will increase the efficacy of p53-induced apoptosis in cancer therapy.
Overexpression of mitochondrial Lon increases the accumulation of mitochondrial p53 and restrains p53-dependent apoptosis under oxidative stress
We previously showed that mitochondrial Lon physically interacts with the Hsp60-mtHsp70 complex and regulates apoptosis through Hsp60 17 . Since Hsp60 binds p53 to restrain its apoptotic function in the cytosol and mitochondria 16 , we asked whether Lon regulates p53-induced apoptosis under stress. We first found that the levels of Lon and p53 are increased in the cytosol and mitochondria after H2O2 and rotenone treatment (Fig. 1a, b, and Supplemental Figure S1). The levels of cytosolic and mitochondrial p53 were further increased when Lon was overexpressed in cells (Fig. 1b), and only mitochondrial p53 was decreased when Lon was downregulated under oxidative stress (Fig. 1c), suggesting that the mitochondrial localization of p53 is regulated by Lon under oxidative stress. Since p53 protein can translocate to the mitochondrial matrix in response to oxidative stress 2 , we examined the mitochondrial colocalization of p53 with Lon under oxidative stress. Immunofluorescence analysis showed no significant colocalization of p53 (green) and Lon (red) in the control cells. Following exposure to H2O2, both p53 and Lon accumulated in the mitochondria and appeared colocalized with each other as yellow spots (Fig. 1d), raising the possibility of an interaction between Lon and p53 in mitochondria under oxidative stress. Next, we found that the levels of the pro-apoptotic proteins Bax and p53, and the release of cytochrome C and SMAC/Diablo from mitochondria, are increased after H2O2 treatment. However, this activation of apoptosis was largely inhibited when Lon was overexpressed, including a reduction of cytochrome C release (Fig. 1b). These results suggest that increased Lon restrains p53-induced apoptosis during oxidative stress. To confirm the role of Lon in p53-mediated apoptosis, terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) assays were performed. More TUNEL-positive cells were detected when p53 was overexpressed and the cells were treated with H2O2, compared to the vector control cells (a, b, and d in Fig. 1e). However, the TUNEL-positive signals in p53-overexpressing cells and in H2O2-treated cells were decreased when Lon was also overexpressed (c and e in Fig. 1e). We further found that the positive signals repressed by Lon overexpression were reversed when p53 was overexpressed (f in Fig. 1e). Consistently, the TUNEL-positive signals induced by decreased Lon were rescued when p53 was knocked down under H2O2 treatment (g in Fig. 1e). Mitochondrial membrane potential (MMP) analysis showed that MMP was decreased in p53-overexpressing cells compared to the vector control cells, and the MMP in p53-overexpressing cells was rescued when Lon was overexpressed (Fig. 1f). These data show that increased Lon protein restrains p53-dependent apoptosis in cancer cells under oxidative stress, and that Lon is required for mitochondrial p53 accumulation.
Chaperone Lon interacts with p53 in the mitochondrial fraction under oxidative stress
Since Lon physically interacts with Hsp60-mtHsp70 to restrain its apoptotic function in mitochondria, we asked whether p53 is a direct client protein of chaperone Lon. The association between Lon and p53 was first examined by co-immunoprecipitation (Co-IP) experiments (Fig. 2a, b), and endogenous p53 was able to be co-immunoprecipitated with Lon under oxidative stress (Fig. 2c), suggesting that mitochondrial Lon interacts with p53 in vivo. To further confirm this finding, we performed Co-IP experiments using the isolated mitochondrial fraction. Consistently, the expression of p53 is increased in the mitochondrial fraction when Lon is overexpressed, and overexpression of p53 triggers its accumulation in the mitochondrial fraction (Fig. 2d, left panel). Lon was able to be co-immunoprecipitated with p53 in the mitochondrial fraction, and vice versa (Fig. 2d, right panel). Furthermore, His-tag and GST-tag pull-down assays confirmed a direct interaction between Lon and p53 in vitro (Fig. 2e). These data demonstrate that mitochondrial chaperone Lon interacts with p53 under oxidative stress.
ATPase activity of Lon is required for the interaction with p53, by which it attenuates apoptosis in mitochondria
To explore whether the chaperone activity of Lon is required for the interaction with p53, we used the ATPase mutant of Lon, Lon-K529R 23 , to examine whether the ATPase activity is critical to the interaction with p53. The association between Lon and p53 was examined by Co-IP experiment. The result showed that the Lon-K529R mutant significantly abolished the interaction with p53, but not the protease mutant, Lon-S855A (Fig. 3a). Consistently, endogenous p53 was able to be co-immunoprecipitated with Lon-WT under rotenone treatment but not with the Lon-K529R mutant (Fig. 3b), suggesting that the ATPase activity of mitochondrial Lon is required for the binding with p53. To confirm that the chaperone activity of Lon is important in p53-mediated apoptosis regulation, TUNEL assay was performed using overexpression of the Lon-K529R mutant. The TUNEL-positive cells in p53-overexpressing cells were significantly decreased when Lon was also overexpressed. However, the Lon-K529R mutant failed to inhibit p53-induced apoptosis. Similarly, the Lon-K529R mutant failed to inhibit H2O2-induced apoptosis (Fig. 3c). However, the TUNEL-positive signals were reduced when p53 was knocked down under H2O2 treatment, and the positive signals induced by overexpression of the Lon-K529R mutant were rescued when p53 was knocked down under H2O2 treatment (Fig. 3c). These data showed that mitochondrial Lon interacts with p53 through its chaperone activity, which is required to restrain p53-dependent apoptosis in cancer cells under oxidative stress.

Fig. 1 Increased Lon increases the accumulation of mitochondrial p53 and inhibits p53-dependent apoptosis under oxidative stress in cancer cells. a Immunoblotting analysis of the increase of p53 and Lon in the mitochondrial fraction under oxidative stress. HSC3 cells were treated with 200 μM H2O2 for 4 h. Immunoblotting was performed using the indicated antibodies. The purity of each cell fraction was monitored by immunoblotting for cytoplasmic (Actin) and mitochondrial (VDAC) markers. b Increased mitochondrial Lon restrains p53-dependent apoptosis in cancer cells under oxidative stress. HSC3 cells transfected with or without pcDNA3-Myc-Lon were treated with 200 μM H2O2 for 4 h, then recovered for 4 h. Whole-cell lysates were used to purify the mitochondrial and cytosolic fractions. Apoptosis-associated proteins were detected by western blot analysis using the indicated antibodies. The purity of the mitochondrial fraction was monitored by immunoblotting for mitochondrial (VDAC) markers, and anti-Actin and anti-VDAC antibodies were used as loading controls.

Fig. 2 Lon interacts with p53 shown by co-immunoprecipitation. a 293T cells were transiently transfected with the plasmids encoding Myc-Lon and p53 followed by co-immunoprecipitation with anti-Myc and anti-p53, respectively. b Whole-cell lysates from HSC3 cells transfected with the plasmids encoding Myc-Lon or p53 were immunoprecipitated with anti-p53 or anti-Lon antibodies. The immunoprecipitation complex was analyzed by western blotting using the indicated antibodies. IP, immunoprecipitation. c Lon interacts with endogenous p53 under oxidative stress shown by co-immunoprecipitation. HSC3 cells were treated with 2 mM H2O2 for 6 h or 2 μM rotenone for 6 h. Whole-cell lysates were analyzed by western blotting using the indicated antibodies (left panel). Whole-cell lysates from HSC3 cells treated with H2O2 or rotenone were immunoprecipitated with anti-p53 or anti-Lon antibodies. The immunoprecipitation complex was analyzed by western blotting using the indicated antibodies. d Lon interacts with p53 in the mitochondrial fraction shown by co-immunoprecipitation. HSC3 cells were transiently transfected with the plasmids encoding Myc-Lon or p53. Whole-cell lysates from HSC3 cells were used to purify the mitochondrial and cytosolic fractions followed by co-immunoprecipitation with anti-p53 and anti-Myc, respectively. The mitochondrial fraction and the immunoprecipitation complex from transfected HSC3 cells were subjected to immunoblotting using the indicated antibodies. The purity of the mitochondrial fraction was monitored by immunoblotting for mitochondrial (VDAC) markers. e Mitochondrial Lon interacts with p53 shown by pull-down assay. Direct interaction between mitochondrial Lon and p53 was verified by His-tag and GST-tag pull-down assays. The His-Lon fusion protein and GST-p53 were added, and the His-tag proteins were pulled down by anti-His antibody and the GST-tag proteins were pulled down by anti-GST antibody. The pulled-down complex was subjected to immunoblotting using the indicated antibodies. The His-tag/GST-tag proteins were pulled down by anti-His/anti-GST antibody as a positive control. The His- and GST-fusion proteins added are shown on SDS/PAGE stained with Coomassie brilliant blue (right panel).
Increased chaperone Lon hampers the transcription-dependent apoptotic function of p53 under oxidative stress by retaining p53 in the mitochondria
Since p53 is known for its ability to orchestrate the cell cycle and apoptosis by a transcription-dependent mechanism 24,25 , we examined whether the suppression of p53-dependent apoptosis by mitochondrial Lon occurs through affecting the distribution of p53 between the nucleus and mitochondria. Thus we checked whether the transcription-dependent function of nuclear p53 is affected by increased chaperone Lon. The mRNA expression of the Lon gene was significantly increased when Lon was overexpressed (Fig. 4a). The mRNA expression of the p53 gene was significantly increased when p53 was overexpressed and the cells were treated with H2O2, but not in Lon-overexpressing or p53-knockdown cells (Fig. 4b). This result indicates that Lon overexpression does not affect p53 gene expression. We then checked the expression of several p53 target genes, such as Puma/Bim in the intrinsic apoptosis pathway 24 , Fas, which is involved in the extrinsic pathway 25 , and p53R2, which is required for mitochondrial DNA stability and cell protection from oxidative stress 26,27 .
The results showed that increased Lon-WT and the Lon-S855A mutant reduce the induction of p53-dependent apoptotic genes under oxidative stress. However, overexpression of the Lon-K529R mutant increased the expression of p53-dependent apoptotic genes under the same conditions (Fig. 4c). Consistently, increased mitochondrial Lon reduced the induction of p53-dependent apoptotic genes when p53 was overexpressed. However, the Lon-K529R mutant largely failed to inhibit the expression of p53 target genes (Fig. 4d and Supplemental Figure S2). These results indicate that the ATPase activity of chaperone Lon is indeed important for p53-dependent apoptosis; the mechanism of apoptotic inhibition by increased Lon under oxidative stress is associated with sequestering some p53 in mitochondria, which also lowers the nuclear distribution of p53.
Chaperone Lon-mtHsp70 is required for mitochondrial p53 accumulation, which is important to prevent the cytosolic pool of p53 from proteasome-dependent degradation

We found that the levels of p53 in cytosol and mitochondria were further increased when Lon was overexpressed in cells under oxidative stress (Fig. 1b), and the level of p53 in mitochondria, but not in cytosol, was decreased when Lon was downregulated under oxidative stress (Fig. 1c), suggesting that the mitochondrial accumulation and level of p53 are regulated by increased Lon under oxidative stress. Since previous findings showed that both overexpression and knockdown of Lon induce the production of ROS 12,28 , we tried to understand the mechanism of p53 accumulation induced by increased Lon without H2O2 treatment. Consistently, we observed that the protein level of p53 was increased when Lon was overexpressed in cancer cells (Fig. 5a), and knockdown of Lon caused a decrease in p53 level (Fig. 5b). We confirmed that p53 accumulated in the mitochondrial fraction when p53 was overexpressed in cells (Fig. 5c). p53 was increased in the cytosolic and mitochondrial fractions when Lon was overexpressed (Fig. 5d, left panel), and the level in mitochondria was decreased when Lon expression was inhibited by short hairpin RNA (shRNA; Fig. 5d, right panel). Intriguingly, we found that p53 still remained in the cytosolic fraction when Lon was knocked down (Fig. 5d, right panel). These data suggest that Lon may be involved in the stability of p53 in the mitochondrial matrix, which may affect the distribution or transport of p53 between cytosol and mitochondria. To test this idea, we knocked down mtHsp70 (mortalin) to examine the mechanism of p53 accumulation induced by increased Lon, because mtHsp70 is involved in both the protein import and folding processes of mitochondrial protein homeostasis and was recently identified as a chaperone client of Lon 17 . We confirmed that p53 was increased in the cytosolic and mitochondrial fractions when Lon was overexpressed, and that the levels of p53 and mtHsp70 in mitochondria were decreased when Lon was knocked down by shRNA (Fig. 5e). However, the mitochondrial p53 accumulation induced by Lon overexpression was diminished when mtHsp70 was knocked down, suggesting that mtHsp70 is required for p53 accumulation in the mitochondria when Lon is overexpressed (Fig. 5e). Since we found that Lon-mtHsp70 is required for p53 accumulation in the mitochondria but not for the p53 level in the cytosol, we next asked whether the level of mitochondrial Lon regulates the distribution between cytosolic and mitochondrial p53. To answer this question, we first treated the Lon-knockdown cells with a proteasome inhibitor, MG132; the level of p53 was gradually recovered and increased in the cells (Supplemental Figure S3), indicating that the p53 level regulated by Lon is mediated by prevention of proteasome-dependent degradation in the cytoplasm. Under MG132 treatment, p53 was increased in both the cytosolic and mitochondrial fractions when Lon was overexpressed (Fig. 5f), suggesting that cytosolic and mitochondrial p53 are both affected by cytosolic proteasome-dependent degradation. Consistently, p53 accumulated in the cytosolic fraction under MG132 treatment when Lon, Lon-shRNA, or the Lon-K529R mutant was overexpressed (Fig. 5f, left panel). However, p53 was decreased in the mitochondrial fraction when Lon was knocked down by shRNA or when the Lon-K529R mutant was overexpressed, even under MG132 treatment (Fig. 5f, right panel). These results indicate that the chaperone activity of mitochondrial Lon and mtHsp70 is important for accumulating p53 in the mitochondria and for preventing the cytosolic pool of p53 from proteasome-dependent degradation.

Fig. 3 The chaperone activity of Lon is required for the interaction with p53 and the inhibition of p53-dependent apoptosis in cancer cells. a The K529 residue of mitochondrial Lon is required for the interaction with p53. Whole-cell lysates from HSC3 cells transfected with the plasmids encoding p53 and wild-type pcDNA3-Myc-Lon (Lon-WT), an ATPase mutant (LonK529R), a proteolytic mutant (LonS855A), or an ATPase/proteolytic mutant (LonK529R/LonS855A) were immunoprecipitated with anti-Myc antibodies. The immunoprecipitation complex was analyzed by immunoblotting using the indicated antibodies. b The K529 residue of mitochondrial Lon is required for the interaction with endogenous p53 under oxidative stress. Whole-cell lysates from HSC3 cells transfected with the plasmids encoding Lon-WT and the LonK529R mutant were analyzed by western blotting using the indicated antibodies (left panel). Whole-cell lysates from HSC3 cells transfected with the plasmids encoding Lon-WT and the LonK529R mutant under rotenone treatment (10 μM for 8 h) were immunoprecipitated with anti-Myc-Lon antibodies. The immunoprecipitation complex was analyzed by western blotting using the indicated antibodies. c The K529 residue of mitochondrial Lon is required for the inhibition of p53-dependent apoptosis under oxidative stress, shown by TUNEL assay. Apoptosis in HSC3 cells was induced by p53 overexpression. HSC3 cells were transfected with sip53 and/or the plasmids encoding p53, wild-type pcDNA3-Myc-Lon, or an ATPase mutant of Lon (LonK529R). TUNEL assay was applied to examine the effect of the K529 residue of mitochondrial Lon on p53-induced apoptosis under oxidative stress (200 μM H2O2 for 4 h). TUNEL-positive cells (green fluorescence) were counted in the transfected cells. DAPI was used for nuclear staining. Scale bar, 20 μm. The error bars shown in the right panel represent the standard deviation from three different experiments. **p < 0.01
To understand the interaction between mitochondrial Lon and p53, we built a complex model structure using protein-protein docking and an information-driven approach. Indeed, the most stable complex model showed that the p53 core structure binds into a prominent valley formed between the ATPase domain and the alpha-subdomain of mitochondrial Lon (Fig. 6a, b). This observation is consistent with a previous study showing that the ATP-dependent helicase domain of SV40 Large T-antigen binds to the core structure (DNA-binding domain) of p53 29 . Since the K529R mutant of Lon loses the ability to hydrolyze ATP (ATPase activity) but retains the ability to bind ATP 23 , the conformational change of the mutant will be inhibited when ATP enters and occupies the ATP pocket, which may explain the steric effect of the ATPase mutant of chaperone Lon on the binding with p53. In addition, surface-charge representations of the Lon-p53 complex model structure were built (Fig. 6b, c). The structure shows that the surface-charge property of the p53-binding valley of Lon is positive (Fig. 6b), whereas the charge property of p53 in the Lon-binding patches is negative (Fig. 6c). This result implies that electrostatic forces may mediate the interaction between mitochondrial Lon and the p53 core structure.
The level of cytoplasmic p53 protein correlates with the level of mitochondrial Lon in oral cancer patients
We found that the total p53 level is controlled by the level of mitochondrial Lon in several cancer cell lines (Fig. 5a, b) and that the mitochondrial accumulation of p53 is regulated by Lon (Fig. 5e, f), suggesting that the cytoplasmic level (including the mitochondrial fraction) of p53 protein correlates with the level of mitochondrial Lon. We then examined whether the p53 stability/level regulated by mitochondrial Lon is clinically relevant. First, we found that the expression of Lon protein correlates with that of p53 in seven oral cancer cell lines (Fig. 7a). To establish the clinical significance of increased Lon and p53 levels, 123 samples of tumor tissues from oral squamous cell carcinoma (OSCC) patients were used to determine the Lon and p53 expression patterns by immunohistochemical (IHC) analysis. The main clinicopathological characteristics of the 123 patients of this study are detailed in Table S1. Mitochondrial Lon was found in the cytoplasm of cancer cells (a and b in Fig. 7b); p53 was identified in the nucleus only (c in Fig. 7b) or in both the nucleus and cytoplasm of cancer cells (d in Fig. 7b), which is consistent with our previous reports 12,30 . High expression (IHC level, moderate and strong) of Lon was observed in the majority of OSCC tumor tissues (85/123, 69.1%); high expression of p53 was seen in nearly half of the tumor tissues (64/123, 52.0%) (Table S2). We next examined the association between Lon and p53 expression in OSCC tissues using Fisher's exact test and measured the correlation in the contingency table. The results showed that p53 expression in either nucleus/cytoplasm or nucleus only shows no significant correlation with Lon expression (P = 0.879 and P = 0.264, respectively, Tables S2 and S3). However, cytoplasmic p53 expression was significantly associated with Lon expression (Table 1). Consistently, when the expression pattern was categorized into three groups, strong, moderate, and weak expression, the correlation between Lon and p53 expression in nucleus/cytoplasm was statistically significant (p = 0.05) with a Cramer's V coefficient of 0.44 (Table S4), but not in nucleus or cytoplasm or in nucleus only (Tables S5 and S6). Taken together, these results indicate that cytoplasmic p53 protein is correlated with the level of Lon protein in OSCC patients.
Fig. 4 Increased mitochondrial Lon restrains the transcription-dependent function of p53 under oxidative stress through its chaperone activity. a, b The mRNA expression of Lon and p53 was analyzed by quantitative real-time PCR. c HSC3 cells were transfected with the plasmids encoding wild-type pcDNA3-Myc-Lon, a proteolytic mutant (LonS855A), or an ATPase mutant of Lon (LonK529R) under oxidative stress (200 μM H2O2 for 4 h). The mRNA expression of the p53-targeted genes Bim, Fas, and p53R2 was analyzed by quantitative real-time PCR. The results are presented as fold increase relative to vector-transfected cells (set to 1). Data are presented as mean ± SD of at least three independent experiments. The error bars shown in the panel represent the standard deviation from three different experiments. d HSC3 cells were transfected with the plasmids encoding p53 and/or wild-type pcDNA3-Myc-Lon or an ATPase mutant of Lon (LonK529R). The mRNA expression of the p53-targeted genes Puma, Bim, Fas, and p53R2 was analyzed by quantitative real-time PCR. The results are presented as fold increase relative to vector-transfected cells (set to 1). Data are presented as mean ± SD of at least three independent experiments. The error bars shown in the panel represent the standard deviation from three different experiments. *p < 0.05 and **p < 0.01.
Fig. 5 Chaperone Lon-mtHsp70 is required for mitochondrial p53 accumulation, which is important to protect the cytosolic pool of p53 from proteasome-dependent degradation in cancer cells. a, b Mitochondrial Lon is important for the stability/level of p53 protein in cancer cells. For the overexpression experiment, oral cancer cells were transfected with the plasmid encoding Myc-tagged Lon. For the knockdown experiment, Lon expression was inhibited by Lon-shRNA transfection. Immunoblotting was performed using the indicated antibodies. c Immunoblotting analysis of increased p53 in the mitochondrial fraction. p53 was overexpressed in OEC-M1 cells by transfection with the plasmid encoding p53. Immunoblotting was performed using the indicated antibodies. The purity of each cell fraction was monitored by immunoblotting for cytoplasmic (Actin) and mitochondrial (VDAC) markers. d Chaperone Lon is required for mitochondrial p53 accumulation. OEC-M1 cells were transfected with the plasmid encoding Myc-tagged Lon or Lon-shRNA. Immunoblotting was performed using the indicated antibodies. The purity of each cell fraction was monitored by immunoblotting for cytoplasmic (Actin) and mitochondrial (VDAC and COX-4) markers. e Chaperone Lon-induced mitochondrial p53 accumulation is dependent on mtHsp70. OEC-M1 cells were transfected with the plasmid encoding Myc-tagged Lon, Lon-shRNA, and/or mtHsp70-shRNA. Immunoblotting was performed using the indicated antibodies. The purity of each cell fraction was monitored by immunoblotting for cytoplasmic (Actin) and mitochondrial (VDAC and COX-4) markers. f The chaperone activity of Lon is required for mitochondrial p53 accumulation and the protection of the cytosolic pool of p53 from proteasome-dependent degradation. OEC-M1 cells were transfected with the plasmid encoding WT-Lon, Lon-shRNA, or the ATPase mutant of Lon (LonK529R). The transfected cells were treated with 10 μM MG132 for 2 h. Immunoblotting was performed using the indicated antibodies. The purity of each cell fraction was monitored by immunoblotting for cytoplasmic (Actin, left) and mitochondrial (VDAC and COX-4, right) markers.
Discussion
In this study, we have shown that mitochondrial Lon interacts with p53 and retains it in the mitochondrial matrix to restrain the apoptosis induced by oxidative stress. The ATPase mutant of Lon shows decreased interaction with p53 and fails to inhibit apoptosis under oxidative stress. This study reveals that Lon overexpression inhibits apoptosis through its chaperone activity by interacting with and sequestering p53 in mitochondria.
Molecular chaperones of the HSP family play important roles in promoting cell survival and tumor growth 18,31 . We previously identified a number of candidate client proteins of the mitochondrial chaperone Lon by using a proteomic approach, and identified NDUFS8, Hsp60, and mtHsp70 as client proteins of mitochondrial Lon 12,17 . Mitochondrial Lon physically interacts with the Hsp60-mtHsp70 complex, and the protein stability/level of Hsp60 and mtHsp70 depends on the level of Lon under oxidative stress 17 . Hsp60 binds p53 to restrain its apoptotic function: the depletion of Hsp60 increases the level of p53 but does not affect the MDM2 level 16 , suggesting that there is an unknown mechanism regulating the p53 protein level beyond the nucleus and cytoplasm. Mitochondrial Hsp60 inhibits apoptosis by antagonizing cyclophilin D-dependent mitochondrial permeability transition 32 , increasing the stabilization of survivin, and restraining p53 function 16 . The scenarios for the mechanism underlying Lon-mtHsp70-inhibited apoptosis through binding with and stabilizing p53 are depicted in Fig. 7c. We found that increased mitochondrial Lon inhibits p53-mediated apoptosis through both transcription-independent and transcription-dependent mechanisms. p53-mediated cell death involves transcription-dependent and -independent regulation 24,33,34 . Regarding the transcription-independent mechanism, in response to oxidative stress the cytoplasmic pool of p53 protein localizes to the outer membrane of mitochondria, where it activates transcription-independent apoptosis by physically inhibiting anti-apoptotic members (Bcl2, BclxL) as well as activating pro-apoptotic members (Bak, Bax) of the mitochondrial outer membrane permeabilization (MOMP) regulators 5,34 . To our knowledge, the present study for the first time demonstrates that p53 is bound by Lon in the mitochondrial matrix to control MOMP and apoptosis.
Fig. 7 The level of cytoplasmic p53 protein correlates with the level of mitochondrial Lon in oral cancer. a The protein levels of Lon and p53 in oral cancer cell lines. The extracts of oral cancer cell lines were immunoblotted with the indicated antibodies and with an antibody to Tubulin as a loading control. b Immunohistochemical analysis of Lon and p53 expression in OSCC patients. Representative immunohistochemical analysis of Lon and p53 was performed using paraffin-embedded sections of OSCC. The representative results shown here are positive staining of Lon (a, 200×; b, 400×), nuclear staining of p53 (c), and nuclear/cytoplasmic staining of p53 (d) in oral cancer tissues. The microscopic magnification of the p53 staining was 400×. Scale bar, 50 μm. c Model of p53 accumulation in mitochondria and apoptosis inhibition by chaperone Lon in cancer cells. Upon Lon overexpression and/or oxidative stress, mitochondrial Lon binds p53 and induces the accumulation of p53 in the matrix through its chaperone activity (residue K529), inhibiting p53-mediated apoptosis through both transcription-independent and -dependent mechanisms. Mitochondrial Lon sequesters p53 to inhibit the opening of the MPTP on the outer membrane and the MPTP-cyclophilin D complex on the inner membrane. Meanwhile, mitochondrial Lon retains p53 in the matrix to reduce the transcription-dependent function of the nuclear distribution of p53. The stability/level of cytosolic p53 is increased by the prevention of proteasome-dependent degradation through the sequestering of p53 by Lon-mtHsp70 in the mitochondrial matrix.
Table 1 The contingency table of Lon and cytoplasmic p53 expression.
Indeed, we observed that increased mitochondrial Lon rescued the loss of Δψm and the release of cytochrome c induced by p53. This finding is consistent with the observation that p53 accumulates in the mitochondrial matrix and triggers MPTP opening and necrosis by interacting with cyclophilin D under oxidative stress 2 . The MPTP is a regulated protein channel spanning the inner and outer mitochondrial membranes. Previous reports showed that matrix Lon interacts with proteins located at the inner membrane, such as prohibitin, COX, and NDUFS8 12,17,35,36 . We suggest that Lon may serve as the center of a recruiter complex that traps p53 and disrupts its interaction with cyclophilin D, by which it could prevent apoptosis by inhibiting the opening of the MPTP, high levels of cytosolic Ca 2+ , and ROS accumulation during oxidative stress.
This study indicates that the chaperone activity of mitochondrial Lon is important for controlling the p53 protein level by translocating and sequestering p53 in mitochondria under oxidative stress. Consistently, mutations in the AAA+ domain of Lon cause the aggregation of mtDNA-encoded cytochrome c oxidase subunit II, which reduces the function of mitochondrial respiration in CODAS syndrome patients 37 . Thus, a possible mechanism underlying the Lon-mediated stabilization of p53 is the control of p53 translocation into mitochondria.
Accumulating studies have described at least three different mechanisms by which p53 translocates into the matrix of mitochondria: the two classical import systems utilizing mitochondrial targeting sequences (MTS) or chaperone carriers, and one mechanism that involves a redox/respiration-dependent import system 38,39 . However, no MTS has been found in the N- or C-terminal domain of the p53 protein. Thus, the possibility that p53 translocates into the matrix via chaperone carriers deserves more attention. Indeed, the mitochondrial trafficking of cytosolic p53 is mediated by the mtHsp70/Tid-1 complex under DNA damage and hypoxia 40,41 . Our previous work showed that the mtHsp70/Hsp60 complex acts as a Lon-associated complex whose protein level and stability depend on Lon 17 . Therefore, increased mitochondrial Lon may promote the translocation of p53 into mitochondria by stabilizing the chaperone carriers, the Hsp60/mtHsp70/Tid-1 complex. Consequently, mitochondrial Lon-mtHsp70 restrains p53-dependent apoptosis under stress, including the function of nuclear p53-dependent transcription and cytosolic p53-dependent mitochondrial targeting.
In summary, we identified and validated p53 as a chaperone client of Lon, along with Hsp60 and mtHsp70. This study is the first to report a function for p53 translocated into the mitochondrial matrix in apoptosis regulation. Mitochondrial Lon retains p53 in the mitochondrial matrix through its chaperone activity and inhibits p53-mediated apoptosis by transcription-independent and -dependent mechanisms. We have shown that mitochondrial Lon interacts with p53 and retains it in the mitochondrial matrix to restrain the apoptosis induced by oxidative stress, and that the ATPase mutant of Lon shows decreased interaction with p53 and fails to inhibit apoptosis under oxidative stress. Our studies provide new insights into the chaperone function of Lon in apoptotic cell death exerted by directly sequestering p53 in mitochondria, and suggest that targeting the chaperone activity of mitochondrial Lon may increase the efficacy of p53-induced apoptosis in cancer therapy.
Patients and clinical samples
Tissue specimens of 123 patients with OSCC were chosen for IHC analysis based on the availability of archival human oral tissue blocks from diagnostic resection specimens in the Departments of Pathology at Mackay Memorial Hospital, Taipei, Taiwan, with approval from the Institutional Review Board (IRB number: 13MMHIS188). The main clinical characteristics of the 123 patients selected for this study are detailed in Table S1. All experiments were performed in accordance with relevant guidelines and regulations. The levels of Lon and p53 protein expression were categorized as not available, low, moderate, and strong based on the scores of IHC staining.
Antibodies
Antibodies to human Lon were produced as described previously 28 . The other antibodies used in this study were purchased as indicated: antibodies to p53 and Flag from Sigma; Myc (9E10) from Millipore; α-tubulin (ab4074) and COX4 (ab16056) from Abcam (Cambridge, MA, USA); IκB alpha and VDAC from Cell Signaling Technology (Beverly, MA, USA); Bcl-2, Bax, and cytochrome C from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA); GAPDH and beta-actin from GeneTex (Hsinchu, Taiwan).
Western blot analysis
Western blot analysis was performed as described previously 12,17 .
Isolation of mitochondria fraction
Mitochondrial fractions were isolated with the Mitochondria Isolation Kit for Cultured Cells (Thermo Fisher Scientific, 89874) according to the manufacturer's instructions. After collecting the mitochondrial fraction from cells, the mitochondrial pellets were resuspended in lysis buffer (NETN buffer with protease inhibitors: 100 mM PMSF, 5 µg/ml aprotinin, 5 µg/ml leupeptin) on ice for 30 min and centrifuged at 12,000 × g for 15 min. The supernatant was then collected and used for subsequent western blotting.
Co-immunoprecipitation
Cells were lysed in NETN (150 mM NaCl, 1 mM EDTA, 20 mM Tris-Cl (pH 8.0), 0.5% NP-40) containing protease and phosphatase inhibitors (1.0 mM sodium orthovanadate, 50 μM sodium fluoride). Immunoprecipitation was performed by incubating primary antibodies with cell lysates at 4 °C overnight, followed by the addition of secondary antibody and protein A/G-agarose (Calbiochem) for two additional hours with slow agitation and centrifugation for 15 s. The pellets were washed three times with NETN buffer containing protease inhibitor cocktail (Roche) and examined for binding partners by western blotting.
Immunofluorescence
Cells were plated on glass coverslips placed in a 12-well culture dish. When the cells had attached to the surface and spread well, they were washed with cold phosphate-buffered saline (PBS) and then fixed with a pre-cooled methanol/acetone (1:1, v/v) mixture for 15 min at room temperature. Fixed cells were washed with PBS and permeabilized with 0.5% (v/v) Triton X-100 in PBS for 15 min at room temperature. Cells on coverslips were incubated with the indicated antibodies, anti-Lon (1:400) and anti-p53 (1:200), overnight at 4 °C. The following day, the fixed cells were washed three times with 0.5% Triton X-100 in PBS and incubated with Alexa 488-conjugated and Alexa 594-conjugated anti-mouse or anti-rabbit secondary antibodies. Finally, coverslips were mounted with ProLong® Gold Antifade Reagent with DAPI (Invitrogen, Carlsbad, CA) for 10 min at room temperature. Fluorescent images were acquired with an Olympus BX51 fluorescence microscope.
Apoptosis assay
Apoptosis was analyzed by TUNEL staining or MMP measurement. Cell apoptosis was detected by TUNEL assay according to the manufacturer's instructions (TaKaRa BIO, Shiga, Japan) and was performed as described previously 12,42 . Apoptotic cells were also analyzed by flow cytometry after DiOC6(3) (AnaSpec, Inc.) staining. Flow cytometry and data analysis were carried out on a FACSCalibur instrument (Becton Dickinson), with excitation = 488 nm and emission = 530 nm (FL1), using the Lysis program.
Protein structure and modeling
A structural model of human Lon was built using the I-TASSER package 43 , which performs iterative fragment assembly simulation for protein 3D structure prediction and refinement. The best predicted structure was used for further application. Docking of the p53 DNA-binding domain (PDB code: 1TSR) to the structure of the human Lon ATPase domain from the predicted model was initially carried out using the ZDOCK server 44 , which employs rigid-body docking and utilizes a scoring function based on pairwise shape complementarity, desolvation, and electrostatic energies. No residue constraints were supplied as inputs for the docking calculation. The structure of the human Lon ATPase domain was assigned as the receptor in the docking calculation. During rigid-body energy minimization, >500 structures were calculated and the 100 best structures based on the intermolecular energy were used for semiflexible simulated annealing. Docked structures corresponding to the 100 best structures with the lowest intermolecular energies were generated. Finally, the top docking model predicted by ZDOCK was selected for our biochemical interpretation.
Statistical methods
We examined the significance of the IHC staining association between Lon and p53 protein using Fisher's exact test and considered the association statistically significant if the p value was ≤0.05. We measured the correlation in the contingency table using Cramer's V coefficient, which was calculated with the CramersV function in the R-based lsr package. All data were analyzed using the R statistical software (version 3.1.1). The parametric Student's t test was used to judge the significance of differences between conditions of interest. In general, a p value of <0.05 was considered statistically significant (*p < 0.05, **p < 0.01, and ***p < 0.001). | 2018-06-14T13:28:31.636Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "20cf394ec3f426bb7f24c3f141e96c8dde69a187",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41419-018-0730-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20cf394ec3f426bb7f24c3f141e96c8dde69a187",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
18123113 | pes2o/s2orc | v3-fos-license | A Node Influence Based Label Propagation Algorithm for Community Detection in Networks
Label propagation algorithm (LPA) is an extremely fast community detection method and is widely used in large scale networks. In spite of the advantages of LPA, the issue of its poor stability has not yet been well addressed. We propose a novel node influence based label propagation algorithm for community detection (NIBLPA), which improves the performance of LPA by improving the node order of label updating and the mechanism of label choosing when more than one label is contained by the maximum number of nodes. NIBLPA can get more stable results than LPA since it avoids the complete randomness of LPA. The experimental results on both synthetic and real networks demonstrate that NIBLPA maintains the efficiency of the traditional LPA algorithm and, at the same time, has superior performance to some representative methods.
Introduction
In recent years, complex networks have been widely used in many fields, such as social networks, World Wide Web networks, scientist cooperation networks, literature networks, protein interaction networks, and communication networks [1,2]. Extensive studies have shown that complex networks have the property of communities (modules or clusters), within which the interconnections are dense but between which the connections are sparse. This property reflects an extremely common and important topological structure of complex networks and is very important for understanding their structure and function.
A great number of community detection algorithms have been proposed in recent decades, including modularity optimization algorithms [3][4][5], spectral clustering algorithms [6][7][8], hierarchical partition algorithms [9,10], label propagation algorithms (LPA) [11,12], and information theory based algorithms [13]. Among them, LPA is by far one of the fastest community detection algorithms. The complexity of LPA algorithm is nearly linear time, and the design of the algorithm is simple, all of which make LPA algorithm receive quite a lot of attention from numerous scholars [14][15][16][17].
However, it still has a number of shortcomings; for example, the community detection results are unstable.
In this paper, we propose a novel node influence based label propagation algorithm for community detection in networks (NIBLPA), improving the performance of the traditional LPA algorithm by fixing the node sequence of label updating and changing the label choosing mechanism when more than one label is contained by the maximum number of nodes. Firstly, NIBLPA calculates the node influence value of each node as the importance measure of nodes in the network and fixes the node updating sequence in descending order of node influence value; secondly, NIBLPA processes the label propagation repeatedly until the community structure of the network is detected. During each label updating process, when more than one label is returned with the maximum number of nodes, instead of randomly selecting one label we introduce the label influence into the label computing formula to reselect the label from the set of labels with the same maximum number of nodes, to improve the stability. Finally, NIBLPA divides all nodes with the same label into a community. Extensive experimental studies using various networks demonstrate that our algorithm NIBLPA can get better community detection results compared with the state-of-the-art methods. The rest of this paper is organized as follows. Section 2 introduces the related work, including the traditional label propagation algorithm and the k-shell decomposition method. In Section 3, we introduce the main idea and the detailed process of our algorithm. The experimental results on various networks in Section 4 confirm the effectiveness of the algorithm. The conclusion is given in Section 5.
Related Work
A complex network can be modeled as a graph G = (V, E), where V = {v_1, v_2, ..., v_n} is the set of nodes, E = {e_1, e_2, ..., e_m} represents the edges between nodes, and n and m represent the number of nodes and edges in the network, respectively. Each edge in E corresponds to a pair of nodes in V. The label of v_i is denoted as c_i. N(v_i) represents the neighborhood set of v_i, and d_i is the degree of node v_i.
Label Propagation Algorithm for Community Detection in Networks.
In 2007, Raghavan et al. [11] applied the label propagation algorithm (LPA) to community detection, and the main idea of LPA is to use the network structure as the guide to detect community structures. LPA starts by giving each node a unique label, such as integers and letters, and in every iteration, each node changes its label to the one carried by the largest number of its neighbors. If more than one label is contained by the same maximum number of its neighbors, then randomly select one from them. In this repeated process, the dense groups of nodes change their different labels into the same label and nodes with the same label will be grouped into the same community.
The label of a node is updated according to the following formula:

c_i = arg max_c |N^c(v_i)|,    (1)

where N^c(v_i) represents the set of neighbors of v_i with label c. For a weighted graph G, the weight of the edge between v_i and v_j is denoted as w_ij, and the label updating formula is changed as follows:

c_i = arg max_c Σ_{v_j ∈ N^c(v_i)} w_ij.    (2)

However, the algorithm cannot guarantee convergence after several iterations. When the algorithm uses synchronous updating of the node labels (during the t-th iteration, a node adopts its label based only on the labels of its neighbors at the (t − 1)-th iteration), oscillations will occur in bipartite or nearly bipartite graphs. As shown in Figure 1, the labels on the nodes of a bipartite graph oscillate between two values. Therefore, Raghavan et al. [11] proposed asynchronous updating, where a node in the t-th iteration updates its label based partly on the labels at the t-th iteration of those neighbors that have already been updated in the current iteration and partly on the labels at the (t − 1)-th iteration of those not yet updated, to avoid the oscillation of labels. The design of the label propagation algorithm is simple and easy to understand. The process of the algorithm is presented in Algorithm 1.
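For concreteness, the following is a minimal Python sketch of LPA with asynchronous updating and random tie-breaking as described above (the function and variable names are ours, not taken from the original implementation):

```python
import random
from collections import Counter

def lpa(adj, max_iter=100, seed=0):
    """Asynchronous label propagation (Raghavan et al., 2007).

    adj: dict mapping each node to an iterable of its neighbors.
    Returns a dict mapping each node to its final community label.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # each node starts with a unique label
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                # random update order in each sweep
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue
            best = max(counts.values())
            top = [c for c, k in counts.items() if k == best]
            new = rng.choice(top)         # random tie-breaking: the source of instability
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                   # every node already holds a majority label
            break
    return labels
```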
In large networks with a huge number of nodes, each run of the algorithm may produce a different division because of the randomness of LPA. Among the resulting solutions, it is difficult to determine which is optimal. So the stability issue of LPA needs to be settled.
The k-Shell Decomposition Method.
There are many measures commonly used to calculate node importance, such as degree centrality [21], clustering coefficient centrality [22], and betweenness centrality [23]. The degree and clustering coefficient of nodes can only characterize local information of networks, and the complexity of computing betweenness is very high due to the need to calculate shortest paths. Kitsak et al. [24] pointed out that nodes with large k-shell values are very important for spreading dynamics on networks.
A k-shell is a maximal connected subgraph of G in which every vertex's degree is at least k. The k-shell value of node i, denoted by ks(i), indicates that node i belongs to a k-shell but not to any (k + 1)-shell. The k-shell decomposition method is often used to identify the core and periphery of networks. It starts by removing all nodes with only one link, until no such nodes remain, and assigns them to the 1-shell. In the same manner, it recursively removes all nodes with degree 2 (or less), creating the 2-shell. The process continues, increasing k until all nodes in the network have been assigned to a shell. The shells with higher indices lie in the network core. The k-shell decomposition method can be efficiently implemented with a linear time complexity of O(m), where m is the number of edges in the network.
The k-shell decomposition method is illustrated in Figure 2 on a simple network that can be divided into three different shells.
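The peeling procedure can be sketched in a few lines of Python; for readability this version forgoes the bucket structure that yields the O(m) bound:

```python
def k_shell(adj):
    """k-shell decomposition by iterative peeling.

    adj: dict node -> iterable of neighbors.
    Returns a dict node -> k-shell value.
    """
    degree = {v: len(adj[v]) for v in adj}
    shell = {}
    remaining = set(adj)
    k = 0
    while remaining:
        peel = [v for v in remaining if degree[v] <= k]
        if not peel:
            k += 1                 # no node left at this level; move to the next shell
            continue
        while peel:
            v = peel.pop()
            if v not in remaining:
                continue           # already peeled via another path
            shell[v] = k
            remaining.remove(v)
            for u in adj[v]:       # removing v lowers its neighbors' degrees
                if u in remaining:
                    degree[u] -= 1
                    if degree[u] <= k:
                        peel.append(u)
    return shell
```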
Our Method
Although the asynchronous updating method can avoid the oscillation of labels, there are still many limitations. As nodes are not updated simultaneously, the updating order of nodes has a crucial impact on the stability and the quality of the results. The randomness of LPA in selecting one label when more than one label is contained by the maximum number of nodes also makes the results unstable. We analyze traditional LPA on a toy sample network in Figure 3 [25]. There are two communities in the network; the numbers inside the nodes represent their labels. Assume that v1, v2, and v3 already share the same label 2, while v4, v5, and v6 still have unique labels. If we update v4 first and randomly choose label 2 as its new label, then update v6 before v5, all nodes are classified into the same community. On the other hand, if node v4 chooses label 6 and we then update node v5 before v6, the output will correspond to the right communities.
As seen from the above analysis, LPA is very sensitive to the node updating order and the label choosing method. In this section we propose solutions to overcome the issues discussed above and improve the traditional LPA algorithm.
The Basic Idea.
In the new algorithm, we choose the asynchronous updating method to avoid oscillation of labels in Figure 1. But the randomly determined label updating order of nodes affects the stability of the algorithm. We should order the nodes based on their importance for the network and the more important nodes should be updated earlier.
A node with a large k-shell value is located in the core of the network. However, in a network there are often many nodes with the same k-shell value, so nodes cannot be ranked effectively by the k-shell value alone. In general, a node with more connections to neighbors located in the core of the network is more important for the network. Inspired by these previous studies, we propose a novel centrality measure that considers the k-shell value and degree of the node itself together with its neighbors' k-shell values. The node influence of node v_i is defined as follows:

NI(v_i) = ks(i) + α × (Σ_{v_j ∈ N(v_i)} ks(j)) / d_i,    (3)

where α is a tunable parameter from 0 to 1, which is used to adjust the effect of its neighbors on the centrality of node v_i. We choose the node influence value as the measure of node importance, so we arrange nodes in descending order of node influence value. The fixed node updating sequence makes the algorithm more stable.
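A small Python sketch of this centrality follows. Note that the exact combination in Eq. (3) could not be recovered from the extracted text, so the formula below, own k-shell value plus an α-weighted mean of the neighbors' k-shell values, is an assumption consistent with the surrounding description:

```python
def node_influence(adj, shell, alpha=0.5):
    """Node influence, assuming Eq. (3) combines the node's own k-shell value
    with an alpha-weighted average of its neighbors' k-shell values.

    adj: dict node -> iterable of neighbors; shell: dict node -> k-shell value.
    """
    ni = {}
    for v in adj:
        d = len(adj[v])
        neigh = sum(shell[u] for u in adj[v]) / d if d else 0.0
        ni[v] = shell[v] + alpha * neigh   # assumed form; alpha in [0, 1]
    return ni
```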
The other random factor causing the instability of LPA is that, when more than one label is carried by the maximum number of neighbors, the algorithm randomly selects one of those labels to assign to the node. Instead of randomly selecting one of the labels contained by the maximum number of nodes, we improve the label updating formula using the information of the label influence.
The label influence of label c on node v_i is computed as follows:

LI(c, v_i) = Σ_{v_j ∈ N^c(v_i)} NI(v_j),    (4)

that is, the total node influence of v_i's neighbors carrying label c. The new formula of label updating is changed as follows:

c_i = arg max_{c ∈ C_max} LI(c, v_i),    (5)

where C_max denotes the set of labels that are simultaneously contained by the maximum number of nodes. When multiple labels are simultaneously contained by the maximum number of nodes, we recalculate the value of the labels contained by the greatest number of nodes according to (5) and choose the label with the maximum value to assign to node v_i.
The Steps of NIBLPA Algorithm.
The main steps of NIBLPA include initialization, iteration, and community division. NIBLPA can then be described as Algorithm 2.
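A compact Python sketch of Algorithm 2 is shown below, reusing k_shell() and node_influence() from the earlier sketches; the label-influence tie-breaking mirrors our reconstruction of Eqs. (4)-(5) and is likewise an assumption rather than the authors' verbatim code:

```python
from collections import Counter

def niblpa(adj, alpha=0.5, max_iter=100):
    """Sketch of NIBLPA: fixed update order by node influence, plus
    label-influence tie-breaking. adj: dict node -> set of neighbors."""
    shell = k_shell(adj)
    ni = node_influence(adj, shell, alpha)
    # fixed update order: descending node influence, ties broken by node id
    order = sorted(adj, key=lambda v: (-ni[v], v))
    labels = {v: v for v in adj}           # (1) initialization
    for _ in range(max_iter):              # (3) iteration, asynchronous updating
        changed = False
        for v in order:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue
            best = max(counts.values())
            top = [c for c, k in counts.items() if k == best]
            if len(top) > 1:               # Eq. (5): label influence decides ties
                li = {c: sum(ni[u] for u in adj[v] if labels[u] == c) for c in top}
                new = max(top, key=li.get)
            else:
                new = top[0]
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:
            break
    comms = {}                             # (4) community division by shared label
    for v, c in labels.items():
        comms.setdefault(c, []).append(v)
    return list(comms.values())
```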
We implement NIBLPA on the toy sample network in Figure 3 with α = 1. The decimals outside the nodes are the node influence values. Using our method on this network, the node updating sequence is fixed in descending order of node influence (nodes with equal node influence are ranked by their node IDs). The label propagation process is shown in Figure 4.
Firstly, we update the label of node v1. We label v1 with a set of tuples (c, n_c, LI(c)), where c is a label contained by its neighbors, n_c represents the number of its neighbors having the label c, and LI(c) is an optional value recalculated by (5) when multiple labels are contained by the maximum number of neighbors. As shown in Figure 4(a), v1 has three neighbors that all carry different labels, and the set of tuples is {(2, 1, 1.833), (3, 1, 1.667), (4, 1, 1.667)}. So we choose label 2 as its new label.
Then, node v3 is next. After the label updating of v1, two neighbors of v3 share label 2 and only one carries label 6, so we relabel v3 with label 2, as shown in Figure 4(b). The next label propagations, of v4 and v6, are consistent with those of v1 and v3. Now only v2 and v5 are not updated and, as shown in Figure 4(c), all of their neighbors already carry the same labels as themselves, so we do not need to relabel them. After only one iteration using this method, we get the final solution, which contains two communities exactly matching the ground truth. Since there is no randomness, the outcome is deterministic and perfect.
Time Complexity.
The time complexity of the algorithm is estimated below, where n is the number of nodes and m is the number of edges.
(1) The time complexity of initialization for all nodes: O(n).
(2) The time complexity of calculating the node influence value of all nodes: O(m).
Algorithm 2 (excerpt): (1) Initialization: assign a unique label to each node in the network, c_i(0) = i. (2) Calculate the node influence value for each node and arrange the nodes in descending order of NI, storing the results in the vector S.
The time complexity of ranking the nodes in descending order of NI: O(n log(n)).
(3) Each iteration of label propagation consists of two parts: (1) the time complexity of normal label updating: O(m); (2) the time complexity of recalculating the labels based on (5) if necessary: O(m).
(4) The time complexity of assigning the nodes with the same label to a community: O(n).
Phase (3) is repeated, so the time complexity of the whole algorithm is 2 × O(n) + (2 × T + 1) × O(m) + O(n log(n)), where T is the number of iterations and is a small integer.
Experimental Studies
This section evaluates the effectiveness and efficiency of our algorithm. We compare the performance of NIBLPA with LPA, KBLPA, and CNM, where KBLPA is an improved LPA algorithm that changes the node updating sequence to the descending order of k-shell value. All the simulations are carried out on a desktop PC with a Pentium Core2 Duo 2.8 GHz processor and 3.25 GB memory under the Windows 7 OS. We implement our algorithm in the Microsoft Visual Studio 2008 environment.
Datasets.
In this section, we choose two types of synthetic networks and eight real networks for the experiments.
According to the generation rules of Clique-Ring networks, we construct four Clique-Ring networks of different sizes. The parameters are shown in Table 1.
LFR Benchmark Networks.
LFR benchmark networks [27,28] are currently the most commonly used synthetic networks in community detection. The generator can produce networks according to users' needs by changing the parameters listed in Table 2.
Table 2: k_max, the maximum degree; τ1, the exponent for the degree distribution; τ2, the exponent for the community size distribution; μ, the mixing parameter for the topology; c_min, the minimum of the community sizes; c_max, the maximum of the community sizes.
We generate six groups of LFR benchmark networks, and all the networks share the common parameter k_max = 50. Each group contains nine networks with μ ranging from 0.1 to 0.9, and the networks within each group also share the parameters n, k, c_min, and c_max, respectively. The other parameters are set to the default values. The details are shown in Table 3.
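As one way to reproduce such benchmarks, networkx ships an LFR generator; the exponents and sizes below are a hypothetical setting in the spirit of Table 3, since the paper's exact values are not recoverable from the extracted text:

```python
import networkx as nx

# One hypothetical parameter setting; tau1/tau2 and the seed are assumptions.
g = nx.LFR_benchmark_graph(
    n=1000, tau1=3, tau2=1.5, mu=0.1,
    average_degree=10, max_degree=50,
    min_community=10, max_community=50, seed=42,
)
# ground-truth communities are stored on the nodes as frozensets of members
truth = {v: min(g.nodes[v]["community"]) for v in g}
print(g.number_of_nodes(), g.number_of_edges(), len(set(truth.values())))
```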
Real Networks.
We also conduct experiments on eight well-known real networks, including Zachary's karate club network, the Dolphins social network, and the American College Football network. The detailed information on each network is shown in Table 4.
Evaluation Criteria.
In this paper, we use modularity (Q) [2], the F-measure [29], and normalized mutual information (NMI) [30] as the evaluation criteria, which are currently widely used in measuring the performance of network clustering algorithms. Computing the F-measure and NMI requires knowing the true community structure of the network, while the modularity does not. For synthetic networks, since the ground truth of the community structure is known, we use both the F-measure and NMI on Clique-Ring networks and LFR benchmark networks to evaluate the results of community detection. Since the underlying class labels of most real networks are unknown, we adopt only the modularity as the evaluation criterion on part of the real networks and use both NMI and modularity on the others with known community structure.
Modularity
Consider the following:

Q = (1/2m) Σ_{i,j} [A_ij − (k_i k_j)/(2m)] δ(c_i, c_j),    (6)

where m represents the number of edges in the network; A is the adjacency matrix of the network (if node i and node j are directly connected, A_ij = 1; otherwise, A_ij = 0); k_i and k_j are the degrees of nodes i and j; c_i and c_j, respectively, denote the labels of node i and node j; and if c_i = c_j, then δ(c_i, c_j) = 1, else δ(c_i, c_j) = 0.
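A direct, if quadratic, Python sketch of this quantity (names are ours):

```python
def modularity(adj, labels):
    """Newman-Girvan modularity Q for an undirected, unweighted graph.

    adj: dict node -> set of neighbors; labels: dict node -> community label.
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if labels[i] != labels[j]:
                continue                               # delta(c_i, c_j) = 0
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)
```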
F-Measure
Consider the following:

F = (2 × precision × recall) / (precision + recall),    (7)

where precision and recall are written as in (8):

precision = |A ∩ B| / |B|,  recall = |A ∩ B| / |A|.    (8)

Here, A is the set of node pairs (i, j) such that nodes i and j belong to the same class in the ground truth, and B is the set of node pairs that belong to the same cluster generated by the evaluated algorithm. Then A ∩ B represents the intersection of the node pairs of the ground truth and the clustering result.
Normalized Mutual Information (NMI)
Consider the following:

NMI(A, B) = −2 Σ_i Σ_j C_ij log(C_ij n / (C_i. C_.j)) / [Σ_i C_i. log(C_i. / n) + Σ_j C_.j log(C_.j / n)],    (9)

where n represents the number of nodes in the network, A represents a community detection result generated by the evaluated algorithm, and B represents the ground truth community structure; C is the confusion matrix whose element C_ij is the number of nodes shared by community i of A and community j of B, and C_i. and C_.j are the sums over row i and column j of C, respectively.
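Python sketches of the pairwise F-measure (Eqs. (7)-(8)) and of NMI in its standard confusion-matrix form follow; the confusion-matrix formulation of Eq. (9) is our reconstruction of the garbled original:

```python
from itertools import combinations
from math import log

def pair_f_measure(truth, pred):
    """Pairwise F-measure over node pairs grouped together.
    truth/pred: dict node -> community label."""
    def same_pairs(labels):
        return {(i, j) for i, j in combinations(sorted(labels), 2)
                if labels[i] == labels[j]}
    a, b = same_pairs(truth), same_pairs(pred)
    inter = len(a & b)
    if not a or not b or not inter:
        return 0.0
    precision, recall = inter / len(b), inter / len(a)
    return 2 * precision * recall / (precision + recall)

def nmi(truth, pred):
    """Normalized mutual information in the confusion-matrix form of Eq. (9)."""
    n = len(truth)
    conf = {}                      # (predicted cluster, true class) -> count
    for v in truth:
        key = (pred[v], truth[v])
        conf[key] = conf.get(key, 0) + 1
    row, col = {}, {}              # predicted-cluster and true-class sizes
    for (p, t), c in conf.items():
        row[p] = row.get(p, 0) + c
        col[t] = col.get(t, 0) + c
    num = -2 * sum(c * log(c * n / (row[p] * col[t]))
                   for (p, t), c in conf.items())
    den = (sum(c * log(c / n) for c in row.values())
           + sum(c * log(c / n) for c in col.values()))
    return num / den if den else 1.0
```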
Experimental Results and Analysis.
In this section, the synthetic and real networks are used to test the effectiveness of NIBLPA compared with traditional LPA, KBLPA, and CNM. LPA and KBLPA are each run 100 times, and the average value is used as the result because of the randomness of these algorithms. We compare the stability of the algorithms by analyzing the fluctuation range of all the results. Table 5 shows the comparative results of the four algorithms on four different Clique-Ring networks; for each instance, the best results are presented in boldface. The F-measure and NMI of LPA and KBLPA are given in the form of the average value ± the maximum difference between any single result and the average value.
The Experiments on Clique-Ring Networks.
It can be seen from Table 5 that on the Clique-Ring networks, which have a special structure, NIBLPA detects the correct communities exactly, and CNM gets the right community structure on the first three networks. But on network C4, the result of CNM is much worse than the others because modularity has the resolution limit problem. The average F-measure of the KBLPA algorithm is the lowest among LPA, KBLPA, and NIBLPA on the four networks, and the average NMI of KBLPA is the lowest on all of the four networks except C4. These results illustrate that fixing the node sequence simply by the descending order of k-shell value at each step of label propagation cannot produce good results. The instability of KBLPA is caused by the randomness of selecting a label when multiple labels are simultaneously contained by the greatest number of nodes.
The Experiments on LFR Benchmark Networks.
The twelve panels in Figure 6 show the NMI and F-measure of the four algorithms on six groups of LFR benchmark networks (N1-N6). The abscissa represents the parameter μ from 0.1 to 0.9. The ordinate in the left panels is the NMI of the results, and the ordinate in the right panels is the F-measure.
The twelve panels in Figure 6 show that with the increase of μ, the network structure becomes more and more complex, and the four algorithms become less effective at detecting the community structure. Especially when μ is larger than 0.5, the NMI and F-measure decrease quickly. But in general, the performance of NIBLPA is better than that of the other three algorithms. Although NIBLPA does not guarantee the best performance, it returns stable, unique, and satisfactory results. It can also be seen in Figure 6 that the fluctuation range of the NMI and F-measure of the LPA algorithm is large. KBLPA is also relatively stable, but its results are worse than those of LPA and NIBLPA. On these complex networks, the CNM algorithm cannot detect the network structure effectively, and it generally finds fewer communities than the ground truth.
The Experiments on Different Sizes of Networks.
In order to compare the time efficiency of the algorithms, we generate 10 LFR benchmark networks whose sizes range from 1,000 to 10,000 nodes, with the other parameters the same (k = 10, k_max = 50, c_min = 10, c_max = 50, and μ = 0.1). The time consumption of the four algorithms on the 10 LFR benchmark networks is shown in Figure 7. From Figure 7, it is observed that the four algorithms use more and more time as the size of the networks increases, and CNM uses the longest time. When the number of nodes is larger than 5000, CNM cannot get the community structure because of the limit of computer memory. From Figure 7(b), one can note that when the number of nodes is greater than 7000, the time consumption of NIBLPA is less than that of LPA. To some extent, we can say NIBLPA is more suitable for community detection on large scale networks.
The Experiments on Real Networks.
The eight real-world networks shown in Table 4 are commonly employed in the community detection literature, and the first four networks have known ground truth community structures. So we compare the modularity Q and normalized mutual information NMI on the first four networks and compare only the modularity on the last four networks. Table 6 shows the experimental results on the eight real networks; for each instance, the best Q and NMI are presented in boldface.
It can be seen from Table 6 that on all the real networks except R7 (Blog) and R8 (PGP), the modularity of NIBLPA is higher than that of the other three algorithms. At the same time, the NMI of NIBLPA on the first four networks is the best. The stability of KBLPA is better than that of LPA, but the modularity and NMI of KBLPA are worse than those of LPA on almost all of the networks. On the large PGP network, CNM cannot detect the community structure. In general, NIBLPA gets better and more stable results than the other three algorithms.
Instance Analysis.
We compare the community structure detected by NIBLPA when the NMI achieves its maximum with the true community structure of Dolphins. Figure 8(b) is a community detection result of NIBLPA on Dolphins. Comparing these two figures, the assignment of DN63 and SN90 by NIBLPA is inconsistent with the real structure. From the topology of Dolphins, we can see that DN63 has two adjacent nodes that belong, respectively, to the two communities; DN63 has five neighbors, and the NIBLPA algorithm assigns it to the community to which most of its neighbors belong. The modularity of the real community structure of Dolphins is lower than that of the NIBLPA result, which suggests that the community division of NIBLPA is a reasonable result.
Parameter Selection.
There is only one parameter in the NIBLPA algorithm, the tunable parameter α. In order to analyze its impact, we run NIBLPA with different values of α on synthetic networks and compare the NMI to analyze the effect of the parameter on the algorithm. In this way, we can investigate under which α NIBLPA achieves the best results.
We generate five LFR benchmark networks with k ranging from 10 to 50, and all the networks share the common parameters n = 1000, k_max = 50, c_min = 10, c_max = 50, and μ = 0.1. Figure 9 shows the results of NIBLPA on these networks.
As can be seen in Figure 9, the value of NMI changes considerably under different values of the parameter α. However, for each network, there is an optimal α under which the NIBLPA method achieves the largest NMI. Moreover, on each network, the first local maximum is generally the best result.
Conclusion
This paper presents a node influence based label propagation algorithm for community detection in networks. The algorithm first calculates the node influence value for each node and ranks the nodes in descending order of node influence value. During each label updating process, when more than one label is contained by the maximum number of nodes, we introduce the label influence value into the label updating formula to improve the stability. After the algorithm converges, nodes with the same label are grouped into a community. This algorithm maintains the advantages of the original LPA algorithm; moreover, it can obtain stable community detection results by avoiding the randomness of label propagation. Through experimental studies on synthetic and real networks, we demonstrate that the proposed algorithm has better performance than some of the current representative algorithms.
"year": 2014,
"sha1": "d7d00b64f6ab9384df3b5a052e43d836483589fc",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/627581.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d7d00b64f6ab9384df3b5a052e43d836483589fc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
224898984 | pes2o/s2orc | v3-fos-license | THE INFLUENCE OF TECHNOPRENEURSHIP SCIENTIFIC LEARNING, AND PRIOR KNOWLEDGE TOWARDS ABILITY TO IDENTIFY ENTREPRENEURIAL OPPORTUNITIES
Hidayat et al., 2020, Volume 6, Issue 2, pp. 503-513. Date of Publication: 23rd Sept, 2020. DOI: https://doi.org/10.20319/pijss.2020.62.503513. This paper can be cited as: Hidayat, H., Herawati, S., Syahmaidi, E. & Hidayati, A. (2020). The Influence of Technopreneurship Scientific Learning and Prior Knowledge towards Ability to Identify Entrepreneurial Opportunities. PEOPLE: International Journal of Social Sciences, 6(2), 503-513. This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Introduction
Economic development and the challenges of globalization are important concerns around the world, including in developing countries like Indonesia. Among the important factors in facing globalization are human resources and the education system; one factor that affects a nation's economy is Technical and Vocational Education. Technical and Vocational Education provides skills and competencies in fields of science that can be applied in the community, as well as entrepreneurial skills. However, not all graduates of Technical and Vocational Education are absorbed well into the labor market, and they even contribute to unemployment in Indonesia. Based on statistical data from February 2018, 9.5 percent (688,660 people) of the total unemployed in Indonesia were alumni of tertiary institutions, including graduates of vocational education (Center Bureau of Statistics, 2018). Of that number, the largest group of unemployed, 495,143 people, were university graduates with bachelor's degrees along with graduates of vocational schools. These data point to the continuing weakness of Technical and Vocational Education graduates in labor market competition, partly because the mindset of college graduates is to seek employment, whether as private-sector workers or as civil servants, rather than to become entrepreneurs. In developed countries, such as the United States, 14 percent of the population are entrepreneurs. The low level of entrepreneurship is attributed to still-low interest in and motivation for entrepreneurship and a still-weak ability to read business opportunities, and this generally holds for Technical and Vocational Education graduates in Indonesia as well.
The entrepreneurial ability of students is also influenced by the learning process, both direct and indirect. The community and family environment also influence a person's entrepreneurial ability. All of these activities can provide experience and knowledge for a person, including students and graduates of Technical and Vocational Education. Prior knowledge of entrepreneurship is very important for students and graduates of Technical and Vocational Education so that young entrepreneurs in schools and the community can see the business opportunities around them. Weak prior knowledge hampers the starting of a business, and this can be seen in the character of the entrepreneur. Prior knowledge increases the likelihood of opportunity identification for two reasons: (1) prior knowledge provides an absorptive capacity that facilitates the acquisition of additional information about markets, production processes, and technologies (Cohen and Levinthal, 1990), which triggers an entrepreneurial conjecture (Shane, 2000; 2003); and (2) people's existing stocks of information also influence their ability to see solutions when encountering problems that need to be solved (Yu, 2001). According to Shane (2000), people have different stocks of prior information from their life experiences that increase the probability of identifying opportunities. Life experience can take the form of job function, variation in experience, and special interest (Shane, 2003; Vesper, 1996).
Exposure to diverse life and work experiences broadens the range of what individuals perceive as feasible for an opportunity (Krueger & Norris, 2000). Strengthening the prior knowledge of students who do not yet have entrepreneurial knowledge can be achieved through curriculum and learning in schools and colleges. The learning model therefore becomes very important to consider: does the learning model facilitate students in developing their entrepreneurial potential?
Technopreneurship Scientific Learning is an alternative for building students' entrepreneurial experience in Technical and Vocational Education, because entrepreneurship is a process of learning and interaction among many people to obtain benefits. This learning process applies to vocational education as well, starting from conducting needs and curriculum analysis (Ganefri et al., 2017), planning lessons, and facilitating learning with modules and other teaching materials (Yulastri & Hidayat, 2017), which impacts students' entrepreneurial competencies and learning outcomes in vocational education. In addition, the entrepreneurship learning model itself is very important, especially in vocational education. Technopreneurship Scientific Learning is a form of vocational learning that can be integrated into productive learning, in which students are trained to actively explore and produce products from the skills they possess (Kusumaningrum et al., 2016; Hidayat & Yuliana, 2018; Hidayat, 2017; Yulastri et al., 2019; Ganefri et al., 2019). Technopreneurship Scientific Learning facilitates students to learn and to explore their entrepreneurial potential through an analysis of community needs in the field, linked to the Technical and Vocational Education learning process, with a production-based approach and commercial potential. The purpose of this paper is therefore to describe and test the influence of Technopreneurship Scientific Learning and prior knowledge on the ability to identify entrepreneurial opportunities.
Research Methods
This study uses a regression method that aims to examine the effect of the contribution of two independent variables on one dependent variable. The study population was 300 students.
Samples were taken using proportional random sampling, yielding 150 students. Data collection in this study used a technopreneurship scientific learning scale, a prior knowledge scale, and a scale of the ability to identify entrepreneurial opportunities. Data collection was carried out by administering the instruments to the respondents in the research sample. Data analysis used multiple regression.
Results and Discussion
Tests of the analysis requirements are performed on the research data as a basis for selecting and determining the data analysis techniques used in testing the research hypotheses. Hypothesis testing is done using multiple regression. Therefore, the requirement tests conducted on the data of this study are the normality test, the linearity test, and the multicollinearity test.
Normality Test
The normality test is carried out with the Kolmogorov-Smirnov method; a significance value > 0.05 means the sample comes from a normally distributed population.
The results of the normality test calculations for these three variables are presented in Table 1, and the linearity test results are presented in Table 2. It can be seen from Table 2 that the linearity test yields significance values of 0.004 and 0.039, both smaller than the preset significance level of 0.05. That is, the data for each variable X are linear.
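As an illustration of this normality check, a short Python sketch with scipy follows; the scores array is a hypothetical stand-in, since the study's raw data are not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 150 observations for one variable.
rng = np.random.default_rng(0)
scores = rng.normal(loc=75, scale=8, size=150)

# One-sample Kolmogorov-Smirnov test against N(0, 1) after standardizing.
standardized = (scores - scores.mean()) / scores.std(ddof=1)
stat, p_value = stats.kstest(standardized, "norm")
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# p > 0.05 -> no evidence against normality, matching the criterion in the text
```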
Multicollinearity Test
To assess the possibility of multicollinearity, SPSS version 20.00 was used. A Variance Inflation Factor (VIF) of 10 or more is a common rule of thumb for inferring that the VIF is too large, in which case multicollinearity is concluded. The calculations from SPSS can be seen in Table 3. The calculation results in Table 3 show that the VIF value of technopreneurship scientific learning is 1.034 and the VIF value of prior knowledge is 1.034. Thus, both VIFs are smaller than 10; that is, there is no multicollinearity between technopreneurship scientific learning and prior knowledge.
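A minimal Python sketch of the VIF computation itself (function and variable names are ours):

```python
import numpy as np

def vif(x):
    """Variance inflation factor for each column of a design matrix x
    (n samples x k predictors): VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing column j on the remaining columns."""
    x = np.asarray(x, dtype=float)
    out = []
    for j in range(x.shape[1]):
        y = x[:, j]
        others = np.delete(x, j, axis=1)
        a = np.column_stack([np.ones(len(y)), others])   # add an intercept
        beta, *_ = np.linalg.lstsq(a, y, rcond=None)
        resid = y - a @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out
```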
Hypothesis Test
After testing the analysis requirements, since all scores of each research variable meet the requirements for further statistical testing, hypothesis testing is carried out.
In this research, there are three research hypotheses, which are as follows.
Technopreneurship Scientific Learning (X1) and prior knowledge (X2) together contribute significantly to the ability to identify entrepreneurial opportunities (Y). The following are the results of testing the research hypotheses proposed above. Hypothesis: technopreneurship scientific learning (X1) and prior knowledge (X2) jointly contribute significantly to the ability to identify entrepreneurial opportunities (Y). The results of the regression analysis of technopreneurship scientific learning (X1) and prior knowledge (X2) together with the ability to identify entrepreneurial opportunities (Y) can be seen in the following tables. As shown in Table 4, the multiple regression coefficient R is 0.520 and the R Square coefficient is 0.270, meaning that 27% of the variance in the ability to identify entrepreneurial opportunities can be explained by technopreneurship scientific learning and prior knowledge; the rest comes from other variables, as noted in the earlier identification of problems. Based on Table 6 above, there is a contribution of technopreneurship scientific learning and prior knowledge to the ability to identify entrepreneurial opportunities when analyzed together with multiple regression. Furthermore, the results of testing the hypothesis of the ability to identify entrepreneurial opportunities based on the technopreneurship scientific learning and prior knowledge variables are given in the following table.
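A hypothetical Python illustration of such a two-predictor regression follows; x1, x2, and y merely stand in for the study's variables, so the printed statistics will not reproduce Table 4:

```python
import numpy as np

# Simulated stand-ins for the study's variables (the real data are not public).
rng = np.random.default_rng(1)
n = 150
x1 = rng.normal(70, 10, n)                 # technopreneurship scientific learning
x2 = rng.normal(65, 12, n)                 # prior knowledge
y = 0.3 * x1 + 0.25 * x2 + rng.normal(0, 10, n)

# Ordinary least squares with an intercept.
x = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ beta
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.3f}, R = {np.sqrt(r2):.3f}")   # compare with R = 0.520, R^2 = 0.270

# F test for the joint contribution of the two predictors.
k = 2
f = (r2 / k) / ((1 - r2) / (n - k - 1))
print(f"F({k}, {n - k - 1}) = {f:.2f}")
```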
Technopreneurship scientific learning is learning that facilitates the development of students' entrepreneurial abilities; competency in the field of science can also develop well because the learning is aligned with the curriculum and oriented to commercial potential and market needs. The technopreneurship scientific learning phases consist of: (1) finding problems, analyzing needs, and analyzing learning; (2) applying a cooperative scientific approach to technopreneurship; (3) designing a technopreneurship scientific business plan; (4) manufacturing the product (a prototype of goods or services); and (5) evaluating the work.
These phases require students to search for problems and product-based solutions with commercial potential, a process in which prior entrepreneurial knowledge is needed. Students also design business plans and carry out production processes; if prior entrepreneurial knowledge is weak and the learning does not facilitate the exploration of students' entrepreneurial potential, students will not be able to identify business opportunities.
When the phase of finding problems, analyzing needs, and analyzing learning is done seriously, it yields important information about what the community needs, what its problems are, and which solutions are in line with scientific competence and the ongoing learning. Entrepreneurial learning that facilitates the exploration of students' entrepreneurial potential, together with prior entrepreneurial knowledge, is therefore very important and influential for students' ability to identify entrepreneurial opportunities, and it also helps shape students' understanding of entrepreneurship (Amodu & Aka, 2017; Sumbul & Faisal, 2018).
Conclusions
Based on the findings and discussion of the results of the study, the following conclusion can be drawn: technopreneurship scientific learning and prior knowledge together contribute significantly to the ability to identify entrepreneurial opportunities. That is, the ability to identify entrepreneurial opportunities is not influenced by one variable alone, but jointly by technopreneurship scientific learning and prior knowledge.
ACKNOWLEDGMENT
This work was supported by funding from the Republic of Indonesia Higher Education in 2020. | 2020-10-10T09:25:04.858Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "535c89cc3f8dc98caa38782909e1caae15c1c814",
"oa_license": "CCBYNC",
"oa_url": "https://grdspublishing.org/index.php/people/article/download/2448/3904",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "535c89cc3f8dc98caa38782909e1caae15c1c814",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
11507231 | pes2o/s2orc | v3-fos-license | Macular edema with serous retinal detachment post-phacoemulsification followed by spectral domain optical coherence tomography: a report of two cases
Background Macular edema and detachment on the first day after uneventful cataract surgery are very rare and have previously been reported with the use of high concentrations of intra-cameral cefuroxime. However, we hereby report two cases of macular edema with extensive serous retinal detachment on the first day after uneventful phacoemulsification with intra-cameral injection of a standard dose of cefuroxime during the procedure. Case presentation A 68-year-old female and a 63-year-old male without any notable history both underwent uneventful phacoemulsification surgery, and 1 mg/0.1 ml of cefuroxime solution was injected into the anterior chamber at the end of the procedure. Macular edema with extensive serous retinal detachment around the macula and optic disc was observed on the first day after surgery. Without surgical intervention, a quick recovery of the macular edema and retinal detachment was observed by spectral domain optical coherence tomography 1 week later in both cases. Conclusion We presume that the retinal injury in the two cases may be attributed to cefuroxime toxicity even at a standard dose. However, the retinal damage is reversible, and routine anti-inflammatory treatment is sufficient.
Background
Macular edema is one of the most common complications after cataract surgery causing unfavorable visual outcomes, and it usually occurs in the surgical eye 4-16 weeks after the procedure [1,2]. Acute macular edema with retinal detachment after cataract surgery is very rare and has been reported previously with the use of high concentrations of intra-cameral cefuroxime [3,4]. Cefuroxime is commonly used during the phacoemulsification procedure [5] and has previously been proved to be safe at a standard dose [6-8]. However, we hereby report two cases of macular edema with extensive serous retinal detachment that were immediately detected by spectral domain optical coherence tomography (SD-OCT) on the first day after uneventful phacoemulsification with intra-cameral injection of standard doses of cefuroxime during the procedure. We presume that the retinal injury in the two cases may be attributed to cefuroxime toxicity even at a standard dose.
Case 1
A 68-year-old female had an uncomplicated phacoemulsification surgery with folded in-the-bag intraocular lens (IOL) implantation in her left eye. Her systemic and ophthalmic histories were unremarkable. No diabetes, uveitis, or other remarkable retinal history was found prior to the surgery. Preoperatively, the refractive errors of her right and left eyes were −4.0 and −4.5 diopters (D), with axial lengths of about 24 mm. The best-corrected visual acuity was 20/33 in the right and 20/40 in the left eye. The patient's anterior segment and fundus were normal in both eyes, as revealed by regular examination. The surgery was performed using an Infiniti phacoemulsification unit (Alcon, Inc.). The nucleus chopping time was 5.8 s, and the average power was 6.8%. A +18 D folded IOL (Acrysof SN60AT, Alcon, Inc.) was implanted in-the-bag. The surgery was completed without complications. At the end of the procedure, 1 mg/0.1 ml of cefuroxime solution was injected into the anterior chamber. We perform our dilution in the operating room: the nurse takes a 750 mg vial of preservative-free cefuroxime and adds 7.5 ml of balanced salt solution (BSS); the surgeon then takes 0.1 ml of this first solution and adds another 0.9 ml of BSS to obtain the second solution. Finally, 0.1 ml of the second solution is injected into the anterior chamber of the patient. The patient is therefore supposed to receive 0.1 ml of a 10 mg/ml solution of intra-cameral cefuroxime. The total surgical time was approximately 10 min.
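The two-step dilution above reduces to simple arithmetic, which the short sketch below verifies; the helper function is ours, while the masses and volumes are those stated in the report.

```python
# Verify the two-step cefuroxime dilution described in the case report.
def concentration_mg_per_ml(mass_mg: float, volume_ml: float) -> float:
    return mass_mg / volume_ml

# Step 1: 750 mg vial reconstituted in 7.5 ml of BSS.
c1 = concentration_mg_per_ml(750.0, 7.5)            # 100 mg/ml
# Step 2: 0.1 ml of the first solution diluted with 0.9 ml of BSS.
c2 = concentration_mg_per_ml(0.1 * c1, 0.1 + 0.9)   # 10 mg/ml
# Injection: 0.1 ml of the second solution into the anterior chamber.
dose_mg = 0.1 * c2                                   # 1 mg (standard dose)
print(c1, c2, dose_mg)                               # 100.0 10.0 1.0
```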
On the first day after the operation, the visual acuity in her left eye was 20/200. There were no signs of remarkable inflammation or abnormality in the anterior segment or in the vitreous. Fundus examination showed no foveal reflection in the macula. Diffuse retinal edema affected most of the posterior pole. Retinal wrinkles were found around the macula and disc area. No significant abnormality was found in the peripheral retina. SD-OCT (Carl Zeiss Meditec, Dublin, CA, USA) scanning was immediately performed and showed macular edema, especially in the outer nuclear layer, with extensive shallow serous retinal detachment around the macula and optic disc area (Fig. 1). The retinal thickness of the fovea was 750 μm. No significant abnormality was found in the choroid, and the subfoveal choroidal thickness was 350 μm. Vitreomacular traction was not found in the SD-OCT image. Topical dexamethasone 0.1%/tobramycin 0.3% (Tobradex®) eye drops and pranoprofen (Senju Pharmaceutical Co. Ltd) were prescribed four times a day. After 1 week of treatment, the patient's vision in her left eye had improved to 20/20. The macular retina was scanned over the same area by SD-OCT, and the image showed that the retinal thickness of the fovea had returned to within a normal range (194 μm), while the subfoveal choroidal thickness appeared largely unchanged (about 347 μm). The macular edema and subretinal fluid were absorbed completely. The integrated ellipsoid zone was preserved in the outer retina (Fig. 2). No recurrence of macular edema or retinal detachment was noted at the last follow-up (4 months post-operative).
Case 2
A 63-year-old male underwent uneventful phacoemulsification surgery with folded in-the-bag IOL implantation in the left eye. The systemic and ophthalmic histories were unremarkable. Preoperatively, the refractive error of his left eye was −2.75 D with an axial length of 23.93 mm. The best-corrected visual acuity was 20/200. The right eye had had an IOL implanted 1 year earlier, with good visual acuity of 20/20. Findings of the anterior segment and fundus examination were normal in both eyes. The phacoemulsification surgery was performed by the same doctor as in case 1. The nucleus chopping time was 32 s and the average power was 16.8%. A +20.0 D folded IOL (Acrysof SN60AT, Alcon, Inc.) was implanted in-the-bag. The surgery was completed without complications. At the end of the procedure, 1 mg/0.1 ml of cefuroxime solution (diluted as in case 1) was also injected into the anterior chamber. The total surgical time was about 12 min.
On the first day after the operation, the visual acuity in his left eye was counting fingers. No remarkable inflammation or abnormalities were found in the anterior segment or vitreous by slit-lamp examination. The fundus manifestation was similar to case 1. The SD-OCT image also showed the same macular edema as in case 1 (Fig. 3a). The retinal thickness of the fovea was 794 μm. No significant abnormality was found in the choroid. The same drugs as in case 1 were administered four times a day. After 1 week of treatment, the patient's visual acuity improved to 20/20. SD-OCT revealed that the macular edema and subretinal fluid had been absorbed completely. The integrated ellipsoid zone was preserved in the outer retina (Fig. 3b). The retinal thickness of the fovea returned to within a normal range (174 μm). No recurrence of macular edema or retinal detachment was noticed up to the last follow-up (3 months after surgery).
Conclusions
With modern cataract surgical techniques, the incidence of post-surgical cystoid macular edema (CME) has decreased to 0.1-2.35% [9]. Several mechanisms may contribute to such macular edema, including the effects of vitreoretinal traction, light damage, production of prostaglandins, and intraoperative complications [1,10]. The rate of macular edema after cataract surgery is increased in the presence of diabetic retinopathy and uveitis [11,12]. However, the two cases presented here were notable for the absence of retinopathy and the lack of a history of diabetes or uveitis. Jurecka et al. [13] found a positive statistical correlation between the real phacoemulsification time and the increase in macular retinal thickness after surgery. In the present cases, the real phacoemulsification time was not long, and the average power was low.
Cefuroxime toxicity may be one of the causes of the macular edema and detachment. The recommended dose of intra-cameral cefuroxime injection is 0.1 ml of a 10.0 mg/ml solution. It has been reported previously that excessive cefuroxime injections into the anterior chamber can cause early serous macular detachment and edema [3,4]; the reported doses varied from 20 to 50 mg/ml. However, Kontos et al. [14] recently reported a case of acute serous macular detachment and macular edema after a standard dose of subconjunctival cefuroxime injection during phacoemulsification. Faure et al. [15] even reported a case of retinal toxicity on the second day after surgery with a standard dose of intra-cameral cefuroxime injection in France. In the present two cases in China, early macular edema and extensive retinal detachment were found on the first day after surgery with a standard dose of intra-cameral cefuroxime injected at the end of the phacoemulsification. The visual loss occurred earlier in the present two cases than in the report of Faure et al. Although the timing of visual loss after surgery differed little, the manifestations of these cases were similar. The interval between the present two cases was about one month. No abnormality was found during the drug dilution process. Thus, we presume that the retinal injury in the two cases may also be attributed to cefuroxime toxicity even at a standard dose.
In these two cases, the location of the edema was unusual: typically, retinal edema is located in the outer plexiform layer, but in these cases the outer plexiform layer appeared to be spared while the outer nuclear layer showed marked edema. There was extensive subretinal fluid without debris. These OCT characteristics were similar to the manifestations identified on OCT in retinal toxicity caused by excessive cefuroxime injections [3,4], and might provide a marker for cefuroxime toxicity. The mechanism of this pattern of edema is unclear. Electroretinogram (ERG) results from animal experiments [16] and human clinical observation [15] suggested that cefuroxime is toxic to the retina and may affect Müller cell function. Previous studies [3,4] reported that fluorescein angiograms (FA) showed diffuse leakage without abnormal retinal perfusion in cefuroxime-toxic eyes, indicating that the blood-retinal barrier at the retinal pigment epithelium (RPE) may be disrupted. A limitation is that FA was not obtained in the present two cases, but the SD-OCT images suggest that the primary lesions were localized to the outer retina and RPE.
Vitreous haze has been reported in ocular toxicity after intra-cameral injection of very high doses of cefuroxime during cataract surgery [4]. However, no sign of remarkable inflammation in the vitreous was found in the present cases. No vitreous abnormality was reported by Buyukyildiz et al. either, in two cases of retinal toxicity caused by 2 mg/0.1 ml intra-cameral cefuroxime injection [3]. The dose of cefuroxime injection was much higher in the study of Delyfer et al. [4] than in the present cases and in the cases of Buyukyildiz et al. [3]. The different doses of cefuroxime injected during cataract surgery may explain the different findings in the vitreous.
Topical nonsteroidal anti-inflammatory drugs and corticosteroids have been reported to be effective and safe for preventing post-surgical ocular inflammation and macular edema [17-20]. Thus, a combination of nonsteroidal anti-inflammatory drugs and corticosteroids was applied topically in the present cases as routine anti-inflammatory treatment after phacoemulsification. The SD-OCT images revealed a quick recovery from the macular edema 1 week later, without any special surgical intervention. Delyfer et al. [4] also reported that retinal injury and visual dysfunction induced by excessive intra-cameral cefuroxime injection recovered to normal without surgical intervention after 6 weeks. The recovery time was shorter in the present cases than in the previous report, which may be due to the much lower concentration of cefuroxime solution used in the two cases. These results suggest that early macular edema with extensive serous retinal detachment, which may be attributed to cefuroxime toxicity, is reversible. Routine anti-inflammatory treatment is sufficient, and excessive interventions are not required.
Consent
Written informed consent was obtained from the patients for publication of this case report and any accompanying images.
Authors' contributions
XH has made substantial contributions to analysis and interpretation of data; has been involved in drafting the manuscript or revising it critically for important intellectual content; and has given final approval of the version to be published. LX and GXX have been involved in revising it critically for important intellectual content; and have given final approval of the version to be published. All authors read and approved the final manuscript.
| 2017-06-29T20:21:22.338Z | 2015-11-04T00:00:00.000 | {
"year": 2015,
"sha1": "1def72369acf8e7b3babaa930a1b409533da1856",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13104-015-1639-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47cf71757d02d56d52d9bee0b4064dba9a9aa975",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237546629 | pes2o/s2orc | v3-fos-license | Implementation and Evaluation of a Team-Based Approach to Hospital Discharge Transition of Care
Background: Transitional care management (TCM) programs guide patients from hospital discharge to outpatient follow-up with the goal to decrease hospital readmissions and the cost of care. In 2017, the department of primary care internal medicine (PCIM) at Eastern Virginia Medical Group implemented TCM. We aimed to evaluate the efficacy and self-sustainability of this TCM program. Methods: The TCM team contacted patients upon discharge to schedule the follow-up appointment. We coded patient contact as (1) no successful phone-call contact, patient did not attend appointment; (2) successful phone-call contact, patient did not attend appointment; and (3) patient attended appointment. We collected patient demographics, readmissions, and visit costs using manual chart review and electronic health record (EHR) data extraction. We conducted χ² analysis, one-way analysis of variance, and unpaired t tests to assess associations between readmission rates or costs and TCM care. Results: Initial analysis did not indicate significant associations between readmission rates and level of TCM care at 30 (χ²=1.40, P=.50), 60 (χ²=5.48, P=.06), or 90 (χ²=4.23, P=.12) days or significant differences in patient charges at 30 (F[2,59]=2.85, P=.06), 60 (F[2,91]=2.00, P=.14), or 90 (F[2,126]=1.39, P=.25) days. Follow-up analysis indicated significant associations between readmission rates and any level of TCM care at 60 (χ²=5.40, P=.02) and 90 (χ²=4.21, P=.04) days, but not at 30 days (χ²=1.39, P=.28). Conclusions: Our TCM program review suggests that the benefits of transitional care extend beyond 30 days by decreasing readmission rates at 60 and 90 days after hospital discharge.
Introduction
Transitional care management (TCM) is a robust intervention to guide a patient's transition from a hospital setting to an outpatient follow-up visit with a primary care physician. Adherence to treatment plans can be low, and nonadherence to discharge instructions is associated with poor health outcomes. In January 2013, the Center for Medicare and Medicaid Services (CMS) created new billing codes (99495, 99496) to address the work involved in coordinating postdischarge services, incentivizing TCM programs by increasing reimbursements to physicians who provide transitional care. Comprehensive discharge planning and follow-up has been shown to reduce hospital readmissions 30 days postdischarge, and the benefit of these programs extends to physicians and their practices. The benefits of transitional care include reduced future readmissions, a reduction in medication complications, and an increase in high-value care for the patient. Some studies suggest that the benefits of transitional care extend beyond 30 days after hospital discharge. Here, we outline the implementation of a TCM program and report hospital readmission rates and costs to the patient.
Methods
In November 2017, the department of primary care internal medicine (PCIM) at Eastern Virginia Medical Group implemented a TCM program. A designated licensed practical nurse dedicates 0.5 full-time equivalent (FTE) to the TCM program. The TCM nurse tracks PCIM patients admitted to hospital systems and calls patients within 48 hours of hospital discharge to (1) confirm hospital discharge, (2) reconcile medication lists, (3) ensure patients fill any newly prescribed medications and stop discontinued medications, and (4) schedule follow-up appointments. There were instances in which the TCM nurse was unable to contact a patient, but the patient already had an appointment scheduled by the inpatient team. The TCM nurse attempted to schedule all appointments within 7 days of discharge but made appointments within a later time frame when requested by the patient. We included appointments following hospital discharge in our study regardless of whether they were billed as a TCM visit.
From November 2017 to March 2019, we tracked all-cause hospital admissions for PCIM patients at a local hospital system. The cohort included patients who were discharged home or to another care facility. We performed manual chart review to collect patient data on all-cause hospital readmissions after discharge and received electronic health record (EHR) cost data associated with patient readmissions at 30-, 60-, and 90-day intervals from the initial date of discharge. Additionally, we recorded patient age, length of hospital stay, and number of problems treated during hospitalization. To assess differences in outcomes (eg, readmission rates, readmission charges) associated with TCM care, we conducted χ² analysis, one-way analysis of variance (ANOVA), and unpaired t tests. The Eastern Virginia Medical School Institutional Review Board approved this study.
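As a sketch, the χ² association test can be set up as below; the three group sizes match the study (n = 99, 122, 353), but the readmission counts are invented placeholders, since the cell counts are not reported here.

```python
# Chi-square test of readmission (yes/no) by level of TCM contact.
import numpy as np
from scipy.stats import chi2_contingency

#                    readmitted, not readmitted   (counts are placeholders)
table = np.array([[20,  79],    # no successful contact, no appointment (n=99)
                  [22, 100],    # phone contact only (n=122)
                  [55, 298]])   # attended appointment (n=353)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```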
Results
There were 574 patients included in the study, with an average age of 64.1 (±14.4) years. The patients had an average hospital length of stay of 4.0 (±3.6) days and an average of 6.6 (±4.6) problems treated during hospitalization. To evaluate the program, we divided patients into three groups: (1) patients who did not receive any transitional care (n=99, 17.2%), (2) patients who received a phone call but did not attend an appointment (n=122, 21.3%), and (3) patients who attended a transitional care appointment (n=353, 61.5%; Figure 1).
When evaluating all-cause hospital readmissions, patients who received no transitional care had numerically higher readmission rates than those who received a phone call or attended an appointment (Figure 2).
Conclusions
We outline a clinically relevant intervention for transitional care management, but our study does have limitations. The small number of readmitted patients likely limited power to detect a statistically significant effect on costs. Additionally, we were only able to analyze readmissions and cost data available through one hospital system. Although this system includes multiple hospitals, any admissions outside of the hospital system were not included in the analysis. Moreover, our analysis does not stratify patients by age, length of hospital stay, risk for readmission, or other factors. While our data are insufficient to draw conclusions about the financial impacts of TCM, implementation of transitional care programs likely has implications for revenue at the primary care practice and health system level.
Our data build upon previous studies by indicating the strength of scheduling follow-up appointments through multiple processes, as 39.6% of patients who were not contacted by the TCM nurse attended an appointment due to the work of inpatient teams scheduling follow-up. While the capacity for transitional care programs to improve patient care has been well established in previous studies, we demonstrate that the benefit of decreased readmissions may extend beyond the 30-day time interval, as analysis indicates that any form of TCM care is associated with reductions in readmission rates at 60 and 90 days. Despite their limitations, our findings emphasize the positive impact of transitional care and provide a framework for practices seeking to implement and evaluate TCM programs. | 2021-09-10T14:31:11.597Z | 2021-08-27T00:00:00.000 | {
"year": 2021,
"sha1": "f06f927e4fca220ae5acf640a83b03e2111255a8",
"oa_license": null,
"oa_url": "https://journals.stfm.org/media/4246/primer-5-28.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f06f927e4fca220ae5acf640a83b03e2111255a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245024057 | pes2o/s2orc | v3-fos-license | The Current Oxygen and Hydrogen Isotopic Status of Lake Baikal
This study revises the δ 18 O and δ 2 H status of Lake Baikal. The mean values of δ 18 O and δ 2 H varied from −15.9 to −15.5‰ and from −123.2 to −122.2‰, respectively, over the past 30 yr. The isotopic composition of the lake remained "lighter" than that of the regional precipitation and river inflows. The isotopic composition of the lake began to change ca. 1920, after the Little Ice Age; however, Lake Baikal has still not reached an isotopically steady state at present. The calculated steady-state composition should be −12.3‰ for δ 18 O and −103.6‰ for δ 2 H. If regional climate parameters do not change dramatically, Lake Baikal will reach these values in ca. 226 yr. Based on isotopic fingerprints of the upper (0 to 150 m) and near-bottom layers (ca. 150 m from the bottom floor), the renewal in the southern and central basins of Lake Baikal has occurred more recently than in the northern basin, and the size of the mixing cell of downwelling is close to 30 km.
Introduction
Lake Baikal is the largest and the most ancient lake in the world. Its size, 632 km in length and 1,642 m in depth, implies a long hydrological history and residence time. The watershed of the lake is approximately 557,000 km² and covers Northern Mongolia and East Siberia. Tributaries of Lake Baikal flow across landscapes of steppe, boreal taiga, permafrost, and mountains. The main tributaries of the lake are the Selenga (~25.57 to 29.43 km³ yr⁻¹, 50% of the total annual inflow), Upper Angara (~8.37 to 8.56 km³ yr⁻¹, ~12% of the total annual inflow), and Barguzin (~3.71 to 3.9 km³ yr⁻¹, ~6% of the total annual inflow) rivers [1]. Table 1 shows the monthly hydrological parameters of Lake Baikal; the maximum inflow and evaporation occur in June and December, respectively. Outflow through the Angara River is stable (~5 km³ month⁻¹) during the year. Although Lake Baikal is located in the Baikal Rift Zone, the groundwater input is <4.5% of the total water input [2].
One of the most fundamental questions for any deep lake concerns the mechanism and rate of deep-water renewal via exchange with surface water. Simple indicators of this renewal for Lake Baikal are the changes in temperature and oxygen concentration at the water interface near the bottom [3]. Seasonal convection in Lake Baikal occurs in two time spans, spring and late autumn, when the temperature of the surface layer is below 4 °C and colder than the deep layer [4-6]. When the temperature of the surface layer equals that of the deep layer, deep convection must stop.
The renewal inferred from isotopic methods was estimated 30 or more years ago. Thus, based on the content of CFC-11 and CFC-12 in Lake Baikal in 1988, only about 12.5% of the renewal of deep waters occurred each year in the 20th century [6].
Rates of the renewal of deep water with surface water deduced from volume-weighted mean 3H-3He ages below 250 m depth are about 10% yr⁻¹ in the southern and central basins and 15% yr⁻¹ in the northern basin [7]. Westerlies (transfer from the North Atlantic), the Arctic Oscillation, the East Asian monsoon, and the winter Siberian High form the climate in East Siberia. Specifically, the Westerlies are forced to move northwards during warm seasons due to the following factors: (i) the northward shift of the atmospheric circulation cells driven by Earth's tilt and (ii) the expansion and the northward shift of the Azores high-pressure system [8]. Moreover, during the warm season, the well-developed East Asia low-pressure system extends to the southeast of Siberia and "lures" the westerlies to move further eastwards, enhancing the warm-season domination of the westerlies [9]. At present, 70% of the annual precipitation in East Siberia is recorded during warm seasons, when heavy but rare rain occurs due to the Asian monsoon [10]. This pattern indicates that atmospheric circulation with a northwesterly wind direction is dominant in the area of Lake Baikal. In winter, the Siberian High occupies East Siberia and blocks the penetration of moisture from the oceans to continental Asia.
Changes in climate and the atmospheric water cycle are known to leave an imprint on the isotopic composition of different water reservoirs. Therefore, the investigation of oxygen and hydrogen isotopic characteristics can be useful for the hydrological reconstruction of water bodies [11]. However, isotopic investigations of the area around Lake Baikal and its watershed are still rare. In Siberia, the contribution from recycled water mainly controls the isotopic composition of summer precipitation [12]. On the other hand, air temperature mainly affects the isotopic composition of regional precipitation, and no significant correlations were obtained for precipitation amount and relative humidity [13,14]. An isotopic study of Lake Baikal was performed only in 1991 and 1992, and it revealed that Lake Baikal was in a transient state and had not reached a steady-state isotopic composition (isotopic equilibrium with inflowing water) [2]. Based on that estimation, Lake Baikal will reach its steady state in 780 yr [2]. However, that estimation relied on calculated isotopic values for the starting composition of the inflow [2]. In our study, the initial isotopic compositions of Lake Baikal and of the inflow (in 1991 and 1992) used in the calculations were known, which improves the estimate of the isotopic evolution of Lake Baikal. Lake Baikal is an ecosystem with endemic flora and fauna, and we can assume that changes in the isotopic composition of the lake will have a prolonged effect on the life cycles of its biota. For example, there are different oxygen isotopic fractionations between the δ 18 O of bulk water and the δ 18 O incorporated into inorganic orthophosphate during the biogeochemical cycling of phosphorus [15,16]. Nucleic acids account for 76% (wt.) of P org in cells and are widespread in aquatic and sedimentary environments. Therefore, we expect that the process of P inorganic regeneration from nucleic acids can exert a significant impact on the δ 18 O P values of dissolved inorganic phosphate in natural waters, sediments, and soils. For example, there is a linear trend between the O isotope values of P inorganic released by Escherichia coli-synthesized enzymes and ambient water [17].
There have been no isotopic studies of Lake Baikal for the past 30 yr. However, the climate of the Earth has been changing significantly over the past decades [18], and these changes can affect the atmospheric circulation and hydrological regime of lakes [19]. For example, wind mixing of water layers in Lake Baikal is an effective trigger for their renewal [2]. However, regional wind activity has decreased over the past decade [3], which could reduce the renewal of the lake.
For the past decade, the shallow zone of Lake Baikal has been under intensive anthropogenic eutrophication (excess nutrient enrichment from human activity) [20,21]. Thus, another aspect of the renewal of Lake Baikal can be the influx of polluted water from the shallow zone into the deep zone, and isotopic fingerprints can indicate this process.
In general, significant changes in the environment have occurred on global and regional scales over the past decades, and the isotopic status of Lake Baikal in response to these changes needs to be re-examined. This study aimed to estimate the isotopic changes of Lake Baikal over the past 30 yr and to search for isotopic fingerprints of vertical mixing between the upper and bottom water layers of the lake.
Methods
Eight vertical stations from the surface to the bottom of Lake Baikal and several surface samples were studied (Figure 1). The water samples were collected from 4 to 9 June 2021 (during the first cycle of deep convection in Lake Baikal) from aboard the R/V "Vereshagin".
An SBE-25 CTD probe (SeaBird, Bellevue, WA, USA) and a Rinko III (JFE Advantech Co., Kawasaki, Japan) were used to measure temperature (accuracy: ±0.002 °C) and the dissolved oxygen (DO) concentration with a resolution of 0.2 mg L⁻¹.
Figure 1. Sketch showing the locations of the isotopic sampling points. Upper panel: isotopic composition of regional precipitation from the Irkutsk, Bagdarin, and Sukhe-Bator weather stations [12-14,22]. The numbers in grey circles mark 1, 2, and 3, the inflows via the Selenga, Barguzin, and Upper Angara Rivers, respectively, and 4, the outflow via the Angara River. Prim, Baik, and Barg denote the Primorsky, Baikalsky, and Barguzinsky Ridges, respectively. Bottom panel: Lake Baikal; grey lines are isobaths contoured at 200 m [23]. Blue circles mark stations with vertical profiles of temperature, oxygen concentration, and isotopic composition from the surface to the bottom; red circles mark surface samples.
The Southern Basin of Lake Baikal
The temperature profiles can be divided into three zones. The upper zone (0 to 145 m) is characterized by a transition of water temperatures from 2.4 to 3.6 °C. The temperature gradually decreased to 3.38 °C in the middle zone (145-150 to 1150 m). In the bottom zone (1150-1450 m), the temperature drastically decreased to 3.3 °C (Figure 2). The distribution of oxygen through the deep layers closely corresponds to the temperature profiles. The DO varied from 13.5 to 9.5 mg L⁻¹. The LIST station showed higher oxygen concentrations than KAD1 and KAD2. The zonation of the oxygen profiles was identical to the temperature zones (Figure 2). However, the changes in the bottom layer were most significant at LIST and KAD1.
The mean values of δ 18 O and δ 2 H for LIST, KAD1, and KAD2 were similar and ranged from −15.3 to −15.6‰ and from −121.7 ± 0.1 to −122.4 ± 0.3‰, respectively. The upper and bottom zones had the minimum offsets in the δ 18 O and δ 2 H compositions between layers and stations (Figure 2). In the middle zone, these offsets could increase to 0.17‰ and 0.75‰ for δ 18 O and δ 2 H, respectively.
The Central Basin of Lake Baikal
At the stations of the central basin of Lake Baikal (UH1, UH2, and IZ), there are also three zones with different temperatures, oxygen concentrations, δ 18 O, and δ 2 H. The temperature profiles of the stations were uniform and smooth. The temperature in the upper zone (0 to 150 m) ranged from 2 to 3.6 °C, whereas in the 0-80 m layer it ranged from 2 to 2.4 °C. This 0-80 m layer was colder than the upper layer in the southern basin. The temperature gradually decreased from 3.6 to 3.2 °C in the middle zone (150 to 1500 m). Temperature changes were insignificant (around 0.05 °C) in the bottom zone (1500-1620 m). The DO varied from 13.58 to 9 mg L⁻¹ in the upper and middle zones, while the oxygen concentration increased to 9.9 mg L⁻¹ in the bottom zone. In general, station IZ was enriched in oxygen by ca. 0.5 mg L⁻¹ compared to stations UH1 and UH2 in the upper zone.
The isotopic composition along all stations ranged from −15.3 to −15.6‰ and from −121.5 to −122.5‰ for δ 18 O and δ 2 H, respectively. Overall, offsets in the isotopic composition between layers at the stations were 0.05‰ and 0.2‰ for δ 18 O and δ 2 H, respectively. Obviously, the main abiotic factors (temperature and oxygen concentration) of the central basin were more homogeneous than in the southern part.
The Northern Basin of Lake Baikal
The temperatures of the uppermost layer (0 to 70 m) ranged from 1.6 to 1.
Isotopic Characteristics of Inflow in Precipitation, Rivers and the Lake
To estimate the regional isotopic composition of precipitation, we used data from the Irkutsk (70 km from Lake Baikal), Bagdarin (near the watershed of the Barguzin River), and Sukhe-Bator (the watershed of the Selenga River, Mongolia) stations, collected in 1971, 1990, from 1996 to 2000, and from 2012 to 2017 [12-14,22]. The isotopic composition of the Baikal tributaries was taken from Seal and Shanks [2]. Groundwater input in Lake Baikal is <4.5% of the total net flux and cannot fundamentally change the isotopic proportions in the lake [2].
The seasonal means of regional precipitation for δ 18 O and δ 2 H, respectively, amount to −27‰ and −209.2‰ in winter, −16.3‰ and −126.2‰ in spring, −10.3‰ and −82.0‰ in summer, and −20.2‰ and −151.6‰ in autumn (Figure 4). Figure 5 shows that regional precipitation plots close to the Global Meteoric Water Line (GMWL), although it is isotopically lighter. River inflow also shows an offset from the GMWL. The annual river inflow averages −15.5‰ for δ 18 O and −117.4‰ for δ 2 H [2]. The isotopic composition of Lake Baikal is close to that of precipitation at Irkutsk, whereas the δ 18 O and δ 2 H compositions of some surface samples were close to those for Bagdarin and Irkutsk (Figure 5). In general, the isotopic composition of Lake Baikal is "lighter" than regional precipitation and river inflow. On the other hand, vapor from Lake Baikal should deplete the isotopic composition of precipitation at Irkutsk, and this influence of the lake increases in autumn, when evaporation is at its maximum. This feature of Irkutsk most likely explains its isotopic difference from Bagdarin. Notably, the isotopic composition of regional precipitation from April to September is isotopically heavier than the lake composition (Figures 2 and 4). A similar disproportion in the isotopic composition of the lake, precipitation, and river inflow was also observed in 1991 and 1992 [2]. Most likely, this indicates that Lake Baikal is in a transient state with respect to the isotopic characteristics of its water budget.
Isotopic Composition in 1991, 1992, and 2021
Inputs from precipitation, river inflow, and groundwater fluxes form the water balance of Lake Baikal, whereas outputs occur mainly as outflow through the Angara River (the southern basin) and evaporation. Figure 6 shows that the maximum inflow to Lake Baikal occurs from June to August, while evaporation begins to increase gradually from September and prevails over the inflow from November to December. In 1991 and 1992, Lake Baikal was characterized by mean δ 18 O and δ 2 H values of −15.9‰ and −123.2‰, respectively. At present, these values are −15.5‰ (δ 18 O) and −122.2‰ (δ 2 H). It is widely acknowledged that there is a clear positive relationship between rising air and ocean temperatures and the enrichment of precipitation in heavy oxygen and hydrogen isotopes (e.g., [11]). In Russia and the Baikal region, the temperature rises by +0.45 °C and +0.34 °C per decade, respectively [24]. Based on this trend, the shift in the air temperature of the Baikal region is approximately +1.02 °C over the past 30 years. The δ 18 O/T gradient for weighted mean monthly precipitation in the Lake Baikal region ranges from 0.36 to 0.50‰ per °C [2,14]. Thus, we can assume that the revealed changes in the isotopic characteristics of Lake Baikal occurred due to an increase in the δ 18 O and δ 2 H of regional precipitation with global warming during the past decades. However, there is no significant difference in the monthly offsets of the regional isotopic values of precipitation for 1971 and 1990, from 1996 to 2000, and from 2012 to 2017 (Figure 5), and Lake Baikal is still isotopically depleted compared to precipitation. In this regard, modern isotope values of river inflow are most likely close to those in 1991 and 1992 (−15.5‰ for δ 18 O and −117.4‰ for δ 2 H) reported by Seal and Shanks [2]. Moreover, surface samples from Barguzin Bay and the Selenga Delta show isotopic compositions and d-excess (~6‰) close to those for the Selenga and Barguzin rivers in 1991 and 1992. The outflow through the Angara River is constant throughout the year. The net inputs are approximately equal to the net outputs, so there is no net change in the volume of the lake (Table 1). Furthermore, the Angara River discharges water from the lake's 0 to 50 m depth layer. Ultimately, the outflow through the Angara River should not result in a change in the isotopic composition of the lake.
Seal and Shanks [2] suggested that the "light" isotopic water of Lake Baikal formed during the Little Ice Age.
According to Equation (1) from Gonfiantini [25], we estimated the change in the isotopic composition of the lake, δL(t), with a time span of one month:

δL(t) = δs + (δ0 − δs) · exp[−(1 + m·x) · It/V] (1)

where δ0 is the initial isotopic composition of the lake in the previous month, δs is the steady-state isotopic composition that the lake approaches as t → ∞, It is the total monthly inflow (km³) into the lake, and V is the volume of Lake Baikal (km³). The steady-state isotopic composition δs is given by Gonfiantini [25] and Gat [26] as Equation (2):

δs = (δI + m·x·δ*) / (1 + m·x) (2)

where x = E/I (evaporation/inflow), m is the temporal slope from Gibson et al. (2016) [4], δ* is the limiting factor for isotope enrichment from Gonfiantini [25], and δI is the amount-weighted isotopic composition of the total monthly inflow. This calculation began with June 1992, when δ0 was −15.9‰ for δ 18 O and −123.2‰ for δ 2 H; the δ of monthly precipitation was as shown in Table 1, and the bulk annual δ of river inflow was used (−15.5‰ for δ 18 O and −117.4‰ for δ 2 H) [2]. These calculations indicate that the change in the isotopic composition of Lake Baikal over time can be described as δ 18 O = 0.0013x − 15.9‰ (R² = 0.99) and δ 2 H = 0.0072x − 123.2‰ (R² = 0.99), where x is the number of months (Figure 7). The calculated values for the shift from June 1992 to June 2021 (348 months) were δ 18 O = −15.44‰ and δ 2 H = −120.69‰. These calculated values and the measured mean values (−15.5‰ for δ 18 O and −122.2‰ for δ 2 H) are very close, and these regression models can be used to calculate the time when Lake Baikal will reach a steady-state isotopic composition. According to Equation (2), the mean annual values of δs should be close to −12.36‰ for δ 18 O and −103.64‰ for δ 2 H. If regional climate parameters do not change dramatically, Lake Baikal will reach these values in ca. 226 yr.
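A minimal numeric sketch of Equations (1) and (2) follows. The values of m, x, δ* and the inflow are placeholders (the paper derives them from regional climate data and Table 1), while δ0 and δI come from the text above, so the printed numbers only illustrate the shape of the calculation rather than reproducing the published figures.

```python
# Sketch of the Gonfiantini isotope balance (Eqs. 1 and 2) for d18O.
import numpy as np

V = 23_615.0   # lake volume, km^3 (commonly cited value; an assumption here)
I_yr = 60.0    # total annual inflow, km^3/yr (placeholder)
x = 0.15       # evaporation/inflow ratio E/I (placeholder)
m = 1.0        # temporal enrichment slope (placeholder)
d_star = 5.0   # limiting isotopic enrichment, permil (placeholder)
d_I = -15.5    # amount-weighted inflow d18O, permil (from the text)
d_0 = -15.9    # lake d18O in June 1992, permil (from the text)

# Eq. (2): steady-state composition approached as t -> infinity.
d_s = (d_I + m * x * d_star) / (1 + m * x)

def d_lake(t_years: float) -> float:
    """Eq. (1) applied over t years of constant forcing."""
    return d_s + (d_0 - d_s) * np.exp(-(1 + m * x) * (I_yr / V) * t_years)

print(f"steady state:       {d_s:.2f} permil")  # the paper reports -12.36 permil
print(f"2021 (29 yr):       {d_lake(29):.2f} permil")
print(f"far future (250 yr): {d_lake(250):.2f} permil")
```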
In Lake Baikal, the residence time of water is about 330 yr, and that of trace elements is 94 to 320 yr [27]. Based on the obtained linear regressions and the time estimates of the steady-state isotopic composition, we can assume that the lake entered an isotopic transient state ca. 1920. The global climate transition from the Little Ice Age to the Recent Warming might have triggered this change in Lake Baikal. According to many reconstructions of global surface temperatures for the Northern Hemisphere, the transition from the Little Ice Age to the Recent Warming was characterized by a sharp increase in the annual temperature that occurred ca. 1850 to 1860 [28-30]. However, records from East Siberia revealed that significant changes in climate parameters, vegetation, and lake bio-productivity were not intense until 1900 [30]. It seems that the regional climate fully transitioned to the Recent Warming between 1900 and 1920. Most likely, the identified changes in the isotopic composition of the lake are due to changes in the water balance and the gradual replacement of the water body formed before 1920 by modern influxes, rather than to the climate changes of recent decades. Figure 2 clearly indicates that the upper layer (down to 100 to 150 m) is colder and enriched in oxygen compared to the underlying water layers. Ultimately, thermal stratification and storms force this cold layer to sink to the bottom because it is denser than the surrounding water masses [6,31]. A fingerprint of this vertical mixing is the temperature and oxygen "anomalies" in the near-bottom layer. The estimated mean residence times for layers >250 m in the southern, central, and northern basins range from 11 to 17 yr, from 10 to 18 yr, and from 6.2 to 11 yr, respectively [6,7,32]. According to this vertical mixing mechanism, the upper and bottom layers could be younger than the "core" of the lake. The identified changes in the isotopic composition (Figure 2) at the interface 100 to 150 m from the bottom most likely have this origin. Notably, the vertical water exchange does not occur every year. For instance, between 2005 and 2012, the mixing in the southern, central, and northern basins occurred with frequencies of 65, 50, and 45% yr⁻¹, respectively [33].
Renewal of Lake Baikal
The determined d-excess (d-excess = δ 2 H − 8·δ 18 O) for the bottom interfaces is uniform between stations within each basin of Lake Baikal. In this regard, it seems that the d-excess can be a marker of the time span between the formation of the upper and near-bottom layers. For example, there is no significant offset in the d-excess for the southern basin, which can be evidence that these bottom layers were formed by one generation of downwelling. On the contrary, in the central basin, UH1, located 3 km from the coast, shows an offset in d-excess of ca. 1‰ from the pelagic stations UH2 and IZ. The maximum offset (1.5‰) in the d-excess of the bottom layers was determined in the northern basin (Figure 2).
The average distance between the stations with offsets in the d-excess of the near-bottom layers is 30 km (Figure 1). Based on this distance, the size of the mixing cell of downwelling is close to 30 km. Additionally, the d-excess values of the upper (0 to 150 m) and bottom layers of the southern basin are very close. This most likely indicates that vertical renewal of water in the southern basin is more frequent than in other parts of Lake Baikal. The northwesterly wind dominates in the Baikal region [34]. Thus, the frequent renewal of the southern basin can be due to more intensive wind-induced convection. The central and northern basins are partially shielded from wind penetration by the Primorsky, Baikalsky, and Barguzin Ridges.
In general, the identified renewal in the southern and central basins might have occurred more recently than in the northern basin. Moreover, the bottom interface in the northern basin was maximally depleted in oxygen over the vertical profiles, and vice versa in the southern and central basins (Figure 2). Periods without renewal of water at the bottom of the lake are characterized by a DO deficiency due to organic matter oxidation in the bottom sediments.
Stable isotope analysis can be used for qualitative identification of water sources in a water treatment context [35,36]. Settlements located around the shoreline of Lake Baikal either have no water treatment stations or have stations that operate incorrectly, without quality treatment of wastewater [37]. Ultimately, wastewater flows into the shallow zone of the lake with groundwater. Notably, domestic wastewater is enriched in heavy isotopes by ~0.12 to 4‰ for δ 18 O [38,39]. The number of tourists visiting Lake Baikal was approximately 1.8 M yr⁻¹ before the COVID-19 pandemic. Hence, we can assume that wastewater could enrich the shallow zone with heavy δ 18 O. This study does not present pelagic samples with "anomalously heavy" δ 18 O. However, in the future, the δ 18 O and δ 2 H composition of the Lake Baikal shallow zone should be studied in detail.
Conclusions
We studied the distribution of oxygen and hydrogen isotopes, temperatures, and oxygen concentrations in Lake Baikal as hydrological proxies in June 2021, filling a 30-year gap in the study of δ 18 O and δ 2 H. We have revealed that the current values of δ 18 O (−15.5‰) and δ 2 H (−122.2‰) are "heavier" by ca. 0.4 and 1.0‰ than those in 1991 and 1992. Based on the evolution of the isotopic composition of the lake from June 1992 to June 2021 with a time span of one month, we have calculated a linear regression model for the change in the isotopic composition per unit of time. The isotopic values in 2021 are still lower than the calculated steady-state composition (−12.3‰ for δ 18 O and −103.6‰ for δ 2 H). Therefore, Lake Baikal most likely has not yet reached the isotopically steady state. This study revealed that the lake began to change ca. 1920 and will reach the isotopically steady state in ca. 226 yr. The gradual replacement of the water body formed before 1920 by modern influxes explains the identified shifts in isotopic composition well.
The closeness of the calculated and measured δ 18 O and δ 2 H values demonstrates the relevance of the equations used for estimating the isotopic status of large lakes.
There are offsets in temperatures and oxygen concentrations in the 100 to 150 m near-bottom water interface relative to the overlying water layers. The δ 18 O and δ 2 H fingerprints have been used here for the first time to substantiate vertical mixing between the uppermost (0 to ca. 150 m) and near-bottom layers. It seems that the vertical renewal of water in the southern basin of Lake Baikal is more frequent than in other parts of the lake. The size of the mixing cell of downwelling is close to 30 km.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author, Fedotov A. (mix@lin.irk.ru). | 2021-12-12T17:28:38.460Z | 2021-12-06T00:00:00.000 | {
"year": 2021,
"sha1": "24f34b95a0fd66afdc2f0a35cb35c655c3b3c4b5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/13/23/3476/pdf?version=1638865882",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "46a32f6c12bd70c9521742f0435c3fc11885c6a9",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
57574635 | pes2o/s2orc | v3-fos-license | Tauopathy in veterans with long-term posttraumatic stress disorder and traumatic brain injury
Purpose Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) have emerged as independent risk factors for an earlier onset of Alzheimer's disease (AD), although the pathophysiology underlying this risk is unclear. Postmortem studies have revealed extensive cerebral accumulation of tau following multiple and single TBI incidents. We hypothesized that a history of TBI and/or PTSD may induce an AD-like pattern of tau accumulation in the brain of nondemented war veterans. Methods Vietnam War veterans (mean age 71.4 years) with a history of war-related TBI and/or PTSD underwent [18F]AV1451 PET as part of the US Department of Defense Alzheimer's Disease Neuroimaging Initiative. Subjects were classified into the following four groups: healthy controls (n = 21), TBI (n = 10), PTSD (n = 32), and TBI+PTSD (n = 17). [18F]AV1451 reference tissue-normalized standardized uptake value (SUVr) maps, scaled to the cerebellar grey matter, were tested for differences in tau accumulation between groups using voxel-wise and region of interest approaches, and the SUVr results were correlated with neuropsychological test scores. Results Compared to healthy controls, all groups showed widespread tau accumulation in neocortical regions overlapping with typical and atypical patterns of AD-like tau distribution. The TBI group showed higher tau accumulation than the other clinical groups. The extent of tauopathy was positively correlated with the neuropsychological deficit scores in the TBI+PTSD and PTSD groups. Conclusion A history of TBI and/or PTSD may manifest in neurocognitive deficits in association with increased tau deposition in the brain of nondemented war veterans decades after their trauma. Further investigation is required to establish the burden of increased risk of dementia imparted by earlier TBI and/or PTSD. Electronic supplementary material The online version of this article (10.1007/s00259-018-4241-7) contains supplementary material, which is available to authorized users.
Introduction
Alzheimer's disease (AD) is the most common form of dementia in the elderly, leading to a progressive deterioration of memory and spatial cognition, along with other cognitive impairments [1]. AD pathology is characterized by the aggregation of amyloid-β and phosphorylated tau [2][3][4], and tau deposition is particularly associated with progression of clinical symptoms [2]. It is increasingly recognized that traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) increase the risk of cognitive decline and dementia [5,6], suggesting a link with AD. In addition, there is considerable comorbidity of PTSD with TBI in both civilian and military settings [7][8][9], which raises the possibility of synergistic effects favouring the risk of dementia.
A retrospective cohort study by Yaffe et al. showed that veterans with PTSD have a twofold higher risk of developing dementia than veterans without PTSD [10]. In addition, a systematic review revealed an association between TBI and the development of AD with an odds ratio of 2.3 [11]. These associations imply that the two conditions may interact by increasing the risk of neurodegeneration and dementia. Indeed, several neuroimaging studies have shown overlapping patterns of brain volume loss in TBI, PTSD and AD [12-14]. Post-mortem investigations have shown intraneuronal tau accumulation after a single TBI incident [15] and in subjects with multiple TBI events suffering from chronic traumatic encephalopathy (CTE) [16]. PET with [ 18 F]AV1451 [17] and other tau ligands [18] has recently been used to detect tau deposits in the brain of living AD patients. There is a single report in abstract form of tauopathy in the cerebral cortex of living veterans with PTSD [19].
According to the National Institute on Aging and the Alzheimer's Association (NIA-AA) research framework, AD is defined by the presence of both amyloid-β and pathological tau deposits. However, when amyloid deposition is accompanied by primary age-related tauopathy, the disorder should properly be designated as "Alzheimer's pathological change", which could be considered an early presentation of the "Alzheimer's continuum" [20]. Tau PET is a new in vivo molecular imaging modality used to investigate the progression of tauopathy in the brain, and has been correlated with the Braak neurofibrillary tangle (NFT) stages as defined post mortem [21]. Indeed, Schwarz et al. used [ 18 F]AV1451 PET to identify Braak stages that represent the well-defined neuroanatomical signature of tau pathology in typical AD [22]. Furthermore, elevated tau binding on PET has been shown to be associated with amyloid positivity and cognitive impairment in both normal ageing and dementia [20,22-24].
Inspired by this background, we analysed tau PET data that had been acquired using [ 18 F]AV1451 PET from the Alzheimer's Disease Neuroimaging Initiative-Department of Defense (ADNI-DOD) study of nondemented Vietnam War veterans suffering from service-related TBI, PTSD, and comorbid TBI with PTSD. Using parametric mapping procedures, we evaluated tau deposition in cohorts with TBI and/or PTSD compared with healthy veterans, and addressed the relationship between the individual tau burden and cognitive test scores. In addition, we investigated tau pathology in relation to amyloid PET findings and to the histopathological Braak stages, which were defined using the criteria of Schwarz et al. [22].
Study design
All data were obtained from the ADNI-DOD which is a multimodal (MRI, PET and neuropsychological assessment), nonrandomized study that recruited Vietnam War veterans selected from the Department of Veterans Affairs compensation and pension records, investigating TBI and/or PTSD as potential risk factors for the development of AD. ADNI-DOD is part of the ADNI project launched in 2003 as a public/private partnership led by Principal Investigator Michael W. Weiner, MD. All participants signed a consent form, and the use of deidentified data was approved by the Human Research Ethics Committee of the University of Queensland, Australia (IRB number 2017000630).
[ 18 F]AV1451 PET for tau imaging had been performed in a total of 99 subjects as part of the ADNI-DOD study, and 81 of these subjects had their T1-weighted structural MRI data available at the time of this research. Data from one female participant were excluded to avoid gender effects, leaving a total of 80 datasets from male Vietnam War veterans of mean age 71.4 ± 5.1 years. PET with [ 18 F]AV45 for amyloid imaging had also been performed in all 80 subjects. Subjects were classified into the following four groups: healthy controls (n = 21), moderate/severe TBI (n = 10), PTSD (n = 32), and TBI with PTSD (TBI+PTSD, n = 17). All subjects' clinical categories were identified from the "VAELG.csv" file provided by the ADNI-DOD administration. In addition, subjects with mild cognitive impairment (MCI) were identified by ADNI-DOD based on cognitive test scores. The TBI subjects had a documented history of moderate-to-severe nonpenetrating TBI during their military service. PTSD subjects were identified using the clinician-administered PTSD scale (CAPS) within DSM-IV (CAPS score >40).
In addition to imaging, all participants completed several neuropsychological questionnaires, including Everyday Cognition (ECog), Clinical Dementia Rating (CDR), Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MOCA), Alzheimer's Disease Assessment Scale-Cognitive (ADAS-Cog), Geriatric Depression Scale, Functional Assessment Questionnaire, Combat Exposure Scale, and the Armed Forces Qualification Test (AFQT). All participants were also assessed using a battery of neuropsychological tests including the Clock Drawing Test, the Rey Auditory Verbal Learning Test, the Category Fluency Test, the Trail Making Test, the Boston Naming Test, and the American National Adult Reading Test. The ECog ratings were self-reported by the participants and cover multiple cognitive domains, including language, memory, visuospatial ability, and executive function (planning, organization, and divided attention). The ECog questionnaire contains 39 items, which are rated on a four-point scale: 1 = better or no change compared with 10 years earlier, 2 = questionable/occasionally worse, 3 = consistently a little worse, 4 = consistently much worse; subjects can respond with "9" if they wish to indicate "I don't know." The score of each category was calculated as the average of the answered questions in that category and the total ECog score as the mean of all answered questions in all categories.
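For concreteness, this scoring rule can be written out in a few lines of R. This is a minimal sketch only: it assumes the 39 item ratings sit in a subjects-by-items table ('items') and that 'item_domain' maps each item to its domain; both object names are hypothetical, and only the averaging rules described above are taken from the study.

# Score the ECog questionnaire: "9" answers are treated as missing, each
# domain score is the mean of the answered items in that domain, and the
# total score is the mean of all answered items.
score_ecog <- function(items, item_domain) {
  x <- as.matrix(items)                 # subjects x 39 item ratings (1-4, or 9)
  x[x == 9] <- NA                       # "I don't know" does not enter averages
  domain_scores <- sapply(split(seq_len(ncol(x)), item_domain), function(cols)
    rowMeans(x[, cols, drop = FALSE], na.rm = TRUE))
  cbind(domain_scores, ECogTotal = rowMeans(x, na.rm = TRUE))
}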
MRI/PET image acquisition and processing
PET tau imaging was performed with [18F]AV1451. Data acquisition procedures were standardized across all ADNI sites (information can be found at: http://adni.loni.usc.edu/wp-content/uploads/2015/02/01_DOD-ADNI_Tau-Addendum-Protocol_23Oct2014.pdf). Data were preprocessed and analysed as described in our previous paper [25] using the FMRIB software library (FSL 5.0.9). MRI images were corrected for intensity inhomogeneity, skull-stripped, and segmented using RECON-ALL [26] from FreeSurfer. Structural data were then resampled to an isotropic resolution of 1.5 mm and normalized to the Montreal Neurological Institute (MNI) structural template nonlinearly using FSL-FNIRT [27].
The preprocessed data were downloaded from ADNI-DOD (http://adni.loni.usc.edu/methods/pet-analysis-method/petanalysis/). The four sequential emission frames were coregistered, and standardized uptake values (SUV) were calculated and averaged. SUV maps were intensity-normalized and spatially smoothed using a scanner-specific filter function to generate SUV maps with a uniform isotropic resolution of 8 mm full-width at half-maximum. The SUV maps were skull-stripped using FSL-BET and linearly coregistered to each individual's T1-weighted image using FSL-FLIRT. Each individual's SUV map was scaled to the mean intensity in a cerebellar grey matter template to generate reference tissue-normalized standardized uptake value ratio (SUVr) maps [28] in the native (individual) space. Finally, [18F]AV1451 SUVr maps were spatially normalized to the MNI template using the transformation matrix and warp calculated for the T1 structural MR-to-MNI registration.
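The reference-region scaling step can be illustrated with a small R sketch. The objects 'suv' (a coregistered SUV map) and 'cereb_gm' (a logical cerebellar grey matter mask) are hypothetical placeholders; the division by the mean reference uptake is the operation described above.

# Scale a SUV map by the mean uptake in the cerebellar grey matter reference
# region to obtain a reference tissue-normalized SUVr map.
make_suvr <- function(suv, cereb_gm) {
  ref <- mean(suv[cereb_gm])   # mean uptake in the cerebellar reference
  suv / ref                    # voxelwise SUVr map
}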
To assess amyloid positivity, SUV maps of the amyloid [18F]AV45 PET tracer from the same subjects were downloaded. The acquisition parameters were as described previously [25]. The [18F]AV45 SUV maps were coregistered to individual T1-weighted MR images in native space using FSL-FLIRT, and amyloid-PET SUVr values were calculated using the whole cerebellum as the reference region [22]. To identify amyloid-positive subjects, a global SUVr score was calculated as the mean SUVr over the whole cerebral cortex; subjects with SUVr >1.1 were deemed amyloid-positive.
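The amyloid-positivity call reduces to a single threshold on the global cortical mean. A minimal R sketch, with hypothetical 'av45_suvr' and 'cortex_mask' objects and the 1.1 cutoff stated above:

# Global cortical SUVr and the binary amyloid-positivity call.
global_suvr <- function(av45_suvr, cortex_mask) mean(av45_suvr[cortex_mask])
is_amyloid_positive <- function(av45_suvr, cortex_mask, cutoff = 1.1) {
  global_suvr(av45_suvr, cortex_mask) > cutoff
}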
Regions of interest and algorithm for estimating Braak staging using [18F]AV1451 SUVr
Braak staging is based on the characteristic progression of tau pathology starting in the medial temporal lobe and eventually encompassing the neocortex, as revealed by post-mortem examination. We applied methods developed by Schwarz et al. [22], whose algorithm scores Braak staging noninvasively using [18F]AV1451 SUVr measured in the entorhinal cortex, hippocampus, superior and middle temporal gyri (STG, MTG), fusiform cortex, lingual gyrus (BA17), and pericalcarine visual cortex (V1+V2+V3). Whereas Schwarz et al. defined regions of interest (ROIs) in MNI space based on 2-mm isotropic voxels lying close to slices of the histological Braak staging protocol [29], we defined the same ROIs in the individual's native space after FreeSurfer segmentation. We then calculated the mean [18F]AV1451 SUVr in bilateral ROIs, from which we assigned the Braak stage using the staging algorithm described by Schwarz et al. [22], with visual confirmation from the SUVr maps. The final Braak stage was defined as the highest score between the two hemispheres.
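The staging logic can be sketched as follows; note that this is illustrative only - the ROI ordering follows the list above, but the cutoffs are placeholders rather than the published Schwarz et al. values, and the real algorithm is more involved.

# Assign a Braak-like stage from ROI SUVr values for one hemisphere;
# 'roi_suvr' must be ordered along the assumed Braak progression.
assign_braak <- function(roi_suvr, cutoffs) {
  elevated <- roi_suvr > cutoffs
  if (!any(elevated)) return(0L)
  max(which(elevated))                     # highest ROI with elevated signal
}
rois    <- c("entorhinal", "hippocampus", "fusiform", "lingual", "STG_MTG", "visual")
cutoffs <- rep(1.2, length(rois))          # placeholder thresholds, not published ones
# Final stage = the higher of the two hemisphere-wise estimates, e.g.:
# max(assign_braak(left_suvr, cutoffs), assign_braak(right_suvr, cutoffs))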
Statistical analysis
To investigate tau accumulation associated with a history of TBI and/or PTSD, the three clinical groups were compared to the healthy control group using voxel-based approaches encompassing all brain voxels of the tau PET SUVr images, using a nonparametric permutation test (FSL-randomise) with 5,000 permutations and correction for multiple comparisons using the false discovery rate (FDR) (p < 0.05). All analyses were corrected for ApoE4 status, age, MCI status (for confirmed cases) and hypertension. We investigated the correlation between tau accumulation and the ADAS-Cog score, ECog total score, and CDR score, as well as total cerebral amyloid. These correlations were calculated using multilinear regression performed with FSL-GLM, which generated Pearson correlation maps, with FDR correction for multiple comparisons (p < 0.05).
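Although the voxelwise testing was done with FSL-randomise, the permutation-plus-FDR logic can be sketched in R for clarity. 'suvr' (a voxels-by-subjects matrix) and 'grp' (a two-level group factor) are hypothetical inputs, and covariate correction is omitted.

# One-sided permutation test of a group difference at every voxel, followed by
# FDR correction across voxels.
perm_fdr <- function(suvr, grp, n_perm = 5000) {
  diff_means <- function(g) rowMeans(suvr[, g == levels(g)[2], drop = FALSE]) -
                            rowMeans(suvr[, g == levels(g)[1], drop = FALSE])
  obs <- diff_means(grp)
  exceed <- numeric(nrow(suvr))
  for (i in seq_len(n_perm)) {
    exceed <- exceed + (diff_means(sample(grp)) >= obs)  # permuted null differences
  }
  p <- (exceed + 1) / (n_perm + 1)      # one-sided permutation p-values
  p.adjust(p, method = "fdr")           # FDR across voxels
}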
Statistical analyses were performed with R, version 3.3.1 (R Foundation for Statistical Computing, Vienna, Austria). Differences in neuropsychological assessment measures and ROI-based SUVr values between groups were evaluated using the Kruskal-Wallis test, with the significance level set at p < 0.05 persisting after Bonferroni correction for multiple comparisons (n = 6). To investigate tau distribution across the cortex, each individual's mean regional [18F]AV1451 SUVr values in the frontal, cingulate, parietal, and temporal lobes were extracted.
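As a sketch of the group comparisons on a single neuropsychological measure, in base R with toy placeholder scores (the group sizes follow the cohort described above):

# Omnibus Kruskal-Wallis test plus Bonferroni-corrected pairwise comparisons
# (six group pairs) on placeholder data.
set.seed(1)
group <- factor(rep(c("HC", "TBI", "PTSD", "TBI+PTSD"), times = c(21, 10, 32, 17)))
score <- rnorm(length(group))                # placeholder test scores
kruskal.test(score ~ group)                  # omnibus group difference
pairwise.wilcox.test(score, group, p.adjust.method = "bonferroni")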
Clinical outcome in TBI and/or PTSD groups
In this cross-sectional study, cognitive function in groups of veterans with a history of TBI and/or PTSD was investigated. The four subject groups were healthy controls (age 74.3 ± 7.2 years, mean ± standard deviation), TBI (72.6 ± 6.8 years), PTSD (70 ± 2.7 years), and TBI+PTSD (69.9 ± 2.5 years; Table 1). Overall, the neuropsychological test results suggested that cognitive deficits were more pronounced in the PTSD and TBI+PTSD groups than in the TBI or healthy control groups, although no subject met diagnostic criteria for AD on any test (Fig. 1, Table 1).
According to the information provided by ADNI-DOD, subjects with memory deficits were identified by applying the criterion of a CDR score of ≥0.5 to each group: 14 of 32 subjects were identified in the PTSD group, 4 of 10 in the TBI group, 6 of 17 in the TBI+PTSD group, and none of 21 in the healthy controls. Furthermore, MCI was diagnosed in 8 subjects in the PTSD group (6 amnestic and 2 nonamnestic), 4 subjects in the TBI group (all amnestic), and 3 subjects in the TBI+PTSD group (all amnestic; Table 1). Thus, we found that at least one third of the subjects with PTSD and/or TBI had a significant memory decline based on the CDR score, most of whom were diagnosed with amnestic MCI, suggesting an ongoing memory decline with likely eventual conversion to AD. Those same subjects had tau pathology with Braak stages II-V, which is consistent with previously reported findings in MCI subjects [22].
The TBI+PTSD group likewise showed higher mean SUVr (Fig. 2b). Figure 3 shows the [18F]AV1451 SUVr in the frontal, parietal, and temporal lobes, along with the cingulate cortex, in each of the clinical groups. The TBI group showed significantly higher SUVr only in the frontal lobe (1.06 ± 0.05 versus 1.00 ± 0.07; p = 0.015) as compared to healthy controls, whereas the PTSD and TBI+PTSD groups showed no significant differences in any of the large ROIs compared to healthy controls (p > 0.05).
The TBI group (Fig. 5a) showed a negative correlation between [18F]AV1451 SUVr and ECog total score only in the left SMG (r = −0.45; p = 0.02); the corresponding correlations for the TBI+PTSD group are shown in Fig. 5b. Of the three clinical groups, the TBI+PTSD group showed the most significant positive correlations between tau PET data and CDR scores, whereas the PTSD group showed a trend towards a positive correlation (see Supplementary Fig. 1). This might suggest that individuals with a CDR score ≥0.5 have relatively more tau accumulation in regions typically involved in AD. However, as part of the inclusion criteria of ADNI-DOD, none of the participants had a diagnosis of AD or other dementia at the time of scanning (Table 2).
Discussion
In vivo tau PET imaging in our clinical groups revealed increased tau tracer binding with topographical patterns resembling the distributions of tau pathology in neurodegenerative disorders such as AD and CTE [29][30][31][32]. We also observed positive correlations between tau and the severity of deficits in the various cognitive tests in the PTSD and TBI+PTSD groups. These results suggest that a history of TBI and/or PTSD might initiate pathological changes eventually coming to resemble aspects of tauopathy in AD, and manifesting in significant (but not yet pathological) deficits across a range of cognitive domains.
Neurocognition suggests more progressive impairments of TBI+PTSD and PTSD towards AD
Among subjects in the investigated clinical groups, PTSD subjects exhibited the worst cognitive performance in all assessments, followed by TBI+PTSD subjects, whereas cognitive scores in TBI subjects and healthy controls did not differ significantly (Fig. 1, Table 1). Yaffe et al. have shown that military personnel with PTSD are twice as likely to develop dementia as those without PTSD [10]. TBI and PTSD are highly comorbid conditions in civilian life and among veterans [7,9], and both conditions are associated with an increased risk of developing dementia later in life [5]. This link between TBI and PTSD may result from the physical injury and consequent cognitive impairments arising from TBI [33], or may be due to persistent trauma-related memory [34]. The ADAS-Cog, MMSE, ECog and CDR scores all showed greater memory and cognitive impairment in subjects with PTSD, recapitulating the findings of our earlier study in a larger group of ADNI-DOD subjects who had undergone amyloid PET imaging [25]. In their review, Regehr and LeBlanc found that the degree of impairment of cognitive and working memory was correlated with the severity of PTSD [35].
In the present study, >35% of the subjects with TBI and/or PTSD had some memory decline (CDR score ≥0.5), and most of these subjects were diagnosed with amnestic MCI, suggesting a progressive memory decline and raising the suspicion of early AD pathology. Indeed, these subjects were classified as Braak stages II-V, which is consistent with the range of Braak stages reported in MCI subjects [22]. Furthermore, this also suggests that a history of TBI and/or PTSD might predict memory deficits occurring decades after the trauma.
Increased tau deposition in TBI+PTSD and PTSD might suggest typical AD-like progression and a possible link to AD
In the present study, elevated tau deposition (10-20%) was found in the cerebral cortex of TBI subjects compared with controls. Tau is a scaffolding protein binding axonal microtubules and other proteins, and TBI causes tau to abnormally phosphorylate, misfold and cleave, and thus to form NFTs [36]. A post-mortem study of long-term (up to 49 years) survivors of a single TBI event showed exceptionally abundant NFTs in the cingulate gyrus, SFG and insular cortex, which led the authors to suggest a causal relationship between a single TBI event and the acquisition of AD-like neuropathological features [15]. Tauopathy has also been reported in cohorts of individuals with a history of repetitive TBI leading to CTE and ultimately proceeding to AD [16,32], and in a group of players of American football with repeated concussion who showed high [18F]AV1451 uptake in the cortical grey matter-white matter junction of multiple regions, which is considered pathognomonic for CTE [16]. The relationship between TBI and tau deposition may be a consequence of the physical damage to the axonal cytoskeleton by shearing forces [37] in conjunction with the nucleation of abnormal tau promoting the formation of NFTs [38]. This biophysical model of tau pathogenesis was proposed by Ahmadzadeh et al., who suggested that tau-crosslinked microtubules are sufficiently flexible to accommodate mechanical strain in the brain when it arises slowly [39,40], but may fail if severe mechanical strain arises rapidly and overwhelms the integrity of microtubules crosslinked by tau, causing tau dissociation and aggregation [41,42].
Another possible mechanism may be that damage to the blood-brain barrier (BBB) after TBI facilitates tau accumulation. In this scenario, TBI induces NFT formation particularly around small blood vessels of the cortex, typically in the depths of the sulci, and this may lead to CTE [32]. Ramos-Cejudo et al. proposed that TBI first accelerates amyloid aggregation, leading to cerebrovascular injury and BBB damage, which then results in a deleterious feed-forward mechanism in which increased arterial stiffness favours further amyloid and tau deposition [42]. PET and histopathological examination have shown that amyloid plaque density increased within a year of the occurrence of a TBI event [43,44]. On the other hand, Chen et al. found no evidence of provoked amyloid plaques in subjects who had suffered their TBI 3 years previously, despite ongoing elevation of the expression of the amyloid precursor protein in the white matter [45].
Taken together, these studies imply that transient amyloid plaques may form rapidly after TBI, but are normally cleared in subsequent years. This acute or transient response to TBI might be an initiator of a more chronic increase in tau accumulation in a pathological cascade that eventually leads to a form of tauopathy. The TBI group included subjects showing an AD-typical profile of tau deposition, with regions of increased tau appearing during Braak stages I-IV, in addition to atypical-AD regions including the frontal and cingulate cortex (Fig. 2a). A sea change in the perception of the long-term consequences of TBI has been seen in recent years, suggesting that the risks of CTE, Lewy body disease and parkinsonism are higher than the risk of AD [46][47][48]. However, we cannot currently establish if the increased tau in our TBI group was related to AD per se or to other tauopathies, mainly because of the absence of most of the cognitive impairments evident in the PTSD groups. Longitudinal tau PET studies in this or a similar cohort may better establish the relationship between TBI and AD-like pathology. The PTSD group also showed elevated tau accumulation in the neocortex compared with controls. A single report has so far shown increased binding of the tau tracer [18F]AV1451 in subjects with chronic PTSD from an Australian cohort of Vietnam War veterans [19]. To elucidate the underlying mechanism by which PTSD induces tau accumulation, Miller et al. investigated the influence of the lipoxygenase genes ALOX12 and ALOX15 (enzymes involved in inflammatory responses) on the decreasing cerebrocortical thickness seen in subjects with PTSD, and found that ALOX12 moderates the association between PTSD severity and thinning of the prefrontal cortex [49]. The ALOX12 pathway has been found to modulate tau metabolism [50] and may be a mediator of inflammatory mechanisms in early AD [51].
By examining the tau accumulation profiles in individual subjects, we were able to identify those with PTSD and TBI+PTSD who showed tau profiles similar to those in AD patients. Jack proposed that early accumulation of cortical amyloid might accelerate the progression and spread of tauopathy in AD [52]. In this model, "primary age-related tauopathy" develops at some stage in life, followed by increased amyloid deposition in certain neocortical areas that triggers (by an unidentified mechanism) accelerated tauopathy, ultimately leading to severe cognitive deficits and AD [20,[22][23][24]. In the current study, elevated tau binding on PET was positively correlated with amyloid positivity and cognitive impairment in the PTSD and TBI+PTSD groups, but this association was not present in the TBI or healthy control groups, suggesting a particular association with PTSD.
Although none of the participants in our cohorts met the clinical diagnostic criteria for AD, the correlation analysis of amyloid and tau PET findings suggested a strong predisposition for tau accumulation to track amyloid deposition, especially in the TBI+PTSD group, thus suggesting a complex relationship between the two pathologies. However, further investigation is required to substantiate this association. In amyloid-negative subjects, tauopathy with Braak stages above zero might be primary age-related tauopathy [37], and this also might explain the occasional finding of tau accumulation in our healthy control group. Alternatively, our criterion for amyloid PET positivity of SUVr >1.1 in the whole cerebral cortex [53] may have resulted in early amyloid changes being missed in some subjects.
We found significant correlations between ADAS-Cog, ECog total and CDR scores and tau accumulation in both the TBI+PTSD and PTSD groups, with the most compelling correlations in the TBI+PTSD group (Figs. 4 and 5, and Supplementary Fig. 1). In this group, the spatial pattern of positive correlations broadly matched the default mode network (DMN), which involves the precuneus, PCC and medial frontal cortex [54]. Furthermore, tau accumulation in these same regions was positively correlated with total cortical amyloid deposition (Fig. 6). These regions of the DMN have previously been shown to contain amyloid deposits in patients with MCI [55] and early AD [54], suggesting that the DMN is the first functional network to be disrupted in AD [55]. These various correlations between tau, cognitive impairments and amyloid may suggest that TBI+PTSD and PTSD subjects are at higher risk of conversion to AD, following the typical AD progression profile proposed by the NIA-AA framework [20].
The data presented here imply that those veterans who developed PTSD following their TBI might be at the highest risk of progression to AD, while those with TBI only might be more at risk of developing other neuropathologies [46][47][48], a conjecture that could be investigated by longitudinal molecular imaging studies. Work by Li et al. showed that a self-reported history of TBI was associated with an onset of cognitive impairment in older adults 3-4 years earlier than in those without a history of TBI [56], but these authors did not report interactions with PTSD.
The major limitation of this study was the small number of subjects in the TBI cohort (n = 10), which was insufficient to support strong conclusions. Further investigations are required to establish better links between TBI or PTSD with tau pathology and the risk of AD or other forms of dementia. In addition, there is a need for further investigation of the mechanisms triggering AD onset and progression. Future studies in a larger cohort may establish cut-off criteria for tau PET conforming to Braak staging.
Conclusion
Our findings show for the first time that a history of TBI and/or PTSD is associated with increased tauopathy resembling AD-typical and atypical patterns, and is correlated with impaired neuropsychological function relative to healthy controls.
Compliance with ethical standards
Conflicts of interest None.
Research involving human participants and/or animals This research involved de-identified human data collected by the ADNI project. This study obtained ethics approval to use de-identified data from the Human Research Ethics Committee of the University of Queensland, Australia (IRB number 2017000630).
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-01-08T14:58:52.026Z | 2019-01-07T00:00:00.000 | {
"year": 2019,
"sha1": "30eefedca386716239e2047a63e398171366e12f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00259-018-4241-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "30eefedca386716239e2047a63e398171366e12f",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242463284 | pes2o/s2orc | v3-fos-license | R577X OF THE ACTN3 GENE AS PREDICTOR OF PHYSICAL PERFORMANCE IN ULTRAMARATHON RUNNERS
ABSTRACT Introduction: Genetic factors appear to explain why some athletes perform better in competition and training than their peers. Objective: To determine the occurrence of the R577X polymorphism of the ACTN3 gene in mountain runners. Methods: The sample consisted of 19 female mountain runners with a mean age of 41.2 ± 6.1 years. Genotyping of the R577X polymorphism of the ACTN3 gene was performed by the polymerase chain reaction (PCR) method with DNA extracted from saliva. The genotypic and allelic frequencies of the athletes were evaluated and compared with data from the literature. Hardy-Weinberg equilibrium and the chi-square test with Yates correction were used, with a significance level of p<0.05. Results: The genotypic distributions did not show any significant differences between the athletes and the control group, with RR = 15.8%, RX = 57.9%, XX = 26.3%. In regard to allelic distribution, the frequency of the nonfunctional allele was higher in the study group than in the control group (R = 44.7%, X = 55.3%; p = 0.0350). Conclusion: The data revealed a possible relationship between the ACTN3 X allele and athletic performance in Brazilian female mountain runners. Level of evidence II; Development of diagnostic criteria in consecutive patients (with "gold" reference standard applied).
INTRODUCTION
The improvement of physical performance in high-performance sports may be linked to genetic predisposition. 1 For more than a decade, studies have shown that genetics may explain the different responses and better performance of some athletes in specific modalities when comparing them with their peers. 2 Moreover, genetic factors can influence up to 50% of the phenotypic characteristics related to performance, training and physical fitness of high-performance athletes. 3 As many genes and gene regions have already been related to phenotypes of human physical performance, the association between genetics and phenotypic profile can help athletes develop ideal morphophysiological features for certain sports, making them less susceptible to injuries and more prepared for training and competitions. 1 Thus, the study of genetic polymorphisms can serve as a predictor of physical performance. 4 However, when dealing with studies that consider the ethnicity and race of the population studied, we must remember that the phenotypic effects of some polymorphisms may be expressed in multiple ways in different communities. 3 For Ahmetov et al. 1 and Yang et al. 5 , genetic polymorphisms associated with sporting success are those that bring benefits, especially related to endurance, sprint and muscle power. In this sense, we highlight the R577X polymorphism of the alpha-actinin 3 gene (ACTN3), investigated here because of its association with activities that require muscle strength, sprint, and endurance. 6,7 Located on chromosome 11q13-q14, this polymorphism is the product of a cytosine-to-thymine switch at position 1747 of exon 16, which replaces arginine (R allele) with a premature stop codon (X allele) at amino acid 577, thus allowing three genotypes: RR, RX and XX. 8 The arginine (R) allele permits normal expression of the α-actinin-3 protein in skeletal muscle, while the stop codon (X allele) prevents homozygous individuals (genotype XX) from producing α-actinin-3, decreasing the cross-sectional area of muscles with a predominance of type 2 fibers and thus reducing muscle mass when compared to the RR and RX genotypes. 8,9 However, the non-expression of α-actinin-3 seems to improve aerobic metabolism, increasing muscle and cardiorespiratory endurance. 9,10 The relation of ACTN3 with high performance in sports is described by studies that report a greater presence of the X allele in athletes with muscular endurance, 11,12 while those with muscle power and sprint have the R allele. 7,12 Considering this, our research aims at better understanding the interaction of the ACTN3 R577X polymorphism with the physical performance of female mountain runners in Brazil, since such genetic variation can improve their physical fitness. Moreover, this is an unprecedented analysis of this type of athlete, with the objective of determining the occurrence of the R577X polymorphism in mountain runners.
MATERIALS AND METHODS
This is a descriptive transversal study carried out according to Resolution 466/12, during the Ultramaratona dos Perdidos SkyMarathon®, a 45-km race with 2,900 m of ascent, relative height of 5,800 m, in the second week of July 2017, in Tijucas do Sul, Paraná, Brazil. The study was also approved by a Research Ethics Committee (CEP) under opinion 1,572,571.
The inclusion criteria were runners with three to four years of experience in mountain races, who had completed at least two races above 21 km and one above 42 km between 2015 and 2016, who trained five to six times a week, one to two hours a day and more than three hours at weekends, and who had not reported musculoskeletal diseases. Runners who did not complete the race within the time limit of 11 hours or who did not sign the Informed Consent Form (ICF) were excluded.
Sample
Of the 34 participants, 15 were excluded because they did not meet the inclusion criteria; thus, the sample consisted of 19 mountain runners, with a mean age of 41.2 ± 6.1 years.
Procedures
The runners selected were evaluated in relation to the presence of ACTN3 R577X polymorphism according to the following procedures:
Saliva collection
Saliva collection was performed in the field, in a specific athlete support area, with the participants selected for the study seated in a chair with their feet supported. Initially, a 3% glucose solution was introduced into the participants' mouths, and they swished the solution for two minutes. After this, they spat the liquid into a plastic cup. The researcher then gently rubbed their jugal mucosa with a wooden spatula, washed the spatula in the cup into which they had spat the solution, and transferred the entire content to a 15 ml Falcon tube.
After collection, the samples were transported in a polystyrene foam box with ice packs to the Laboratory of Genetics and Molecular Biology of a private higher education institution in Curitiba/PR, where they were centrifuged at 3,000 rpm for 10 minutes. After centrifugation, the supernatant was discarded. We kept the precipitate, added 1,300 µL of cell extraction buffer (10 mM TRIS, 5 mM EDTA, 0.5% SDS, pH 8) and froze it at −20 °C. 13
DNA extraction
After defrosting the saliva samples, we added 10 μL of Proteinase K (BioLabs, New England) to each extraction tube and kept them at 65 °C in a water bath overnight.
After removal from the water bath, the samples were gently agitated and transferred to a 2 mL Eppendorf microtube. The researchers added 500 μL of ammonium acetate (8 mM acetate, 1 mM EDTA) and mixed the solution in a vortex for five minutes. The samples were then centrifuged at 13,000 rpm for 16 minutes and separated into two 1.5 mL microtubes (900 μL in each one), disregarding the pellet deposited at the bottom of the tube, and 540 μL of isopropanol was added. We gently inverted the tube 20 times until the DNA became visible.
The content with DNA was centrifuged at 13,000 rpm for seven minutes, the isopropanol was discarded, and 1 mL of 70% ethanol was added. The solution was centrifuged again at 13,000 rpm for seven minutes (the supernatant was discarded) and kept at room temperature to dry. The DNA was resuspended in approximately 50 μL of TE (10 mM TRIS, 1 mM EDTA, pH 7.76) and kept at room temperature for 24 hours. After this period, the samples were kept for two days in the refrigerator and then stored at −20 °C.
ACTN3 R577X genotyping
The genotyping of the ACTN3 alleles (RR, RX and XX) was performed by the RFLP-PCR technique (polymerase chain reaction followed by restriction fragment length polymorphism analysis). After amplification, 10 μL of the PCR product was subjected to digestion with 10 units of the DdeI restriction enzyme (SIGMA) and incubated for 4 hours in a water bath at 37 °C.
Electrophoresis in agarose gel
To analyze the ACTN3 genotypes, we conducted electrophoresis in 3% agarose gel, stained with ethidium bromide and visualized with a UV transilluminator.
Statistical Analysis
Pearson's chi-square test was used to compare the genotype frequencies with those of other studies; when the count for any genotype was less than five, we used Fisher's exact test. The associations between allele frequencies were analyzed using 2×2 contingency tables and the chi-square test with Yates correction. Hardy-Weinberg equilibrium was tested to verify the distribution of ACTN3 genotypes. Tests were performed in IBM SPSS Statistics 20 (New York, USA), except for the Hardy-Weinberg test, which was performed in BioEstat version 5.3 (Belém, Pará, Brazil). All analyses used a significance level of p<0.05.
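The allele- and genotype-level tests can be reproduced in base R. In the sketch below, the genotype counts are derived from the percentages reported in the Results (n = 19: RR 15.8%, RX 57.9%, XX 26.3%); the control allele counts are placeholders rather than the published control data, and the simple goodness-of-fit call only approximates the Hardy-Weinberg test, since it does not reduce the degrees of freedom for the estimated allele frequency.

# Hardy-Weinberg check on the athletes' genotype counts
geno <- c(RR = 3, RX = 11, XX = 5)
n <- sum(geno)
p_R <- unname((2 * geno["RR"] + geno["RX"]) / (2 * n))   # R allele frequency
hwe_exp <- c(p_R^2, 2 * p_R * (1 - p_R), (1 - p_R)^2)    # expected HW proportions
chisq.test(geno, p = hwe_exp)   # approximate HWE test (df not reduced here)

# 2x2 allele table (athletes vs placeholder controls): Yates-corrected chi-square
alleles <- matrix(c(2 * geno["RR"] + geno["RX"], 2 * geno["XX"] + geno["RX"],
                    63, 37),   # placeholder control counts for R and X
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("athletes", "controls"), c("R", "X")))
chisq.test(alleles, correct = TRUE)   # chi-square with Yates correction
fisher.test(alleles)                  # used instead when counts are small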
RESULTS
The study included 19 ultramarathoners, with a mean age of 41.2 ± 6.1 years, total body mass of 56.9 ± 7.4 kg, height of 163 ± 7.3 cm and fat percentage of 13.73 ± 1.72%. The distribution of the ACTN3 genotypes followed the Hardy-Weinberg equilibrium (p=0.45), which shows that the genotypic frequencies found did not differ from the values expected in the general population, indicating no genetic bias in our study sample. There were no significant differences in the distribution of ACTN3 genotypes (p>0.05) when compared to athletes from other studies (Table 1). Regarding the allelic and absolute distribution of ACTN3, no significant differences were found (p=0.51) when compared with the studies by Shang et al. 15 (p=0.64), Belli et al. 16 (p=0.25), Herbert et al. 17 (p=0.23) and Ben-Zaken et al. 12 (p=0.79). However, when comparing the results with the research by Coelho et al. 18 , there was a significant difference (p=0.03), showing that the frequency of the X allele is higher in the study sample (Table 2).
DISCUSSION
Regarding the ACTN3 R577X polymorphism in Brazilian athletes, we verified that, to date, there are no studies focused on this interaction in female mountain ultramarathoners, which underscores the unprecedented character of our study.
The results obtained in terms of absolute and relative ACTN3 genotypic frequency showed no statistically significant difference when compared with studies conducted with endurance athletes from China 15 and Israel, 12 ultra-endurance runners from Brazil, 16 European marathoners 17 and the Brazilian population in general. 18
Although the literature 12,15,19,20 provides evidence of a genotypic association of ACTN3 with athletic performance in high-performance modalities, this study did not find any connection between genotype and athletic performance, corroborating the studies conducted by Ahmetov et al., 4 Lucia et al. 21 and Muniesa, 22 who researched the ACTN3 association with the athletic performance of elite Russian rowing athletes, elite European cyclists and European runners, respectively.
On the other hand, studies conducted by Ben-Zaken et al., 12 Druzhevskaya et al., 19 Gómez-Gallego et al. 20 showed a relationship between genotype and performance. Ben-Zaken et al. 12 showed evidence of the relationship between genotype XX and the fitness of endurance athletes. Druzhevskaya et al. 19 found that 3.4% of elite athletes focused on muscle strength expressed genotype XX, and Gómez-Gallego et al. 20 observed that RR and RX genotypes are related to the peak power of endurance cyclists.
When comparing the allelic distribution with the control group, 18 we noticed that our sample has a higher frequency of the X allele (55.3% vs 37%), which is in agreement with Shang et al. 15 According to these authors, women have a higher frequency of the XX genotype (21.2% vs 15.8%), as well as of the X allele (51.3% vs 41.1%), when compared to the control group; the same does not hold for men, which suggests that the non-functional allele of the gene may provide advantages for women in endurance tests.
Most long-distance athletes have a higher distribution of the X allele, 12 which influences muscle function in fast fibers, directing muscle metabolism towards an aerobic pathway and resulting in improved endurance performance. 9,10 Although Yvert et al. 23 point out the presence of the R allele in elite endurance athletes, and other inconclusive studies have not linked physical performance with the ACTN3 R577X polymorphism in athletes with high strength and power 20 or muscle endurance, 6 some studies 11,12 suggest that the non-functional allele (X) is more frequent in muscle endurance athletes, while the functional allele (R) is more frequent in individuals focused on sprint/power modalities. 11,24
CONCLUSION
This study was the first to analyze ACTN3 in Brazilian female ultramarathoners and found that 84.2% of the sample carried the non-functional X allele, in either heterozygous or homozygous form, which supports the hypothesis that the non-functional allele of ACTN3 is associated with good performance in high-level sport among Brazilian women who are mountain ultramarathoners.
All authors declare no potential conflict of interest related to this article | 2020-12-17T09:10:33.588Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "70ac2d1c6383adfaae9c8d2f729e8d27ba8b6415",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbme/a/7cd65RJhwwYNcnxfzDqPXsK/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a0e6fd8ac1f230ae6352f81bb44558dd84cceffe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
6400284 | pes2o/s2orc | v3-fos-license | XGAP: a uniform and extensible data model and software platform for genotype and phenotype experiments
XGAP, a software platform for the integration and analysis of genotype and phenotype data.
Understanding these and other high-tech genotype-to-phenotype data is challenging and depends on suitable 'cyber infrastructure' to integrate and analyze data [17,18]: data infrastructures to store and query the data from different organisms, biomolecular profiling technologies, analysis protocols and experimental designs; graphical user interfaces (GUIs) to submit, trace and retrieve these particular data; communicating infrastructure in, for example, R [19], Java and web services to connect to different processing infrastructures for statistical analysis [20][21][22][23][24] and/or integration of background information from public databases [25]; and a simple file format to load and exchange data within and between projects.
Many elements of the required cyber infrastructure are available: The Generic Model Organism Database (GMOD) community developed the Chado schema for sequence, expression and phenotype data [26] and delivered reusable software components like gbrowse [27]; the BioConductor community has produced many analysis packages that include data structures for particular profiling technologies and experimental protocols [28]; and numerous bespoke databases, data models, schemas and formats have been produced, such as the public and private microarray expression databases and exchange formats [29][30][31]. Some integrated cyber infrastructures are also available: the National Center for Biotechnology Information (NCBI) has launched dbGaP (database of genotypes and phenotypes) [32], a public database to archive genotype and clinical phenotype data from human studies; and the Complex Trait Consortium has launched GeneNetwork [33], a database for mouse genotype, classical phenotype and gene expression phenotype data with tools for 'per-trait' quantitative trait loci (QTL) analysis.
However, a suitable and customizable integration of these elements to support high-throughput genotype-to-phenotype experiments is still needed [34]: dbGaP, GeneNetwork and the model organism databases are designed as international repositories and not to serve as general data infrastructure for individual projects; many of the existing bespoke data models are too complicated and specialized, hard to integrate between profiling technologies, or lack software support to easily connect to new analysis tools; and customization of the existing infrastructures dbGaP, GeneNetwork or other international repositories [35,36], or assembly of Bioconductor and Generic Model Organism Database components to suit particular experimental designs, organisms and biotechnologies, still requires many minor and sometimes major manual changes in the software code that go beyond what individual lab bioinformaticians can or should do, and results in duplicated efforts between labs if attempted.
To fill this gap we here report development of an extensible data infrastructure for genotype and phenotype experiments (XGAP) that is designed as a platform to exchange data and tools and to be easily customized into variants to suit local experimental models. We therefore adopted an alternative software engineering strategy, as outlined in our recent review [37], that enables generation of such software efficiently using three components: a compact and extensible 'standard' model of data and software; a high-level domain-specific language (DSL) to simply describe biology-specific customizations to this software; and a software code generator to automatically translate models and extensions into all low-level program files of the complete working software, building on reusable elements such as listed above as well as general informatics elements and some new/optimized elements that were missing.
Below we detail XGAP's extensible 'standard' software model (XGAP-OM) and evaluate the auto-generated text file exchange format (XGAP-TAB) and customizable database software (XGAP-DB) that should help researchers to quickly use and adapt XGAP as a platform for their genetics and/or *omics experiments (Table 1). Harmonized data representations and programmatic interfaces aim to reduce the need for multiple format converters and to enable easy sharing of downstream analysis tools via a hub-and-spoke architecture. Use of software auto-generation, implemented using MOLGENIS, aims to ease and speed up customization/variation into new XGAP versions for new biotechnologies and alternative experimental designs while ensuring consistent programming interfaces for the integration and sharing of existing analysis tools. Standardized extension mechanisms should balance between format/interface stability for existing data types and tools, and flexibility to adopt new ones.
Minimal and extensible object model
We developed the XGAP object model to uniformly capture the wide variety of (future) genotype and phenotype data, building on the generic standard model FuGE (Functional Genomics Experiment) [38] for describing the experimental 'metadata' on samples, protocols and experimental variables of functional genomics experiments, the OBO model (of the Open Biological and Biomedical Ontologies foundry) for use of standard and controlled vocabularies and ontologies that ease integration [39], and lessons learned from previous, profiling technology-specific modeling efforts [29]. Figure 1b shows the core components of a genotype-to-phenotype investigation: the biological subjects studied (for example, human individuals, mouse strains, plant tissue samples), the biomolecular protocols used (for example, Affymetrix, Illumina, Qiagen, liquid chromatography-mass spectrometry (LC/MS), Orbitrap, NMR), the trait data generated (usually data matrices with, for example, phenotype or transcript abundance data), the additional information on these traits (for example, genome location of a transcript, masses of LC/MS peaks), the wet-lab or computational protocols used (for example, MetaNetwork [22] in the case of QTL and network analysis) and the derived data (for example, QTL likelihood curves).
We describe these biological components using FuGE data types and XGAP extensions thereof. Investigation binds all details of an investigation. Each investigation may apply a series of biomolecular [40] and computational [20][21][22][23] Protocols. The applications of such Protocols are termed ProtocolApplications, which in the case of computational Protocols may require input Data and will deliver output Data. These Data have the form of matrices, the DataElements of which have a row and a column index. Each row and column refers to a DimensionElement, being a particular Subject or a particular Trait. Table 2 illustrates the usage of these core data types. Figure 1a, c shows how the XGAP model can be extended to accommodate details on particular types of subjects and traits in a uniform way. A Trait can be a classical phenotype (for example, flowering - the flowering time is stored in the DataElement) or a biomolecular phenotype (for example, Gene X - its transcript abundance is stored in the DataElement). A Trait can also be a genotype (for example, Marker Y is a genomic feature observation that is stored in the DataElement). Genomic traits such as Gene, Marker and Probe all need additional information about their genome Locus to be provided. Similarly, a Subject can be a single Sample (for example, a labeled biomaterial as put on a microarray) and such a sample may originate from one particular Individual. It may also be a PairedSample when biomaterials come from two individuals - for example, if biomaterial has been pooled as in two-color microarrays. An individual belongs to a particular Strain. When new experiments are added, new variants of Trait and Subject can be added in a similar way. Table 3 illustrates the generic usage of these extended data types.
Several standard data types were also inherited from FuGE to enable researchers to provide 'Minimum Information' for QTLs and Association Studies such as defined in the MIQAS checklist [41] - a member of the Minimum Information for Biological and Biomedical Investigations (MIBBI) guideline effort [42]. Data types Action(Application), Software(Application), Equipment(Application) and Parameter(Value) can be used to describe Protocol(Application)s in more detail. For example, a normalization Protocol may involve a 'robust multiarray average (RMA) normalization' Action that uses Bioconductor 'affy' Software [43] with certain ParameterValues. Data types Description, BibliographicReferences, DatabaseEntry, URI, and FileAttachment enable researchers to freely add additional annotations to certain data types - DimensionElement, Investigation, Protocol, ProtocolApplication, and Data. For example, researchers can annotate a Gene with one or more DatabaseEntries, referring to unique database accession numbers for automated data integration.
A unique feature of XGAP is the uniform treatment of the various trait and subject annotations. The drawback of allowing users to freely add additional annotations such as described above is that users and tools using metabolite and gene traits, for example, would have to inspect each Trait instance to see whether it is actually a metabolite or gene, and how it is annotated. That is why we instead use the object-oriented method of 'inheritance' to explicitly add essential properties to Trait and Subject variants to make sure that they are described in a uniform way. For example, Metabolite extends Trait, which explicitly adds properties ID, Name and Type (inherited from DimensionElement) to metabolite specific properties Mass, Formula and Structure. See Jones et al. [38] for the complete FuGE specifications and Jones and Paton [44] for a discussion on the benefits and drawbacks of alternative mechanisms for supporting extension in object models. Table 4 illustrates the usage of these annotation data types.
Another feature of XGAP is the uniform treatment of all data on these subjects and traits. To understand basic data in XGAP, newcomers just have to learn that all data are stored as Data matrices, with each DataElement describing an observation on Subjects and/or Traits (rows × columns). Unlike the proven matrix structures used in MAGE-TAB (a tabular format for microarray gene expression experiments) [45], in XGAP these data can be on any Trait and/or Subject combination; that is, we did not create many variants of DataElement to accommodate each combination of Trait and Subject, such as MAGE-TAB's ExpressionDataElement (Probe × Sample), MassSpecDataElement (MassPeak × Sample), eQtlMappingDataElement (Marker × Probe), and so on. Instead, we store all these data using the generic type DataElement and limit extension to Trait and Subject only. This avoids the (combinatorial) explosion of DataElement extensions, so researchers can provide basic data as common data matrices (of DataElements) and can still add particular annotations flexibly to the matrix rows and columns to allow for (new) biotechnologies, as demonstrated in the various Trait extensions in Figure 1. Keeping this data structure simple and uniform greatly enhances data and software (re)usability and hence productivity, in line with the findings by Brazma et al. [29] and Rayner et al. [45] that the simple tabular structures underlying biological data should be exploited rather than made overly complicated.

Table 1

Upload: Upload data from measurement devices, public databases, collaborating XGAP databases, or a public XGAP repository with community data. Simply download trait information as tab-delimited files from one XGAP and upload it into another; this works because of the uniformity of the core data types (and extensions thereof).

Search: Search genetical genomics data using the graphical user interface with advanced query tools. The uniformity of the 'code generated' interfaces makes it easy to learn and use interfaces for both 'core' data types as well as customized extensions.

Analyze: Analyze data by connecting tools using simple methods in Java, R, Web Services or Internet hyperlinks. For example, map and plot quantitative trait loci in R using XGAP data retrieved via the R interface.

Plug-in: Plug-in the best analysis tools into the user interface so biologists can use them. Bioinformaticians are provided with simple mechanisms to seamlessly add such tools to XGAP, building on the automatically generated GUI and API building blocks.

Share: Share data, customizations, connected analysis tools and user interface plug-ins with the genetical genomics community, using XGAP as an exchange platform. For example, the MetaNetwork R package can talk to data in XGAP, which makes it easy for other XGAP owners to also use it.

Figure 1 Extensible genotype and phenotype object model. Experimental genotype and (molecular) phenotype data can be described using Subject, Trait, Data and DataElement; the experimental procedures can be described using Investigation, Protocol and ProtocolApplication (B). Specific attributes and relationships can be added by extending core data types, for example, Sample and Gene (A, C). See Tables 2, 3 and 4 for uses of this model. The model is visualized in the Unified Modeling Language (UML): arrows denote relationships (Data has a field Investigation that refers to Investigation ID); triangle-terminated lines denote inheritance (Metabolite inherits all properties ID, Name, Type from Trait, next to its own attributes Mass, Formula and Structure); triangle-terminated dotted lines denote use of interfaces (Probe 'implements' properties of Locus); relationships are shown both as arrows and as properties ('xref' for one-to-many, 'mref' for many-to-many relationships). Asterisks mark FuGE-derived types (for example, Protocol*).
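The design point is easy to see in code: whatever the trait or subject subtype, a Data set is the same generic matrix-with-annotated-dimnames structure. A small R sketch with invented values:

# Every Data set is a matrix whose dimnames point to Trait and Subject
# identifiers, regardless of the trait or subject subtype.
expressions <- matrix(rnorm(6), nrow = 3,
                      dimnames = list(Trait   = c("probe1", "probe2", "probe3"),
                                      Subject = c("strain1", "strain2")))
genotypes <- matrix(c("A", "B", "B", "A"), nrow = 2,
                    dimnames = list(Trait   = c("marker1", "marker2"),
                                    Subject = c("strain1", "strain2")))
# The same code paths handle both; only the row/column annotations differ.
trait_means <- apply(expressions, 1, mean)   # works for any Trait x Subject matrix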
After structural homogenization, such as provided by FuGE and XGAP, semantic queries are the remaining major barrier for integration of experimental metadata. This requires ontologies that describe the properties of the materials and also descriptions of experimental processes, data and instruments. The former are provided by species-specific ontologies that are available from various sources. The Ontology for Biomedical Investigations [46] may provide a solution for the experimental descriptors and is being used in this context by, for example, the Immune Epitope Database [47]. To enable researchers to use these well understood descriptors, XGAP inherits from FuGE the mechanism of 'annotations', a special field to link any data object to one or more ontology terms. For example, researchers can annotate a Gene with one or more OntologyTerms if required, referring to standard ontology terms from OBO [39] or ontology terms defined locally.

Table 2 Use cases of core data types

A genetical genomics stem cell Investigation was carried out on 30 recombinant mouse inbred strains (Subject). It involved a ProtocolApplication of the 'Affymetrix MG-U74Av2' Protocol to produce expression profiles (Data) for 12,422*16 microarray probes (Traits). These profiles consisted of a matrix of signals (DataElement) for each Probe (Trait) and each InbredStrain (Subject). Subsequently, these Data were taken as inputData in a normalization procedure (ProtocolApplication) using the RMA normalization Protocol, which resulted in outputData of normalized profiles (Data) of Probe*InbredStrain (Trait*Subject). RMA: robust multi-array average.
Table 3 Use cases of extended data types
Sample is a Subject with the additional property that 'Tissue' can be specified.

Individual is a Subject with the additional property that relationships with Mother and Father individuals, as well as Strain, can be specified.

PairedSample is a Sample with the additional property that 'Dye' has to be specified and which two Subjects (or subclasses such as Individual) are labeled with 'Cy3' and 'Cy5'.

An InbredStrain is a Strain with the additional property that the 'Parents' (mother Individual and father Individual) are specified, as well as the 'type' of inbreeding used.

An amplified fragment length polymorphism, microsatellite or SNP Marker (is a Trait) may refer to a genetic and possibly genomic location (Marker also is a Locus).

A correlation computation (Data) reports associations (DataElement) between Metabolites (Metabolite is a Trait); because Trait and Subject are both extensions of DimensionElement, they can be connected to a row and column of DataElement interchangeably.

Simple text-file format for data exchange

To enable data exchange using the XGAP model, we produced a simple text-file format (XGAP-TAB) based on the experience that for data formats to be used, data files should be easily created using simple Excel and text editor tools and closely resemble existing practices. This format is automatically derived from the model by requiring that all annotations on Investigations, Protocols, Traits, Subjects, and extensions thereof, are described as delimited text files (one file per data type) with columns matching the properties described in the object model and each row describing one data instance. Optionally, sets of DataElements can also be formatted as separate text matrices with row and column names matching those in the Trait and Subject annotation files, and with each matrix value matching one DataElement. The dimensions of each data matrix are then listed by a row in the annotations on Data. Figure 2 shows one investigation in the XGAP tabular data format with one delimited text file per data type - that is, there are files named 'probe.txt' and 'individual.txt', with each row describing a microarray probe or individual, respectively - and one text matrix file per set of DataElements - that is, there are files named 'data/expressions.txt' and 'data/genotypes.txt'. The properties of each data matrix are then described in 'data.txt'; that is, for 'data/expressions.txt' there is a row in 'data.txt' that says that its columns refer to 'individual.txt', that its rows refer to 'probe.txt' and that its values are 'decimal'. Raw data sets and data sets in other formats can be retained in a directory labeled 'original'.
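Loading such an investigation therefore needs nothing beyond standard delimited-file readers. A minimal R sketch, assuming the file layout above and that the annotation files carry a 'name' column (the exact column set depends on the data type):

# Read XGAP-TAB annotation files and a data matrix, then check consistency.
probes      <- read.delim("probe.txt", stringsAsFactors = FALSE)
individuals <- read.delim("individual.txt", stringsAsFactors = FALSE)
expressions <- as.matrix(read.delim("data/expressions.txt",
                                    row.names = 1, check.names = FALSE))
# Matrix row/column names must resolve to annotated traits and subjects:
stopifnot(all(rownames(expressions) %in% probes$name),
          all(colnames(expressions) %in% individuals$name))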
After the format proved its value in several proprietary projects, a growing array of public data sets is now available at [48], demonstrating the use of XGAP-TAB [8,11,13,14,49,50].
Easy to customize software infrastructure
A pilot software infrastructure is available at [51] to help genotype-to-phenotype researchers to adopt XGAP as a backbone for their data and tool integration. We chose to use the MOLGENIS toolkit (biosoftware generator for MOLecular GENetics Information Systems; see Materials and methods) to auto-generate from the XGAP model: (1) an SQL (Structured Query Language for relational databases) file with all necessary statements for setting up your own, customized variant of the XGAP database; (2) application programming interfaces (APIs) in R, Java and Web Services that allow bioinformaticians to plug-in their R processing scripts, Taverna workflows [25,52,53] and other tools; (3) a bespoke web-based graphical user interface (GUI) by which researchers can submit and retrieve data and run plugged-in tools; and (4) import/export wizards to (un)load and validate data sets exchanged in XGAP-TAB format. The auto-generation process can be repeated to quickly customize XGAP from an extended model, for example, to accommodate a particular new type of measurement technology or experimental design.

Table 4 Use cases of annotation data types

A Gene in an Arabidopsis Investigation can be connected to a DatabaseEntry describing a reference to related information in the TAIR database [71] and another DatabaseEntry describing a reference to the MIPS database [72].

Each Individual in a C. elegans Investigation is annotated with an OntologyTerm to indicate that it was grown in an environment of either 16°C or 24°C.

The Arabidopsis Investigation was annotated with the BibliographicReferences pointing to the paper describing the investigation and expected results.

A Protocol describes the 'MapTwoPart' method for QTL mapping and was annotated with the URI linking to the 'MetaNetwork R-package', which contains this method, and a BibliographicReference pointing to the paper [22,67] that describes the MapTwoPart protocol.

A file with a Venn diagram describing the number of masses detected in each population was added as a FileAttachment to the Arabidopsis metabolite Investigation.

Figure 2 Simple text file format. A whole investigation can be stored by using easy-to-create tabular text files for annotations or matrix-shaped text files for raw and processed data. Each 'annotation' file relates to one data type in the object model shown in Figure 1 - for example, the rows in the file 'probe.txt' will have the columns named in data type 'Probe'. Each 'data' file contains data elements and has row names and column names referring to annotation files - for example, 'genotypes.txt' may refer to 'marker.txt' names as row names and 'individual.txt' names as column names. If convenient, constant values can be described in the constant.properties file, such as 'species_name'.

Graphical user interface

Figure 3 shows the GUI to upload, manage, find and download genotype and phenotype data to the database. The GUI is generated with a uniform 'look-and-feel', thereby lowering the barrier for novice users. Investigations can be described with all subjects, traits, data and protocol applications involved (1). (The numbers refer to steps in the figure.) Data can be entered using either the edit boxes or using menu option 'file|upload' (2). This option enables upload of whole lists of traits and subjects from a simple tab-delimited format (3), which can easily be produced with Excel or R; MOLGENIS automatically generates online documentation describing the expected format (4).
Subsequently, the protocol applications involved can be added with the resulting raw data (for example, genetic fingerprints, expression profiles) and processed data (for example, normalized profiles, QTL profiles, metabolic networks). These data can be uploaded, again using the common tab-delimited format or custom parsers (5) that bioinformaticians can 'plug-in' for specific file formats (for example, Affymetrix CEL files). The software behind the GUI checks the relationships between subjects, traits and data elements so that no 'orphaned' data are loaded into the database - for example, genetic fingerprint data cannot be added before all information is uploaded on the markers and subjects involved. Standard paths through the data upload process are employed to ensure that only complete and valid data are uploaded and to provide a consistent user experience.

Figure 3: Graphical user interfaces. A user interface enables biologists to add and retrieve data and run integrated tools. Genotype and phenotype information can be explored by investigation, subjects, traits or data. Hyperlinks following cross-references of the object model point to related information. Items indicated by 1-9 are described in the main text. See Table 5 for uses of this GUI. See also our online demonstrator at [51].
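As a concrete illustration of the 'orphaned data' check described above, the following is a minimal Python sketch, not part of XGAP itself (which performs these checks in generated Java code). It verifies a 'genotypes.txt' data matrix against its 'marker.txt' and 'individual.txt' annotation files; the file names follow Figure 2, while the annotation column 'name' and the use of pandas are assumptions made for illustration.

    import pandas as pd

    # Annotation files: one row per marker/individual (see Figure 2).
    markers = pd.read_csv("marker.txt", sep="\t")
    individuals = pd.read_csv("individual.txt", sep="\t")

    # Data matrix: row names refer to markers, column names to individuals.
    genotypes = pd.read_csv("genotypes.txt", sep="\t", index_col=0)

    # Reject 'orphaned' data: every row/column label must be annotated first.
    orphan_rows = set(genotypes.index) - set(markers["name"])
    orphan_cols = set(genotypes.columns) - set(individuals["name"])
    if orphan_rows or orphan_cols:
        raise ValueError(
            f"unannotated markers {sorted(orphan_rows)} "
            f"or individuals {sorted(orphan_cols)}; upload annotations first"
        )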
Biologists can use the graphical user interface to navigate and retrieve available data for analysis. They can use the advanced search options (6) to find certain traits, subjects, or data. Using menu option 'file|download' (7) they can download visible/selected (8) data as tab-delimited files to analyze them in third party software. Bioinformaticians can 'plug-in' a custom-built screen (see 'customization' section) that allows processing of selected data inside the GUI, for example, visualizing a correlation matrix as a graph (9) without the additional steps of downloading data and uploading it into another tool. Biologists can create link-outs to related information, for example, to probes in GeneNetwork.org (not shown). Table 5 summarizes use cases of the graphical user interface.
Application programming interfaces
De facto standard analysis tools are emerging, for example, tools for transcript data [20,21,24] or metabolite abundance data [22], to mention just a few. These tools are typically implemented using the open source software for statistical analysis and graphics named R [19]. Bioinformaticians can connect their particular R or Java programs to the XGAP database using an API with similar functionality to the GUI, that is, using simple commands like 'find', 'add' and 'update' (R/API, Java/API). Scripts in other programming languages and workflow tools like Taverna [53] can use web services (SOAP/API) or a simple hyperlink-based interface (HTTP/API); for example, http://my-xgap/api/find/Data?investigation=1 returns all data in investigation '1'. On top of this, conversion tools have been added to the R interface to read and write XGAP data to the widely used R/qtl package [24]. Figure 4 demonstrates how researchers can use the R/API to download (or upload) all trait/subject/data involved in their investigation from (or to) their XGAP database for (after) analysis in R. When XGAP is customized with additional data type variants, the APIs are automatically extended in the XGAP database instances by re-running the MOLGENIS generator, thus also allowing interaction with new data types in a uniform way. These new types can then be used as standard parameters for new analysis software written in R and Java. Table 6 summarizes use of the application programming interface.
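Because the HTTP/API is a plain hyperlink interface, it can be scripted from any language; below is a minimal Python sketch using only the standard library. The endpoint is the example URL given in the text; the host name 'my-xgap' is illustrative, and the assumption that the response body is tab-delimited text is ours, not a documented guarantee.

    from urllib.request import urlopen

    # Query the HTTP/API: all Data records belonging to investigation '1'
    # (endpoint taken from the example in the text; host is illustrative).
    url = "http://my-xgap/api/find/Data?investigation=1"
    with urlopen(url) as response:
        body = response.read().decode("utf-8")

    # Assumed: the API answers with tab-delimited rows we can split locally.
    for line in body.splitlines():
        print(line.split("\t"))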
Import/export wizards
A generated import tool takes care of checking the consistency of all traits, subjects and data that are provided in XGAP-TAB text files and loads them into the database. The entries in all files should be correctly linked, the data must be imported in the right order and the names and IDs need to be resolved between all the annotation files to check and link genes, microarray probes and gene expression to the data. The import program takes care of all these issues (conversion, relationship checks, dependency ordering, and so on). Moreover, the import program supports 'transactions', which ensures that all data inserts are rolled back if an import fails halfway, preventing incomplete or incorrect investigation data from being stored in the database. In a similar way, an export wizard is provided to download investigation data as a zipped directory of XGAP-TAB files.

Table 5: Use cases of the graphical user interface for biologists
- Navigate all Investigations, and for each Investigation, see the Assays and available Data
- Select a Gene and find all Investigations in which this Gene is regulated as suggested by significant eQTL Data (P-value < 0.001)
- For a given Locus, select all Genes that have QTL Data mapping 'in trans' and so may be regulated by this Locus, for example, absolute(QTL locus - gene locus) > 10 Mb and QTL P-value < 0.001
- Download a selection of raw gene expression Data as a tab-delimited file (to import into other software)
- Upload Investigation information from tab-delimited files
- Upload Affymetrix Assays using custom *.CEL/*.CDF file readers
- Plot highly correlated metabolic network Data in a network visualization graph
- Define security levels for Assays/Investigations to ensure that appropriate data can be viewed only by collaborators, and not by other people
- A MassPeak has been identified to be 'proline' and we can follow the link-out URI to Pubchem [46], because it was annotated to have 'cid' 614, to find information on structure, activity, toxicology, and more

Figure 4: Application programming interfaces. APIs enable bioinformaticians to integrate data and tools with XGAP using web services, R-project language, Java, or simple HTTP hyperlinks. The figure shows how scientists can use the R/API to upload raw investigation data (Scientist A) so another researcher can download these data and immediately use them for the calculation of QTL profiles and upload the results back to the XGAP database for use by another collaborator (Scientist B). Note how 'add.datamatrix' enables flexible upload of matrices for any Subject or Trait combination; this function adds one row to Data for each matrix, and as many rows to DataElement as the matrix has cells. See Table 6 for uses of these APIs.

Table 6: Use cases of the application programming interface for bioinformaticians
When XGAP is customized with additional data type variants, the import/export program is automatically extended by the MOLGENIS generator, 'future-proofing' the data format for new biotechnological profiling platforms. Moreover, the auto-generated import program can also be used as a template for parsers of proprietary data formats, such as implemented in parsers for the PED/MAP, HapMap, and GeneNetwork data. Collaborations are underway within EBI and GEN2PHEN to also enable import/export of MAGE-TAB [45] files, the standard format for microarray experiments; of PAGE-OM [54] files, a specialized format for genome-variation oriented genotype-to-phenotype experiments; and of ISA-TAB [55] files, a generalized evolution of MAGE-TAB, designed to be FuGE compatible, that represents all experimental metadata on any investigation, study and assay. Also, converters to ease retrieval and submission to public repositories like dbGaP are under development. It is envisaged that integration of all these formats will enable integrated analysis of experimental data from, for example, mouse and human experiments using various biotechnology platforms, which was previously nearly impossible for biological labs to implement.
Customizing XGAP
Customizations and extensions of the XGAP object model can be described in a single text file using the MOLGENIS [37,56] DSL. On the push of a button, the MOLGENIS generator instantly produces an extended version of the XGAP database software from this DSL file. A regression test procedure assists XGAP developers to ensure their extensions do not break the XGAP exchange format. Figure 5a shows how the addition of a Metabolite data entity as a new variant of Trait takes only a few lines in this DSL. Figure 5b shows how the GUI can be customized to suit a particular experimental process. Figure 5c shows how programmers can add a 'plug-in' program that is not generated by MOLGENIS but written by hand in Java (for example, a viewer that plots QTL profiles interactively). Moreover, use of Cascading Style Sheets (CSS) enables research projects to completely customize the look and feel of their XGAP.
All XGAP and MOLGENIS software can be downloaded for free under the terms of the open source license LGPL. Extended documentation on XGAP and MOLGENIS customization is available online at the XGAP and MOLGENIS wikis [51,57].
Conclusions
In this paper we report a minimal and extensible data infrastructure for the management and exchange of genotype-to-phenotype experiments, including an object model for genotype and phenotype data (XGAP-OM), a simple file format to exchange data using this model (XGAP-TAB) and easy-to-customize database software (XGAP-DB) that will help groups to directly use and adapt XGAP as a platform for their particular experimental data and analysis protocols.
We successfully evaluated the XGAP model and software in a broad range of experiments: array data (gene expression, including tiling arrays for detection of alternative splicing, ChIP-on-chip for methylation, and genotyping arrays for SNP detection); proteomics and metabolomics data (liquid chromatography time of flight mass spectrometry (LC-QTOF MS), NMR); classical phenotype assays [8,11,13,15,49,50,58,59]; other assays for detection of genetic markers; and annotation information for panel, gene, sample and clone. Non-technical partners successfully evaluated the practical utility by independently formatting and loading parts of their consortium data: EU-CASIMIR (for mouse; Table 7), EU-GEN2PHEN (for human; Table 7), EU-PANACEA (for C. elegans) and IOP-Brassica (for plants). A public subset of these data sets is available for download at [51]. When needed we could quickly add customizations to the model, building on the general schema, and then use MOLGENIS to generate a new version of the software at the push of a button, for example, to support NMR methods as an extended type of Trait [60]. Furthermore, we successfully integrated processing tools, such as a two-way communication with R/qtl [24], enabling QTL mapping on XGAP-stored genotypes and phenotypes with QTL results stored back into XGAP.
Based on these experiences, we expect use of XGAP to help the community of genome-to-phenome researchers to share data and tools, notwithstanding large variations in their research aims. The XGAP data format can be used to represent and exchange all raw, intermediate and result data associated with an investigation, and an XGAP database, for instance, can be used as a platform to share both data and computational protocols (for example, written in the R statistical language) associated with a research publication in an open format. We envision a directory service to which XGAP users can publish metadata on their investigations either manually or automatically by configuring this option in the XGAP administration user interface. This directory service can then be used as an entry point for federated querying between the community of XGAPs to share data and tools.
Groups that already have an infrastructure can assimilate XGAP to ease the evolution of their existing software. Alongside their existing user tools, they can 'rewire' algorithms and visual tools to also use the MOLGENIS APIs as a data backend. Thus, researchers still have the same features as before, plus the features provided by the generated infrastructure (for example, data management GUIs, R/API) and connected tools (for example, R packages developed elsewhere). Moreover, much less software code needs to be maintained by hand when hand-written parts are replaced by MOLGENIS-generated parts, allowing software engineers to add new features for researchers much more rapidly.
We invite the broader community to join our efforts at the public XGAP.org wiki, mailing list and source code versioning system to evolve and share the best XGAP customizations and GUI/API 'plug-in' enhancements, to support the growing range of profiling technologies, create data pipelines between repositories, and to push developments in the directions that will most benefit research.
Materials and methods
Software modeling, auto-generation/configuration and component toolboxes are increasingly used in bioinformatics to speed up (bespoke) biological software development; see our recent review [37]. For XGAP we required a software toolbox providing query interfaces, data management interfaces, programming interfaces to R and web services, simple data exchange formats and a minimal requirement of programming knowledge. The MOLGENIS modeling language and software generator toolbox [37,56] was chosen as it combines all these features.
Several alternative toolboxes were evaluated: BioMart [57,61] and InterMine [62] generate powerful query interfaces for existing data but are not suited for data management; Omixed [63] generates programmatic interfaces onto databases, including a security layer, but lacks user interfaces; PEDRO/Pierre [64] generates data entry and retrieval user interfaces but lacks programmatic interfaces; and general generators such as AndroMDA [65] and Ruby-on-Rails [66] require much more programming/configuration effort compared to tools specific to the biological domain. Turnkey [67] seemed to be closest to our needs: it emerged from the GMOD community, having GUI and SOAP interfaces, but lacks auto-generation of R interfaces and a file exchange format.

Table 7: The CASIMIR and GEN2PHEN projects

CASIMIR: The collection and distribution of large volumes of complex data typical of functional genomics is carried out by an increasing number of disseminated databases of hugely variable scale and scope. Combined analysis of highly distributed datasets provides much of the power of the approach of functional genomics, but depends on databases' ability to exchange data with each other and on analytical tools with semantic and structural integrity. Agreement on the standards adopted by databases will inevitably be a matter of community consensus and to that end a recent coordination action funded by the European Commission, CASIMIR [70], is engaged in a community consultation on the nature of the technical and semantic standards needed. What has already become clear in use-case studies conducted so far is that whatever standards are adopted, they will inevitably remain dynamic and continue to develop, particularly as new data types are collected. Crucially, they should allow the open-ended development of analytical and data-mining software, while integration of efforts to agree such standards and develop new software is essential.

GEN2PHEN: Currently available genotype-to-phenotype (G2P) databases are few and far between, have great diversity of design, and limited or no interoperability between them. This arrangement provides no convenient way to populate the databases, no easy way to exchange, compare or integrate their content, and absolutely no way to search the totality of gathered information. In this context, the European Commission has recently funded the GEN2PHEN project [55], which intends to significantly improve the database infrastructure available within Europe for the collation, storage, and analysis of human and model-organism G2P data. This will be achieved by first developing various cutting-edge solutions, and then deploying these in conjunction with proven concepts, so as to transform the current elementary G2P database reality into a powerful networked hierarchy of interlinked databases, tools and standards.

Figure 6 summarizes how MOLGENIS generates the XGAP database software in three layers: database, API and GUI. MOLGENIS either generates a high-performance 'server' edition, which requires installation on server software, or a limited 'standalone' edition that runs on a desktop computer without any additional configuration. The database layer is generated as SQL files with 'database CREATE statements' that are loaded into either MySQL (server), PostgreSQL (server) or HSQLDB (standalone). Each data type in the XGAP object model (Figure 1) is mapped to its own table - for example, there is a 'Trait' table. Each inheritance adds another table; for example, each Gene has an entry in the 'Gene' table and also in the 'Trait' table. One-to-many cross-references between data types are mapped as foreign keys - for example, Data has a numeric field called 'Investigation' that must refer to the foreign key 'molgenisid' of Investigation.
Many-to-many cross-references are mapped via a 'link-table' - for example, an additional table 'mref_import_data' is generated for two foreign keys to Data and ProtocolApplication, respectively, to model the importData relationship between them. The API layer is generated as Java files either served via Tomcat (server) or Jetty (standalone). A Java class is generated for each data type - for example, there is a class Gene. All data can be queried programmatically via a central Database class, that is, command db.find(Gene.class) returns all Gene objects in the database. To enhance performance, the API uses the 'batched' update methods of Java's DataBase Connectivity (JDBC) package and the 'multi-row-syntax' of MySQL to allow inserts of 10,000s of data entries in a single command, an optimization that is 5 to 15 times quicker than standard one-by-one updates. The Java/API is exposed with a SOAP/API, HTTP/API and R/API, so XGAP can also be accessed via web service tools like Taverna, HTTP or R, respectively (accessible via hyperlinks in the GUI). The GUI layer is also generated as Java files. The GUI includes classes for each Menu and Form - for example, the InvestigationForm class generates a view- and edit-form for investigations in the GUI. The generation is steered from one XML file written in MOLGENIS DSL (partially shown in Figure 5). To enable FuGE extension, the FuGE model was automatically translated into MOLGENIS DSL. We therefore first downloaded the FuGE v1 MagicDraw file from [68], exported from MagicDraw to XMI 2.1, parsed the XMI using the EMF parser from Eclipse [69] and then automatically translated it into MOLGENIS DSL using a newly built XmiToMolgenis tool. Compatibility with the FuGE standard is ensured via inheritance; that is, Investigation, Protocol, ProtocolApplication, Data and DimensionElement in XGAP all extend FuGE data types of the same name. Further implementation details can be found at [51,57].

Figure 6: Auto-generation of XGAP software. Open source generator tools are used to produce a customized XGAP software infrastructure. 1, The XGAP object model is described using the MOLGENIS' little modeling language (Figure 4). 2, Central software termed MolgenisGenerate runs several generators, building on the MOLGENIS catalogue of reusable assets. 3, At the push of the button, the software code for a working XGAP implementation is automatically generated from the DSL file. GUI and APIs provide simple tools to add and retrieve data, while the reusable assets of MOLGENIS hide the complexity normally needed to implement such tools. For customization, only simple changes to the XGAP model file are required; the MOLGENIS generator takes care of rewriting all the necessary files of SQL and Java software code, saving time and ensuring a consistent quality. | 2014-10-01T00:00:00.000Z | 2010-03-09T00:00:00.000 | {
"year": 2010,
"sha1": "c36cd3cb803a2ffd27676e349ff4e9007fe57de7",
"oa_license": "CCBY",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/gb-2010-11-3-r27",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c36cd3cb803a2ffd27676e349ff4e9007fe57de7",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
597313 | pes2o/s2orc | v3-fos-license | Non-surgical instrumentation associated with povidone-iodine in the treatment of interproximal furcation involvements
Objective The aim of this controlled clinical trial was to evaluate the effect of topically applied povidone-iodine (PVP-I) used as an adjunct to non-surgical treatment of interproximal class II furcation involvements. Material and methods Thirty-two patients presenting at least one interproximal class II furcation involvement that bled on probing with probing pocket depth (PPD) ≥5 mm were recruited. Patients were randomly assigned to receive either subgingival instrumentation with an ultrasonic device using PVP-I (10%) as the cooling liquid (test group) or identical treatment using distilled water as the cooling liquid (control group). The following clinical outcomes were evaluated: visible plaque index, bleeding on probing (BOP), position of the gingival margin, relative attachment level (RAL), PPD and relative horizontal attachment level (RHAL). BAPNA (N-benzoyl-L-arginine-p-nitroanilide) testing was used to analyze trypsin-like activity in dental biofilm. All parameters were evaluated at baseline and 1, 3 and 6 months after non-surgical subgingival instrumentation. Results Six months after treatment, both groups had similar means of PPD reduction, RAL and RHAL gain (p>0.05). These variables were, respectively, 2.20±1.10 mm, 1.27±1.02 mm and 1.33±0.85 mm in the control group and 2.67±1.21 mm, 1.50±1.09 mm and 1.56±0.93 mm in the test group. No difference was observed between groups at any of the post-treatment periods regarding the number of sites showing clinical attachment gain ≥2 mm. However, at 6 months post-treatment, the test group presented fewer sites with PPD ≥5 mm than the control group. Also at 6 months, the test group had lower BAPNA values than the control group. Conclusion The use of PVP-I as an adjunct in the non-surgical treatment of interproximal class II furcation involvements provided limited additional clinical benefits.
INTRODUCTION
Furcation involvements represent a great challenge to the success of periodontal therapy 12 . The reduced rate of success experienced in the treatment of furcation involvements seems to result from the incomplete removal of subgingival plaque and calculus in the interradicular area owing to the peculiar anatomy of the furcation space 24,25 . In addition, the distal location and the specific root configuration of molars limit adequate plaque control by the patient 3 .
Regarding the therapeutic approach, although furcation involvement treated conservatively does not yield the same satisfactory results as single-rooted teeth or flat molar surfaces, it has been shown that teeth with furcation involvement have a remarkable survival rate following conservative treatment 21 . These findings demonstrate that the conservative approach to furcation involvement can be performed with the expectation of a high long-term survival rate in patients who demonstrate satisfactory plaque control 3 .
Del Peloso Ribeiro, et al. 7 (2006) showed in a randomized controlled clinical trial that non-surgical therapy can effectively treat buccal and lingual class II furcation involvements. However, Del Peloso Ribeiro, et al. 8 (2007) also showed that buccal and lingual class II furcation involvements respond better to non-surgical therapy than interproximal class II furcation involvements.
In an attempt to improve the biological response of interproximal furcation involvements to non-surgical periodontal therapy, chemotherapeutic agents could be used. Povidone-iodine (PVP-I) is one of the most potent and widely used broad-spectrum antiseptics available. It has a very rapid bactericidal effect, being effective against periodontal pathogens in vitro in as little as 15 s of contact and in vivo within 5 min of contact 4,11 . PVP-I also does not allow the development of bacterial resistance, and has low systemic toxicity and low financial cost 2,15 . In addition, allergic sensitization to PVP-I is rare 28 .
PVP-I has been used as an adjunct in the treatment of chronic periodontitis with promising results 27 . Rosling, et al. 26 (2001) showed that PVP-I, used as the cooling liquid of an ultrasonic device, in conjunction with subgingival root debridement improved the outcome of non-surgical therapy in non-molar teeth. Hoang, et al. 13 (2003) also demonstrated in non-molar teeth that the addition of subgingival PVP-I irrigation to conventional mechanical therapy caused a greater reduction in total pathogen counts.
The aim of the present study was to evaluate the effect of PVP-I, used as the cooling liquid of an ultrasonic device in the non-surgical treatment of interproximal class II furcation involvements.
Study population
Thirty-two subjects were recruited from those referred for treatment to the Department of Periodontology of Piracicaba Dental School, State University of Campinas, Brazil. Subjects were enrolled from October 2005 to June 2006. All patients were individually informed about the nature of the proposed treatment and informed consent forms were signed. The study protocol was approved by the local Research Ethics Committee.
Power analysis indicated that 13 subjects would have 88% power to detect a 1-mm difference in clinical attachment level between the two groups. Subjects who were invited to participate met the following inclusion criteria: 1) diagnosis of severe chronic periodontitis by the presence of periodontal pockets with clinical attachment loss ≥5 mm, bleeding on probing and radiographic bone loss 9 ; 2) at least one molar with class II interproximal furcation involvement 10 that bled on probing, with probing pocket depth ≥5 mm. Exclusion criteria were: 1) furcation involvement in molars with periapical disease; 2) medical disorders that could influence the response to treatment; 3) scaling and root planing in the preceding 6 months; 4) consumption of drugs known to affect periodontal status within the past 6 months; 5) pregnancy; 6) allergy to iodine; 7) thyroid dysfunction and 8) smoking habits.
Study design
This was a randomized parallel-arm clinical trial with a duration of six months. Patients initially received detailed information on the etiology of periodontal disease and instructions in proper self-performed plaque control measures, interdental cleaning with dental floss and interdental toothbrushes. In the initial sessions, patients also had plaque-retentive factors (caries, excess of restorations and supragingival calculus) removed. The baseline measurements were performed 21 days after this initial phase.
All teeth with periodontal pockets ≥5 mm that bled on probing were treated with scaling and root planing (basic therapy). The non-surgical therapy on molars with furcation involvement was performed under local anesthesia with an ultrasonic device (Profi III, Dabi Atlante, Ribeirão Preto, SP, Brazil). Specific furcation tips were used (33R and 33L; Amdent, Stockholm, Sweden). The instrumentation of furcation involvements in the test group was combined with the administration of 10% PVP-I. In this group, the traditional cooling liquid of the ultrasonic device was replaced by PVP-I for the treatment of the interproximal furcation. Thus, not all the pathological sites of the test group received PVP-I. The randomization was done by coin toss right after the patient was included in the study. The allocation concealment was secured by having a person not involved in the study perform the randomization. This person was different from the one responsible for the treatment (S. B.) and different from the examiner (E. D. P. R.). The person responsible for the treatment was a specialist in periodontics. The randomization code was not broken until all data had been collected. Thus, the treatment group was not revealed to the clinical examiner or to the statistician.
Only one clinician was responsible for administering the treatment throughout the course of the study. This clinician was different from the calibrated examiner performing the clinical measurements. The furcation involvements were instrumented until a smooth, hard surface was achieved.
After the active treatment, all subjects were included in a maintenance program composed of professional supragingival plaque control and reinforcement of oral hygiene instructions every 15 days for the first month and every month until the sixth month. At the 3-month recall visit, sites that exhibited probing pocket depth ≥5 mm and bled on probing received subgingival therapy identical to that provided during the phase of basic therapy. The maintenance program also included an update of the medical and dental histories, extraoral and intraoral soft tissue examination, dental examination and periodontal evaluation.
Clinical measurements
The following clinical parameters were taken at baseline and at 1, 3 and 6 months after therapy. Visible plaque index (VPI) 1 evaluated supragingival plaque accumulation dichotomously at six sites on all teeth in the mouth. Bleeding on probing (BOP) 18 was also measured dichotomously at six sites per tooth. Thus, VPI and BOP were calculated as full mouth percentage of presence of plaque and BOP.
An individual stent was fabricated of clear self-curing resin, with a vertical groove at the place where the furcation involvement could be probed, to create fixed landmarks and to standardize the location of periodontal probes at the furcation sites. The position of the gingival margin (PGM) was measured from the stent to the gingival margin and the relative attachment level (RAL) from the stent to the bottom of the periodontal pocket. The probing pocket depth (PPD) was calculated based on RAL and PGM. The relative horizontal attachment level (RHAL) was measured, using a curved periodontal probe (Neumar, São Paulo, SP, Brazil) with grooves at 1 mm intervals 29 , from the stent to the deepest horizontal point of the periodontal pocket. The clinical parameters VPI, BOP, PGM, RAL and PPD were measured using a standardized periodontal probe with 1 mm markings (PCPUNC 15 ® ; Hu-Friedy, Chicago, IL, USA).
Examiner calibration
The investigator charged with clinical assessments was calibrated for intraexaminer repeatability prior to the start of the trial. Three patients with chronic periodontitis were enrolled for this purpose. Duplicate measurements (N=414) for PPD, RAL and RHAL were collected with an interval of 24 h between the first and the second recording. The intraclass correlation coefficients as a measure of intraexaminer reproducibility were 0.81, 0.88 and 0.89 for mean PPD, RAL and RHAL, respectively.
Biochemical evaluation -BAPNA assay
The biochemical evaluation was done with the BAPNA test that permits the detection of microorganisms possessing trypsin-like enzymes such as Tannerella forsythensis, Treponema denticola and Porphyromonas gingivalis. Subgingival dental plaque was collected from furcation involvements with sterile Gracey curettes. Before the collection, the area was dried, isolated and had the supragingival plaque removed.
The plaque collected was placed in pre-weighed coded microcentrifuge tubes. To the tubes was added 1 mL of a solution containing the enzyme substrate N-benzoyl-L-arginine-p-nitroanilide (BAPNA; Sigma, St. Louis, MO, USA) at a final concentration of 1.0 nmol/L in the assay buffer (0.05 nmol/L Tris-HCl, 5 mM CaCl2, pH 7.5) containing 5% DMSO. This suspension was vortexed and then placed in an ultrasound bath on ice for 10 min with 2-s cycles and 2-s intervals at 17 W using a 100 W ultrasonic processor. After 17 h of incubation at 37°C, the reaction was stopped with ice and by the addition of 100 µL of glacial acetic acid. The absorbance was read at 405 nm. The results were given in nanomoles of product per minute per milligram of dental plaque wet weight 20 .
Statistical analysis
Repeated measures analysis of variance (ANOVA) was used to detect intra- and intergroup differences in clinical parameters (VPI, BOP, PPD, RAL and RHAL). When statistical significance was found, analysis of the difference was determined using the Tukey method. The proportions of sites presenting PPD ≥5 mm, RAL gain ≥2 mm, plaque and bleeding on probing at the furcation sites and the number of lesions retreated at 3 and 6 months were compared between groups with chi-square analysis or Fisher's exact test. The Friedman test was used to detect intragroup differences in the biochemical parameter among all periods and the Mann-Whitney test to detect intergroup differences in this parameter at each time interval. All evaluations used the subject as the unit of measurement; averages were used if more than one site per subject existed. Individual furcation involvements were compared regarding the RAL gain ≥2 mm, the occurrence of BOP, the number of sites referred for treatment and PPD ≥5 mm. All analyses were done with SAS Software 2001-Release 8.2 (SAS Institute Inc., Cary, North Carolina, USA). The experimental level of significance (alpha) was set at 0.05. RAL was considered the primary outcome variable. All other parameters were considered secondary outcomes.
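For readers who wish to reproduce this style of analysis, the following is a minimal Python sketch of the non-parametric comparisons named above (Friedman within a group across periods, Mann-Whitney between groups at one time point). The numerical values are placeholders for illustration, not the study data, and the original analysis was performed in SAS, not Python.

    from scipy.stats import friedmanchisquare, mannwhitneyu

    # Placeholder BAPNA values (nmol/min/mg) per subject at each period.
    test_group = {
        "baseline": [1.9, 2.3, 2.1], "1m": [1.1, 1.4, 1.2],
        "3m": [1.0, 1.2, 1.1], "6m": [0.8, 0.9, 1.0],
    }
    control_6m = [1.6, 1.8, 1.5]

    # Intragroup differences among all periods (Friedman test).
    stat, p_intra = friedmanchisquare(*test_group.values())

    # Intergroup difference at 6 months (Mann-Whitney test).
    stat, p_inter = mannwhitneyu(test_group["6m"], control_6m)
    print(f"Friedman p = {p_intra:.3f}, Mann-Whitney p = {p_inter:.3f}")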
RESULTS
Four patients did not show up for all the appointments, for reasons not related to the study; thus, a total of 28 patients completed the study: 15 in the control group and 13 in the test group. A total of 37 furcation involvements were evaluated: 18 in the control group and 19 in the test group. The baseline data indicated that the control and test groups were similar according to age, gender, and clinical and biochemical parameters (Table 1).
Visible plaque index (VPI) and bleeding on probing (BOP)
VPI and BOP were reduced during the study in both groups, but no statistically significant difference was found between them (Table 2). BOP was measured for the whole mouth and also exclusively at the furcation involvements; in the latter case, no difference was observed between groups (Figure 1).
Clinical parameters - PGM, PPD, RAL and RHAL
Both groups showed, without significant difference between them, an increase in PGM, a reduction in PPD and a gain in RAL and RHAL (Table 3). There was no difference between groups regarding the proportions of furcations presenting RAL gain ≥2 mm. At the first month after treatment, the control group had 36.67% of sites showing a RAL gain ≥2 mm and the test group had 42.10%. At 3 months, these values were 30.00% and 26.31% and at 6 months, 33.33% and 42.10%, respectively. A difference between groups was found regarding the proportion of sites with PPD ≥5 mm at 6 months after treatment (Figure 2). All furcations presented PPD ≥5 mm at baseline. At the first month after therapy, 70.00% of the furcation involvements still had PPD ≥5 mm in the control group and 52.63% in the test group. At 3 months, the values were 66.67% and 52.63%, respectively. At 6 months, 70.00% of the evaluated sites had PPD ≥5 mm in the control group and 36.84% in the test group (p=0.04).
The percentage of furcation sites that needed retreatment at 3 months was 43.33% in the control group and 36.84% in the test group. At 6 months, the values were 33.33% and 15.79%, respectively. These differences were not statistically significant (Figure 3).

Table 4 presents the results of BAPNA at different time intervals. Both groups, at one and three months after treatment, showed a statistically significant reduction of trypsin-like enzyme activity in the subgingival biofilm, without difference between groups. At 6 months, only the test group presented a BAPNA value significantly different from baseline. There was also a significant difference between groups in BAPNA values at 6 months.
DISCUSSION
The conservative approach to furcation involvements is effective in the treatment of buccal and lingual class II furcation lesions in patients demonstrating satisfactory plaque control 7 . However, interproximal furcation involvements do not show the same biological response to non-surgical treatment, probably because of the irregular anatomy of the area, which impairs professional plaque control procedures, and because of the difficulties faced by the patient in maintaining plaque control 3,8 . In an attempt to overcome these difficulties, antimicrobial agents could be used. Because PVP-I is a potent antiseptic and its use in the treatment of non-molar teeth with chronic periodontitis has shown promising results 13,26 , the present study was designed to evaluate the use of locally applied PVP-I as an adjunct to non-surgical therapy of interproximal class II furcation involvements.
However, in the present study, both groups presented similar means of PPD reduction and RAL and RHAL gain. Similarly, Tonetti, et al. 30 (1998) found no benefit from the use of antimicrobials in the treatment of class II mandibular furcations.
In the present study, a difference between groups was observed at 6 months in BAPNA values. The test group had lower levels of trypsin-like activity than the control group. Trypsin-like enzymes are produced by important periodontal pathogens such as Tannerella forsythia, Treponema denticola and Porphyromonas gingivalis and are associated with signs of periodontitis 32 . These microorganisms are increased in diseased sites when compared to stable ones 22 . Strong evidence also supports T. forsythia and P. gingivalis as risk indicators of periodontal disease 33 . This information raises the question of whether the lower BAPNA values in the group where PVP-I was used will be followed by a lower rate of disease recurrence.
The BAPNA results could be related to the higher percentage of sites presenting PPD ≥5 mm at 6 months in the control group than in the test group. This is an important result since, according to Claffey and Egelberg 6 (1995), patients demonstrating a high proportion of deep pockets following initial cause-related therapy are more likely to experience further clinical attachment loss. However, no difference was observed in the number of sites referred for re-treatment. This parameter is an important indicator of the clinical significance of treatment because it relates to the percentage of sites that returned to health and to the percentage of sites still requiring therapy 14 .
Sites with PPD ≥5 mm and BOP were re-treated. BOP is an important outcome measurement as the absence of bleeding on probing in recall patients has been associated with clinical stability over time 16 . BOP and PPD are the accepted indicators of the response to root debridement 31 . At 3 and 6 months after therapy the percentage of sites re-treated was 30.00% and 33.33% in the control group and 26.31% and 42.10% in the test group, respectively. This means that at 6 months only 66.67% of control group and 58.90% of test group had PPD ≤5 mm without BOP.
Interproximal furcation involvements are difficult to treat not only with non-surgical therapy, but also with guided tissue regeneration. In a furcation regeneration study, the largest clinical improvement was found in class II furcation defects of mandibular molars, followed by buccal class II furcations of maxillary molars, with interproximal furcation lesions exhibiting the least or no improvement 23 .
The difficulties in the treatment of interproximal furcation involvements could not be overcome by the use of PVP-I as an adjunct to non-surgical therapy, even with the use of ultrasonic tips specifically designed to access the furcation area. These tips were used since it is known that areas inaccessible to instruments are also inaccessible to the cooling liquid. It has also been shown that furcation entrances generally have smaller dimensions than the width of a Gracey curette, but are larger than the average dimension of a new standard ultrasonic tip 5 . This is in accordance with the finding that ultrasonic instruments are more effective than hand scaling in reducing gingival fluid flow and the bacterial proportions of spirochetes and other motile organisms in class II and III furcation involvements 17 .
The decision to employ a 10% solution of PVP-I was based on findings by Nakagawa, et al. 19 (1990), who determined that, in vivo, only the undiluted PVP-I solution significantly reduced the total colony-forming units of subgingival pockets. Other in vitro studies showed a bactericidal effect with lower concentrations of PVP-I 27 . However, in vitro data may not reflect in vivo results because intraoral findings can be affected by salivary dilution, protein deactivation, and the inability of drugs to penetrate bacterial biofilms. Hoang, et al. 13 (2003) also used 10% PVP-I subgingival irrigation in periodontitis lesions showing radiographic evidence of subgingival calculus.
CONCLUSION
It may be concluded, based on the design of the study, that the use of PVP-I as the cooling liquid of an ultrasonic device in the non-surgical treatment of interproximal class II furcation involvements provided limited additional clinical benefits. | 2016-08-09T08:50:54.084Z | 2010-12-01T00:00:00.000 | {
"year": 2010,
"sha1": "05a7754d793357a3e49d40a8b3dff807a8751fea",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/jaos/v18n6/11.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05a7754d793357a3e49d40a8b3dff807a8751fea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
224308430 | pes2o/s2orc | v3-fos-license | Gradient Based Elastic Property Reconstruction in Digital Image Correlation Elastography
Aim: A Conjugate Gradient implementation of the Digital Image Correlation Elastography method is presented. Method: The gradient is calculated using the adjoint method, requiring only two forward solutions regardless of the number of mechanical properties to reconstruct. A power-law based multi-frequency viscoelastic model is used to relate the reconstructed mechanical properties and the Digital Image Correlation surface displacement measurements. Result: The method is tested against harmonic surface motion fields generated through numerical simulation and measured on a silicon phantom. Conclusion: Reconstruction results show the ability of the gradient reconstruction method to detect stiff internal inclusions in simulated and phantom data.
with silicon fluid (Factor II-V40104) to reduce its rigidity to physiological levels.
Experimental setup
The phantoms were placed in the DICE prototype, which consists of an actuator and four pairs of stereoscopic cameras (Grasshopper 2 from Point Grey®). The 3D time-harmonic motion field of the surface was acquired using Vic-Snap and Vic-3D (from Correlated Solutions®), a DIC system allowing precise measurement of the surface displacement. The actuator is a model 2025E voice coil driver (The Modal Shop®). Generally, the DIC subset size was a square of 31 × 31 pixels with a step size (distance between two consecutive evaluation points) of 7 pixels. These parameters allow the capture of around 25,000-30,000 displacement measurements covering the entire surface of the phantom, with a resolution of 1 µm. Displacement acquisition began after a startup period to eliminate the transient effect. At the low damping rate of our phantom, thermal effects are small enough to be ignored [11].

Digital image correlation elastography data processing

The measured displacements acquired via Vic-3D were processed with a Python script to extract the x, y, and z positions and the u, v, and w displacements for each of the measurement points. The resulting point cloud was then processed by Hoppe's surface reconstruction method [10]. This triangulated surface is then converted into a 3D linear tetrahedral finite element (FE) mesh using CGAL [12]. The measured data points are then projected onto the surface of the mesh at the closest surface element to serve as control points for the material property reconstruction. For each point, the FFT of the displacement time-series was calculated using the FFTpack algorithm implemented in the Scipy Python library.
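A minimal Python sketch of this FFT step is shown below. It uses scipy.fftpack as named above, but the sampling rate, record length and the synthetic 60 Hz signal are assumptions for illustration, not the acquisition settings of the prototype.

    import numpy as np
    from scipy import fftpack

    fs = 1000.0                              # sampling rate in Hz (assumed)
    t = np.arange(0, 1.0, 1.0 / fs)          # 1 s record (assumed)
    u = 1e-3 * np.sin(2 * np.pi * 60 * t)    # synthetic 60 Hz displacement

    spectrum = fftpack.fft(u)
    freqs = fftpack.fftfreq(len(u), d=1.0 / fs)

    # Complex amplitude at the 60 Hz actuation frequency (nearest FFT bin);
    # the factor 2/N converts the FFT coefficient to a one-sided amplitude.
    k = np.argmin(np.abs(freqs - 60.0))
    amplitude = 2.0 / len(u) * spectrum[k]
    print(abs(amplitude), np.angle(amplitude))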
Digital image correlation elastography gradient reconstruction
The forward problem is defined as follows. Given:
1. The material properties: the Lamé parameters, λ and µ (µ being the shear modulus), and the density (ρ) in the spatial domain Ω
2. The displacement boundary conditions on the boundary Γ_D
3. The traction vector on the boundary Γ_N.
Find the displacement field u that satisfies the equilibrium equation for the given material model. Using a FE formulation, this equation can be written as:

    K(\theta)\, u = f    (1)

where K(θ) is the stiffness matrix defined for material properties θ = (λ, µ, ρ), and f is the forcing vector. The equilibrium equations for a viscoelastic isotropic compressible solid under time-harmonic motion are:

    \nabla \cdot E + \rho \omega^2 u = 0 \quad \text{in } \Omega, \qquad u = u_0 \quad \text{on } \Gamma_D, \qquad E\, n = f \quad \text{on } \Gamma_N    (2)

where E is the Cauchy stress tensor for the body, ω is the actuation pulsation, u the complex time-harmonic displacement field, u_0 the prescribed displacement field, n the surface normal and f the traction vector. The weak form of (2) is:

    \int_\Omega E : \nabla w \, d\Omega - \omega^2 \int_\Omega \rho\, u \cdot w \, d\Omega = \int_{\Gamma_N} f \cdot w \, d\Gamma \quad \text{for all admissible } w    (3)

with:

    (a, b) = \int_\Omega a \cdot b \, d\Omega    (4)

being the inner product and E defined as:

    E = \lambda (\nabla \cdot u) I + 2\mu\, \varepsilon(u), \qquad \varepsilon(u) = \tfrac{1}{2}\left(\nabla u + \nabla u^{T}\right)    (5)

Combining (4) and (5) we have:

    A(u, w) \equiv \int_\Omega \left[\lambda (\nabla \cdot u)(\nabla \cdot w) + 2\mu\, \varepsilon(u) : \varepsilon(w)\right] d\Omega - \omega^2 (\rho\, u, w) = (f, w)_{\Gamma_N}    (6)
To reconstruct the mechanical properties of the phantom, we use a nonlinear inversion technique based on the conjugate gradient (CG) method [13]. This nonlinear technique involves a computational model of the time-harmonic response of materials under external excitation (the forward problem) and estimates the spatially distributed material properties by minimizing an error function. This motion error objective function, Φ(θ), is defined as:

    \Phi(\theta) = \tfrac{1}{2}\, \big(T u_c(\theta) - u_m\big)^{*} \big(T u_c(\theta) - u_m\big)    (7)

where u_m are the measured displacements, u_c(θ) are the corresponding displacements calculated by FE solution (1) for the current set of mechanical properties, θ, T is a linear operator used to transform the calculated displacements to the 3D-DIC measurement space and * is the complex scalar product. For this study, the mechanical properties, θ, consist of µ_R and µ_I, respectively the shear modulus (G') and the loss modulus (G''). In order to minimize this function, the CG method is used. Starting from an initial estimate of θ, the CG update of the mechanical properties at the n-th iteration is given by:

    \theta_{n+1} = \theta_n + \alpha_n p_n    (8)

where p_n is the search direction and α_n is a step length minimizing the objective function, Φ. For the nonlinear CG method used here, the search direction at the n-th iteration is given by:

    p_n = -g_n + \beta_n p_{n-1}    (9)

where β_n is given by the Polak-Ribière formula and g_n is the gradient vector. Normally, the gradient is calculated as:

    g = J^{T}\, T^{T} \big(T u_c(\theta) - u_m\big)^{*}    (10)

where J^T is the transpose of the Jacobian matrix of u_c. Each column of J is calculated by solving:

    K\, \frac{\partial u_c}{\partial \theta_m} = -\frac{\partial K}{\partial \theta_m}\, u_c    (11)

where K is the stiffness matrix of the forward problem. For M unknown material properties, this leads to M + 1 solutions of the forward problem to obtain the gradient vector, which is computationally expensive. The adjoint method proposed by Oberai et al. [14] uses a Lagrangian formulation to obtain a set of equations that allow the calculation of the gradient vector with only two solutions of the forward problem.
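The CG update loop itself is generic; below is a minimal numpy sketch of updates (8) and (9) with a Polak-Ribière β. Here gradient(theta) stands for the adjoint gradient derived next and line_search(theta, p) for the step-length minimization; both are placeholders of this sketch, not the authors' implementation, and the max(0, ...) restart safeguard is a common convention we add.

    import numpy as np

    def nonlinear_cg(theta, gradient, line_search, n_iter=50):
        """Polak-Ribiere nonlinear CG: theta_{n+1} = theta_n + alpha_n p_n."""
        g = gradient(theta)
        p = -g                                   # first search direction
        for _ in range(n_iter):
            alpha = line_search(theta, p)        # minimizes Phi(theta + alpha p)
            theta = theta + alpha * p            # update, equation (8)
            g_new = gradient(theta)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere(+)
            p = -g_new + beta * p                # search direction, equation (9)
            g = g_new
        return theta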
The adjoint gradient formulation begins by defining the following Lagrangian L:

    L(\theta, u, w) = \Phi(u) + A(u, w) - (f, w)_{\Gamma_N}    (12)

The differential of L is:

    DL = D_w L \cdot W + D_u L \cdot U + D_\theta L \cdot \Theta    (13)

where D denotes the partial derivative operator. Setting D_w L · W = 0 gives:

    A(u, W) - (f, W)_{\Gamma_N} = 0    (14)

which is easily verified using (3). Setting D_u L · U = 0 gives the adjoint equation:

    A(U, w) = -D_u \Phi \cdot U    (15)

Combining (7) with (13), (14) and (15), we now have:

    DL = D\Phi = D_\theta L \cdot \Theta    (16)

Since DΦ · Θ = g · Θ, we can calculate g as:

    g_n = D_{\theta_n} L = \frac{\partial A(u, w)}{\partial \theta_n}    (17)
Adjoint forcing in the digital image correlation elastography system
In the DICE problem, u_c are displacements calculated in the FE solution space and u_m are displacements measured in the 3D-DIC imaging space. In general, these measurements are not located at the nodes of the FE calculation, nor do they occur at a well-distributed set of points that allow for easy interpolation to the FE space. Therefore, the linear operator, T, is used to transform the calculated displacements to the 3D-DIC measurement space. To do this, each measurement point, x_i, is projected onto the FE surface mesh [Figure 1].
This projection provides the weighting coefficients for each node of the FE surface element containing the measurement point, which allows interpolation of the calculated displacements to the projected measurement point, u_c(x_i) = T_i u_c. The NM × NN transformation matrix, T, is generated once during the initialization phase of the algorithm, where NM is the number of measurement points and NN is the number of nodes in the FE space. Once T is calculated, all the necessary elements to calculate the adjoint forcing vector from (15) are in place. For the j-th element of the calculated displacement field, u_cj, the corresponding term in this inner product represents the j-th column of T, such that the adjoint state solution, W, can be calculated simply as:

    W = K^{-1}\, T^{T} \big(u_m - T u_c\big)^{*}    (18)

by taking advantage of the self-adjoint nature of the viscoelastic bilinear operator, A.
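In code, the two solves per gradient evaluation reduce to the pattern sketched below with scipy.sparse, assuming K, f, T and u_m are already assembled; the conjugation convention in the adjoint forcing follows equation (18) as reconstructed above and should be checked against one's own formulation.

    import numpy as np
    from scipy.sparse.linalg import spsolve

    def forward_and_adjoint(K, f, T, u_m):
        """One forward and one adjoint solve, as the adjoint method requires."""
        u_c = spsolve(K, f)                   # forward problem: K u = f
        residual = T @ u_c - u_m              # misfit in 3D-DIC measurement space
        # Self-adjoint operator A: the same K is reused for the adjoint state.
        W = spsolve(K, -T.conj().T @ residual.conj())
        return u_c, W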
Multi-frequency
To improve the quality of the reconstruction, we use a multi-frequency reconstruction, as it introduces more information into the poorly posed inverse problem [15]. This brings a few changes to the DICE reconstruction process. First, the data are obtained directly from a multi-frequency periodic actuation signal, usually a sum of sine waves. We then compute the FFT to isolate each individual actuation frequency. Second, the multi-frequency reconstruction algorithm works by calculating the gradient for each individual frequency, as described previously, and linearly combining them to obtain a full multi-frequency gradient. Moreover, the shear modulus is now reconstructed as a power-law, in the form of:

    \mu(\omega) = \mu_0\, \omega^{\alpha}    (19)

where µ_0 and α are the parameters being reconstructed, rather than µ itself. The gradient formulation then becomes:

    \frac{\partial \Phi}{\partial \mu_0} = \omega^{\alpha}\, \frac{\partial \Phi}{\partial \mu}, \qquad \frac{\partial \Phi}{\partial \alpha} = \mu_0\, \omega^{\alpha} \ln(\omega)\, \frac{\partial \Phi}{\partial \mu}    (20)

The gradient terms for the loss modulus are calculated using the same process.
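A minimal numpy sketch of this combination step: per-frequency gradients with respect to µ are mapped onto the power-law parameters µ_0 and α by the chain rule of (20) and summed over frequencies. The per-frequency gradients dPhi_dmu_per_freq are assumed to come from the adjoint computation above; the function and argument names are ours.

    import numpy as np

    def powerlaw_gradient(mu0, alpha, omegas, dPhi_dmu_per_freq):
        """Combine per-frequency gradients for mu(omega) = mu0 * omega**alpha."""
        g_mu0, g_alpha = 0.0, 0.0
        for omega, dPhi_dmu in zip(omegas, dPhi_dmu_per_freq):
            # Chain rule: d mu / d mu0   = omega**alpha,
            #             d mu / d alpha = mu0 * omega**alpha * ln(omega).
            g_mu0 += dPhi_dmu * omega**alpha
            g_alpha += dPhi_dmu * mu0 * omega**alpha * np.log(omega)
        return g_mu0, g_alpha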
Regularization
Given the poorly posed nature of the DICE inverse problem, where internal parameters are estimated based on purely surface-based measurements, regularization terms are added to the objective function, (7), to penalize elastic property distributions with sharp variations and unreasonably high or low values. This regularization also helps avoid artifacts due to model-data mismatch around the complex, friction contact region of the actuator.
Total variation regularization
To help reduce the spatial variation of the reconstructed property distributions, θ, we introduce a total variation (TV) regularization. In our case, the TV of a parameter is defined as the integral of its squared gradient over the imaging domain [16]. This term is then added to the objective function, Φ(θ). This regularization term modifies the gradient calculation shown in Eq. 17 through the addition of the corresponding TV gradient. For the gradient term corresponding to the n-th material property value, θ_n, we have:

    g_n^{TV} = \alpha_{TV} \int_\Omega \frac{\nabla \theta \cdot \nabla \phi_n}{\sqrt{\nabla \theta \cdot \nabla \theta + \delta^2}} \, d\Omega    (22)

where α_TV is the regularization weighting, δ is a term used to ensure the differentiability of the TV operator when the property gradient is close to zero and φ_n is the FE shape function that supports the n-th material property value.
Tikhonov regularization
Tikhonov regularization penalizes large changes in the mechanical property values from one iteration to the next, in effect limiting the size of p_n in Eq. (8). In the DICE reconstruction problem, parameter estimates near the center of the phantom are least sensitive to the surface-based displacement measurements, so the Tikhonov regularization term is scaled by the square of the distance from the central axis of the phantom (independent of the axial position) to ensure that elastic properties at the center of the phantom are only adjusted in the case of strong evidence from the surface displacement data. For the gradient term corresponding to the n-th material property value, θ_n, we have:

    g_n^{TK} = \alpha_{TK}\, \mathrm{dist}^{\beta}\, d\theta_n    (23)

where α_TK is the regularization weighting, dθ_n is equal to θ_n − θ_0, β is a user-defined exponent (2 in this case) and dist is the distance from the central axis of the phantom.
Spatial filtering
To promote smooth property distributions, we apply Gaussian spatial filtering to the parameter distribution, θ, at each iteration. The window for this filter is set as a 5 × 5 × 5 grid surrounding each material property node.
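A sketch of such a filter for property values stored on a regular grid, using scipy.ndimage; with sigma = 1.0 and truncate = 2.0 the kernel radius is 2 voxels, i.e. a 5 × 5 × 5 window. The window size follows the text, while the sigma value, the grid layout and the random data are assumptions of this sketch.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    theta = np.random.rand(20, 20, 20)   # property values on a regular grid

    # truncate=2.0 with sigma=1.0 limits the kernel to radius 2 -> 5x5x5 window.
    theta_smooth = gaussian_filter(theta, sigma=1.0, truncate=2.0)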
Choice of the total variation coefficient
In order to choose the TV weighting coefficient, α_TV, correctly, the change in the motion error, as calculated by (7), is plotted as a function of the value of the TV coefficient, as shown in Figure 2. The optimal TV weighting corresponds to the lowest coefficient that still shows the effect of the regularization, that is, the first point along the horizontal axis of Figure 2 where the motion error increases.

As shown in Figure 2, both for simulated and phantom data, the optimal TV coefficient lies between 10^-14 and 10^-12. For this study, a value of 5 × 10^-13 was chosen for α_TV.
Simulation results
To validate the DICE reconstruction method, displacement fields for homogeneous and heterogeneous breast-shaped geometries were calculated by FE methods. In each case, the material properties were calculated for each frequency using a power-law in the form of Equation 19, with µ_0 set to 500, and α equal to 0.45 for the shear modulus and 0.32 for the loss modulus, reflecting values measured in [10]. For the homogeneous geometry, the properties at 60 Hz were 7.2 kPa for µ_R and 3.3 kPa for µ_I. The mono-frequency reconstruction gives a mean value of 6.1 kPa for µ_R (min: 5.6 kPa, max: 6.9 kPa) and 2.4 kPa for µ_I (min: 2.2 kPa, max: 3.2 kPa), representing 84% and 72% of the true value, respectively. The multi-frequency reconstruction gives a mean value of 7.2 kPa for µ_R (min: 6.9 kPa, max: 7.6 kPa) and 3.6 kPa for µ_I (min: 3.3 kPa, max: 3.8 kPa), representing 100.5% and 108% of the simulated value, respectively.
For the heterogeneous geometry, the shear modulus of the inclusion was set 20 times stiffer than the surrounding material. All other properties were identical between the two materials. While the properties of the surrounding tissue were quite accurately reconstructed (103% and 104% of the simulated value), the properties of the inclusion were incorrect, although the location of the inclusion is correctly identified. Figure 3 shows the result for the mono-frequency reconstruction.
Experimental results
The main objective of the DICE method is to detect and localize stiff inclusions within the imaging volume. To validate the method using measurement data, two silicon phantoms (one homogeneous and one heterogeneous) were constructed. The heterogeneous phantom consisted of a half-ellipsoid with half axes of 6.5 cm, 6.5 cm, and 7.5 cm, with two inclusions (1 cm and 2 cm diameter) located at 120° from each other. The inclusions are located at a depth of 4 cm, and 1 cm and 2 cm from the phantom surface for the small and large inclusion, respectively. The two inclusions were 3D printed from ABS plastic and, as they were hollow, were visible within a standard T1 magnetic resonance image (MRI). The density of the silicon phantom material was measured at roughly 900 kg/m³ and, for the reconstruction, the λ modulus was set at 150 kPa, which corresponds to a Poisson's ratio of 0.48 at a shear modulus of 6.25 kPa.
The DICE reconstruction for the homogeneous phantom is shown in Figure 4. As can be seen, the stiffness distribution is largely homogeneous, with a slightly stiffer region toward the center of the phantom, where the influence of the surface-based measurements is minimal. Figures 5 and 6 show the results of the reconstruction for the heterogeneous phantom. Figure 6 shows the comparison between the T1 MRI within a slice containing both inclusions and the corresponding slice of the multi-frequency DICE reconstruction.
Comparison with the sweep reconstruction
The material properties for the homogeneous phantom can be compared with results obtained from a brute-force sweep analysis. The sweep analysis is done by calculating the value of a motion error objective function Φ, defined as:

    \Phi(\theta) = \frac{1}{nm} \sum_{i=1}^{nm} \big(u_i^c(\theta) - u_i^m\big)^{*} \big(u_i^c(\theta) - u_i^m\big)    (24)

where u_i^m are the measured displacements, u_i^c(θ) are the corresponding displacements calculated by FE solution for the current set of mechanical parameters, θ, nm is the number of measurements and * is the complex scalar product. For this study, the mechanical parameters, θ, consist of G' and G''. The parameter sweep was performed over a range from 4150 Pa to 11,500 Pa for the storage modulus and 2000 Pa to 5000 Pa for the loss modulus. The sweep analysis yields, for the multi-frequency parameters log_10(µ_0) and α, 3.8 and 0.42 for the shear modulus and 3.77 and 0.35 for the loss modulus. The gradient reconstruction yields 3.01 and 0.35 for the shear modulus and 3.04 and 0.36 for the loss modulus. The two reconstruction methods thus give similar values.
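The brute-force sweep reduces to nested loops over the parameter grid; a schematic numpy sketch is given below. The sweep ranges are those quoted in the text, while the grid resolution and the motion_error stand-in (in practice, a forward FE solve evaluated through equation (24)) are assumptions of this sketch.

    import numpy as np

    def motion_error(theta):
        # Placeholder: in practice this runs the forward FE solve and (24);
        # a synthetic quadratic stand-in keeps the sketch runnable.
        g1, g2 = theta
        return (g1 - 6300.0) ** 2 + (g2 - 3300.0) ** 2

    storage = np.linspace(4150.0, 11500.0, 30)   # G' sweep range (Pa)
    loss = np.linspace(2000.0, 5000.0, 30)       # G'' sweep range (Pa)

    errors = np.empty((storage.size, loss.size))
    for i, g1 in enumerate(storage):
        for j, g2 in enumerate(loss):
            errors[i, j] = motion_error((g1, g2))

    i, j = np.unravel_index(np.argmin(errors), errors.shape)
    print(f"Best fit: G' = {storage[i]:.0f} Pa, G'' = {loss[j]:.0f} Pa")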
Exact measurements of viscoelastic properties in very soft gels such as those used in these experiments are hard to obtain using traditional rheometers due to resonance effects. However, the property values obtained here are in good agreement with values obtained in previous, simplified homogeneous experiments [17] as well as phantom studies run in other laboratories using similar gels [11].
Second phantom
To investigate the robustness of the DICE method, we performed the same experiment as described above on a second heterogeneous phantom. This phantom was smaller than the first, with half axes of 4.5 cm, 4.5 cm, and 6.5 cm, and contained two inclusions located at 90° to each other. The inclusions sit at mid-height, near the phantom surface. Both inclusions are roughly oval in shape and were cut from rigid silicone gel. Figure 7 shows their location within the phantom and the corresponding slice of the DICE reconstruction.
The reconstruction was performed using the same algorithm and parameters as described for the first heterogeneous phantom [Figure 8].
While the inclusions in the reconstruction appear near the bottom of the phantom, in reality they are located at roughly mid-height. The discrepancy arises because about 1.5 cm of the phantom is not covered by the finite-element mesh, owing to a lack of spatial information on this region from the 3D DIC data.
Discussion
For the homogeneous phantom, the adjoint gradient method does not find a purely homogeneous material: the shear modulus varies from around 7 kPa at the center to around 4.3 kPa at the edge. However, the change is gradual, with none of the abrupt variation seen when an inclusion is present.
For the first heterogeneous phantom, we were able to successfully identify the presence and location of the stiff inclusions, as seen in Figure 6. However, the material property values (shear and loss modulus) of the inclusions were not accurately identified. The inclusions, being made of plastic, have an extremely high shear modulus compared to the surrounding medium. This high rigidity corresponds to extremely long mechanical wavelengths, which cannot be accurately characterized within inclusions of this small size.
For the second heterogeneous phantom, the inclusions are well localized by the reconstruction algorithm. However, a reconstruction artifact, appearing as a stiff inclusion, is present on the second line of the layer. This is believed to be due to the actuator: tests with a ring actuator produced a circular pattern at the same location, suggesting that the compression generated behind the actuator creates this pattern. Overall, the DICE method successfully located the inclusions in two different phantoms.
Conclusion
We were able to successfully and accurately reconstruct the material properties using simulated displacement fields for both cases, validating the code and the model used. Using real measurements, we were able to discriminate between the homogeneous and heterogeneous cases. Moreover, for the heterogeneous phantoms, we were able to adequately locate the two inclusions. More tests need to be done to determine optimal regularization levels and to assess the robustness of the DICE method.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2020-10-19T13:33:31.353Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "2b47bdebd4ac6091940338e935f07a21f86579bb",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jmp.jmp_99_19",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6dfda56dd0a48078fc69fd0c0d0a531d9f99d641",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
265309758 | pes2o/s2orc | v3-fos-license | Successful management of concurrent COVID-19 and Pneumocystis Jirovecii Pneumonia in kidney transplant recipients: a case series
Background Pneumocystis pneumonia (PCP) is a life-threatening pulmonary fungal infection that predominantly affects immunocompromised individuals, including kidney transplant recipients. Recent years have witnessed a rising incidence of PCP in this vulnerable population, leading to graft loss and increased mortality. Immunosuppression, which is essential in transplant recipients, heightens susceptibility to viral and opportunistic infections, magnifying the clinical challenge. Concurrently, the global impact of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been profound. Kidney transplant recipients have faced severe outcomes when infected with SARS-CoV-2, often requiring intensive care. Co-infection with COVID-19 and PCP in this context represents a complex clinical scenario that requires precise management strategies, involving a delicate balance between immunosuppression and immune activation. Although there have been case reports on management of COVID-19 and PCP in kidney transplant recipients, guidance on how to tackle these infections when they occur concurrently remains limited. Case presentations We have encountered four kidney transplant recipients with concurrent COVID-19 and PCP infection. These patients received comprehensive treatment that included adjustment of their maintenance immunosuppressive regimen, anti-pneumocystis therapy, treatment for COVID-19 and other infections, and symptomatic and supportive care. After this multifaceted treatment strategy, all of these patients improved significantly and had favorable outcomes. Conclusions We have successfully managed four kidney transplant recipients co-infected with COVID-19 and PCP. While PCP is a known complication of immunosuppressive therapy, its incidence in patients with COVID-19 highlights the complexity of dual infections. Our findings suggest that tailored immunosuppressive regimens, coupled with antiviral and antimicrobial therapies, can lead to clinical improvement in such cases. Further research is needed to refine risk assessment and therapeutic strategies, which will ultimately enhance the care of this vulnerable population.
Background
Pneumocystis pneumonia (PCP) is a severe opportunistic pulmonary fungal infection caused by Pneumocystis jirovecii, which usually occurs in immunocompromised patients, especially those infected with human immunodeficiency virus. In recent years, PCP has become increasingly prevalent among solid organ transplant recipients, particularly kidney transplant recipients, and individuals with hematological malignancies [1][2][3]. Notably, PCP has been associated with a heightened risk of graft loss and mortality [4]. Kidney transplant recipients require maintenance immunosuppressive therapy and therefore have increased susceptibility to both viral and opportunistic infections.
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has posed an unprecedented threat to global health. By August 2023, a total of 769,806,130 cases of COVID-19, including 6,955,497 deaths, had been confirmed worldwide [5]. COVID-19 infection in kidney transplant recipients may be particularly severe and require admission for intensive care [6].
In clinical practice, kidney transplant recipients with COVID-19 and PCP co-infection represent a multifaceted and intricate clinical scenario. Effective management of this dual infection necessitates precise clinical strategies, frequently involving a delicate balance between immunosuppression and immune activation [7]. While a number of case reports have documented the management of either COVID-19 infection or PCP infection in kidney transplant recipients [6,8], the literature on the management of these infections when they occur simultaneously is limited.
This report describes the successful clinical management of COVID-19 and PCP co-infection in four kidney transplant recipients. Our intention is to provide health care practitioners with valuable insights and potential guidance for effective management of similar cases, with the ultimate goal of improving patient outcomes.
Case 1
A 65-year-old man with IgA nephropathy and chronic renal insufficiency secondary to end-stage renal failure (ESRF) underwent living donor kidney transplantation in 2011. His renal function recovered well after the operation. Serum creatinine was maintained at 200-300 µmol/L, and immunosuppression was maintained using a triple-drug regimen consisting of mycophenolate mofetil (MMF) 0.5 g twice daily, tacrolimus 1 mg twice daily, and prednisone 5 mg once daily. On January 25, 2023, the patient developed fever, cough, chest tightness, and fatigue following physical activity, leading to hospitalization on February 2, 2023. On admission, he had a body temperature of 39.2 °C and a blood oxygen saturation of 90% in ambient air. A quantitative reverse transcription-PCR (RT-qPCR) assay for SARS-CoV-2 was positive (Table 1). The pertinent admission-related test results are shown in Fig. 1.
A computed tomography (CT) scan of the chest revealed multiple patchy small nodular ground-glass opacities in both lungs, which were accompanied by grid-like and scattered fibrous cord-like opacities with indistinct borders (Fig. 2). Subsequently, a 5-mL sample of bronchoalveolar lavage fluid was collected and sent to the local microbiology laboratory for metagenomic next-generation sequencing, which confirmed P. jirovecii infection (Table 2). As part of the treatment strategy, the patient's maintenance immunosuppressive regimen was discontinued. The therapeutic protocol consisted of methylprednisolone 40 mg as the sole anti-rejection agent, with antiviral intervention consisting of nirmatrelvir/ritonavir (nirmatrelvir 300 mg/ritonavir 100 mg twice daily on day 1; nirmatrelvir 150 mg/ritonavir 100 mg once daily on days 2-5) and ganciclovir 250 mg/day. The patient was also started on an antimicrobial regimen of sulfamethoxazole (administered as 3 tablets per dose, three times daily), caspofungin (70 mg on day 1, followed by 50 mg/day), and moxifloxacin 250 mL (Table 1).
After 4 days of treatment, the patient tested negative for COVID-19 and his temperature returned to normal. Seven days later, his symptoms of cough and chest tightness had improved slightly, his vital signs were stable, and his blood oxygen saturation was above 95% on supplemental oxygen at a flow rate of 3 L/min. Chest CT showed that the lesions were partially absorbed (Fig. 2). Methylprednisolone was stopped, prednisone acetate was administered orally, and tacrolimus was added. Finally, once the absolute lymphocyte count exceeded 1000 × 10⁶ cells/L, MMF was added. Thereafter, his vital signs were stable, his symptoms improved, and he was discharged from hospital. The relevant test results and changes during hospitalization are summarized in Fig. 1, and the changes on chest CT are shown in Fig. 2.
Case 2
A 27-year-old man underwent living donor kidney transplantation for hypertensive nephropathy-associated ESRF in 2020. Following the procedure, his renal function recovered well, with serum creatinine levels consistently maintained within the normal range. Immunosuppression was maintained by a triple-drug regimen consisting of MMF 0.36 g twice daily, tacrolimus 2 mg in the morning and 1 mg in the evening, and prednisone 5 mg once daily. On March 15, 2023, the patient presented with fever, cough, chest tightness, and shortness of breath, prompting hospitalization on March 28, 2023. On admission, he had a body temperature of 39.0 °C and a blood oxygen saturation of 91% in ambient air (Table 1). Admission-related test results are shown in Fig. 1. An RT-qPCR assay for SARS-CoV-2 was positive (Table 2). Chest CT scans revealed multiple patchy nodular ground-glass opacities in both lungs, accompanied by grid-like and scattered fibrous cord-like opacities with blurred borders (Fig. 3). Next-generation sequencing of pathogenic microorganisms in peripheral blood revealed P. jirovecii infection (Table 2).
The therapeutic protocol consisted of methylprednisolone 40 mg as the sole anti-rejection agent, with antiviral intervention consisting of nirmatrelvir/ritonavir (nirmatrelvir 300 mg/ritonavir 100 mg twice daily on days 1-5) and ganciclovir 250 mg/day. The remaining treatment protocol was the same as in case 1 (Table 1). Following a 4-day course of treatment, a COVID-19 test was negative and the patient's body temperature had returned to the normal range. Seven days thereafter, notable relief of symptoms, including cough and chest tightness, was observed and vital signs were stable. The patient consistently achieved a blood oxygen saturation higher than 95% on supplemental oxygen at a flow rate of 3 L/min. Subsequent chest CT demonstrated partial resolution of the previously identified lesions. The patient's vital signs remained stable, and symptomatic improvement subsequently continued under the same treatment protocol. The examination outcomes and changes observed during hospitalization are shown in Fig. 1, and the findings on chest CT over time in Fig. 3.
Case 3
A 52-year-old man with ESRF associated with polycystic kidney disease underwent living donor kidney transplantation in 2016. Following the procedure, his renal function showed marked recovery. His serum creatinine level was maintained within the range of 120-160 µmol/L. Immunosuppression was maintained using a triple-drug regimen consisting of MMF (0.75 g in the morning, 0.5 g in the evening), tacrolimus (2 mg in the morning, 1 mg in the evening), and prednisone (5 mg once daily). On April 15, 2023, he developed symptoms of fever, cough, chest tightness, and fatigue post-exertion, leading to hospitalization on April 19, 2023. On admission, his body temperature was 39.7 °C and his blood oxygen saturation was 89% in ambient air. His admission-related test outcomes are summarized in Fig. 1. An RT-qPCR assay for SARS-CoV-2 was positive, as shown in Table 2. Chest CT scans revealed multiple patchy grid-like and cord-like shadows characterized by increased density in both lungs with blurred boundaries (Fig. 4). Next-generation sequencing of pathogenic microorganisms in peripheral blood confirmed P. jirovecii infection (Table 2).
After 2 days of the same treatment plan as case 1 (Table 1), the patient's COVID-19 test was negative and his body temperature had returned to the normal range. Five days later, his symptoms of cough and chest tightness had resolved, and his vital signs were stable. His blood oxygen saturation was consistently above 96% on supplemental oxygen at a flow rate of 3 L/min. A chest CT scan showed partial absorption of the lesions. The follow-up treatment plan was then initiated; the patient's vital signs remained stable, his symptoms improved, and he was finally discharged. Figure 1 shows the relevant test results and changes observed during hospitalization, while the chest CT findings over time are shown in Fig. 4.
Case 4
A 66-year-old woman underwent living donor kidney transplantation for ESRF stemming from polycystic kidney disease in 2016. Her renal function recovered well after surgery, with serum creatinine levels that were consistently within the normal range. Immunosuppression was maintained using a triple-drug regimen consisting of MMF 0.75 g twice daily, tacrolimus 1 mg twice daily, and prednisone 5 mg once daily. On May 25, 2023, the patient developed symptoms of fever, cough, chest tightness, and fatigue, prompting hospitalization on June 1, 2023. The relevant admission-related test outcomes are summarized in Fig. 1. An RT-qPCR assay for SARS-CoV-2 was positive. Chest CT scans revealed multiple patchy ground-glass density shadows alongside grid-like shadows characterized by blurred edges involving both lungs (Fig. 5). In view of financial constraints, next-generation sequencing was not performed for this patient.
Drawing on our institutional experience, the patient was treated for 3 days with the same plan as case 2 (Table 1), after which her COVID-19 test result converted from positive to negative and her body temperature normalized, confirming the efficacy of treatment. A week later, her symptoms of cough and chest tightness were alleviated and her vital signs were stable. Her blood oxygen saturation was consistently above 97% on supplemental oxygen at a flow rate of 3 L/min. Subsequent CT scans of the chest indicated partial absorption of the identified lesions. Continued adherence to the follow-up treatment protocol led to sustained improvement, culminating in the patient's discharge from hospital. The results of relevant tests and changes noted during hospitalization are shown in Fig. 1, and the progression of the chest CT findings in Fig. 5.
Discussion and conclusions
Thus far, there have been three reported cases of kidney transplant recipients who developed COVID-19 with concurrent PCP infection. Two of these cases were successfully treated and one was ultimately fatal [1][2][3]. In this report, we describe four kidney transplant recipients (three male, one female) in Nanjing, China who were admitted with COVID-19 and PCP co-infection that was managed successfully. The mean patient age was 53 years, and the average interval between kidney transplantation and onset of PCP was 83 months. PCP is a significant complication arising from immunosuppressive therapy in individuals who have undergone solid organ transplantation. Trimethoprim/sulfamethoxazole has been used widely for prophylaxis against PCP; however, its potential risks and adverse effects outweigh its preventive benefit [9]. Therefore, trimethoprim/sulfamethoxazole is generally not recommended for prophylaxis against PCP nowadays. Moreover, a recent report suggests that patients on MMF may not need PCP prophylaxis [10]. All four of our cases received a combined immunosuppressive regimen of MMF, tacrolimus, and prednisone following kidney transplantation. However, it is noteworthy that these individuals developed PCP infection at varying time intervals after transplantation. This may be attributed to SARS-CoV-2 infection, as evidenced by an unexpectedly high proportion of PCP-positive samples in critically ill patients with COVID-19 [11].
All four cases were confirmed to have SARS-CoV-2 infection by RT-qPCR on admission. Furthermore, three of these four patients were diagnosed with PCP by metagenomic next-generation sequencing, with two found to have co-infection with cytomegalovirus. All four cases were found to have lymphocytopenia on admission, with absolute lymphocyte counts of less than 500 × 10⁶ cells/L; the risk of developing PCP has been reported to be 18.7-fold greater in such patients than in those with an absolute lymphocyte count higher than 500 × 10⁶ cells/L [12]. The patients' maintenance immunosuppressive regimens were discontinued to enhance the immune response to the infections. In all cases, antiviral (nirmatrelvir/ritonavir/ganciclovir) and antimicrobial (caspofungin/sulfamethoxazole/moxifloxacin) therapy was administered to address PCP and other infections. Methylprednisolone was administered to control the inflammatory response and alleviate respiratory symptoms. After an average of 6 days of combination therapy, all four patients showed improvements in their clinical symptoms, with conversion to a negative COVID-19 test result. A limitation of this case series is that chest CT was used to diagnose PCP in one of the cases. Bilateral diffuse ground-glass opacities with interstitial infiltrates are typical findings in PCP [13][14][15][16]. After treatment, the ground-glass opacities partially resolved in all cases. Upon observing a positive response to treatment, the decision was made to reinstate the maintenance immunosuppressive regimen in order to minimize the risk of rejection.
In conclusion, this case series provides a foundation for further research in the field of simultaneous COVID-19 and PCP infection in kidney transplant recipients. Continued investigation of the risk factors, optimal treatment approaches, and long-term outcomes is essential for improvement of the management and care of these complex cases.
Fig. 2 Case 1 - Computed tomography images at different hospital days
Fig. 3 Case 2 - Computed tomography images at different hospital days
Fig. 4 Case 3 - Computed tomography images at different hospital days
Table 2
Microbiological Outcome Criteria | 2023-11-22T14:05:27.171Z | 2023-11-21T00:00:00.000 | {
"year": 2023,
"sha1": "ed74960ebfd30e7458f8dde32a7151039999d90e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "e62bc860da503d53baf0483c7a9e0587897bbe10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258765887 | pes2o/s2orc | v3-fos-license | Potential of algae-derived alginate oligosaccharides and β-glucan to counter inflammation in adult zebrafish intestine
Alginate oligosaccharides (AOS) are natural bioactive compounds with anti-inflammatory properties. We performed a feeding trial employing a zebrafish (Danio rerio) model of soybean-induced intestinal inflammation. Five groups of fish were fed different diets: a control (CT) diet, a soybean meal (SBM) diet, a soybean meal + β-glucan (BG) diet and two soybean meal + AOS diets (alginate products differing in the content of low molecular weight fractions - AL, with 31% < 3kDa and AH, with 3% < 3kDa). We analyzed the intestinal transcriptomic and plasma metabolomic profiles of the study groups. In addition, we assessed the expression of inflammatory marker genes and histological alterations in the intestine. Dietary algal β-(1,3)-glucan and AOS were able to bring the expression of certain inflammatory genes altered by dietary SBM to a level similar to that in the control group. Intestinal transcriptomic analysis indicated that dietary SBM changed the expression of genes linked to inflammation, endoplasmic reticulum, reproduction and cell motility. The AL diet suppressed the expression of genes related to complement activation, inflammatory and humoral response, which can likely have an inflammation alleviation effect. On the other hand, the AH diet reduced the expression of genes, causing an enrichment of negative regulation of immune system process. The BG diet suppressed several immune genes linked to endopeptidase activity and proteolysis. The plasma metabolomic profile further revealed that dietary SBM can alter inflammation-linked metabolites such as itaconic acid and taurochenodeoxycholic acid and enriched the arginine biosynthesis pathway. The AL diet helped in elevating one of the short-chain fatty acids, namely 2-hydroxybutyric acid, while the BG diet increased the abundance of a vitamin, pantothenic acid. Histological evaluation revealed the advantage of the AL diet: it increased the goblet cell number and length of villi of the intestinal mucosa. Overall, our results indicate that dietary AOS with an appropriate amount of < 3kDa can stall the inflammatory responses in zebrafish.
Introduction
Inflammatory bowel disease (IBD) is a multifactorial disorder characterized by chronic and recurrent episodes of inflammation in specific segments of the intestine. IBD can be instigated by both genetic and environmental factors, but the rise in IBD cases over the last decade suggests the decisive role of diverse environmental factors in the pathogenesis of IBD (1). Moreover, the accelerated incidence of IBD in developing nations is correlated with a high intake of the Western diet. Current approaches for the treatment of IBD include the use of different anti-inflammatory drugs (2). The remitting and relapsing nature of the disease necessitates prolonged use of such anti-inflammatory agents, leading to undesirable side effects (3). Diet is an important environmental factor that can be an alternative to drugs, since components such as prebiotics are known to regulate intestinal inflammation by maintaining immune homeostasis. These non-digestible carbohydrates are considered to establish beneficial bacteria that can produce bioactive metabolites, such as short-chain fatty acids, which provide energy to enterocytes and maintain mucosal integrity (4).
Alginate oligosaccharides (AOS) are natural bioactive compounds with anti-inflammatory, antioxidant and prebiotic properties, among other bioactivities (5). They are produced through chemical or enzymatic digestion of alginates mainly extracted from brown algae. AOS are linear polymers of 2-25 monosaccharides composed of β-D-mannuronic acid (M) and α-L-guluronic acid (G) monomers linked by 1-4 glycosidic linkages, with different M/G ratios and degrees of polymerization. The biological functions of AOS are dependent on the molecular weight (MW) (5). An in vitro study has reported that low molecular weight alginates enhance the radical scavenging and immunomodulatory capacities in the gut (6), and AOS < 1 kDa with an M/G of 1.84 can efficiently scavenge superoxide, hydroxyl, and hypochlorous acid radicals compared to AOS of MW 1 to 10 kDa (7). Furthermore, an in vitro study reported that AOS < 1 kDa is more effective in eliciting lysozyme activity, peroxidase activity, phagocytic capacity and total nitric oxide synthase activity than AOS of MW 1-2 kDa or 2-4 kDa (8). The inflammation-suppressing ability of AOS has also been described previously: through attenuation of nitric oxide and prostaglandin E₂ production and inactivation of the nuclear factor kappa B and mitogen-activated protein kinase signaling pathways, as reported for mouse macrophage cell lines (9), and through enhancement of the activity of antioxidant enzymes such as superoxide dismutase (SOD) and catalase (CAT), as reported for human umbilical vein endothelial cells (10). Dietary AOS can also alter intestinal morphology and barrier function by increasing the villi length, goblet cell number and mucin-2 (MUC2) expression (11). Furthermore, an AOS-supplemented diet ameliorated the inflammatory responses in a DSS-induced colitis mouse model by reducing the infiltration of neutrophils and the level of inflammatory markers (TNF-α, COX-2) and increasing the expression of the tight junction proteins Zonula occludens-1 and Occludin (12). We have reported the ability of AOS to increase the abundance of bacteria associated with short-chain fatty acid (SCFA) production (13). Most of the previous reports on the anti-inflammatory and antioxidant activities of AOS have been based on in vitro studies. Hence, it is essential to generate in vivo evidence on the anti-inflammatory potential of AOS using an animal model. Furthermore, the effect of the molecular weight of AOS on its anti-inflammatory potential has not been explored in detail in vivo.
Algal β-(1,3)-glucan is a known prebiotic derived from the unicellular alga Euglena gracilis. Paramylon is the storage polysaccharide in E. gracilis and is a straight-chain β-(1,3)-glucan (14). Previous studies have reported the anti-inflammatory potential of paramylon: oral administration of paramylon reduced the number of infiltrating CD3⁺ T-lymphocytes and decreased the expression of Ccl2 and Il-11 in the gut of a gastric cancer mouse model (14). Furthermore, paramylon treatment activated M2 macrophages and downregulated the expression of inflammatory cytokines in the liver of mice (15).
In the present study, the intestinal transcriptome and plasma metabolome of zebrafish were profiled to reveal the effects of dietary AOS (Laminaria sp.-derived, with varying amounts of the < 3kDa fraction). We employed an adult zebrafish intestine inflammation model to understand the efficacy of the macroalga-derived oligosaccharides in countering inflammation. In addition, we investigated the changes caused by AOS and those imparted by a well-known anti-inflammatory product, algal β-(1,3)-glucan (16).
Experimental fish
Adult zebrafish (8-month-old AB strain) were used for the experiment. To obtain this stock, the parents were bred in five tanks at the zebrafish facility of Nord University, Norway, following a previously reported protocol (17). Fifteen males and 30 females in each of the five replicate tanks were community bred to obtain 300-400 eggs from each tank. These eggs were kept in E3 medium and incubated at 28°C in an incubator until hatching, i.e., at around 50 h post-fertilization. Larvae at the 5-day post-fertilization (dpf) stage were fed ad libitum commercial micro diets (< 100 µm particle size, Zebrafeed®, Sparos Lda, Olhão, Portugal). From 15 dpf (advanced larval stage), they were transferred to a recirculatory system and fed micro diets of 100-200 µm particle size (Zebrafeed®). From 30 dpf, the fish were fed a zebrafish diet (Zebrafeed®) of 300 µm particle size. Upon reaching the 8th month, 250 male zebrafish weighing 300-400 mg were transferred to a freshwater flow-through system (Zebtec Stand Alone Toxicological Rack, Techniplast, Varese, Italy) and acclimatized in 3.5 L tanks of the system. These fish were randomly distributed into 25 tanks (10 fish per tank). The water temperature in the tanks was 28°C, the water flow rate was 2.5 L/h, and dissolved oxygen in the tanks ranged between 7 and 8 ppm (oxygen saturation above 85%). A 14L:10D photoperiod was maintained throughout the 30-day feeding experiment.
Diet preparation and feeding experiment
Sparos Lda. prepared the five experimental diets (Supplementary Table 1A): one control diet and four soybean-based diets. The control (CT) diet was a fish meal-based diet with high-quality marine protein. The soybean-based (SBM) diet contained 50% (w/w) soybean meal (defatted, protein content 44%) and 11% soy protein isolate; the former is expected to induce intestinal inflammation (16). The product Aquastem™ 300DR (derived from the microalga Euglena gracilis) from Kemin, Des Moines, USA was added (2.5%, w/w) to the SBM diet to prepare the β-glucan diet (BG). Likewise, the diets AL and AH were prepared by adding 0.962 and 0.658% (w/w) alginate oligosaccharide, AOS (derived from the macroalga Laminaria; Centre d'Etude et de Valorisation des Algues (CEVA), Pleubian, France). The AL diet had a lower overall MW, in particular a higher content of short-chain AOS (over 30% < 3kDa). In addition, AL had 3-10kDa (7%), 10-30kDa (22%) and 30-60kDa (40%) fractions, compared to the AH diet that had AOS < 3kDa (3%), 3-10kDa (5%), 10-30kDa (30%) and 30-60kDa (60%). AOS in both the AL and AH diets were prepared from the same batch of purified alginates; hence, they have the same M:G ratio (0.9) and M:G distribution along the polymer (Supplementary Table 1B). Thus, the BG, AL and AH diets had all the ingredients of the SBM diet in addition to the respective test compound. The experimental fish were fed daily at 5% body weight (offered manually as three rations at 08:00, 13:00 and 18:00) for 30 days. Fish in 5 replicate tanks were allotted to each of the five study groups.
Sampling
At the end of the experimental period, the fish were sacrificed by immersion (5 min) in a lethal dose of 200 mg/L tricaine methanesulfonate (Argent Chemical Laboratories, Redmond, WA, USA) buffered with an equal amount of sodium bicarbonate. The total length and weight of individual fish from each treatment group were measured, and the data are presented in Supplementary Figure 1. The fish were dissected to collect the posterior intestine (n = 5 per group), which was snap-frozen in liquid nitrogen. These samples were later stored in a −80°C freezer until further analyses. Blood drawn by tail ablation (18) was collected in a heparinized tube and centrifuged at 5000 g for 10 min at 4°C to collect the plasma (n = 5 per group; 5 fish from each tank pooled). Intestine samples (n = 6-9 per group) were taken to assess the histomorphology.
RNA isolation, mRNA sequencing and bioinformatic analyses
Total RNA was extracted from the frozen intestine samples using Direct-zol™ RNA MiniPrep (Zymoresearch, CA, USA), following the manufacturer's instructions. The RNA concentration and integrity were determined using a Qubit 4 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and a TapeStation 2200 (Agilent Technologies, Santa Clara, CA, USA). RNA samples exhibiting a RIN value ≥ 7 were used for qPCR and preparation of RNA-Seq libraries. Library preparation and sequencing of samples (n = 5 for each diet group) were done by Novogene Europe (Cambridge, UK). Messenger RNA was purified from total RNA using poly-T oligo-attached magnetic beads. After fragmentation, the first-strand cDNA was synthesized using random hexamers, followed by second-strand cDNA synthesis. The libraries were end-repaired, A-tailed, adapter-ligated, size-selected, amplified, and finally purified. The libraries were quantified by Qubit and real-time PCR, and the size distribution was checked by bioanalyzer. The barcoded libraries were then pooled at equimolar concentrations and loaded on the Illumina NovaSeq 6000 Sequencing system (Illumina, San Diego, CA, USA) to obtain 150 bp paired-end reads. For each sample, an average of 22 million filtered reads were obtained, with a minimum of 19.8 million reads per sample. The average mapping percentage of the filtered reads was 86% (Supplementary Table 2). The bioinformatic analysis of the RNA-Seq data was performed following our previously described protocol (16). In brief, the quality of raw reads was assessed using the FastQC command line, and the tool fastp was used to filter the reads by considering the Phred quality score (Q ≥ 30). The filtered reads were then aligned to the reference zebrafish genome downloaded from NCBI (release 106) using HISAT2, version 2.2.1, with default parameters. Read counts for each gene were generated using featureCounts version 1.5.3. Differential expression of the genes across the treatment groups was determined by DESeq2, and transcripts with |log₂ fold change| ≥ 1 and an adjusted p-value < 0.05 (Benjamini-Hochberg multiple test correction) were considered significantly differentially expressed. The gene ontology (GO) enrichment analyses were performed using the Database for Annotation, Visualization and Integrated Discovery, version 6.8, with a p-value of 0.05 and a minimum gene count of 2. The packages ggplot2, pheatmap and GOplot in R were employed to present the data.
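As an illustration of the significance filter described above, the sketch below applies the same thresholds to a DESeq2-style results table. The column names (log2FoldChange, padj) follow DESeq2's standard output, but the gene list and fold-change values here are made-up placeholders, not results from the study.

import pandas as pd

# Synthetic stand-in for a DESeq2 results table; the real analysis would
# load DESeq2's exported results with these column names.
res = pd.DataFrame({
    "gene_id": ["il1b", "mpx", "cxcl8a", "muc2.1", "sod1"],
    "log2FoldChange": [1.8, 2.3, 1.2, -0.4, -1.1],
    "padj": [0.001, 0.0004, 0.03, 0.6, 0.02],
})

# Thresholds used in the study: |log2 fold change| >= 1 and BH-adjusted p < 0.05.
deg = res[(res["log2FoldChange"].abs() >= 1) & (res["padj"] < 0.05)]
up, down = deg[deg["log2FoldChange"] > 0], deg[deg["log2FoldChange"] < 0]
print(f"{len(deg)} DEGs: {len(up)} up, {len(down)} down")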
qPCR analysis
Genes related to intestinal inflammation, namely interleukin-1β (il1b), matrix metalloproteinase-9 (mmp9), myeloid-specific peroxidase (mpx), interleukin-10 (il10), chemokine (C-X-C motif) ligand 8a (cxcl8a), mucin2.1 (muc2.1) and mucin5ac (muc5ac), and those of the antioxidant enzymes superoxide dismutase 1 (sod1), glutathione peroxidase 1a (gpx1a) and catalase (cat), were selected for qPCR (n = 5 for each diet group), and each reaction was run with technical replicates. One µg of total RNA from each sample was reverse transcribed using the QuantiTect reverse transcription kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. The cDNA was further diluted 10 times with nuclease-free water and used as a PCR template. PCR reactions were performed using SYBR green in a LightCycler® 96 Real-Time PCR System (Roche Holding AG, Basel, Switzerland) with the following conditions: initial denaturation at 95°C for 10 min, followed by 35 cycles of 95°C for 20 s, 60°C for 30 s and 72°C for 10 s. We designed the primers for the selected genes using the Primer-BLAST tool in NCBI. The primers were then checked for secondary structures such as hairpins, repeats, and self- and cross-dimers by NetPrimer (Premier Biosoft, Palo Alto, USA). The primers for the target genes are listed in Supplementary Table 3. Relative expression of the selected genes was determined based on the geometric mean of three reference genes (eef1a1l1, rpl13a and actb1). The data were checked for the assumptions of normality (Shapiro-Wilk) and homogeneity of variance (Bartlett's test). Based on the results of these checks, statistical differences were determined by Analysis of Variance (ANOVA) or the Kruskal-Wallis test. Pairwise comparisons between treatments were done by Tukey's or Dunn's test.
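For reference, a minimal sketch of relative expression normalized to the geometric mean of multiple reference genes, in the spirit of the 2^−ΔΔCt method, is given below. The Ct values and assumed amplification efficiency are made-up numbers, and the exact normalization used in the study may differ in detail.

import numpy as np
from scipy.stats import gmean

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal, eff=2.0):
    """Target expression relative to a calibrator, normalized to the
    geometric mean of several reference genes."""
    q_target = eff ** (ct_target_cal - ct_target)              # relative quantity of target
    q_refs = eff ** (np.asarray(ct_refs_cal) - np.asarray(ct_refs))
    return q_target / gmean(q_refs)                            # normalize by geomean of refs

# Made-up Ct values for one treated sample and a control calibrator.
rel = relative_expression(
    ct_target=24.8, ct_refs=[18.2, 19.1, 17.6],                # eef1a1l1, rpl13a, actb1
    ct_target_cal=26.0, ct_refs_cal=[18.0, 19.0, 17.8],
)
print(f"relative expression = {rel:.2f}")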
Intestinal histomorphometry
Distal intestine samples were fixed in 3.7% (v/v) phosphate-buffered formaldehyde solution (pH 7.2) at 4°C for 24 h. Standard histological procedures were employed for dehydration, processing, and paraffin embedding, as described by Bancroft and Gamble (19). The paraffin blocks thus prepared were sectioned using a microtome (Microm HM3555, MICROM International GmbH, Walldorf, Germany). Four-micrometer-thick longitudinal sections were cut and mounted on SuperFrost® slides (Menzel, Braunschweig, Germany), and a robot slide stainer (Microm HMS 760×, MICROM International GmbH) was used to stain the slides with Alcian Blue-Periodic Acid Schiff's reagent (AB-PAS, pH 2.5). First, all acid mucins were stained blue with alcian blue; in the subsequent PAS reaction, only the neutral mucins were stained magenta. Light microscopy photomicrographs were taken with the Olympus BX61/Camera Color View IIIu (Olympus Europa GmbH, Hamburg, Germany) and the photo program Cell P (Soft Imaging System GmbH, Munster, Germany). The ImageJ software (20) was used for analyzing the tissue microarchitecture. To understand the histopathological changes, we measured five parameters of the intestinal features: number of eosinophils, goblet cell number, goblet cell size, villi length and width of the lamina propria.
Plasma metabolomics
Metabolomic profiling was carried out by MS-Omics (Vedbaek, Denmark). The analysis was carried out using a Thermo Scientific Vanquish LC (Thermo Fisher Scientific, Waltham, USA) coupled to an Orbitrap Exploris 240 MS (Thermo Fisher Scientific). The company used an electrospray ionization interface as the ionization source. The analysis was performed in positive and negative ionization mode under polarity switching. The ultra-performance liquid chromatography was performed using a slightly modified version of the protocol described by Doneanu et al. (21). Peak areas were extracted using Compound Discoverer 3.2 (Thermo Fisher Scientific). Metabolites in the samples were identified at four levels. Level 1: identification by retention times (compared against in-house standards), accurate mass (with an acceptable deviation of 3 ppm), and MS/MS spectra; Level 2a: identification by retention times (compared against in-house standards) and accurate mass (with an acceptable deviation of 3 ppm); Level 2b: identification by accurate mass (with an acceptable deviation of 3 ppm) and MS/MS spectra; Level 3: identification by accurate mass alone (with an acceptable deviation of 3 ppm). The obtained metabolomic data were analyzed employing MetaboAnalyst 5.0 (22). The data were log-transformed and auto-scaled (mean-centered and divided by the standard deviation of each variable) before downstream analyses. Principal component analysis was performed using the mixOmics package in R 4.2.1 to understand the differential clustering of the study groups. Metabolites with a |log₂ fold change| ≥ 0.6 and a p-value < 0.05 are reported as significantly altered metabolites. The packages ggplot2 and pheatmap in R 4.2.1 were employed to prepare the illustrations in this article.
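The pre-processing described above (log transform, then mean-centering and unit-variance scaling per metabolite) can be expressed compactly; below is a minimal sketch on a made-up peak-area matrix (samples × metabolites), not the study's actual data.

import numpy as np

rng = np.random.default_rng(42)
peak_areas = rng.lognormal(mean=10.0, sigma=1.0, size=(25, 71))  # 25 samples x 71 metabolites

log_data = np.log(peak_areas)
# Auto-scaling: mean-center and divide by the standard deviation of each metabolite.
scaled = (log_data - log_data.mean(axis=0)) / log_data.std(axis=0, ddof=1)

print(scaled.mean(axis=0).round(6)[:5])         # ~0 for each metabolite
print(scaled.std(axis=0, ddof=1).round(6)[:5])  # ~1 for each metabolite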
Soybean-based diet altered the expression of key genes related to inflammation
To gather evidence on the soybean meal-induced inflammatory response in zebrafish, we examined the relative expression of selected inflammatory genes in the intestine of fish fed a soybean meal-based diet for 30 days. The relative expression of il1b was significantly (p < 0.05) increased in the SBM and AH groups compared to the CT group (Figure 1A). Furthermore, the SBM group had significantly higher expression (p < 0.05) of mmp9 compared to the BG and AH groups (Figure 1B) and significantly (p < 0.001) higher expression of mpx compared to the CT group (Figure 1C). However, the expression of mpx in the BG, AL and AH groups was significantly lower compared to the SBM group. The expression of cxcl8a was significantly higher in the SBM group (p < 0.001) compared to the CT, BG and AH groups (Figure 1D). In addition, we observed significant differences in the expression of the antioxidant genes sod1 and cat (Figures 1E, F); the expression of sod1 was significantly lower in the AH group compared to the CT, SBM and AL groups, and the expression of cat was significantly higher in the AL group compared to the CT group. We did not detect any statistically significant differences in the expression of the mucin genes muc2.1 and muc5ac, the antioxidant enzyme gene gpx1a, or the anti-inflammatory gene il10 (Figures 1G-J).
Intestinal transcriptome profile reflected soybean-induced inflammation
To gain a deeper understanding of the effects of soybean meal-induced inflammation, we analyzed the intestinal transcriptome of zebrafish fed the soybean meal-based diet. A comparison of the SBM group with the CT group revealed 141 differentially expressed genes (DEGs), of which 58 were upregulated and 83 were downregulated in the SBM group (Figure 2A and Supplementary Table 4). The principal component analysis (PCA) plot shows the differential clustering of the SBM and CT groups along the first principal component (PC1), which explains 56% of the variability in the data (Figure 2B). Hierarchical clustering (Figure 2C) revealed a clear separation of upregulated and downregulated DEGs in the SBM group compared to the CT group.
The GO enrichment analysis based on the upregulated DEGs revealed significant enrichment of several GO terms, including those linked to immune system process, endoplasmic reticulum (ER) part, leukocyte chemotaxis, response to stress, response to external stimulus and leukocyte migration (Figure 2D). GO enrichment analysis employing the downregulated DEGs revealed terms like cilium-dependent motility, flagellum-dependent cell motility, sexual reproduction, alpha-tubulin activity and male gamete generation (Figure 2E).
Algal β-glucan targets distinct immune-related genes
We compared the transcriptomes of the BG and SBM groups. Forty-two genes were differentially expressed, of which 16 were upregulated and 26 were downregulated in the BG group (Figure 3A and Supplementary Table 5). The PCA plot shows the differential clustering of the SBM and BG groups along PC1, which explains 63% of the variability in the data (Figure 3B). Hierarchical clustering (Figure 3C) revealed a clear separation of DEGs in the SBM group compared to the BG group. Several downregulated DEGs were immune-related, for example GTPase IMAP family member-like gimap7 (LOC799889), gimap8 (LOC103910175), lectin, galactoside-binding, soluble, 9 (galectin 9)-like 6 (lgals9l6), matrix metalloproteinase-13a (mmp13a), chemokine ccl-c17a (LOC100002392), TIMP metallopeptidase inhibitor 2b (timp2b) and complement component c7a. The GO enrichment analysis employing the downregulated DEGs revealed the enrichment of terms like endopeptidase activity, hydrolase activity and proteolysis (Figure 3D). However, the upregulated DEGs did not reveal any significant GO enrichment.
Specific shift in the expression of immune-related genes caused by AOS
We first compared the intestine transcriptome of the AL group with that of the SBM group. The analysis revealed 32 DEGs, of which 10 were upregulated and 22 were downregulated in the AL group (Figure 4A). The PCA plot shows the differential clustering of the SBM and AL groups along PC1, which explains 65% of the variability in the data (Figure 4B). Hierarchical clustering (Figure 4C) revealed a clear separation of differentially expressed genes in the SBM group compared to the AL group. Several downregulated DEGs were immune-related: chemokine (C-C motif) ligand 36, duplicate 1 (ccl36.1), intelectin 3 (itln3), CD59A glycoprotein-like (LOC103910140), aquaporin 1a (aqp1a.1) and NLR family CARD domain-containing protein 3-like (LOC101882744), among others.
Figure 1: Relative expression of immune genes in the intestine of zebrafish fed different diets.
Comparison of the AH group with the SBM group revealed 20 DEGs, of which 10 were upregulated and 10 were downregulated in the AH group (Figure 4E and Supplementary Table 8). The PCA plot shows the clustering of samples into SBM and AH groups. The clusters are separated from each other along PC1, which explains 63% of the variability in the data (Figure 4F). Hierarchical clustering (Figure 4G) revealed a clear separation of up- and downregulated DEGs in the AH group compared to the SBM group. The GO analysis based on the downregulated DEGs revealed enrichment of negative regulation of immune system processes (Supplementary Table 9). Our analysis did not detect a significant GO term enrichment based on the upregulated DEGs. The upregulated DEGs were immune-related: B-cell receptor CD22 (LOC100151328), NLR family CARD domain-containing protein 3-like (LOC108183498), Fc receptor-like protein 5 (LOC101886098) and macrophage mannose receptor 1-like (LOC100331140). The downregulated DEGs were, among others, interleukin 26 (il26), adhesion G protein-coupled receptor E15 (adgre15), lectin galactoside-binding, soluble, 9 (galectin 9)-like 6 (lgals9l6) and CD59 glycoprotein-like (LOC103910140) (Figure 4H). To find out whether the AOS have the capacity to shift the expression of genes that were altered by SBM, we examined the common DEGs. A direct comparison of the AL and AH groups revealed 14 DEGs, of which 2 were upregulated and 12 were downregulated in the AL group. Among the downregulated DEGs, immune genes such as ccl36.1 had a 12-fold and C-reactive protein-6 (crp6) a 5-fold downregulation in the AL group compared to the AH group (Supplementary Table 10). Furthermore, these 2 DEGs caused the enrichment of the GO term "response to virus" (Supplementary Table 11).
Soybean-based diets (both with and without glucans or AOS) altered the plasma metabolome
To gain deeper insights into the impact of the different dietary treatments, we compared the plasma metabolomes of the various treatment groups. We identified a total of 71 metabolites (level 1). Partial least squares discriminant analysis revealed a group-based clustering of the samples (Supplementary Figure 3). Comparison of the SBM group with the CT group revealed aldopentose, ethylmalonic acid, xanthine, itaconic acid, 2-(hydroxymethyl)butanoic acid, citrulline, ornithine, taurochenodeoxycholic acid and trigonelline as the significantly altered metabolites (Figures 6A, B; Supplementary Figure 4 and Supplementary Table 12). The pathway analysis using these nine significantly altered metabolites identified arginine biosynthesis as the significantly enriched pathway (Figure 6C). Comparison of the BG group with the SBM group revealed pantothenic acid and isocitric acid as the significantly altered metabolites (Supplementary Figure 5A and Supplementary Table 13). Furthermore, comparison of the AL group with the SBM group revealed 2-hydroxybutyric acid as the significantly abundant metabolite (Figure 6D; Supplementary Figure 5B and Supplementary Table 14). We did not find any significantly altered metabolites in the AH vs SBM comparison.
AOS altered the intestinal histomorphology
We investigated histological changes in the intestine of zebrafish to understand the effect of the different diets (Figure 7A). We found a significantly higher number of goblet cells per villus (p < 0.05) in the AL group compared to the CT and SBM groups (Figure 7B). We also found an increase in villi length in the AL group compared to the CT and BG groups (Figure 7C). The diets appeared to have no effect on goblet cell size, eosinophils or lamina propria width in zebrafish (Figures 7D-F).
Discussion
Prebiotics are often administered through diet to obtain a "synergistic or complementary synbiotic" effect, and currently, scientists are gathering evidence on the IBD-alleviating potential of this approach. The belief is that dietary prebiotics change the composition of the intestinal microbiota, which influences mucosal as well as systemic immune responses in a host (23). In one of our previous studies (13), we profiled the intestinal bacterial communities in Atlantic salmon fed two levels of AOS (0.5 and 2.5%). We reported the potential ability of 0.5% AOS to stimulate the proliferation of bacteria with SCFA-producing capacity. The same product was added to the AL diet of the current study. For comparative purposes, we formulated the AH diet, which contained an AOS with a lower proportion of < 3kDa. The two products were incorporated at 0.962% (AL) and 0.658% (AH) (both w/w) into the diet of zebrafish, taking into consideration the content of the active component. We performed an in vivo study to compare the anti-inflammatory effects imparted by the two AOS products (with 31% < 3kDa and with 3% < 3kDa), using an intestine inflammation model in adult zebrafish. We targeted the transcriptome and metabolome of the fish to evaluate the anti-inflammatory potential of AOS. We also studied the transcriptome and metabolome of zebrafish fed an algal β-glucan that we characterized previously (16). The generated transcriptomic and metabolomic profiles revealed the distinct responses evoked by the products (based on the comparison with the SBM group). The GO terms enriched by the downregulated DEGs of the AL group were complement activation, inflammatory response and humoral response, compared to negative regulation of the immune system in the case of the AH group. The significantly abundant plasma metabolite in the AL group was 2-hydroxybutyric acid. Histological evaluation indicated that the AL group had more goblet cells and longer intestinal villi.
Figure 4: Transcriptome-based differences in the intestine of zebrafish fed AOS with 31% < 3kDa and AOS with 3% < 3kDa compared to the soybean group.
Dietary soybean meal altered the expression of genes linked to inflammation, endoplasmic reticulum, reproduction and cell motility
Soybean meal contains several anti-nutritional factors (ANFs), including saponins, lectins, isoflavones, and β-conglycinin (24). These ANFs can hamper growth, reduce digestive enzyme activity, and alter gut mucosal integrity to induce inflammation. Such an inflammatory response could be due to soy saponins, as reported in fish studies (25,26). Increased granulocyte recruitment and higher expression of inflammatory marker genes (il1b and il8) were the characteristics described in zebrafish larvae fed a diet containing soybean meal and soy saponin (27). In our previous studies, we found that dietary soybean meal (50% inclusion) affected barrier-related genes in the intestine of juvenile zebrafish (28), and soy saponin produced inflammation features such as increased lamina propria width, infiltration of immune cells, and increased expression of genes related to antimicrobial peptides and ion transport in the intestine of Atlantic salmon (26). In the present study, the expression of the inflammatory marker genes (il1b, mpx, cxcl8a) was upregulated in the SBM group. The proinflammatory cytokine IL-1β is secreted by innate immune cells and is an important mediator of the inflammatory response (29). The chemokine CXCL8A is a neutrophil chemoattractant that stimulates the migration of neutrophils from blood to the inflamed sites. Granulocytes (mainly neutrophils) are the first responders that migrate to an inflamed site, and a high concentration of granulocytes represents a transition from an acute phase to a chronic inflammatory state (30). We also gathered evidence of an increased presence of neutrophils in the intestine of larval zebrafish (16), and in the present study the expression of the neutrophil marker mpx was elevated by the SBM diet. Moreover, MPX was found to be involved in the production of ROS in the mucosa of patients suffering from intestinal inflammation (31). The MMPs secreted by neutrophils degrade the extracellular matrix, facilitating the transendothelial migration of neutrophils to the inflamed sites (32). In the present study, several inflammation-related GO terms like leukocyte chemotaxis and leukocyte migration were enriched in the SBM group, with significantly upregulated immune genes (mmp13a, coro1a, il22, ccl34a.4, cd59, foxn, gig2i). The metalloprotease gene mmp13 codes for an endopeptidase that plays a critical role in intestinal epithelial barrier disruption and is therefore considered a potential therapeutic target for treating IBD (33). LPS-induced goblet cell depletion, ER stress, permeability and tight junction alterations were reduced in the gut of Mmp13 knockout mice (33). The gene il22 codes for a cytokine that regulates intestinal barrier integrity, and its expression is altered during inflammation (34). A previous study on juvenile Jian carp (Cyprinus carpio var. Jian) reported that soybean β-conglycinin can also cause intestinal damage and induce inflammation and oxidative stress, as a result of elevated expression of the inflammatory cytokine il-8, tumor necrosis factor-α (tnf-α), and transforming growth factor-β (tgf-β) genes and a reduction of the antioxidant enzymes SOD and CAT (35). Hence, the negative effects of soybean meal can be compounded by the actions of all the anti-nutritional factors. For instance, soybean lectins can potentiate the detrimental effects of saponin on epithelial barrier function (36). Furthermore, dietary soybean meal can also have other metabolic effects, such as altering cholesterol metabolism and hampering reproductive development (37,38). In the present study, several downregulated DEGs in the SBM group significantly enriched the GO term sexual reproduction. These results corroborate those reported in our previous articles: 50% soybean meal feeding altered genes related to reproduction and cholesterol metabolism in zebrafish (16,28). This could be attributed to isoflavones present in soybean meal, which can bind to estrogen receptors (39). As reported in previous studies, alteration of membrane cholesterol by soy saponin might have affected cell motility and lipid metabolism by influencing the functioning of the ER (40,41). Furthermore, dietary soybean meal can increase the rate of respiration (16), thereby increasing the production of reactive oxygen species (42) and aggravating the inflammatory response (43). The soybean meal diet increased oxygen consumption (16) and altered genes related to oxidoreductase activity in zebrafish (28). Thus, the intestinal inflammatory response to soybean meal can be a direct effect of anti-nutritional factors or due to cumulative metabolic changes caused by multiple factors in the soybean diet.
Figure 5: Venn diagrams showing the total number of genes that were altered by the experimental diets. (A) Genes differentially upregulated in the SBM vs CT comparison but downregulated in the BG, AL and AH vs SBM comparisons. (B) Genes differentially downregulated in the SBM vs CT comparison but upregulated in the BG, AL and AH vs SBM comparisons. Genes that were altered in the SBM group had mRNA levels in the AL (C, D) and AH (E, F) groups, respectively, similar to those in the control group.
Distinct changes in the intestine of adult zebrafish fed soybean meal and algal β-glucan or AOS
Defects in barrier function caused by intestinal structural changes can increase luminal antigen penetration and the associated chemokine-induced recruitment of neutrophils. We found that the expression of genes associated with neutrophil recruitment (mpx and cxcl8a) and barrier disruption (mmp9) was downregulated in the AOS- and BG-fed groups. The expression of the proinflammatory cytokine gene il1b was upregulated in the SBM and AH groups. Conversely, the expression of il1b was not altered in the AL group compared to the control group, suggesting an immune modulation in the zebrafish intestine by the AL diet. Furthermore, in the AL group the downregulated DEGs (cd59, c7a, mpx, ccl36.1, itln3, aqp1a.1, nlrc3, gpr142 and mmp25b) enriched the GO terms inflammatory response, complement activation and humoral immune response. Note that mpx and an mmp (mmp25b) were downregulated in the AL group. Furthermore, the expression of cat was upregulated in the AL group; this antioxidant is a key regulator of the ROS generated during inflammatory conditions (44). It should be noted that catalase activity was lower in patients suffering from intestinal inflammation (45), and catalase administration can reduce ROS levels and ameliorate inflammation, as shown in colitis mouse models (44). Intestinal epithelial cells are sources of complement components, and appropriate regulation of complement activation is essential to prevent intestinal epithelial cell damage. Increased complement activation has been associated with the pathogenesis of IBD (46). The C7A protein is part of the membrane attack complex (MAC), and the downregulation of the gene expression of this component points to the prevention of complement activation. Therefore, the suppression of several processes related to inflammation by the downregulated DEGs and the increase in the antioxidant gene cat in the AL group suggest the ability of AOS (AL) to reduce the intestinal inflammation induced by the dietary soybean.
Conversely, in the AH group, we found one GO term, viz. negative regulation of immune system process, enriched by the downregulated DEGs (lgals9l6, CD59 glycoprotein-like). The downregulated DEG in the AH group, lgals9l6, which codes for the protein galactoside-binding, soluble, 9 (galectin 9)-like 6, is an ortholog of human LGALS9 (galectin 9/Gal-9). Gal-9, a β-galactoside-binding lectin with a carbohydrate recognition domain, is expressed in human crypt cells, and its expression is lowered in IBD patients (47). Furthermore, mice lacking gal-9 were reported to have an impaired intestinal mucosal antigen-specific IgA response and were more susceptible to developing watery diarrhoea (47). Because CD59 prevents the activation of the complement system and the associated assembly of the MAC, the decrease in epithelial expression of CD59 in IBD patients renders the epithelial cells prone to complement lysis and may lead to destruction of the gut epithelium (48). Furthermore, the comparison of the AH group with the SBM group revealed the upregulation of several immune genes (il26, cd22, nlrc3, cd206). The gene il26 is a mediator of inflammation and is overexpressed in activated or transformed T cells (49). The protein CD22 is abundantly expressed on the cell surface of activated B-lymphocytes, and it can negatively regulate lamina propria eosinophil levels, as shown in mice (50). Based on these findings, we speculate that the AL diet is effective in reducing inflammation. Venn diagrams created to understand the differential effects of AL and AH on gene expression in the zebrafish intestine revealed that the results of the AL vs SBM comparison were distinct from those of the AH vs SBM comparison. We found only three common DEGs (loc103910140, zgc:63568, polr2c) in the two comparisons; two of these (zgc:63568 and loc103910140/CD59) were downregulated and one (polr2c) was upregulated. As mentioned before, the protein CD59 prevents complement activation and MAC formation. We performed a direct transcriptomic comparison of the AL group with the AH group to further delineate the specific effects of the two products. This comparison revealed the downregulation of ccl36.1 and crp6 in the AL group. The zebrafish gene crp6 is an ortholog of human CRP, which is used as a biomarker of systemic inflammation and has been reported as a valuable marker of IBD (51). In our previous study, we reported that the positive effects of yeast-derived β-glucan on soybean meal-induced inflammation could also be due to a downregulation in the expression of ccl36.1 (28). In the present study, dietary algal β-glucan downregulated DEGs (zgc:171509, timp2b, lipg, mmp13a, c7a) linked to GO terms such as endopeptidase activity and proteolysis. We found that the immunostimulant can suppress the expression of mmp13a, whereas mmp13 expression was upregulated in Atlantic salmon infested with sea lice in response to chronic tissue damage (52). The genes mmp13 and timp2 have an essential role during tissue remodelling because the expression of these molecules determines the intricate extracellular matrix turnover (53,54). Therefore, these genes could be markers of the tissue damage caused by dietary soybean meal. Furthermore, as noted in this study on adult fish, algal β-glucan also reduced endopeptidase and proteolytic activity in larval zebrafish (16).
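The overlap analysis behind such Venn diagrams reduces to set operations on the DEG lists from each pairwise comparison; a minimal Python sketch is given below. The three shared genes are taken from the text, while the comparison-specific entries are hypothetical placeholders, not the actual DEG lists.

```python
# Minimal sketch: overlap of DEG lists from the two pairwise comparisons.
# The three shared genes are those reported above; the remaining entries
# are hypothetical placeholders.
al_vs_sbm = {"loc103910140", "zgc:63568", "polr2c", "ccl36.1", "crp6"}
ah_vs_sbm = {"loc103910140", "zgc:63568", "polr2c", "il26", "cd22"}

common = al_vs_sbm & ah_vs_sbm        # DEGs shared by both comparisons
al_only = al_vs_sbm - ah_vs_sbm       # DEGs specific to AL vs SBM
ah_only = ah_vs_sbm - al_vs_sbm       # DEGs specific to AH vs SBM

print(f"common ({len(common)}):", sorted(common))
print("AL-specific:", sorted(al_only))
print("AH-specific:", sorted(ah_only))
```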
AOS diet altered the histological architecture of the intestine
We studied five histomorphological parameters of the intestine and found that the AL group had longer villi with a significantly higher number of goblet cells. Goblet cells are responsible for the synthesis, storage and release of intestinal mucin proteins. Mucus production is an indication of a healthy barrier function, as it restricts the entry of pathogens and unwanted luminal antigens into the intestine. It has been reported that oligosaccharides can support the mucosal barrier function by stimulating intestinal goblet cells to produce more mucus (55). A higher number of mucus cells per villus indicates that more intestinal cells differentiate into goblet cells to reinforce the barrier. These changes were specific to the AL group: fish fed the AL diet had longer villi and more goblet cells. More goblet cells, longer villi and an increase in the villus height-to-crypt depth (V:C) ratio were reported in a study on AOS-fed pigs that showed better growth (11). β-glucans increased the V:C ratio as well as the average body weight of broiler chickens (56). Mannan oligosaccharide enhanced the growth, increased the villus height and decreased the intestinal crypt depth of juvenile striped catfish (Pangasianodon hypophthalmus) (57). There are few reports on AOS-induced alterations in the V:C ratio and their correlation with fish growth. Since zebrafish lack intestinal crypts (58), we cannot relate growth to the V:C ratio. Nevertheless, increased villus height has been associated with increased nutrient absorption, higher transport of nutrients and improved growth in mammals and fish (59,60). However, the AL diet did not stimulate the growth of zebrafish. Our previous study on the larval zebrafish model also did not reveal any effect of the SBM and β-glucan diets (also used in the present study) on standard length (16). Conversely, the SBM diet caused several developmental defects in larval zebrafish, including impairment of the eye and swim bladder and skeletal deformities (16). Zebrafish are known to have determinate growth (46); therefore, fish aged 20-40 dpf are considered suitable for a reliable growth study. During this period, energy is predominantly allocated to rapid growth, and this time window permits a 40-fold increase in body weight (61). However, in our study we did not find any changes in the growth of the AL group, as the feeding experiment was conducted using adult zebrafish. A previous study also reported that inclusion of soybean meal can stimulate an inflammatory response in the intestine without any effect on the growth of zebrafish (62).
Plasma metabolites indicate soybean meal-induced inflammation, and AOS- and algal β-glucan-induced SCFA and vitamin production
To our knowledge, this is the first study on the metabolites of zebrafish plasma. Plasma metabolomics can give indications of the systemic perturbations caused by intestinal inflammation (39). Only a few metabolites have been detected in zebrafish plasma due to the small sample amount that can be retrieved from the fish. A comparison of the SBM group with the control group yielded 9 differentially abundant metabolites out of the 71 detected. Among the altered metabolites, itaconic acid was significantly more abundant in the SBM group than in the control group. Itaconic acid is considered a biomarker of inflammation, and M1 macrophages are known to produce substantial amounts of itaconate (63). Furthermore, itaconate concentration was markedly increased during lipopolysaccharide- and interferon-γ-induced activation of mammalian macrophages (64), probably due to polarization of macrophages to their M1 phenotype. We also found a decrease in the metabolite taurochenodeoxycholic acid (TCDCA) in the SBM-fed group. TCDCA, the secondary bile acid conjugated with taurine, is a derivative of the primary bile acid chenodeoxycholic acid. Secondary bile acids are microbiota-associated metabolites, and studies have reported an increase in primary bile acids and a reduction in secondary bile acids in IBD patients (65). Furthermore, the amino acid residues in soybean protein have a high bile acid-binding ability and can suppress enterohepatic circulation, even in fish (66). Therefore, the decreased concentration of TCDCA in plasma is also likely due to soybean feeding. On the other hand, the arginine-nitric oxide and arginine-urea pathways are implicated in the pathogenesis of IBD; in the former, NOS2 (the inducible form of nitric oxide synthase [iNOS]) metabolizes L-arginine to NO and L-citrulline, and in the latter, arginases (ARG1 and ARG2) catalyse the conversion of arginine to urea and ornithine. The abundance of ornithine and citrulline was higher in the plasma of zebrafish fed the inflammation-inducing diet, as reported for IBD patients (67). Pathway analysis using the metabolites with significantly higher abundance in the SBM group detected arginine biosynthesis as the significantly enriched pathway. Thus, our results point to the enrichment of the arginine biosynthesis pathway and the upregulation of ornithine and citrulline. The increase in arginine biosynthesis in the SBM group could be a compensatory response to the decreased arginine availability associated with the inflammatory response (68).
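Pathway analyses of this kind typically rest on an over-representation test: given how many of the significantly altered metabolites map to a pathway, a hypergeometric test asks whether that overlap exceeds chance. The sketch below is illustrative only; the 9 altered and 71 detected metabolites are taken from the text, but the pathway membership counts are hypothetical and are not the values used by the pathway analysis software in this study.

```python
# Minimal sketch of a hypergeometric over-representation test of the kind
# underlying metabolite pathway enrichment. Pathway counts are hypothetical.
from scipy.stats import hypergeom

M = 71   # metabolites detected (background)
K = 6    # background metabolites annotated to the pathway (hypothetical)
n = 9    # differentially abundant metabolites (SBM vs control)
k = 3    # of those, number mapping to the pathway (hypothetical)

# P(X >= k): probability of drawing at least k pathway members by chance
p_value = hypergeom.sf(k - 1, M, K, n)
print(f"enrichment p-value = {p_value:.4f}")
```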
SCFAs are produced by microbial fermentation of non-digestible carbohydrates in the posterior segment of the intestine of mammals (69). We found significantly higher levels of 2-hydroxybutyric acid (2-HB) in the plasma of the AL group. While a high-fat diet decreased the abundance of 2-HB in the serum of mice, a dietary polysaccharide increased the metabolite (70). The same study reported that pretreatment of macrophages with 2-HB can significantly decrease LPS-induced upregulation of TNF-α. In the present study, proinflammatory genes were downregulated in the AL group, which had a high abundance of 2-HB. Interestingly, we found that AOS (AL) can elevate 2-HB, which was positively correlated with Alloprevotella in another study (71). Furthermore, dietary AOS could increase the abundance of Alloprevotella and butyric acid, which are positively correlated (72). The AL and AH groups exhibited distinct transcriptomic and metabolomic responses, although the two diets differed only in the percentage of the low molecular weight fraction (AL, with 31% < 3 kDa, and AH, with 3% < 3 kDa). Our results indicate that this difference can affect the immune-modulatory and prebiotic potential of the diet; the AL diet was more effective in reducing intestinal inflammation than the AH diet. Low molecular weight polysaccharides are more soluble and have greater fermentability (73). Low and high molecular weight polysaccharides are utilized by different intestinal bacteria (74); the former are fermented faster to produce SCFAs and have greater prebiotic potential (75). This could be the reason for the detection of an SCFA in the plasma of the AL group.
A comparison of the BG group with the SBM group indicated an increase in the abundance of pantothenic acid, also known as vitamin B5 (VB5). A previous study found an inverse correlation between dietary VB5 intake and serum CRP concentration (a marker of inflammation) in humans (76). Although we observed a downregulation of crp6 in the AL group compared to the AH group, such changes were not noted for the BG group. A study on mice revealed that VB5 could enhance the phagocytic activity of macrophages to reduce the pathogen load in macrophages (77). In addition, in vitro studies have shown that VB5 can increase glutathione levels in cells, suggesting a role of VB5 as an antioxidant that reduces cell damage (78). Although previous studies have not indicated a connection between pantothenic acid and dietary β-glucan, it is possible that algal β-glucan might have stimulated the proliferation of gut microbes such as Bacteroides fragilis, Prevotella copri and Ruminococcus spp. that possess the genes to synthesize vitamin B5 (79).
Conclusion
Dietary soybean meal affected both the expression of inflammatory marker genes (il1b, mpx, cxcl8a) in the intestine of zebrafish and the abundance of plasma metabolites such as itaconic acid and taurochenodeoxycholic acid. Conversely, dietary AOS with a higher percentage of the low molecular weight fraction reduced the expression of several inflammatory marker genes and increased the goblet cell number, villus height and the plasma level of an SCFA. The BG diet suppressed several immune genes linked to endopeptidase activity and proteolysis, suggesting a possible role of algal β-glucan in controlling the tissue damage caused by dietary soybean meal. In the future, it would be interesting to study the impact of structurally different AOS on the microbiota composition and SCFAs in zebrafish and to explore the synergistic effect of AOS and algal β-glucan in reducing soybean-induced intestinal inflammation.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found here: SRA Bioproject, PRJNA896758.
Author contributions
SR, VK, and JD designed the study. RP provided AOS products, KM supplied the algal β-glucan and helped to plan the study. JD formulated and prepared the experimental feeds. SR and AG performed the feeding experiment. AG, SR, and YA did the sampling. SR did the molecular analyses. AG and SR performed the bioinformatic analysis. SR and VK wrote the manuscript. All authors contributed to the article and approved the submitted version.
Funding
SR and AG were supported by Netaji Subhas-ICAR International Fellowships (NS-ICAR IFs) from the Indian Council of Agricultural Research, India. The authors acknowledge the funding support received from Nord University.
"year": 2023,
"sha1": "c82613eb1706d700e837ffa28b21aabd85f9356e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c82613eb1706d700e837ffa28b21aabd85f9356e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Palliative care and quality of life in neuro-oncology
Health-related quality of life has become an important end point in modern-day clinical practice for patients with primary or secondary brain tumors. Patients have unique symptoms and problems from diagnosis until death, which require interventions that are multidisciplinary in nature. Here, we review and summarize the key issues in palliative care, quality of life and end-of-life care in patients with brain tumors, with a focus on primary gliomas.
Introduction
"Palliative care begins from the understanding that every patient has his or her own story, relationships and culture, and is worthy of respect as a unique individual. This respect includes giving the best available medical care and making the advances of recent decades fully available." -Dame Cecily Saunders.
Primary and secondary malignant tumors of the brain, despite combined-modality treatment with surgery, radiotherapy and chemotherapy, are virtually incurable; palliation and the maintenance and improvement of the patient's quality of life are therefore of greater importance. From diagnosis to the end of life, the care needs of patients with brain tumors are high, underestimated and often neglected. These needs increase towards the end of life, with a high incidence of neurological symptoms and psychosocial problems [1].
Palliative care, as defined by the World Health Organization, is "an approach that improves the quality of life of patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial and spiritual". The goal of palliative care is to maximize quality of life, with inputs from a multidisciplinary team to help the patient live as actively as possible whilst neither hastening nor postponing death [2]. Early integration of palliative care into the treatment schedule improves quality of life and symptom management, and leads to a reduction in aggressive therapy at the end of life [3,4].
Patients diagnosed with brain tumors and their caregivers undergo enormous physical, emotional, social and financial hardships, during diagnosis, treatment and towards the end of life.
Quality of life issues
Quality of life is a concept that encompasses the multidimensional well-being of a person in terms of physical or functional status, as well as emotional and social wellbeing, and therefore reflects an individual's overall satisfaction with life [5]. Health-related quality of life is, by definition, a patient-reported outcome measure, reflecting the patient's perspective.
Studies on quality of life are primarily qualitative and have focused on specific symptoms such as fatigue, sleep disorders, cognitive dysfunction and some symptom clusters. Quality of life in brain tumor patients is complex and multidimensional in nature, with symptoms having interrelationships with each other as well as with patient, tumor, and treatment factors. The increased interest in exploring quality of life as a primary end point for cancer therapy has created a need for prospective, controlled studies to assess baseline and serial quality of life parameters apart from the classic outcome measures, such as progression-free survival and overall survival [5]. In fact, in the RTOG-0825 study [6], the baseline and early change in neurocognitive function were prognostic for overall survival and progression-free survival. However, assessing quality of life is challenging as validated instruments for measurements are scarce, serial measurements over time are often associated with a lack of compliance, and there is a lack of well-designed trials.
Quality of life measurement
Impairments are the direct consequences of disease, demonstrated by physical examination, and can be evaluated using neurological and neuropsychological examinations. Disability is the impact of this impairment on the patient's ability to carry out activities and can be determined using scales such as the Barthel index, and the Karnofsky Performance Status Scale. Handicap is the consequence of disability on patient well-being; the Modified Rankin Handicap Scale and the Spitzer scale are specific handicap scales for brain tumor patients [7].
Various health-related quality of life measures are available for use in clinical trials as well as in daily clinical work. The European Organization for Research and Treatment of Cancer quality of life questionnaire (EORTC QLQ-C30) consists of 30 items, which are organized into five functional scales (physical, role, emotional, cognitive and social functioning), three symptom scales (fatigue, nausea and vomiting, and pain), one global health status scale, one overall quality of life scale, and six single items (dyspnoea, insomnia, appetite loss, constipation, diarrhoea, and financial difficulties) [8]. This core questionnaire can be supplemented with a brain tumor-specific questionnaire, the EORTC QLQ-BN20, which includes 20 items, organized into four scales (future uncertainty, visual disorders, motor dysfunction, and communication deficit) and seven single items (headache, seizures, drowsiness, hair loss, itchy skin, weakness of legs, and bladder control) [9]. All single-item and multi-item scales of the EORTC questionnaires are linearly transformed to 0-100 scales. Changes in scores of ≥10 points on any given scale are interpreted as clinically meaningful; changes of >20 points represent a very large effect. The Functional Assessment of Cancer Therapy-General questionnaire (FACT-G) consists of 27 items covering four domains: physical, social/family, emotional, and functional wellbeing [10]. A brain cancer-specific subscale consists of 23 items measuring concerns relevant to patients with brain tumors [11]. The EORTC measures are more focused on functioning and symptoms, while the FACT measures cover more psychosocial aspects of the disease and its treatment. The more recent M.D. Anderson Symptom Inventory (MDASI) questionnaire was specifically designed to measure the severity of symptoms in cancer patients (13 items) as well as the interference of these symptoms with activities of daily living (six items) [12]. In addition to the core questionnaire, a brain tumor-specific module (MDASI-BT) has been developed, consisting of nine items (weakness, difficulty in understanding and speaking, seizures, difficulty concentrating, vision, change in appearance, change in bowel pattern, and irritability) [13].
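To illustrate the linear transformation mentioned above, the sketch below implements the standard EORTC scoring rule (raw score = mean of the item responses; functional scales are inverted so that higher values indicate better functioning). This is a minimal sketch based on the publicly documented EORTC scoring approach, assuming items scored on the usual 1-4 response range (1-7 for the global health items); it is not the official scoring software.

```python
# Minimal sketch of the EORTC 0-100 linear transformation, assuming items
# scored 1-4 (1-7 for the global health/QoL items). Not the official scorer.

def eortc_scale_score(items, item_range, functional=True):
    """items: responses for one scale; item_range: max - min of the
    response options (3 for 1-4 items, 6 for 1-7 items)."""
    raw = sum(items) / len(items)                # raw score = mean response
    if functional:
        # functional scales: higher transformed score = better functioning
        return (1.0 - (raw - 1.0) / item_range) * 100.0
    # symptom scales / global health: higher score = more symptoms / better QoL
    return ((raw - 1.0) / item_range) * 100.0

# Example: a physical-functioning scale with five items scored 1-4
print(eortc_scale_score([1, 2, 1, 1, 2], item_range=3))              # ~86.7
# Example: a fatigue symptom scale with three items scored 1-4
print(eortc_scale_score([2, 3, 2], item_range=3, functional=False))  # ~44.4
```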
Proxies or health care professionals can rate patient quality of life when patients are unable to self-report. Proxies and health care providers tend to report more health-related quality of life problems than patients themselves, and proxy ratings tend to agree more closely with patients in physical health-related quality of life domains than in psychological domains [7].
Specific symptoms
Brain tumor patients deal with a significant symptom burden. Studies that have evaluated specific symptoms affecting quality of life have been mostly descriptive, and most examine a heterogeneous group of patients receiving different therapies at different stages of their illness. Most of these studies have focused on fatigue, sleep, pain, seizures, mood disturbance and cognitive function [14][15][16]. While some of these symptoms are seen in patients with other malignancies, fatigue, neurological deficits, seizure, cognitive dysfunction and mood disturbances are specifically encountered in neuro-oncology.
Fatigue is a significant symptom in patients with newly diagnosed and recurrent high-grade gliomas and may be a greater problem than in patients with low-grade tumors [17][18][19]. While seizures are more commonly observed in low-grade glioma patients than in high-grade glioma patients and are associated with deterioration in multiple cognitive domains, the prognostic impact of seizures remains contentious [20][21][22].
Mood disturbances, especially anxiety and depression, are commonly noted in brain tumor patients [5]. Depression is one of the most important independent predictors of quality of life and has been shown to have an adverse impact on survival [23,24]. Cognitive functioning has also been extensively studied and reported in brain tumor patients [25][26][27]. Impaired cognition is seen in patients prior to therapy, after radiotherapy and chemotherapy, as well as in patients with tumor recurrence. In a study, cognitive deterioration was detected 6 weeks prior to radiographic failure [28].
Although symptoms such as anxiety, depression, pain, fatigue, and sleep disturbance have been studied separately, they are often interrelated and may have a common etiology. Research concerning such symptom clusters in primary brain tumor patients is, however, limited. Pharmacological, non-pharmacological and complementary medicine interventions are being studied to assess their ability to improve cognition and mood disturbances, and hence improve the quality of life [29][30][31][32].
Factors influencing quality of life
Quality of life scores depend on various patient, tumor, and treatment-related factors [33]. There may be differential importance of various factors influencing quality of life scores in high-grade gliomas and low-grade gliomas.
Patient factors
Performance status is related to quality of life in patients with newly diagnosed high-grade gliomas [34], with a worse Eastern Cooperative Oncology Group (ECOG) performance score associated with worse overall quality of life. In a study evaluating the factors influencing the activities of daily life, the functional independence and functional activity measurement systems were found to be significantly higher in patients with a Karnofsky performance score of 70 or more and a neurological performance scale of 0 or 1 [35]. Some studies of patients with brain tumors reported higher levels of mood disturbance and lower quality of life scores in women than in men [36].
Tumor factors
Several tumor-related factors, such as tumor grade, location and laterality, have been shown to have an impact on health-related quality of life. Patients with high-grade gliomas experience worse quality of life than patients with low-grade gliomas [7,37]. Depression may also be more frequent in patients with left-hemisphere high-grade gliomas, while patients with right-hemisphere primary brain tumors may have higher anxiety. Low-grade glioma patients with lesions in the frontal or temporo-parietal lobes are known to have mood disturbances. Also, greater cognitive disability has been noted in those with tumors in the dominant hemisphere [18,38].
Treatment factors
Treatment modalities like surgery, radiation, chemotherapy and concomitant medications can affect the overall quality of life of brain tumor patients. In high-grade glioma patients, those only undergoing biopsy have worse quality of life than those who have undergone gross total resection [39]. Both radiation-induced fatigue and cognitive dysfunction are known to adversely affect quality of life. Also, cognitive functioning is significantly more impaired in patients receiving whole brain irradiation compared to partial brain irradiation [40]. Patients receiving temozolomide develop more symptoms such as vomiting, anorexia, constipation, and decreased social functioning; the quality of life in newly diagnosed glioblastoma patients receiving either radiotherapy alone or radiotherapy with concomitant and adjuvant temozolomide was substantially impaired compared to historical controls, but no significant decrease in overall quality of life was noted throughout treatment [15].
Bevacizumab, an inhibitor of vascular endothelial growth factor (VEGF), has been studied extensively in recurrent and newly diagnosed glioblastoma patients. In the AVAglio study [41], in addition to significantly prolonging progression-free survival and decreasing corticosteroid dependence, bevacizumab was found to improve or stabilize health-related quality of life compared with placebo, while in the RTOG-0825 study [6], the visuomotor measure of executive function was worse in the bevacizumab arm than in the placebo arm. The contrasting quality-of-life findings between these two large randomized trials could be due to the different parameters evaluated, the statistics used in the analyses, and the level of participation in the testing.
Antiepileptic drugs and steroid medications are commonly prescribed to brain tumor patients and can adversely affect physical, emotional, and cognitive functioning. Antiepileptics have been associated with cognitive dysfunction and steroids have been linked to depression in high-grade glioma patients [20,42].
Quality of death and end-of-life issues
When the patient's condition declines due to tumor progression, and further tumor-directed treatment is not an option, the end-of-life phase begins. During this phase, symptom burden becomes high and patients are often troubled by seizures and deficits in cognition, communication, and motor function. Furthermore, loss of consciousness, cognitive disturbances, communication deficits, and confusion often hamper the patient's competence to participate in end-of-life decision making [43,44].
The goal of end-of-life care is to achieve a death with dignity. The intrinsic dignity of every human being is based on their sense of worth, personal goals and social circumstances; personal dignity can be influenced by various factors such as symptom distress, acceptance of disease, level of independence, spiritual well-being, preservation of social role and social support [45]. A recent study found that "death with dignity" was associated with better communication, especially regarding prognosis, and with patients being prepared for death. It was also associated with relatives' greater satisfaction with the physician providing end-of-life care [43].
In order to allow the patient to experience a peaceful death, specific palliative interventions are required for the control of pain, confusion, agitation, delirium or seizures. The lack of control of symptoms can often lead to re-hospitalization, with a resultant increase in costs and a worsening of the patient's quality of life. In a study of over 5000 patients with glioblastoma, more than one fifth were found to be hospitalized for at least 25% of their remaining lives [46]. Although hospitalizations are primarily aimed at identifying correctable causes and reducing symptoms, they may also be emotionally and financially counter-productive; this is true especially in advanced cancer, where hospitalization has been shown to be associated with a reduced quality of life [47]. Many patients prefer to die at home and, in a study on high-grade glioma patients, less than 5% of patients were actually in favour of hospitalization near the end of life [48]. With effective home-based palliative care, the rate and cost of hospitalization can be significantly lowered [49].
Management of symptoms
The main goals of palliative care and end-of-life care in patients with brain tumors are to offer adequate symptom control, relief of suffering, to avoid inappropriate prolongation of dying and to support the psychological and spiritual needs of patients and families [50,51].
Epilepsy appears to be one of the more frequent symptoms in the last stage of disease and represents a major issue in the management of dying patients, particularly those assisted at home. Loss of seizure control in the end-of-life phase may influence the quality of life of patients and their caregivers. In a recent study on high-grade gliomas, more than 35% of patients developed seizures in the last month before death, with a higher risk in patients with a previous history of epilepsy [52]. Most patients encounter difficulties taking anticonvulsants orally due to dysphagia and disorientation; anticonvulsant therapy therefore needs to be optimized, and alternative routes of drug administration (such as intramuscular, rectal, transdermal, or subcutaneous) can be considered.
Agitation and restlessness together with physical pain are common features and require appropriate treatment. In the large majority of patients, headache is due to increased intracranial pressure, and usually responds to steroid treatment. In patients with meningeal syndrome due to meningeal involvement, headaches may be severe; steroids, pain medication with non-opioids or opioids might be indicated [53].
In the last weeks of life, most patients experience a progressive loss of consciousness, lethargy, and confusion, and the majority of patients enter into deep coma in the last days. Agitation, delirium and confusion without a complete loss of consciousness may be very distressing for patients and their families, especially in a home care setting. Palliative sedation is the intentional lowering of the level of consciousness of a patient in the last phase of life by administration of sedatives. The objective is to relieve severe physical or psychological suffering that is otherwise untreatable. A subcutaneous infusion of midazolam is used for continuous sedation, if feasible; otherwise intermittent administration of midazolam, diazepam, lorazepam or chlorpromazine may be considered [54].
In the unconscious dying patient, difficulty in clearing upper airways leads to an accumulation of respiratory tract secretions. This "death rattle" may be very distressing for the family and caregivers but is unlikely to be distressing for the patient, owing to the decreased level of consciousness. Gentle nasal suction, postural drainage and administration of anti-cholinergic drugs can help in reducing these symptoms.
End-of-life treatment decisions
End-of-life treatment decisions in neuro-oncology include withdrawing or withholding of medications, nutritional support and palliative sedation. While withholding medication is a planned decision not to undertake symptomatic therapy that is otherwise warranted, withdrawal is the discontinuation of symptomatic treatments, and terminal sedation is defined as the pharmacologically induced reduction of vigilance up to the point of the complete loss of consciousness, with the aim of reducing or abolishing the perception of symptoms that would otherwise be intolerable [53,55]. The process of end-of-life decision making is complicated by the presence of cognitive deterioration that may affect patients' competence to express treatment preferences. Therefore, it is of paramount importance to plan and discuss end-of-life decisions in advance with the patient and the family.
Diversity amongst countries, professionals and cultures
In many parts of the world, hospice and palliative care is still non-existent or in its infancy. Approximately one million individuals die each week around the globe and, even in developed countries, medical services have often focused on preventing death rather than helping people suffering from pain, discomfort and stress [56].
The International Observatory on End of Life Care (IOELC) reviewed and compared hospice-palliative care activity in various countries. The countries were placed in six groups based on their activity: no known hospice-palliative care activity; capacity-building activity; isolated hospice-palliative care provision; generalized hospice-palliative care provision; countries where hospice-palliative care services are at a stage of preliminary integration into mainstream service provision; and countries at a stage of advanced integration into mainstream service provision. In 2006, 115 of the world's 234 countries (49%) had established one or more hospice-palliative care services. By 2011 this had increased by nine percentage points: 136 of the world's 234 countries (58%) had one or more hospice-palliative care services established. In 2006, 156 countries (67%) were actively engaged in delivering a hospice-palliative care service or developing the framework; by 2011 there had been a slight increase in this number, to 159 countries. Also, in most regions of the world, a strong association exists between palliative care and human development, with the more developed countries in the process of integrating palliative care services [57,58].
This heterogeneity across countries is highlighted in a study by the Economist Intelligence Unit's research team, which devised the Quality of Death Index by collating data across 40 countries and interviewing various health professionals. The index applied differential weightings to the basic healthcare environment and to the availability, quality and cost of end-of-life care in these countries. The bottom-ranked countries in the Quality of Death Index included developing countries, such as China, Mexico, Brazil, India and Uganda, and, surprisingly, some developed countries like Denmark, Japan, Italy, Finland and South Korea. The report also noted that public awareness regarding end-of-life care was lacking in developed and developing countries alike [59].
Strong taboos against talking about death exist in various cultures and communities. Even in India, where death is discussed more openly as the inevitable consequence of life, the protective attitude of relatives still presents a big barrier to open communication with the patient. In the case of children, these taboos around death and dying are stronger. Death of children is more accepted in developing countries with high infant mortality rates, but in the developed world, the "cure at all cost" attitude of health care professionals and parents heavily influences end-of-life care. This ideological difference owing to specific cultures was evident when we conducted a survey amongst various neuro-oncology professionals in Asian countries; we noticed that most health care professionals refer patients only when they develop symptoms that require palliation, and referral of patients to hospice care at the end of life was rarely done [60].
A government-led national palliative care strategy exists in very few developed countries. While government policy statements do not necessarily guarantee quality and availability of end-of-life care, they can be valuable if backed up by the development of strategic services. According to the World Health Organisation, about five billion people have insufficient or no access to medications to control severe or moderate pain. While legally any physician can prescribe opioids for pain control, their availability is a major concern in many countries, essentially because of the complex narcotics laws restricting the sale of morphine, as governments are concerned about illicit drug use [59].
Caregivers' perspective
The diagnosis of a brain tumor has a catastrophic effect not only on the patient but also on the family members. Family caregivers provide extraordinary uncompensated care involving significant amounts of time and energy for months or years, requiring the performance of tasks that are often physically, emotionally, socially, or financially demanding. They are constantly challenged to solve problems and make decisions as care needs change; because the focus is on the patient, their own needs are often neglected [61].
For parents, the grieving process starts right at the diagnosis of the brain tumor, and the decision to move toward palliative care is a difficult one, filled with many highly charged emotions including anger, and a search for answers, the intensity of which differs between family members. The neurologic deterioration that characterizes the dying trajectory of children warrants increased awareness of the distinct issues in the palliative care of children with brain tumors and early anticipatory guidance for families. In one study [62], parents described the loss of the ability to communicate as a turning point that led to acceptance. Parental coping mechanisms included striving to maintain normality, and finding spiritual strength through hope and the resilience of their child. Parents are also required to handle routine tasks while learning new skills involving hands-on patient care, and their stress is compounded by financial hardships and inadequate community support. Parents of dying children have an overwhelming feeling of loss during the end-of-life phase, and part of the purpose of palliative care is to help them come to terms with the loss of their child. This process is a gradual relinquishing of the instinct to preserve the child's life regardless of their condition, and an acceptance of the inevitable loss. A perceived loss of control by the parents makes this process a major challenge. However, parents who make this transition are more receptive to their child's needs [63].
Conclusion
In summary, with growing awareness amongst clinicians regarding quality of life, even in patients with a disease as difficult as a brain tumor, the focus is shifting towards meaningful prolongation of life. This in turn necessitates the active participation of the patient, family and caregivers, with the health care professional involved in every step of disease management. "Quality of death" is a concept that can be viewed as a natural extension of quality of life. With the amalgamation of these concepts into routine clinical practice, the basic "right to a decent life" is fortified.
"year": 2014,
"sha1": "62acd0fc24a660abda01fb54113755c6c77c9cb6",
"oa_license": "CCBYNC",
"oa_url": "https://f1000.com/prime/reports/m/6/71/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "62acd0fc24a660abda01fb54113755c6c77c9cb6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Observations of the Near- to Mid-Infrared Unidentified Emission Bands in the Interstellar Medium of the Large Magellanic Cloud
We present the results of near- to mid-infrared slit spectroscopic observations (2.55-13.4 µm) of the diffuse emission toward nine positions in the Large Magellanic Cloud with the Infrared Camera (IRC) on board AKARI. The target positions are selected to cover a wide range of the intensity of the incident radiation field. The unidentified infrared bands at 3.3, 6.2, 7.7, 8.6 and 11.3 µm are detected toward all the targets, and ionized gas signatures, hydrogen recombination lines and ionic forbidden lines, toward three of them. We classify the targets into two groups: those without the ionized gas signatures (Group A) and those with the ionized gas signatures (Group B). Group A includes molecular clouds and photo-dissociation regions, whereas Group B consists of H II regions. In Group A, the band ratios of I(3.3)/I(11.3), I(6.2)/I(11.3), I(7.7)/I(11.3) and I(8.6)/I(11.3) show positive correlations with the IRAS and AKARI colors, but those of Group B do not follow the correlations. We discuss the results in terms of the polycyclic aromatic hydrocarbon (PAH) model and attribute the difference to the destruction of small PAHs and an increase in the recombination due to the high electron density in Group B. In the present study, the 3.3 µm band provides crucial information on the size distribution and/or the excitation conditions of PAHs and plays a key role in the distinction of Group A from B. The results suggest the possibility of the diagram of I(3.3)/I(11.3) vs. I(7.7)/I(11.3) as an efficient diagnostic tool to infer the physical conditions of the interstellar medium.
Introduction
Since the discovery of the 11.3 µm band in planetary nebulae in 1973 (Gillett et al. 1973), space infrared missions, and air-borne and ground-based infrared observations have shown that the major unidentified infrared (UIR) bands appear at 3.3, 6.2, 7.7, 8.6, 11.3, 12.6 and 16.4 µm together with some faint features. The UIR bands have been observed in various astrophysical environments, including photo-dissociation regions (PDRs), reflection nebulae, planetary nebulae (e.g., Peeters et al. 2002), the diffuse interstellar medium (ISM) (e.g., Onaka et al. 1996; Mattila et al. 1996), nearby galaxies of various types (e.g., Dale et al. 2006; Kaneda et al. 2008; Smith et al. 2007), and distant galaxies (e.g., Lutz et al. 2005; Sajina et al. 2007). The carriers of the UIR bands are generally thought to be polycyclic aromatic hydrocarbons (PAHs) or PAH-containing carbonaceous compounds (e.g., Sakata et al. 1984; Puget & Leger 1989; Papoular et al. 1989; Allamandola et al. 1989). PAHs are excited by absorbing a single UV photon and emit a number of IR photons corresponding to vibrational modes of C-C and C-H bonds. The 3.3 µm band is assigned to C-H stretching modes, the 6.2 µm band to C-C stretching modes, the 7.7 µm band to blending of several C-C stretching modes and C-H in-plane bending modes, the 8.6 µm band to C-H in-plane bending modes, the 11.3 µm band to solo C-H out-of-plane bending modes, and the 12.6 µm band to trio C-H out-of-plane bending modes, respectively (Allamandola et al. 1989).
Recent laboratory experiments and quantum chemical calculations suggest that the properties of the UIR bands (e.g., shapes, center wavelengths, interband ratios, etc.) reflect the chemical and physical properties of PAHs (e.g., molecular structure, size distribution, ionization state, temperature, etc.), which may be altered in interstellar and circumstellar environments (Tielens 2008). Therefore the UIR bands have a great potential to be used as efficient diagnostic tools to infer the physical condition of the ISM even in remote galaxies.
Observations of the diffuse Galactic radiation and normal galaxies have shown very little variation in the mid-infrared (MIR) UIR band spectra (6-12 µm) until recently (Chan et al. 2001; Lu et al. 2003; Sakon et al. 2004), whereas small variations in the MIR UIR bands have been reported between the disk and halo regions or the arm and interarm regions of galaxies (Irwin & Madden 2006; Sakon et al. 2007). The latest Spitzer and AKARI observations clearly show distinct variations in the MIR UIR spectra in particular galaxies (Kaneda et al. 2007; Smith et al. 2007; Galliano et al. 2008) for the first time. However, diagnostics of the physical conditions of the ISM, as well as of the chemical and physical evolution of PAHs in galaxies, by means of the UIR bands are not yet fully explored.
In this paper, we present the results of near-infrared (NIR) to MIR spectroscopic observations of the ISM with different radiation conditions in the Large Magellanic Cloud (LMC) with the Infrared Camera (IRC) onboard AKARI (Murakami et al. 2007; Onaka et al. 2007b). The LMC is a nearby irregular galaxy located at a distance of 50 kpc from the Milky Way (Feast 1999; Keller & Wood 2006). In addition to its proximity, the almost face-on orientation (i ∼ 35°; van der Marel & Cioni 2001; Olsen & Salyk 2002; Nikolaev et al. 2004) provides us with a unique opportunity to investigate regions with different physical conditions without confusion, given the spatial resolution of ∼5′′ in the MIR of the AKARI/IRC. IRC spectroscopy has the unique characteristic that it can obtain a spectrum from 2.5 to 13 µm simultaneously with the same slit (Ohyama et al. 2007). This is an advantage for the spectroscopy of extended objects over other space instruments, since the Short Wavelength Spectrometer (SWS) onboard the Infrared Space Observatory (ISO) had different diaphragms from the NIR to the MIR (de Graauw et al. 1996) and the Infrared Spectrograph (IRS) on Spitzer lacks a channel in the NIR (Houck et al. 2004). Vermeij et al. (2002) report the MIR UIR band ratios of H II regions of the LMC based on observations with the ISOPHOT/PHT-S instrument on board ISO, but do not include the 3.3 µm band in their analysis because of the low signal-to-noise ratio (S/N) in the short wavelength channel. The 3.3 µm band, in fact, is most sensitive to the smallest PAHs (e.g., Schutte et al. 1993), and its intensity relative to the MIR UIR bands provides us with significant information on the average temperature of PAHs, which depends on the size distribution and the excitation conditions of PAHs. In this paper, we investigate variations in the relative intensities of the UIR bands in the NIR to MIR of the diffuse radiation from regions with different radiation field conditions in the LMC and discuss them in relation to the physical properties of PAHs.
In §2, the observation and the data reduction are described together with the selection of the target positions. The obtained spectra are presented in §3. In §4, the observed variations in the UIR band ratios in different radiation field conditions are investigated in terms of the PAH model. Diagnostic of the physical conditions of the observed regions is also discussed based on the UIR band ratios. A summary and conclusions are given in §5.
Observations
The present study employs the datasets of eight pointed observations (observation IDs: 1400330, 1400346, 1400318, 1402426, 1400324, 1402422, 1400334 and 1400320) collected as part of the AKARI mission program "ISM in our Galaxy and Nearby Galaxies" (ISMGN; Kaneda et al. 2009). All of the observations were performed with the slit spectroscopic mode with the choice of the grism for the NIR disperser (AOT04 b;Ns; Ohyama et al. 2007).
The accurate slit position is determined from the 3.2 µm image (N3), which is taken during each pointed observation (see §2.3.1), by referring to the positions of point sources in the 2MASS catalog.
Target Selection
The targets are selected by taking account of the CO mapping data (Mizuno et al. 2001) and the IRAS colors of I(25 µm)/I(12 µm) and I(60 µm)/I(100 µm), which indicate local star-formation activities (Boulanger et al. 1988; Onaka et al. 2007a). Sakon et al. (2006) have shown that extremely large IRAS colors of I(25 µm)/I(12 µm) and I(60 µm)/I(100 µm) (∼3 and ∼0.6, respectively) in the diffuse emission in the LMC can be accounted for by a large contribution from nearby young (< 30 Myr) clusters to the incident interstellar radiation field, and that small IRAS colors of I(25 µm)/I(12 µm) and I(60 µm)/I(100 µm) (∼1 and ∼0.3, respectively) are those expected from the heating by the incident radiation field inside quiescent molecular clouds (e.g., Miville-Deschênes et al. 2002). According to these results, we select several infrared-bright positions with different IRAS colors as the targets of the present study, where molecular clouds are recognized on the CO maps (Mizuno et al. 2001; Fukui et al. 2008). Since the beam size of the IRAS data is larger than the size of the slit of the AKARI/IRC, we also derive the AKARI color of I(L24)/I(S11) from the dataset of Ita et al. (2008), where I(L24) is the flux density in the L24 (24 µm) band and I(S11) is that in the S11 (11 µm) band of the AKARI/IRC, to obtain the local radiation field conditions. The flux density is measured over an aperture of 5′′ in diameter around the central position of the slit. The IRAS and AKARI colors of the present targets are summarized in Table 2. The trend of the AKARI color I(L24)/I(S11) is very similar to that of the IRAS I(25 µm)/I(12 µm) color, whereby we confirm that the selection based on the IRAS colors is in fact relevant to the purpose of the present study. Hence, the target positions cover a wide range of incident radiation field conditions, including molecular clouds, PDRs and H II regions.
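As an illustration of the color measurement described above, the sketch below sums the flux within a 5′′-diameter aperture on a map using a plain circular pixel mask. The pixel scale, array names and mock data are assumptions for illustration, not details of the actual AKARI pipeline or survey products.

```python
# Minimal sketch: flux within a 5"-diameter aperture around the slit center,
# using a plain circular pixel mask. Pixel scale and maps are assumptions.
import numpy as np

def aperture_flux(image, x0, y0, radius_arcsec, pix_scale_arcsec):
    """Sum the pixel values within a circular aperture (no background model)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0) * pix_scale_arcsec  # radius in arcsec
    return image[r <= radius_arcsec].sum()

rng = np.random.default_rng(0)
s11_map = rng.normal(1.0, 0.1, size=(64, 64))        # mock S11 surface brightness
l24_map = 1.5 * s11_map                              # mock L24 map
f_s11 = aperture_flux(s11_map, 32.0, 32.0, radius_arcsec=2.5, pix_scale_arcsec=1.5)
f_l24 = aperture_flux(l24_map, 32.0, 32.0, radius_arcsec=2.5, pix_scale_arcsec=1.5)
print("I(L24)/I(S11) =", round(f_l24 / f_s11, 2))    # 1.5 for this mock
```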
In Figure 1, the slit positions of those datasets are shown over the false-color image of the LMC obtained by the AKARI IRC LMC survey program (Ita et al. 2008). The S11 band images of a 10′ × 10′ area including each slit position are also shown in Figure 2. We split the spectrum of Position 8 into two parts: one including the point-like source (Position 8-1) and the other without the point-like source (Position 8-2). For the other positions, the spectrum is extracted over almost the entire slit length of 40-50′′.
The AKARI color of I(L24)/I(S11) and the IRAS colors of I(25 µm)/I(12 µm) and I(60 µm)/I(100 µm) at Positions 1, 2, 3, 4, 5 and 6 exhibit only limited ranges of 0.5-1.8, 1.0-1.9, and 0.3-0.5, respectively. These targets are not associated with SWB 0 or I type star clusters in Bica et al. (1996), indicating that they are surrounded by relatively quiescent environments (Sakon et al. 2006). The AKARI and IRAS colors at Positions 7, 8-1 and 8-2 exhibit large values. These targets are located in regions associated with OB star clusters as well as Herbig Ae/Be star clusters (N158-O1 and N158-Y1 at Position 7 and N159-Y4 at Position 8; Nakajima et al. 2005), confirming that the infrared colors manifest recent star-formation activities in the ISM.
Data Reduction
The present data reduction basically follows the standard toolkit for the IRC spectroscopy. However, most of the present targets are faint and require careful data processing. Thus, some part of the process is carried out separately from the toolkit with special care. Details of the data reduction process are described in the following.
Slit spectroscopy with AKARI IRC/NIR
During a single pointed observation with the AOT04 grism mode, eight to nine exposure frames of NIR grism spectroscopic (NG) data and one exposure frame of 3.2 µm imaging (N3) data are taken with the IRC/NIR (Ohyama et al. 2007). The dark current is measured in one frame each in the first and last parts of the pointed observation. A single exposure frame consists of one short-exposure and one long-exposure image. In the present study, only the long-exposure data are used. The dark image for each NIR observation is obtained by averaging three long-exposure images of the dark current, collected from the adjacent pointed observations including the observation itself, with a 1.5-σ-clipping method to correct for the effects of high-energy ionizing particles (hereafter cosmic rays). In the present analysis, only the dark current data measured in the first part of each pointed observation are used, to avoid latent-image effects, and are subtracted from the observation images.
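The 1.5-σ clipped averaging used here can be sketched as an iterative rejection of outlier pixels along the stack axis. The implementation below is our own illustrative version, not the IRC toolkit code; centering the rejection on the per-pixel median is our choice for robustness with small stacks.

```python
# Minimal sketch of 1.5-sigma clipped averaging of a small stack of frames
# (e.g., three dark-current exposures), rejecting cosmic-ray hits.
import numpy as np

def sigma_clipped_mean(stack, sigma=1.5, n_iter=3):
    """stack: array (n_frames, ny, nx); returns the clipped mean image."""
    data = np.ma.masked_invalid(np.asarray(stack, dtype=float))
    for _ in range(n_iter):
        center = np.ma.median(data, axis=0)
        std = data.std(axis=0)
        # reject pixels deviating from the per-pixel median by > sigma * std
        data = np.ma.masked_where(np.abs(data - center) > sigma * std, data)
    return data.mean(axis=0).filled(np.nan)

darks = np.random.default_rng(1).normal(10.0, 1.0, size=(3, 32, 32))
darks[0, 5, 5] = 500.0                   # simulated cosmic-ray hit
dark = sigma_clipped_mean(darks)
print(round(float(dark[5, 5]), 1))       # close to 10; the hit is rejected
```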
We recognize small shifts in position due to pointing instability of at most ∼5′′ during each pointed observation, except for Position 5, where an extraordinarily large shift of ∼15′′ is found. The shift in the direction parallel to the slit is corrected so that the spectra of the same area of the sky are extracted. Because the shift in the orthogonal direction is uncorrectable, the exposure frames shifted in the direction perpendicular to the slit by more than a pixel (∼1.5′′), as well as those affected by severe artifacts, are discarded, except for Position 5 (see §2.3.3). The remaining images are averaged, taking account of the shifts in position in the direction parallel to the slit, with a 1.5-σ-clipping method to remove cosmic-ray events and artifacts.
The most critical part of the data reduction of the NG spectra is the removal of artificial patterns as well as of the foreground components originating from the zodiacal light and diffuse Galactic emission. In NIR observations, pixels saturated by cosmic-ray hits or extremely luminous objects often produce artificial line-like patterns, sometimes termed "column pull-down" and "multiplexer bleed" (Pipher et al. 2004). In particular, an artificial line-like pattern in the direction parallel to the slit mimics emission or absorption features in the slit spectrum. The position of the artificial line-like structure sometimes differs among different exposures even in the same pointed observation. When an artificial line structure emerges in a certain exposure image, it is removed by replacing the data of the affected pixels with those of the unaffected pixels of other exposure images at the same position.
To estimate the foreground components from the zodiacal and diffuse Galactic emission, we employ the datasets obtained at positions off the LMC with AOT04 a;Ns (observation IDs: 1500719 and 1500720). In these observations the NIR spectra are taken with the prism mode (NP) instead of the grism mode (NG). Because the emission at the off-LMC positions is too faint for observations with the NG mode, we use the data with the NP mode to obtain a reliable foreground spectrum. The slit positions of both observations are centered at (α2000, δ2000) = (6h00m00.0s, −66°36′30′′), which is almost at the same ecliptic latitude (β ∼ −90°) as that of the LMC (∼ −85°), but ∼9.7° away from the center of the LMC. The observation log of the off-LMC position is also given in Table 1.
Slit spectroscopy with AKARI IRC/MIR-S
The data reduction procedures including the dark-current subtraction and the cosmic-ray correction for MIR-S spectroscopic observations are basically the same as those for NIR spectroscopic observations. During a single pointed observation with AOT04, four exposure frames of SG1 data, four to five exposure frames of SG2 data, and one exposure frame of 9.0 µm imaging (S9W) data are taken with the MIR-S channel. The dark current is measured in one frame each in the first and the last parts of the pointed observation. A single exposure frame consists of one short-exposure image and three long-exposure images.
The dark image for each MIR-S observation is obtained by averaging three long-exposure images of the dark current by a 1.5-σ-clipping method to correct for the cosmic-ray effects.
In this process, only the dark current data measured in the first part of each pointed observation sequence are used to avoid the latent image effects. The same shifts in position as those recognized in the NIR data are expected in the MIR-S observations because the same field-of-view is shared by the NIR and MIR-S channels and they are corrected in the same manner as in the NIR data. The shift in position among three long-exposure images taken in a single frame is negligible in most cases and, therefore, they are averaged by a 1.5-σ-clipping method to remove the cosmic-ray effects.
The subtraction of the foreground components (zodiacal light and diffuse Galactic emission) is a more serious problem in the data reduction of MIR spectroscopy than in NIR.
In addition, the MIR detector suffers from scattered light originating within the detector array (e.g., Sakon et al. 2007). To estimate the foreground emission and the scattered light component, the SG1 and SG2 spectra collected at the position off the LMC (observation IDs: 1500719 and 1500720) are used. The spectrum at the off-position is obtained by averaging the two observations and is subtracted from the spectra of the target positions.
The MIR-S spectra are basically dominated by the zodiacal light and thus the scattered light component of the off-position spectrum is almost the same as that in the spectra of the target. Therefore, the subtraction of the off-position spectrum works effectively not just to remove the foreground emission but also to correct for possible artifacts and greatly improves the resultant spectra.
Continuous spectra from NIR to MIR
Using the spectral response function of each module, we obtain NG, SG1 and SG2 segmental spectra for the same area of the sky, except for Positions 5 and 8-1 (see below).
Each segmental spectrum is truncated at the wavelengths where the S/N becomes low: NG is truncated at 2.55 µm and 4.9 µm, SG1 at 5.5 µm and 7.9 µm, and SG2 at 7.9 µm and 13.4 µm. Then the correction for the slit efficiency for extended sources (Sakon et al. 2008) is applied, and continuous spectra from 2.55 to 13.4 µm are obtained with a small gap between 4.9 and 5.5 µm. Because of severe artifacts (column pull-down), the NG spectrum is instead truncated at 3.8 µm and 4.5 µm for Positions 2 and 5, respectively. Note that, although there is a small gap between the NG and SG1 segments, all the NG, SG1, and SG2 segmental spectra connect smoothly to each other without scaling, suggesting that the subtraction procedure for the foreground emission and scattered light works well and that reliable spectra are obtained.
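The truncation-and-concatenation step can be sketched as follows (Python); the wavelength limits come from the text, while the data structures and names are ours.

```python
import numpy as np

# Low-S/N truncation limits (microns) for the three segments, from the text.
CUTS = {"NG": (2.55, 4.9), "SG1": (5.5, 7.9), "SG2": (7.9, 13.4)}

def stitch_segments(segments):
    """segments: dict mapping 'NG'/'SG1'/'SG2' to (wavelength, intensity)
    arrays; returns one concatenated spectrum sorted in wavelength."""
    wl_parts, sp_parts = [], []
    for name, (wl, sp) in segments.items():
        lo, hi = CUTS[name]
        keep = (wl >= lo) & (wl <= hi)
        wl_parts.append(wl[keep])
        sp_parts.append(sp[keep])
    wl_all = np.concatenate(wl_parts)
    sp_all = np.concatenate(sp_parts)
    order = np.argsort(wl_all)
    return wl_all[order], sp_all[order]
```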
For Positions 5 and 8-1, we cannot obtain segmental spectra precisely at the same region of the sky between SG1 and SG2 because the SG1 and SG2 observations are not carried out simultaneously during a pointed observation. A relatively large positional shift (∼ 15 ′′ ) recognized during the pointed observation for Position 5 prevents us from obtaining the SG1 and SG2 spectra from the same region of the sky. For Position 8-1, the positional stability during the pointed observation is almost the same as the other observations.
However, a small shift in the position of the point-like source within the 5″-wide slit between the SG1 and SG2 observations changes the source flux to some extent and creates a small difference in the flux level between the SG1 and SG2 spectra. Since the positional stability was better when the SG2 spectrum was taken than when the SG1 spectrum was taken for the observations of both positions, only the NG data taken simultaneously with the SG2 are used for these two positions. The SG1 spectrum is then scaled to match the SG2 spectrum in the spectral region of 7.3 to 7.9 µm. The scaling factors are 0.9 and 0.8 for Position 5 and Position 8-1, respectively. The scaling of the SG1 spectrum is taken into account in the derivation of the band intensity ratios (§3) and does not seriously affect the following discussion.
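One simple way to estimate such a scaling factor is the ratio of the mean intensities in the 7.3-7.9 µm overlap band; the paper does not state its exact estimator, so the sketch below (Python) is an assumption.

```python
import numpy as np

def sg1_scale_factor(wl_sg1, sp_sg1, wl_sg2, sp_sg2, band=(7.3, 7.9)):
    """Scale factor that matches SG1 to SG2 over the overlap band
    (values of 0.9 and 0.8 were adopted for Positions 5 and 8-1)."""
    m1 = (wl_sg1 >= band[0]) & (wl_sg1 <= band[1])
    m2 = (wl_sg2 >= band[0]) & (wl_sg2 <= band[1])
    return np.mean(sp_sg2[m2]) / np.mean(sp_sg1[m1])
```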
Results
The resultant spectra toward our target positions are shown in Figure 3. The UIR bands are clearly seen at 3.3, 6.2, 7.7, 8.6 and 11.3 µm in every spectrum. A weak feature around 5.70 µm (Boersma et al. 2009) is also seen in three of the spectra, including those of Positions 7 and 8-1. We note that the small bump seen at around 9.6 µm is an artifact originating from the latent image of the S9W exposure frame taken just before the SG2 exposure frames. Therefore, the spectral data from 9.4 to 9.8 µm are not used in the following model fit, and this range is excised from the spectra shown in Figure 3.
To derive the intensity of each UIR band and emission line, we fit the observed spectra with the model function

F(λ) = Σ_{m=0}^{5} a_m λ^m + Σ_k b_k^l (γ_k^l/2)^2 / [(λ − λ_k^l)^2 + (γ_k^l/2)^2] + Σ_k c_k^g exp[−4 ln2 (λ − λ_k^g)^2/(γ_k^g)^2],  (1)

where λ is the wavelength. The first term represents the continuum, which is modeled with a polynomial function of the 5th order and is constrained to be non-negative. The second and third terms correspond to the UIR bands and the emission lines, respectively. For the UIR bands, we include 17 components centered at 3.30, 3.41, 3.46, 3.51, 3.56, 5.70, 6.22, 6.69, 7.60, 7.85, 8.33, 8.61, 10.68, 11.23, 11.33, 11.99 and 12.62 µm, according to Smith et al. (2007). Except for the 3.46, 3.51 and 3.56 µm components, the UIR band components have band widths larger than or similar to the spectral resolution of AKARI/IRC and are thus modeled with Lorentzian profiles (the second term), where λ_k^l is the center wavelength, γ_k^l is the FWHM and b_k^l is the height of each component. For the 3.30, 3.41, 5.70, 6.22, 6.69, 7.60, 7.85, 8.33, 8.61, 10.68, 11.23, 11.33, 11.99 and 12.62 µm components, γ_k^l is fixed to the best-fit value obtained from the spectrum with the highest S/N ratio, with the spectral resolution of AKARI/IRC taken as the minimum value. The adopted value of γ_k^l for each component is summarized in Table 3. Only the height b_k^l is left as a free parameter.
The integrated intensity of each Lorentzian component is calculated as πb_k^l γ_k^l/2. In the following analysis, the 7.7 µm band is defined as the combination of the 7.60 and 7.85 µm components, and the 11.3 µm band as the combination of the 11.23 and 11.33 µm components. We note that the 12.6 µm band, which is defined as a single component, is not detected at a significant level in all of the spectra owing to the poor S/N and the low spectral resolution.
The 3.46, 3.51 and 3.56 µm components, as well as the emission lines, which have band widths smaller than the spectral resolution of AKARI/IRC, are modeled with Gaussian profiles (the third term), where λ_k^g is the center wavelength, γ_k^g is the FWHM and c_k^g is the height of each component. In the fit, γ_k^g is fixed to match the spectral resolution of AKARI/IRC at the corresponding segment. Only the height c_k^g is a free parameter in the fitting. The integrated intensity of each Gaussian component is given by (π/ln2)^{1/2} c_k^g γ_k^g/2. The adopted values of γ_k^g for the 3.46, 3.51 and 3.56 µm components and for the emission lines are summarized in Tables 4 and 5, respectively.
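Because the centers and widths in Eq. (1) are held fixed and only the heights and continuum coefficients are free, the fit is linear in its parameters and can be solved by ordinary least squares. The Python sketch below illustrates this; it omits the non-negativity constraint on the continuum imposed in the actual fit, and all names are ours.

```python
import numpy as np

def lorentzian(wl, lam0, fwhm):
    g = 0.5 * fwhm
    return g**2 / ((wl - lam0)**2 + g**2)          # unit peak height

def gaussian(wl, lam0, fwhm):
    return np.exp(-4.0 * np.log(2.0) * (wl - lam0)**2 / fwhm**2)

def fit_bands(wl, spec, lor_pars, gau_pars, poly_order=5):
    """lor_pars/gau_pars: lists of (center_um, fwhm_um) with fixed values.
    Returns heights plus integrated intensities: pi*b*gamma/2 for the
    Lorentzians and sqrt(pi/ln2)*c*gamma/2 for the Gaussians."""
    cols = [wl**m for m in range(poly_order + 1)]
    cols += [lorentzian(wl, l0, g) for l0, g in lor_pars]
    cols += [gaussian(wl, l0, g) for l0, g in gau_pars]
    design = np.vstack(cols).T
    coef, *_ = np.linalg.lstsq(design, spec, rcond=None)
    n_l = len(lor_pars)
    b = coef[poly_order + 1: poly_order + 1 + n_l]
    c = coef[poly_order + 1 + n_l:]
    i_lor = np.pi * b * np.array([g for _, g in lor_pars]) / 2.0
    i_gau = np.sqrt(np.pi / np.log(2.0)) * c * np.array([g for _, g in gau_pars]) / 2.0
    return b, c, i_lor, i_gau
```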
The best-fit model spectra given by Eq. (1) are plotted together with the observed spectra in Figure 4. The residual spectra are also plotted in the lower panel of each plot. The derived intensities of the UIR bands and of the emission lines are summarized in Tables 6 and 7, respectively, where those detected with more than 2σ significance are indicated. The uncertainties in the intensity are estimated from the fitting errors, taking account of the observational uncertainties.
Positions 7, 8-1 and 8-2, whose spectra show hydrogen recombination lines (Allen 1973), are exposed to hard incident radiation fields powered by young massive stars and are associated with H II regions. This view is consistent with the radiation field conditions suggested by the IRAS and AKARI colors. Based on the characteristics of the observed spectra, we classify the targets into two groups: "Group A", which includes Positions 1, 2, 3, 4, 5 and 6, and "Group B", to which Positions 7, 8-1 and 8-2 belong. The members of Group A are supposed to be exposed to incident radiation fields of weak to moderate intensities and consist mostly of molecular clouds and PDRs. Group B members are all associated with H II regions.
We investigate the effects of extinction on the spectra observed at the present target positions according to Dobashi et al. (2008). The visual extinction A_V toward the present target positions ranges from 0.0 to 2.5, as shown in the last row of Table 1. We assume the "LMC avg" extinction curve provided by Weingartner & Draine (2001) to estimate the infrared extinction. We also estimate the value of A_V from the observed ratio of Brβ to Brα at Positions 7, 8-1, and 8-2, assuming the Case B condition of T_e = 10^4 K and n_e = 10^4 cm^-3 (Storey & Hummer 1995). The UIR band intensities corrected with A_V provided by Dobashi et al. (2008) differ from those corrected with A_V estimated from the Case B condition by less than 10% at Positions 7, 8-1, and 8-2. We adopt those corrected with A_V provided by Dobashi et al. (2008) at these positions for consistency with the other target positions. The effect of the extinction correction on the UIR band ratios is small (< 15%) and does not affect the following discussion.
Next, we evaluate the contribution from the unresolved emission line Pfδ at 3.30 µm to the 3.3 µm band, and that from Pfα at 7.46 µm to the 7.7 µm band. We assume that the intensities of Pfδ and Pfα are equal to 9.3% and 30.1% of that of Brβ, respectively, according to the Case B condition, and subtract them from the intensities of the 3.3 µm and 7.7 µm bands. At the present target positions, the contribution from Pfδ to the 3.3 µm band is less than 10% and that from Pfα to the 7.7 µm band is less than 3%, both of which are comparable to the measurement uncertainties and thus do not affect the results. The intensities after these corrections are also listed in the lower row for each position in Tables 6 and 7. The values of the corrected UIR band ratios are listed in Table 8. The effect of these corrections on the values of the UIR band ratios is less than ∼10% for all the targets. For the positions with scaled SG1 spectra, we estimate the uncertainties in I 6.2 µm and I 7.7 µm as the difference between the intensities with and without scaling, which dominates over the fitting errors.
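These line-blending corrections amount to simple scaled subtractions; a sketch (Python), with the Case B fractions quoted above (function name and interface are ours):

```python
def remove_recombination_lines(i_33, i_77, i_brbeta):
    """Subtract the unresolved Pf-delta (9.3% of Br-beta) from the 3.3 um
    band and Pf-alpha (30.1% of Br-beta) from the 7.7 um band."""
    return i_33 - 0.093 * i_brbeta, i_77 - 0.301 * i_brbeta
```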
Figures 5a-f show the plots of the corrected UIR band ratios I 3.3 µm /I 11.3 µm , I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm , I 8.6 µm /I 11.3 µm , I 12.6 µm /I 11.3 µm , and I 6.2 µm /I 7.7 µm against the IRAS color I 25 µm /I 12 µm . Figures 5g-l plot the same ratios against the AKARI color I L24 /I S11 . Group A forms a sequence with a positive slope in the plots of I 3.3 µm /I 11.3 µm , I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm and I 8.6 µm /I 11.3 µm against the IRAS and AKARI colors, while Group B does not follow the sequence in the plots of these band ratios. Neither the I 12.6 µm /I 11.3 µm nor the I 6.2 µm /I 7.7 µm ratio shows a systematic trend with the colors for either Group A or Group B. The small variation in the 6.2 µm to 7.7 µm band ratio is similar to the trend seen in external galaxies (Galliano et al. 2008). The present observations also show that the 3.3 µm band is weak in Group B compared to Group A. These results are discussed in the next section.

Discussion

Laboratory experiments and theoretical calculations indicate that ionized PAHs emit much more strongly in the 6-9 µm bands relative to the 11.3 µm band than neutral PAHs do (Szczepanski & Vala 1993; Allamandola et al. 1999; Bakes et al. 2001). Therefore, the ionized-to-neutral band ratios (e.g., I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm , I 8.6 µm /I 11.3 µm ) are supposed to indicate the ionization conditions of PAHs. Observational studies report variations in I 7.7 µm /I 11.3 µm and I 8.6 µm /I 11.3 µm within a reflection nebula along the distance from the central star (Bregman & Temi 2005; Joblin et al. 1996), among Herbig Ae/Be stars with different spectral types (Sloan et al. 2005), between the interarm and arm regions of the star-forming galaxy NGC 6946 (Sakon et al. 2007), and between the inner and the outer Galaxy (Sakon et al. 2004). These variations are reasonably interpreted as differences in the ionization conditions of the band carriers.
While very small PAHs (n C ≲ 10^2, where n C is the number of carbon atoms in the PAH) radiate strongly at 3.3 µm, large PAHs radiate mostly at longer wavelengths: PAHs as large as n C ∼ 10^2-10^3 efficiently convert the absorbed energy to the 6.2, 7.7 and 8.6 µm bands, and PAHs as large as n C ∼ 4000 to the 11.3 µm band (Schutte et al. 1993; Draine & Li 2007). The 6.2 µm and 7.7 µm bands are attributed to stretching modes of C-C bonds, while the 3.3 µm, 8.6 µm and 11.3 µm bands are attributed to stretching modes, in-plane bending modes, and out-of-plane bending modes of C-H bonds, respectively (Allamandola et al. 1989). Therefore, the ratios of the short-to-long wavelength UIR bands arising from the same bonds (e.g., I 3.3 µm /I 11.3 µm , I 6.2 µm /I 7.7 µm ) can be used to infer the average temperature or the size distribution of PAHs (e.g., Jourdain de Muizon et al. 1990; Sales et al. 2010; Boersma et al. 2010). The NIR to MIR spectra we discuss here are obtained from the same region in the sky, and thus we can discuss the band intensity ratios of the major UIR bands at 3.3, 6.2, 7.7, 8.6 and 11.3 µm concurrently.
As described in §3, Group A forms a sequence on the diagrams of the IRAS and AKARI colors vs. the ionized-to-neutral UIR band ratios: I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm and I 8.6 µm /I 11.3 µm . The sequence suggests that a larger fraction of PAHs is ionized as the radiation field becomes stronger. However, Group B, whose radiation fields are much stronger and harder than those of Group A, does not follow the sequence. This can be attributed to a lower ionization fraction of PAHs due to an increase in recombination under the high electron density in H II regions relative to molecular clouds or PDRs. Papoular (2005) argues that hydrogen impact might also play a role in the excitation of PAHs in PDRs and molecular clouds.
There is a positive correlation between the IRAS and AKARI colors and the short-to-long wavelength UIR band ratio I 3.3 µm /I 11.3 µm in Group A. This can be interpreted in terms of an increase in the excitation temperature of PAHs with the IRAS and AKARI colors in Group A. Destruction of PAHs is expected to be inefficient in the environments of Group A (Micelotta et al. 2010a,b, 2011), and the size distribution of PAHs does not change considerably. Thus the excitation temperature of PAHs is mainly controlled by the hardness of the incident radiation field, but not by its intensity. On the other hand, the IRAS and AKARI colors indicate the intensity, but not the hardness, of the radiation field directly (Sakon et al. 2006; Onaka et al. 2007b). The correlation seen in Figure 5 thus indicates that the incident radiation field becomes harder as the intensity becomes larger in Group A. This is a reasonable consequence of strong incident radiation fields, for which the contribution from young massive stars becomes dominant. The present observations indicate this trend explicitly based on the NIR to MIR UIR band ratios. Harder incident radiation fields also increase the I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm , and I 8.6 µm /I 11.3 µm ratios. We show in §4.2.2 that the ionization fraction is the major factor for the increase in these ratios, with a minor contribution from the hardness of the incident radiation field. According to the discussion in the previous subsections and the results presented in §3, we explore possible diagnostic diagrams to investigate the physical conditions of the ISM by means of the UIR band ratios. The ratio I 3.3 µm /I 11.3 µm is found to be a good indicator of the size distribution of PAHs, whereas I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm , or I 8.6 µm /I 11.3 µm has been indicated to be a measure of the ionized fraction of PAHs, as discussed above.
The 8.6 µm band is typically weak and is situated on the shoulder of the strong 7.7 µm band at the limited resolution provided by the IRC. Thus I 8.6 µm /I 11.3 µm is less reliable than the other two ratios and will not be considered further. Taking account of these points, we investigate two-band-ratio plots of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm or I 6.2 µm /I 11.3 µm , as shown in Figures 6a and 6b. In Figure 6a, where I 3.3 µm /I 11.3 µm is plotted against I 7.7 µm /I 11.3 µm , Group A forms a sequence from the bottom left to the top right. This sequence can be interpreted in terms of the change in the spectrum of the incident radiation field together with the change in the ionization fraction of PAHs. Group B, however, does not follow the sequence and is located distinctly apart from Group A. Figure 6b shows a plot of I 3.3 µm /I 11.3 µm vs. I 6.2 µm /I 11.3 µm , in which Group B again deviates from the Group A sequence. In the present results, Groups A and B can be distinguished more clearly in the diagram of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm than in that of I 3.3 µm /I 11.3 µm vs. I 6.2 µm /I 11.3 µm .
This suggests the potential of the diagram of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm as a diagnostic tool for the radiation field conditions. The 6.2 µm band is usually thought to be a more reliable indicator than the 7.7 µm band, since the 6.2 µm band consists of a single component while the 7.7 µm band contains more than one component (Smith et al. 2007). The 11.3 µm band is due to solo C-H out-of-plane bending modes and probes long straight edges, whereas the 12.6 µm band is due to trio C-H out-of-plane bending modes and probes corners.
The contribution from compact PAHs enhances the 11.3 µm band relative to the 12.6 µm band. A difference in the compactness of PAHs is thus supposed to appear more distinctly in the I 12.6 µm /I 11.3 µm ratio than in I 3.3 µm /I 11.3 µm . The ratio I 12.6 µm /I 11.3 µm is 0.40 ± 0.10 and 0.44 ± 0.03 for Groups A and B, respectively. Although the intensity of the 12.6 µm band has a large uncertainty, this ratio does not show a distinct difference such as that seen in I 3.3 µm /I 11.3 µm , suggesting that the molecular structure does not change appreciably among the present targets and is not the major factor for the difference in I 3.3 µm /I 11.3 µm . The trend seen in Figure 6 for Group A suggests that the excitation of PAHs, as indicated by I 3.3 µm /I 11.3 µm , is enhanced with the ionization fraction, which is indicated by I 7.7 µm /I 11.3 µm or I 6.2 µm /I 11.3 µm . Thus the ratio I 3.3 µm /I 11.3 µm is a useful measure of the incident radiation field conditions for Group A targets. This interpretation suggests that the data at the lower left in Figure 6 are objects in an early stage of cloud evolution and those at the upper right are more evolved PDR-type objects. Further observations are important to confirm this interpretation. In both plots of Figure 6, the ratio I 3.3 µm /I 11.3 µm plays a crucial role not only in separating Group A from B, but also in estimating the incident radiation field conditions of the targets in Group A.
Comparison with model
The previous subsection suggested possible diagnostic diagrams based on the UIR band ratios, and a qualitative interpretation was given. In this subsection we employ simple models of PAH emission and investigate the observed diagrams quantitatively.
To derive the intensity of the UIR bands emitted from a mixture of neutral and ionized PAHs of various sizes, we employ a simple theoretical model of infrared PAH emission following Schutte et al. (1993). The A-coefficients of the UIR bands in 3-20 µm are calculated from the infrared cross sections of neutral and ionized PAHs recently provided by Draine & Li (2007) and the internal energy to temperature relation provided by Draine & Li (2001) is adopted (see also Boersma et al. 2010;Bauschlicher et al. 2010).
The number of carbon atoms in PAHs is assumed to be distributed between n min C and n max C , and the size distribution of PAHs is assumed to be given by the same power-law function as graphite grains. Details on the model calculation are given in Appendix A.
The UIR band ratios I 3.3 µm /I 11.3 µm and I 7.7 µm /I 11.3 µm are calculated for different temperatures of the heating source T * and ionization fractions of PAHs f i . Two sets of (n min C , n max C ) are used in the following analysis. Case I assumes (n min C , n max C ) = (20, 4000); the UIR band ratios I 3.3 µm /I 11.3 µm and I 7.7 µm /I 11.3 µm calculated for various T * and f i are plotted in Figure 7a. Case II assumes (n min C , n max C ) = (100, 4000); the corresponding ratios are plotted in Figure 7b. In the Case II plot, the observed data of Group B are distributed in a region corresponding to T * ∼ 30,000-40,000 K and f i ∼ 20-40%, which is compatible with the effective temperatures of O-type main-sequence stars of 30,000-50,000 K. The small f i is consistent with the increase in recombination with electrons in the high electron density environments of H II regions. Therefore, the deviation of Group B from the sequence of Group A on the plot of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm is reasonably accounted for by the difference in the minimum size of PAHs, and the model calculations quantitatively support the interpretation of the trend seen in that plot. We stress here that an accurate measurement of the I 3.3 µm /I 11.3 µm ratio is the key to obtaining the average excitation temperature of PAHs, which is determined by the size distribution of PAHs and the hardness of the incident radiation field. The present calculations for Cases I and II are made only for representative purposes, and the choice of n min C is rather arbitrary. The value of I 3.3 µm /I 11.3 µm is sensitive to the specific heat as well as to the size distribution of PAHs assumed in the calculation. The actual values of T * , f i , and n min C inferred from the plot therefore depend on the assumed PAH properties, such as the size distribution and the specific heat, which remain uncertain.
Summary
We present the results of NIR to MIR slit spectroscopic observations of the diffuse radiation toward nine positions with different radiation field conditions in the LMC with AKARI/IRC. We obtain continuous spectra from 2.55 to 13.4 µm of the same slit area, which allow us to investigate variations in the relative intensity of the UIR bands from 2.55 to 13.4 µm emitted from exactly the same region.
The target positions are selected based on the IRAS colors of I 25 µm /I 12 µm and I 60 µm /I 100 µm , which indicate star formation activities, to cover a wide range of the intensity of the incident radiation field. The AKARI color of I L24 /I S11 at the present target positions shows a similar trend to that of the IRAS I 25 µm /I 12 µm color, confirming that the selection based on the IRAS colors is in fact relevant to the purpose of the present study. Group A shows a sequence on the plots of the UIR band ratios of I 3.3 µm /I 11.3 µm , I 6.2 µm /I 11.3 µm , I 7.7 µm /I 11.3 µm and I 8.6 µm /I 11.3 µm against the IRAS and AKARI colors, but Group B does not follow the sequence. These results can be interpreted in terms of the facts that (1) in Group A, PAHs are heated to higher excitation temperatures and their ionization fraction increases as the radiation field becomes harder and stronger and that (2) in Group B, very small PAHs (n C < 100) are efficiently destroyed, possibly due to electron collisions, and the ionization of PAHs is suppressed by an increase in the electron density inside H II regions. The present observations also show that the incident radiation field becomes harder as the intensity increases in Group A based on the UIR band ratios and the IR colors. There is little variation in I 6.2 µm /I 7.7 µm as reported in previous studies and we find no systematic trend against the colors.
The observed data points of Groups A and B are well separated on the plot of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm as well as on that of I 3.3 µm /I 11.3 µm vs. I 6.2 µm /I 11.3 µm . These trends can be interpreted in the same way as described above, suggesting the potential of the diagram of I 3.3 µm /I 11.3 µm vs. I 7.7 µm /I 11.3 µm as a useful diagnostic tool for the radiation field conditions. Simple model calculations support this interpretation quantitatively. Further investigation is needed to confirm the applicability of this diagram to a wide range of objects.
The present study shows the importance of the UIR band at 3.3 µm. The ratio I 3.3 µm /I 11.3 µm plays a crucial role not only in separating Group A from B, but also in estimating the incident radiation field conditions of the targets in Group A. Recently, Seok et al. (2011) reported the detection of the 3.3 µm UIR band in the supernova remnant (SNR) N49 in the LMC, suggesting the presence of significant processing of PAHs in SNR shocks. The 3.3 µm band provides significant information on the size distribution and/or the excitation conditions of PAHs.
This work is based on observations with AKARI, a JAXA project with the participation of ESA. The authors thank all the members of the AKARI project and the members of the Interstellar and Nearby Galaxy team for their help and continuous encouragement. The ISSA data were obtained from the NASA Astrophysics Data Center (ADC). We express our gratitude to K. Dobashi for providing us with the extinction data of the LMC. We also express our gratitude to Y. Ita for providing us with the LMC survey program data taken by AKARI. We thank A. Kawamura and Y. Fukui for providing us with the LMC 12CO survey data at 2.7 mm taken by the NANTEN millimeter-wave telescope of Nagoya University.
This work is supported in part by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS).
A. Appendix
The present model calculation basically follows Schutte et al. (1993) with the recent model parameters provided by Draine & Li (2001) and Draine & Li (2007) (see also Boersma et al. 2010; Bauschlicher et al. 2010). The emission intensity due to an IR-active fundamental vibrational transition, i, from level v to level (v−1), in a j-type PAH molecule with a total internal vibrational energy E is given by

I(j, E, i; v) = hν_i v A^{1,0}_{j,i} ρ_{j,r}(E − vhν_i)/ρ_j(E),  (A1)

where h is the Planck constant, v is the vibrational quantum number, ν_i is the frequency of the emitting mode, A^{1,0}_{j,i} is the Einstein coefficient of the 1 → 0 transition, ρ_j(E) is the total density of vibrational states at total energy E, i.e., the number of ways the energy E can be distributed over all available states, and ρ_{j,r}(E − vhν_i) is the density of vibrational states for all modes except the emitting mode at a vibrational energy E − vhν_i. The Einstein coefficient of the 1 → 0 transition is given by

A^{1,0}_{j,i} = 8πc σ_{j,int,i}/λ_i^4,  (A2)

where λ_i is the wavelength of the emitting mode and σ_{j,int,i} is the cross section of the i-th mode integrated over wavelength. We adopt the values described in Table 1 of Draine & Li (2007) as σ_{j,int,i} for each type of PAH molecule. Here we calculate the model spectra with various parameters to semi-quantitatively compare with the observations and thus adopt a simple thermal approximation. The validity of the thermal approximation has been studied to a large extent, and it has been shown that the presence of a size distribution alleviates the differences and that the effect on the relative band intensities is small enough for the present study (Allamandola et al. 1989; Schutte et al. 1993; Draine & Li 2001).
In the thermal approximation, the emitted intensity of a j-type molecule with internal energy E in the i-th mode, from level v to level (v−1), is described by

I(j, E, i; v) = hν_i v A^{1,0}_{j,i} exp[−vhν_i/kT_j(E)] {1 − exp[−hν_i/kT_j(E)]},  (A3)

where T_j(E) is the vibrational excitation temperature of a j-type molecule with internal energy E and k is the Boltzmann constant. The sum of equation (A3) from v = 1 to v = ∞, i.e., the total emitted intensity in the i-th mode, is given by

I(j, E, i) = hν_i A^{1,0}_{j,i}/{exp[hν_i/kT_j(E)] − 1}.  (A4)

In this approximation, the energy-temperature relation for a j-type molecule is given by

E_j(T) = Σ_{i=1}^{s} hν_i/[exp(hν_i/kT) − 1],  (A5)

where s is the number of vibrational modes of a j-type molecule, equal to 3 times the number of atoms minus 6, i.e., 3n_atom − 6. We derive this relationship using equations (2)-(8) in Draine & Li (2001) for each type of PAH molecule.
The total energy emitted in the i-th mode following the absorption of a UV/visual photon of frequency ν, f(j, ν, i), is given by

f(j, ν, i) = hν I(j, E = hν, i)/I(j, E = hν),  (A6)

where I(j, E, i)/I(j, E) is the fraction of the total IR intensity emitted by a j-type molecule with the internal energy E in the i-th mode. Hence, the total emitted intensity in the i-th mode of a j-type molecule exposed to a star, whose spectrum is approximated by a blackbody of temperature T_*, P(j, T_*, i), is calculated as

P(j, T_*, i) ∝ ∫ σ_{j,ν} [B_ν(T_*)/hν] f(j, ν, i) dν,  (A7)

where σ_{j,ν} is the UV/visual absorption cross section of a j-type molecule, for which we adopt the values described in equations (17)-(20) in Schutte et al. (1993); the normalization cancels in the band ratios used below. Then, the flux in the i-th mode from a certain size distribution of interstellar PAHs exposed to a star with T_* is given by

F(T_*, i) = Σ_j n_PAH(j) P(j, T_*, i),  (A8)

where n_PAH(j) is the number density of the j-type PAH molecule.
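The band ratios follow from Eq. (A4) weighted over the size distribution introduced in the next paragraph. A toy numerical sketch (Python) is given below; the temperature scaling T(n_C) and the A-coefficients are pure placeholders rather than the paper's adopted values, and the full model of course evaluates the per-size temperatures via Eqs. (A5)-(A7) instead of assuming them.

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs units

def mode_power(lam_um, a10, temp):
    """Total power emitted in one vibrational mode in the thermal
    approximation, h*nu*A_{1,0}/(exp(h*nu/kT) - 1), i.e. the closed
    form of the sum of Eq. (A3) over v (Eq. (A4))."""
    nu = c / (lam_um * 1.0e-4)               # micron -> Hz
    return h * nu * a10 / np.expm1(h * nu / (k * temp))

def size_weights(n_c, alpha=3.5, gamma=3.0):
    """Power-law size distribution: with dN/da ~ a^-alpha and n_C ~ a^gamma,
    dN/dn_C ~ n_C^{(1-alpha)/gamma - 1}, i.e. n_C^{-11/6} for the adopted
    alpha = 3.5 and gamma = 3."""
    return n_c ** ((1.0 - alpha) / gamma - 1.0)

# Illustrative I(3.3)/I(11.3) ratio for Case I sizes, with a purely
# hypothetical peak-temperature scaling T(n_C) and placeholder A-coefficients.
n_c = np.arange(20, 4001, dtype=float)       # Case I: n_C from 20 to 4000
t_of_nc = 1500.0 * (50.0 / n_c) ** 0.4       # assumed, for illustration only
w = size_weights(n_c)
a33, a113 = 1.0, 1.0                          # placeholders, not real values
ratio = (w * mode_power(3.3, a33, t_of_nc)).sum() / \
        (w * mode_power(11.3, a113, t_of_nc)).sum()
print(ratio)
```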
In order to evaluate the effect of the depletion of very small PAHs quantitatively, we calculate the ratio of F(T_*, i) to F(T_*, i′), the model ratio of the emitted intensity in the i-th mode relative to that in the i′-th mode, in two cases: Case I and Case II (see text). In both cases, we assume that the number density of interstellar PAHs is given by a power-law distribution in size, i.e., in the number of carbon atoms contained in a PAH molecule, n_C. If the number of PAH molecules per interstellar H atom with a radius between a and a + da is proportional to a^{-α} and n_C ∝ a^γ, the number of PAH molecules per interstellar H atom with a number of carbon atoms between n_C and n_C + dn_C is given by

n_PAH(n_C) dn_C ∝ n_C^{(1−α)/γ − 1} dn_C.  (A9)

We adopt α = 3.5 and γ = 3 as assumed in Schutte et al. (1993). Then the number density is given by

n_PAH(n_C) ∝ n_C^{−11/6}.  (A10)

In Case I, the minimum size of PAHs is set to n_C = 20; Allamandola et al. (1989) suggest that smaller PAHs with n_C < 20 are photolytically unstable. In Case II, we set the minimum size of PAHs to n_C = 100. In both cases, the maximum size of PAH molecules is fixed at 4000; Draine & Li (2007) suggest that larger PAHs with n_C > 4000 do not contribute to the UIR band features at 3-11 µm. Then we run the model twice, with the cross sections for neutral PAHs and for ionized PAHs provided by Draine & Li (2007), and calculate the ratio of F(T_*, i) from a summation of the spectra of both components, varying the ionization fraction.

[Figure 4 caption fragment: the light-green, dark-green, yellow, coral, dark-pink, pink, light-blue and dark-blue lines show the Lorentzian components of the UIR 3.3 µm, 3.4 µm, 6.2 µm, 7.6 µm, 7.8 µm, 8.6 µm, 11.2 µm and 11.3 µm bands, respectively; the 9.4-9.8 µm range is cut from the spectra in Figure 3 due to the artifacts (see text); the lower panel shows the residual spectrum of each plot; the bottom panel is the same as in Figure 3.]
[Figure 5 caption fragment: ratios of the 3.3 µm ((a) and (g)), 6.2 µm ((b) and (h)), 7.7 µm ((c) and (i)), 8.6 µm ((d) and (j)) and 12.6 µm ((e) and (k)) bands to the 11.3 µm band, and of the 6.2 µm to the 7.7 µm band ((f) and (l)), plotted against the IRAS and AKARI colors.]
[Figure 7 caption fragment: band ratios calculated for a mixture of neutral and ionized PAHs with a power-law size distribution exposed to a blackbody with different temperatures T_* and various ionized fractions f_i; Case I (a) is calculated with (n_C^min, n_C^max) = (20, 4000) and Case II (b) with (100, 4000); see Appendix for details of the model calculation; the symbols of the observed points are the same as in Figure 6.]

| 2011-09-16T01:39:08.000Z | 2011-09-16T00:00:00.000 | {
"year": 2011,
"sha1": "b04703d4020202d3e105e105a553675d7ac20447",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1109.3512",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "43a4a9652a5ccbd3331fb1255cffd55507ef8255",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
149309770 | pes2o/s2orc | v3-fos-license | The cross-cultural sensitivity of the Strengths and Difficulties Questionnaire (SDQ): a comparative analysis of Gujarati and British children.
The purpose of this study was to investigate whether the Strengths and Difficulties Questionnaire (SDQ) may be considered a reliable measure of child behaviour, social functioning and adjustment in an Indian Gujarati context. The sample comprised 351 children who were classified as coming from a 'poverty' or 'non-poverty' background. The means and standard deviations for the SDQ total and five behavioural scales, as rated by children themselves, were first calculated for the entire Gujarati sample, then for the poverty and non-poverty subgroups. The SDQ did prove to be an appropriate measure for behavioural assessment. Its cross-cultural sensitivity was ascertained by comparing it against a British normative population. Small effect sizes were seen in the Emotional subscale scores and scores for total difficulties, and medium and large effect sizes on the Prosocial and Peer subscales, respectively, with greater difficulties experienced by the Indian Gujarati sample than their British counterparts.
The authors would like to acknowledge the help of Ms Kathryn Newberg with the preparation of this paper.
The main aim of the present study was to find the prevalence and distribution of behavioural problems using the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1997) in a sample of school-aged Gujarati children, in order to identify socio-emotional patterns and adjustment issues. Additionally, a cross-cultural analysis compared the Gujarati sample's scores with those from a British normative sample of children. The SDQ has subscales (with five items per scale) covering conduct problems, hyperactivity, emotional problems, peer and prosocial behaviour; the SDQ also gives a 'total difficulties score' (TDS), which, along with the prosocial score, indicates strengths such as positive social skills and general resilience.
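As a reminder of how these scores fit together, the sketch below (Python) shows the subscale-level aggregation: each 5-item subscale runs 0-10, and the TDS sums the four difficulty subscales while excluding the Prosocial scale (item-level scoring and reverse-coded items are omitted here).

```python
def sdq_scores(emotional, conduct, hyperactivity, peer, prosocial):
    """SDQ scoring: the total difficulties score (TDS, 0-40) sums the
    four difficulty subscales; the Prosocial score (0-10) is reported
    separately as a strength."""
    return {
        "TDS": emotional + conduct + hyperactivity + peer,
        "Prosocial": prosocial,
    }

# Example: a child scoring 4, 3, 5 and 6 on the four difficulty subscales.
print(sdq_scores(4, 3, 5, 6, prosocial=8))   # {'TDS': 18, 'Prosocial': 8}
```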
It is critical to consider the cultural sensitivity of tools used for psychological testing (Birbili, 2000), especially when the population studied is different from the one in which the test was validated (Balaban, 2006). The SDQ is a brief yet comprehensive measure of a child's sociopsychological adjustment. Its factor structure, reliability and validity, sensitivity and specificity, and comparability with other instruments have been assessed in Britain (Goodman & Scott, 1999), Germany (Klasen et al, 2000), Bangladesh (Mullick & Goodman, 2001) and Sri Lanka (Prior et al, 2005), among other cultures.
Method
A total of 358 children aged 8-16 years were administered the SDQ, parent and self-report versions. Children included in the study were selected across two districts of Gujarat, covering two cities and two townships, approximately representative of children from families across the middle to low socioeconomic spectrum. However, it is acknowledged that this is not an epidemiological study but one based on a convenience sample and constrained by funding and access to the population. While the sociodemographic data were analysed for these 358 participants, it was not possible fully to score the SDQ forms for 7 children; therefore SDQ comparisons are done for a sample of 351 children. This sample was divided into 'poverty' (n = 248) and 'non-poverty' (n = 103) groups (Table 1), on the basis of whether the children had been classified as poor on the school register (a determination made by the Gujarat state government according to household income and family size). The British normative sample comprised 4228 children aged 11-15 taken from Goodman's norms database (http://www.sdqinfo.com/UKNorm.html).
The data were collected in December 2007 and February-March 2008. The aims and procedures of the study were explained to the parents and school teachers, and subsequently students were invited to participate. The teachers enabled testing to take place in the school settings and often helped by explaining the meaning of specific words or items in the questionnaires. The first step in data collection was to seek consent from parents as well as children. Information sheets with details of the study and researchers' contacts were distributed and once consent was given the questionnaires were distributed. The SDQ Gujarati self-report version was translated following a rigorous translation-back-translation procedure and establishment of semantic equivalence. The SDQ self-report versions in English and the newly translated Gujarati version were administered to children mostly at various schools and occasionally at homes.
The research was approved by the research ethics committee at University College London (UCL) and was part of the first author's doctoral work conducted at UCL (2005-10).
Results and discussion

Rates of adjustment difficulties in Gujarati children
In this Gujarati sample, the SDQ indicated that 17.4% of the children had clinically significant emotional distress or behavioural problems, that is, were categorised as 'abnormal' on the TDS, while none of the children fell within the 'borderline' band and the other 82.6% of the sample recorded scores in the normal range (Table 2). On the Emotional, Conduct, Hyperactivity, Peer and Prosocial subscales, less than 10% of the sample were in the 'abnormal' band.
In the TDS, Conduct, Hyperactivity and Peer subscale scores there were differences between the poverty and non-poverty groups. The data also pointed towards general adjustment problems and emotional turbulence experienced by adolescents in the Indian context. Unfortunately, the influence of age on adjustment difficulties (which in fact had not been a primary area of investigation for the study) could not be retrospectively analysed because too many of the children from rural Gujarat were not aware of their exact age.
The frequency of 'borderline' scores for conduct problems and peer relations points towards interesting cultural dynamics. In Indian culture, deference and obedience (Shweder et al, 1987) are generally demanded from children and young people. Many children during the assessment discussed how their parents and teachers had an authoritarian stance and a moralistic social ethos. It could be that the higher borderline range of distress points to a dual awareness of cultural demands and the adolescent need to resist the imposition of norms and authority. Peer relations become critical at this stage, and it is interesting that the children seemed aware of their struggle to build friendships and bonds with people their age. It could be that there is tension between the two domains of peer relations and conduct (mainly played out within the familial domain), where energies may be diverted towards one at the cost of the other.
Poverty and non-poverty group differences
The poverty group had a significantly lower proportion of children in the abnormal band (13.3% v. 27.2%) than the non-poverty group (χ2(1, 351) = 9.762, P = 0.002); both groups nevertheless reported higher levels of distress than the suggested 10% band for extreme scores. Neither group had participants in the borderline range. Small to medium effect sizes were seen in the TDS and on the Conduct, Peer and Hyperactivity subscales, with children in the poverty group scoring low or showing a tendency to underreport (it could be that they did not sufficiently understand items or were confused about the most appropriate response). In contrast, the non-poverty sample, even though their mean scores were well within the normal range, tended to report and share their difficulties actively. Of course, the two samples might differ in terms of functional literacy and socio-cognitive skills. It could be that children in the poverty group fare better despite economic constraints due to greater resilience in the face of adversity. Yet another explanation could be that psychological appraisal of one's difficulties and mental makeup might be possible only if one has some socioeconomic stability. Therefore, despite facing more difficulties, the poverty group reported fewer problems because they could not conceptualise the enormity of their struggles, whereas the non-poverty children engaged more with psychological turmoil and stress. The fact that the poverty sample consistently reported fewer difficulties could reflect a 'dismissing' style of response.
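The group comparison above can be reproduced approximately from the published percentages; a sketch (Python), where the cell counts are reconstructed from the rounded percentages and are therefore approximate:

```python
from scipy.stats import chi2_contingency

# Approximate counts from the text: 13.3% of 248 poverty children (~33)
# and 27.2% of 103 non-poverty children (~28) in the abnormal TDS band.
observed = [[33, 248 - 33],
            [28, 103 - 28]]
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 3), round(p, 3), dof)   # paper: chi2(1, 351) = 9.762, p = 0.002
```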
Comparison with the normative British sample
Comparing the Gujarati and British samples (Table 3), the difficulties reported on the TDS and the Emotional subscale suggest that differences between the two samples could be attributable to socioeconomic disparities or gaps in educational exposure, given Goodman's prediction for the percentage spread of psychopathology in any population (Goodman, 1997, 2002). The biggest difference can be seen on the Peer subscale, where a large effect size is reported. The results suggest certain differences between the two national samples. The mean TDS of the Gujarati sample was higher, and the small effect size conveys that, overall, the Gujarati children had experienced greater problems than their British counterparts. A significant difference between the two mean scores was seen on the Emotional subscale, where an effect size of 0.19 was found, with the Indian sample reporting higher mean difficulties than the British sample; a similar trend was seen on the Prosocial subscale, where an effect size of 0.30 was reported and the mean of the Gujarati sample was higher than that of the British sample. The higher the score on the Prosocial subscale, the fewer the difficulties and the greater the resilience, and a better mean score indicates that the Indian sample might have greater family or social support, which added to their resilience. In the case of the Peer subscale, a large effect size (0.58) was found, with the Indian sample reporting more difficulties than the British sample. The reasons for this effect have been discussed above.
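The paper does not state which effect-size formula underlies the values of 0.19, 0.30 and 0.58; a pooled-standard-deviation Cohen's d is one common choice, sketched below (Python) with hypothetical means and SDs (the actual Table 3 values would be substituted).

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical subscale means/SDs, for illustration only (n = 351 vs 4228).
print(round(cohens_d(2.3, 1.9, 351, 1.8, 1.6, 4228), 2))
```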
Limitations and concluding comments
The study was able to compare poverty and non-poverty samples from Gujarat, and to highlight psychosocial and cultural differences between Indian and British samples. A recent study by Goodman et al (2012) showed that the relationship between SDQ 'caseness' indicators and disorder rates varied substantially between populations. Cross-national differences in SDQ indicators do not necessarily reflect comparable differences in disorder rates. Therefore the results of the present study need to be interpreted with caution. What can be concluded more reliably is that, in the Indian sample, the poverty subsample faced additional challenges compared with the non-poverty subsample. For the Gujarati sample as a whole, the clinically significant difference found on peer relations indicates that they faced challenges in domains outside the family. A traditional family structure might help children to cope with some of these competing demands as low-income countries undergo social and economic changes.
The SDQ as a tool provides interesting and meaningful differentiations between the Indian and British and poverty/non-poverty subsamples that aid the overall purpose of this study. | 2019-05-11T13:06:52.217Z | 2013-05-01T00:00:00.000 | {
"year": 2013,
"sha1": "6ef90e8ac7b429a57fa94e7270c4f745ca60041c",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6A92CCC08BA2A9695CE51691F5587D50/S1749367600003763a.pdf/div-class-title-the-cross-cultural-sensitivity-of-the-strengths-and-difficulties-questionnaire-sdq-a-comparative-analysis-of-gujarati-and-british-children-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea59ef59819b9156a9c7b424dcd975189051f0f4",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
256204738 | pes2o/s2orc | v3-fos-license | Int6 reduction activates stromal fibroblasts to enhance transforming activity in breast epithelial cells
The INT6 gene was first discovered as a site of integration in mouse mammary tumors by the mouse mammary tumor virus; however, INT6’s role in the development of human breast cancer remains largely unknown. By gene silencing, we have previously shown that repressing INT6 promotes transforming activity in untransformed human mammary epithelial cells. In the present study, guided by microarray data of human tumors, we have discovered a role of Int6 in stromal fibroblasts. We searched microarray databases of human tumors to assess Int6’s role in breast cancer. While INT6 expression levels, as expected, were lower in breast tumors than in adjacent normal breast tissue samples, INT6 expression levels were also substantially lower in tumor stroma. By immunohistochemistry, we determined that the low levels of INT6 mRNA observed in the microarray databases most likely occurs in stromal fibroblasts, because far fewer fibroblasts in the tumor tissue showed detectable levels of the Int6 protein. To directly investigate the effects of Int6 repression on fibroblasts, we silenced INT6 expression in immortalized human mammary fibroblasts (HMFs). When these INT6-repressed HMFs were co-cultured with breast cancer cells, the abilities of the latter to form colonies in soft agar and to invade were enhanced. We analyzed INT6-repressed HMFs and found an increase in the levels of a key carcinoma-associated fibroblast (CAF) marker, smooth muscle actin. Furthermore, like CAFs, these INT6-repressed HMFs secreted more stromal cell–derived factor 1 (SDF-1), and the addition of an SDF-1 antagonist attenuated the INT6-repressed HMFs’ ability to enhance soft agar colony formation when co-cultured with cancer cells. These INT6-repressed HMFs also expressed high levels of mesenchymal markers such as vimentin and N-cadherin. Intriguingly, when mesenchymal stem cells (MSCs) were induced to form CAFs, Int6 levels were reduced. These data suggest that besides enhancing transforming activity in epithelial cells, INT6 repression can also induce fibroblasts, and possibly MSCs as well, via mesenchymal-mesenchymal transitions to promote the formation of CAFs, leading to a proinvasive microenvironment for tumorigenesis.
Background
The mammalian INT6 gene was first discovered in a genetic screen using the mouse mammary tumor virus (MMTV) to isolate the INT genes that mediate breast tumorigenesis [1]. MMTV insertions into the INT6 gene apparently cause the expression of C-terminally truncated Int6 proteins (Int6ΔC), which when ectopically overexpressed induce cell transformation and tumor formation in mouse models [2,3]. We and others [4][5][6] have identified an INT6 ortholog in the fission yeast Schizosaccharomyces pombe. While full-length human Int6 rescues the S. pombe Int6-null phenotype, Int6ΔC does not [7]. Therefore it is highly probable that Int6ΔC acts in a dominant-negative fashion to promote tumor formation in mouse mammary glands.
In human breast cancer, we and others have examined several breast cancer cell lines but found no evidence of Int6ΔC expression [3,8]. However, several earlier studies have shown lower levels of INT6 expression in breast cancer than in normal tissues, supporting the possibility that Int6 acts as a tumor suppressor [9][10][11]. Int6 is also known as eIF3e, a component of the eukaryotic translation initiation factor 3 [12]; in addition, Int6 has been shown to control nonsense-mediated mRNA decay [13]. These data suggest that Int6 can affect efficient translation. Using S. pombe, we revealed a new activity of Int6: regulation of the 26S proteasome [7]. Abnormality in this regulation increases levels of cyclin and securin, leading to abnormal mitosis and chromosome instability. Furthermore, using mass spectrometry to identify all Int6-interacting proteins, we found that Int6 is in a supercomplex, which we named the translasome, that contains all the components needed for translation as well as proteasome subunits [14]. This discovery led to the hypothesis that Int6 can fine-tune levels of key regulatory proteins by coordinating protein synthesis and protein degradation within the translasome. As such, Int6 reduction may induce tumor formation in breast epithelial cells by causing a net increase in the levels of proteins that promote tumorigenesis. In support of this hypothesis, we and others have recently shown that repressing INT6 expression in normal mammary epithelial cells induces a transforming phenotype, which correlates with the stabilization of a potent oncoprotein, Src3/AIB1, and altered translation of the ubiquitin genes as well as of genes controlling the epithelial-mesenchymal transition (EMT) [8,15].
While solid tumors are mostly derived from epithelial cells, increasing evidence suggests that the tumor microenvironment can play important roles in influencing tumor progression. Within the tumor microenvironment, stromal cells can make up as much as half of the tumor mass [16]. A key component of the stroma is the fibroblast. Fibroblasts isolated from solid tumors (called carcinoma-associated fibroblasts, CAFs), when co-transplanted with carcinoma cells, can strongly promote in vivo tumor growth as well as angiogenesis and metastasis [17][18][19]. Eliminating CAFs has been reported to suppress spontaneous metastasis and to enhance the antimetastatic effects of chemotherapy in mouse breast cancer models [20].
CAFs from invasive human breast carcinomas appear to control tumor cells by secreting a number of stromal cell-derived factors, the chief of which is stromal cell-derived factor 1 (SDF-1), also called CXCL12 [18]. SDF-1 signals via its cognate receptor, CXCR4. When CXCR4 is expressed on the surface of carcinoma cells, CAFs can directly enhance the proliferation of these cells via an SDF-1/CXCR4 paracrine loop. The most potent CAFs for promoting tumor progression appear to be a fraction called myofibroblasts, which are marked by the expression of smooth muscle actin (α-SMA). High levels of myofibroblasts in CAFs correlate with robust growth of co-transplanted xenografted human tumors [21]; furthermore, in human breast cancer patients, high α-SMA levels in the tumor stroma correlate with poor clinical outcomes [22,23].
In this study, guided by analyses of tumor microarray databases, we came upon the surprising finding that, while INT6 mRNA levels are as expected lower in breast cancer than in normal tissue, INT6 mRNA levels are also very low in the tumor stroma. By immunohistochemistry (IHC) staining, we determined that Int6 levels are low in the fibroblasts in tumor stroma. We went on to show that INT6 repression can induce normal mammary fibroblasts to act like CAFs, apparently by activating a mesenchymal-mesenchymal transition (MMT). Our results suggest that the reduction of Int6 can promote breast tumor formation not only by activating oncogenic pathways in epithelial cells but also by inducing a CAF-like activity in the stromal fibroblasts.
Int6 is reduced in the fibroblasts in human breast cancer
To determine whether INT6 may act as a tumor suppressor for breast cancer, we searched Oncomine for gene expression changes, focusing on studies in which normal and tumor tissues were compared. INT6 levels were found to be significantly higher in normal tissues compared with invasive breast tumors and premalignant ductal carcinoma in situ (DCIS), according to data from the TCGA project (Figure 1A, left) [24] and from Curtis et al.
Most intriguingly, data from studies of gene expression in the stroma showed even more drastic differences between INT6 expression levels in tumors and normal tissues. For example, data from Finak et al. [9] showed that INT6 expression levels were about 42 times lower in the stroma of invasive carcinoma than in matched adjacent normal tissue ( Figure 1A, right). Likewise, data from Ma et al. [27] showed that INT6 expression levels were lower in the stroma of invasive carcinoma and also in DCIS than in normal tissue (data not shown).
To further examine whether Int6 is downregulated in tumor stroma at the protein level, we established an IHC protocol by first examining Int6 levels in parental and INT6-silenced MCF7 cells. As shown in Figure 1B, both Western blot and IHC showed a 50% reduction in Int6 levels in the silenced cells, suggesting that our IHC protocol could detect Int6 with a high degree of specificity. We then stained paraffin-embedded tumor tissues with hematoxylin and eosin (H&E) to select 20 samples that contained stroma at least 2 mm away from the tumor, which we and Finak et al. defined as normal stroma [9]. In these normal stromal regions, we readily detected fibroblasts that stained strongly for Int6 (Figure 1C). In contrast, such Int6-positive fibroblasts were rare in the tissue proximal (<2 mm) to the carcinoma cells. We could also detect Int6 in plasma cells, but the levels did not differ substantially between normal and tumor tissues. We scored the Int6 intensity and calculated the histoscore differences between the normal and tumor regions (Figure 1D). While there was not a dramatic difference in Int6 intensity between the normal and tumor regions (p = 0.66), the fraction of fibroblasts that were Int6-positive in the normal region was much higher (p < 0.001), leading to higher histoscores (p = 0.0014). These data agree with the concept that Int6 reduction in stromal fibroblasts (in addition to epithelial cells) may promote breast tumorigenesis.
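The paired normal-versus-tumor comparison behind these p-values can be sketched as follows (Python); the histoscore convention (intensity times percentage of positive cells) and the numbers are our illustrative assumptions, since the paper does not spell out its formula.

```python
import numpy as np
from scipy.stats import wilcoxon

def histoscore(intensity, pct_positive):
    """One common histoscore convention (assumed here): staining
    intensity (0-3) times the percentage of Int6-positive fibroblasts."""
    return intensity * pct_positive

# Hypothetical paired scores for matched normal/tumor regions; the study
# compared 20 such pairs with the Wilcoxon signed rank test.
normal = np.array([histoscore(2, 60), histoscore(1, 55), histoscore(2, 45),
                   histoscore(3, 40), histoscore(2, 50), histoscore(1, 70)])
tumor = np.array([histoscore(2, 20), histoscore(1, 30), histoscore(2, 15),
                  histoscore(3, 10), histoscore(2, 25), histoscore(1, 60)])
stat, p = wilcoxon(normal, tumor)
print(stat, p)   # paper: mean difference 27.0 +/- 7.2, p < 1e-3
```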
Int6 reduction in normal human mammary fibroblasts induces CAF-like properties
To investigate whether Int6 reduction in fibroblasts can induce CAF-like activities, we repressed INT6 expression using siRNA in h-TERT-immortalized normal human mammary fibroblasts (HMFs) and then measured a key CAF marker, α-SMA. As shown in Figure 2A, repressing INT6 steadily increased α-SMA levels over a 5-day period after gene silencing. To determine whether the increase in α-SMA levels resulted in more efficient formation of α-SMA cables in the cells, we performed immunostaining. As shown in Figure 2B, greater than 2.5 times more INT6repressed HMFs contained α-SMA cables.
CAFs are also known to influence epithelial cells by paracrine signaling through the secretion of SDF-1. To examine whether INT6 repression induces the expression of CXCL12, which encodes SDF-1, in stromal fibroblast cells, we examined INT6-repressed HMFs and found that their mRNA levels increased over a 7-day period ( Figure 2C). Levels of secreted SDF-1 also increased in the medium of INT6-repressed HMFs ( Figure 2D). These data collectively suggest that CAFs can be derived from fibroblasts when Int6 levels are downregulated.
INT6-silenced HMFs enhance transforming activities in breast cancer cells
To determine whether INT6-repressed HMFs can functionally affect transforming phenotypes of breast cancer cells, we first analyzed colony formation in soft agar with or without co-cultured HMFs. As shown in Figure 3A, while parental HMFs only weakly enhanced colony formation by MCF7 cells in soft agar, when INT6 was repressed in HMFs, colony formation increased 5-fold. We have obtained similar results with a variant of MCF7 cells [21] (data not shown). To further investigate this concept, we examined two preinvasive cell lines, MCF10AT [28] (Figure 3B) and SUM102 [29,30] (Figure 3C), and found that INT6-repressed HMFs can also readily enhance colony formation in soft agar. Next, we analyzed cell invasiveness using a Matrigel-coated invasion chamber and found that INT6-repressed HMFs more efficiently attract MCF7 cells across the Matrigel, suggesting that the invasiveness of cancer cells can be enhanced by INT6-repressed HMFs (Figure 3D). To determine whether SDF-1 is a key signaling molecule secreted by INT6-repressed HMFs to influence transforming activities in cancer cells, we added the SDF-1 receptor antagonist AMD3100 and found that HMF-induced colony formation in soft agar was greatly reduced (Figure 3E), with a concurrent reduction of the CAF marker α-SMA (Figure 3F). Collectively, these results demonstrate that INT6-repressed HMFs can function like CAFs to enhance transforming phenotypes of several breast cancer cell lines.

Figure 1. Reduction of Int6 in the fibroblasts in human breast tumors. (A) Left: gene expression data from the breast cancer TCGA project were directly exported from Oncomine, in which mRNA levels of INT6 in normal breast tissue and invasive ductal carcinomas were compared; INT6 mRNA levels were reported to be 50% lower in the latter. Right: stroma gene expression data from the Finak study available in Oncomine were analyzed to show that INT6 mRNA levels were approximately 42 times higher in the tissue surrounding the normal adjacent ducts than in the stroma in the tumor. (B) Control or INT6-repressed MCF7 cells were analyzed by Western blot (left) or IHC (right) with an anti-Int6 antibody. We note that, in agreement with our previous finding using GFP tagging [26], Int6 is mainly cytoplasmic. (C) A typical IHC experiment examining fibroblasts in the adjacent normal and tumor regions from the same human tumor sample. The top pictures were captured using a 10× objective, and one area in each was then examined with a 40× objective to reveal more details. Closed and open arrowheads mark Int6-positive vs. Int6-negative fibroblasts, respectively. (D) The histoscore differences between the normal and tumor regions were compared by the Wilcoxon signed rank test. While there was no difference in Int6 intensity between the normal and tumor regions (p = 0.66), a much lower percentage of Int6-positive fibroblasts was found in the tumor. As a result, all but two samples (marked red) show lower values for tumor fibroblasts. The mean normal and tumor histoscores are marked orange. On the right is a boxplot of the differences between normal and tumor histoscores (individual values shown as circles, mean difference shown in orange). Mean difference ± SEM = 27.0 ± 7.2 (p < 10−3, Wilcoxon signed rank test).
Int6 reduction may promote CAF formation via a MMT mechanism
The mechanism(s) by which CAFs are generated are poorly understood. However, Int6 reduction has been shown to induce EMT [15], suggesting that it may confer more mesenchymal traits on the cell. We thus investigated whether INT6 repression promotes CAF-like properties by inducing mesenchymal traits in HMFs. As shown in Figure 4A, when INT6 expression was silenced, protein levels of two mesenchymal markers, vimentin and N-cadherin, showed average increases of 1.5- and 2.5-fold, respectively, in two experiments.
Mesenchymal stem cells (MSCs) have been shown to produce CAF-like cells, presumably by a trans-differentiation process called MMT, and these cells can be detected when co-cultured with cancer cells [32,33]. As shown in Figure 4B, when MSCs were so induced to produce a CAF-like phenotype, we found that Int6 levels were decreased in the MSCs. To directly investigate whether INT6 repression in MSCs can also induce CAF-like activity, we repressed INT6 and found that α-SMA levels were indeed elevated in the MSCs (Figure 4C).

Figure 3 (caption, continued) A fraction of these cells was analyzed by Western blot to confirm the reduction in Int6 and α-SMA levels (data not shown). The rest were mixed with MCF7 cells before seeding in triplicate (n = 3). The colonies in each well (a representative area from each is shown below the graph) were counted after 15 days. We note that HMFs seeded alone do not form colonies in soft agar (columns 4 and 5). To confirm that the emerged colonies are of cancer cells, we tagged MCF7 cells with mCherry and found that all colonies were enriched with mCherry-positive cells (right). The HMFs were already tagged by GFP (marked by white arrowheads) [31], but we did not detect large colonies full of GFP-positive cells. (B) MCF10AT cells were examined as in panel A, except that we did not include the HMF-alone control. (C) SUM102 cells were examined as in panel B. (D) MCF7 cells were loaded in an invasion chamber and submerged in conditioned medium from control or INT6-repressed HMFs. Invaded cancer cells from five different areas (n = 5) on each insert membrane were counted. (E) Normal and INT6-repressed HMFs were mixed with MCF7 cells and seeded in triplicate (n = 3) into soft agar with or without AMD3100 (500 ng/mL). Colonies were counted after 15 days. (F) INT6-repressed HMFs were co-cultured with MCF7 cells for 4 days before AMD3100 was added (500 ng/mL). After 24 hours, the α-SMA levels in HMFs were measured by Western blot. α-SMA levels, normalized to the loading control GAPDH from the control cells, were set as 1 (n = 4 separate experiments).
Discussion
In this study, we show that INT6 expression is reduced not only in tumor cells but also in the stroma. Our IHC data illustrate that this reduction is mainly due to low levels of Int6 in fibroblasts. To test directly whether Int6 reduction in fibroblasts is functionally relevant to tumorigenesis, we repressed INT6 in an immortalized HMF cell line and found that these cells can promote anchorage-independent growth and invasion in co-cultured breast cancer cells. These HMFs show CAF-like phenotypes; in particular, their SDF-1 secretion is increased, and blocking SDF-1 signaling can substantially retard transformation. These results support the model that Int6 reduction in fibroblasts can induce them to a CAF-like state, creating a proinvasive tumor microenvironment.
While the presence of CAFs is widely accepted as a key factor for promoting tumorigenesis, the origin of these CAFs remains largely unclear. One key area of focus is the stem cells in various stromal components (e.g., endothelial vessels, fat) that appear to differentiate into CAFs under the influence of cancer cells. In addition, breast cancer frequently metastasizes to the bone, and it has been shown that MSCs in the bone can be reprogrammed to form CAFs, thus creating a microenvironment favorable for bone metastasis [34]. In addition to these possibilities, data from this study and others [21] support the concept that CAFs can also be derived from fibroblasts themselves. We further speculate that Int6 reduction may be one of the common steps leading to the formation of CAFs, because when MSCs were induced by co-cultured cancer cells to form CAFs, Int6 levels were reduced, and a direct reduction of Int6 in MSCs can induce a CAF-like phenotype.
While our data are consistent with the model that Int6 reduction can induce MMT, the molecular mechanism for how this occurs is complex and not resolved in this study. Int6 is also known as eIF3e, a subunit of translation initiation factor 3. We and others have shown that Int6 is not essential for translation; rather, it can selectively control the translation of a subset of genes [8,35]. In addition, we and others have found that Int6 can control the stability of key regulatory proteins by interacting with 26S proteasomes. Collectively, these data suggest that Int6 can control levels of key regulatory proteins via a combined and coordinated alteration of translation and proteolysis to globally reprogram cellular activity in an efficient manner. Targeting translation and inhibiting proteasomes have already emerged as powerful clinical approaches to treat cancers [36,37].
Conclusions
We have uncovered a previously unknown tumor suppressor activity for Int6: its loss can stimulate the conversion of stromal fibroblasts to a CAF-like state that enhances transforming activities in breast epithelial cells. Our data support a model in which CAFs can be generated not only from MSCs but also from fibroblasts themselves when Int6 activity is attenuated. It is possible that Int6 inactivation promotes a mesenchymal state by altering the translation and/or stability of a subset of key regulatory proteins.

Cell culture
HMFs were cultured as previously described [8,31,38]. MCF7, MDA-MB-231, MCF7-ras and SUM102 cells were cultured in Dulbecco's Modified Eagle Medium (GIBCO) supplemented with 5% fetal bovine serum, 100 IU/mL penicillin, 0.1 mg/mL streptomycin, and 2 mM glutamine (GIBCO). For co-culturing HMFs with cancer cells, the former were seeded first to allow for attachment before a 0.4-μm Millicell insert (EMD Millipore) was placed on top of them. Cancer cells were then loaded into the insert and grown on the membrane. The two cell populations thus shared the same culture medium but were kept physically separated. SDF-1 in the culture media was detected by an enzyme-linked immunosorbent assay (ELISA) kit from R&D Systems. The siRNA used to knock down INT6 expression was thoroughly tested as previously described by us and by others [8,39].
Assays for anchorage-independent growth and invasion
The cells were seeded and grown in soft agar as described previously [8], except that the cancer cells were mixed with HMFs at a ratio of 1:1. Ten thousand cells were seeded in each well of a 6-well plate. To block SDF-1, AMD3100 (Sigma Aldrich) was added to the soft agar culture medium. When mCherry-tagged MCF7 cells and HMFs were co-cultured, the colonies in soft agar were examined by fluorescence microscopy on day 9. The invasion assay was performed using the BD Bio-Coat Matrigel Invasion Chamber (BD Biosciences). After transfection with siRNA, HMFs (1 × 10 4 cells/well) were cultured for 3 days and then a Matrigel insert, loaded with 2.5 × 10 4 MCF7 cells, was placed in the well. In this setup, the medium for HMFs underneath the insert was the source of the chemoattractants. After 2 days, the noninvading cells on the top of the membrane were removed by scrubbing with a cotton-tipped swab, while the invaded cells on the opposite side were stained with Diff-Quik (TECHLAB). Invaded cells from 5 different fields of each membrane were counted under a light microscope with a 40× objective.
Semiquantitative RT-PCR
Total RNA was extracted using the RNeasy Mini Kit (Qiagen), and the cDNAs were generated using the SuperScript First-Strand Synthesis System (Invitrogen). The cycle number was adjusted to allow detection within the linear range of product amplification. The primers (5′ to 3′) were: SDF-1 forward, TGAGAGCTCGCTTTGAGTGA; SDF-1 reverse, CACCAGGACCTTCTGTGGAT; actin forward, GTGGGGCGCCCCAGGCACCA; actin reverse, CTCCTTAATGTCACGCACGATTTC.
Acquisition of human tissues
The samples used in this study are anonymized tissues collected between 2000 and 2012 from several sites in the United States and Europe by companies that specialize in tissue acquisition. Pathologists at the hospitals where the tissues were collected performed gross examination to set aside enough tissue for diagnostic purposes. The pathologists then cut the remaining tumor tissue in half, flash-froze half in liquid nitrogen and fixed the other half in formalin, to be embedded in paraffin later. The staff pathologists at the tissue acquisition companies then performed quality control to determine cellularity and to confirm the histology. Because these samples are anonymized, no clinical follow-up is possible. The Institutional Review Board at Baylor College of Medicine determined that these samples are exempt from review.
Immunohistochemistry and data analysis
We first used cell lines with different Int6 levels to optimize the IHC protocol. These cells were lifted with Versene (GIBCO) and fixed with 10% neutral buffered formalin for 2 hours. The cell pellet was solidified in 4% molten agar (Sigma) and placed in a tissue cassette before being embedded in paraffin. Prior to staining, 3-μm sections were cut, deparaffinized in xylenes and rehydrated through a graded alcohol series. Antigen retrieval was performed in 10 mM citrate buffer (pH 6) in a pressure cooker with heating (90°C). The sample was then washed in Tris-buffered saline with Tween 20, and 3% H₂O₂ was added to block endogenous peroxidases. Two antibodies against Int6 were tested, one described previously [8] and one from Sigma. Although both antibodies worked well, the signal-to-noise ratio was better with the former, so it was used throughout this study (1:200 dilution with cell pellets and 1:50 dilution with tissues; all incubations at room temperature). A negative control was created by treating the same set of samples with non-immune antibody (Dako) matched for species, type, isotype, and concentration. Peroxidase conjugation was performed using the Envision+ HRP Labelled Polymer Kit from Dako, and diaminobenzidine (DAB+, Dako) was the chromogen. The samples were also counterstained with Harris hematoxylin.
We selected 20 formalin-fixed, paraffin-embedded human breast tumors that contained "normal" stroma, defined as regions that are at least 2 mm away from the tumor, and then evaluated Int6 levels by IHC in many fibroblasts (57-255) from each region in each sample. These samples were scored by two pathologists independently. Histoscore (possible values 0-300) was computed as the product of the percentage of fibroblasts (possible values 0-100) that scored positive for Int6, and the intensity of Int6 staining in fibroblasts (possible values 0-3). Differences in fibroblast Int6 histoscore and intensity between the normal and tumor regions were assessed by the Wilcoxon signed rank test.
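As a minimal sketch of this scoring and comparison scheme (with synthetic values standing in for the study's per-sample measurements), the histoscore and the paired Wilcoxon signed rank test could be computed as follows:

```python
# Histoscore and paired Wilcoxon comparison; values are illustrative, not the study data.
import numpy as np
from scipy.stats import wilcoxon

def histoscore(percent_positive, mean_intensity):
    """Histoscore = % Int6-positive fibroblasts (0-100) x staining intensity (0-3); range 0-300."""
    return percent_positive * mean_intensity

rng = np.random.default_rng(0)
n_samples = 20
normal = histoscore(rng.uniform(40, 90, n_samples), rng.uniform(1, 3, n_samples))
tumor = histoscore(rng.uniform(10, 60, n_samples), rng.uniform(1, 3, n_samples))

# Paired, non-parametric comparison of the normal vs. tumor region of each sample
stat, p = wilcoxon(normal, tumor)
diff = normal - tumor
sem = diff.std(ddof=1) / np.sqrt(n_samples)
print(f"mean difference +/- SEM = {diff.mean():.1f} +/- {sem:.1f}, p = {p:.4g}")
```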
Statistical analysis
For general quantification of protein intensity and cell growth, values are shown as averages ± SEM. Unpaired Student's t-tests were performed to obtain p values.
Microscopy
Cells were seeded and grown on poly-L-lysine-coated coverslips (BD Biosciences) and later fixed in 3.7% paraformaldehyde, followed by permeabilization in 0.1% Triton X-100. Antibodies against α-SMA (1A4, Dako) and tubulin (Cell Signaling) were used after a 1:200 dilution (2 h at room temperature). The secondary antibody was either conjugated Alexa Fluor 488 or Alexa Fluor 594 from Invitrogen (1:250, 1 h at room temperature). The samples were mounted in Vectashield with DAPI (Vector). Images were captured using an Olympus IX70 microscope via a 60×/1.4 oil objective and deconvolved (constrained iterative) by Slidebook software (Intelligent Imaging Innovations) from a stack of 12 images collected at 0.5-μm intervals. To mark MCF7 cells with mCherry, we constructed the pMCherry vector by replacing the coding sequence of GFP in pEGFP-C1 (Clontech), which carries a neomycin selectable marker, with the coding sequence of mCherry. Subconfluent MCF7 cells were transfected and selected by neomycin. mCherry expression in these cells was confirmed by microscopy. | 2023-01-25T15:15:43.994Z | 2015-03-08T00:00:00.000 | {
"year": 2015,
"sha1": "50bc450fef37543d6d42806152c58069a789b8b7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13578-015-0001-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "50bc450fef37543d6d42806152c58069a789b8b7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
231721890 | pes2o/s2orc | v3-fos-license | Ebola virus antibody decay–stimulation in a high proportion of survivors
Neutralizing antibody function provides a foundation for the efficacy of vaccines and therapies 1–3 . Here, using a robust in vitro Ebola virus (EBOV) pseudo-particle infection assay and a well-defined set of solid-phase assays, we describe a wide spectrum of antibody responses in a cohort of healthy survivors of the Sierra Leone EBOV outbreak of 2013–2016. Pseudo-particle virus-neutralizing antibodies correlated with total anti-EBOV reactivity and neutralizing antibodies against live EBOV. Variant EBOV glycoproteins (1995 and 2014 strains) were similarly neutralized. During longitudinal follow-up, antibody responses fluctuated in a ‘decay–stimulation–decay’ pattern that suggests de novo restimulation by EBOV antigens after recovery. A pharmacodynamic model of antibody reactivity identified a decay half-life of 77–100 days and a doubling time of 46–86 days in a high proportion of survivors. The highest antibody reactivity was observed around 200 days after an individual had recovered. The model suggests that EBOV antibody reactivity declines over 0.5–2 years after recovery. In a high proportion of healthy survivors, antibody responses undergo rapid restimulation. Vigilant follow-up of survivors and possible elective de novo antigenic stimulation by vaccine immunization should be considered in order to prevent EBOV viral recrudescence in recovering individuals and thereby to mitigate the potential risk of reseeding an outbreak.
Limited EBOV outbreaks have been recorded since 1976 1 . The much larger 2013-2016 West African epidemic (28,610 cases) and the ongoing 2018 Eastern Zaire outbreak (3,188 cases as of September 2019) (https://www.who.int/emergencies/diseases/Ebola/drc-2019) in the Democratic Republic of the Congo (DRC) have been more extensive. These larger outbreaks have indicated that the virus can persist in some individuals, with the potential for subsequent viral transmission 2 . Because the number of Ebola outbreaks has been small, we have limited understanding of naturally induced immune responses, and our knowledge of vaccine-induced responses comes largely from animal models 3 . These models have indicated that total levels of IgG-binding antibodies can correlate with protection and with neutralizing antibody (nAb) responses, which can typically be low.
Outbreaks in humans have provided valuable information regarding therapeutic 4 and vaccine intervention strategies [5][6][7] for EBOV. More recently, nAbs have been the focus of therapeutic development [8][9][10][11][12] . A cocktail of monoclonal antibodies (mAbs) was administered during the 2013-2016 outbreak 12,13 , and trials conducted in the DRC showed evidence of efficacy 14 . In early 2015, two related studies (Ebola-Tx 15 and Ebola-CP 16 ) were established to recruit apparently healthy survivors of EBOV infection with the intent of using their convalescent plasma (CP) to treat disease 4,16,17 . We used CP from the donors of the Ebola-CP study (Supplementary Table 1a), in which samples were collected longitudinally (30-500 days), to better ascertain how nAb responses evolve. Such responses have previously been studied in both humans and primates with broad nAb activity 4,[18][19][20] .
We initially developed a range of solid-phase enzyme-linked immunoassays (EIAs), based on recombinant antigen from the Mayinga EBOV strain, to characterize antibody responses in potential donors of therapeutic CP 21 . To circumvent the difficulty of using replication-competent EBOV in expanding the analysis to characterize neutralization responses, we used single-round infectious pseudo-particle viruses (PPVs; see Methods). Optimal virus production and infectivity were identified by limiting dilution of a plasmid expressing the variant EBOV glycoprotein from the 2014 strain (EBOV14-GP; Extended Data Fig. 1a). Glycoproteins from three EBOV strains were used for PPV production: the early 2014 epidemic strain (pEBOV14-GP; accession KP096421 in the NCBI Nucleotide database (https://www.ncbi.nlm.nih.gov/nucleotide/)) 22 ; a modified variant (pEBOV14m-GP) with mutations that appeared early during the outbreak (Fig. 1b, Supplementary Table 2); and the 1995 Kikwit strain (pEBOV95-GP; accession KC242799) 23 , which was represented in the vaccine administered latterly in the 2013-2016 outbreak. EBOV14-GP PPVs demonstrated consistently lower infectivity than the other strains (Fig. 1a), presumably because of the previously described T544I amino acid mutation 24 . The A82V alteration (pEBOV14m-GP), which appeared early in the epidemic and was subsequently found in more than 90% of 2013-2016 isolates, was also reported to have a higher infectivity profile 25 . Notably, this genotype was not associated with altered disease pathogenicity in a primate model system 26 .
We used the above PPV infection assay to quantify nAb responses in CP donors (using limiting dilutions of plasma). To identify non-specific neutralization effects, we tested EBOV antibody-negative plasmas (n = 6) to find the range of non-specific inhibition (Extended Data Fig. 1b), and CPs with results that fell within this range were considered to lack neutralizing potential. We also used PPVs expressing the HIV-1 envelope protein to test a high-titre EBOV antibody-positive plasma that was within the non-neutralizing range (Extended Data Fig. 1b). The WHO Anti-EBOV Convalescent Plasma International Reference Panel (NIBSC 16/344) was used to demonstrate the neutralizing potential of EBOV antibody-positive sera (half-maximal inhibitory concentration (IC50) range, 6.33-7.01 log2[plasma dilution]; Extended Data Fig. 1c), which was comparable to previously published values 27 . We tested the robustness of the assay using plasma from EBOV survivors to inhibit the three PPV strains produced, each in different batches, with the assay repeated in two biologically independent experiments (Extended Data Fig. 1d).
CPs (n = 52) demonstrated a wide range of neutralization potential (Supplementary Table 1b-d); however, they had comparable profiles when assayed using all three EBOV PPV strains (Extended Data Fig. 2a-c). Half-maximal (IC50) and 70% (IC70) inhibitory concentrations were correlated (Extended Data Fig. 3). Within this cohort we found no differences in neutralizing titres among the three virus strains (Fig. 1c, Extended Data Fig. 1e). Neutralization of PPVs expressing pEBOV14-GP, with the lower infectivity profile (Fig. 1a), did not differ from that of the pEBOV95-GP strain isolated twenty years earlier, or the pEBOV14m-GP strain carrying early epidemic mutations, including the A82V variant associated with higher infectivity. However, individual CPs that had high IC50 and IC70 values against one virus strain did not necessarily neutralize the other two (Fig. 1d, Extended Data Fig. 1f), potentially highlighting epitope diversity among individual participants as well as virus strains. A subset of donor CPs (n = 5) with sequential samplings (totalling n = 30; Supplementary Table 1e) were assayed against the replication-competent EBOV (RCE) Makona 2014 isolate. There was a significant correlation between the two neutralization platforms (Fig. 1e, Extended Data Fig. 1g; r = 0.52, P < 0.0001). In addition, our neutralization data demonstrated a similarly significant correlation with total anti-EBOV reactivity measured using the double antigen bridging assay (DABA) (Fig. 1f (r = 0.50, P < 0.0001, IC50), Extended Data Fig. 1h (r = 0.55, P < 0.0001, IC70)). These data corroborate previous results 21 and further validate the PPV neutralization platform used here. The RCE and PPV assays demonstrated a stronger correlation than did neutralization versus DABA. The RCE and PPV assays target the same antibodies, whereas DABA measures all antibodies, and some individuals would have differential responses (this was observed only in a very small number of donors).
Although nAbs are thought to develop later in infection 28 , our data demonstrate their presence as early as 30 days after the end of infection, consistent with previous studies 29,30 , which found that nAb levels are detectable and persist following viral clearance. Cross-sectional analysis of antibody responses did not indicate notable changes in titres during the observation window (about 500 days; Fig. 2a, b), comparable to the findings for neutralization of fully replicating virus (Fig. 2c). However, within individuals, sustained decline was often followed by a sharp increase in antibody titre (Fig. 2a-c).
The observed declines and subsequent rises in nAb levels identified in some of the study participants (Fig. 2a-c) indicate de novo antigen stimulation after recovery. This was comprehensively demonstrated in donor CP-Pat-045, whose antibody reactivity in all EIAs, including nAbs, initially decreased over a 45-day period (sampled six months after recovery) before increasing suddenly over a 23-day period (Fig. 2d). It should be noted that all donors were tested for plasma EBOV RNA twice in Sierra Leone and were shown to be aviraemic before being discharged from Ebola treatment units. Furthermore, all samples received in the United Kingdom were re-tested upon arrival, and there was no detectable viraemia in the available samples taken in this observation period. Notably, the increase in antibody reactivity was higher against EBOV95-GP than EBOV14-GP, although participation in any vaccine study (where the immunogen would mimic the 1995 strain) was ruled out through self-reporting and later confirmed by the lead investigators of the two Ebola vaccine studies.
Following on from these observations, we tested samples from donor CP-Pat-045 and donors CP-Pat-018, -019, -021 and -049 using an additional panel of EIAs (targeting the EBOV glycoprotein, nucleoprotein and VP40 matrix protein) and found similar variations in antibody responses (Fig. 2e, f, Extended Data Fig. 4), indicating that antibody restimulation targeted viral antigens that are not present in current vaccines, and not just the glycoprotein. Furthermore, antibody responses showed similar variations in an IgG capture assay and a competitive antibody-binding immunoassay, both targeting the glycoprotein (Extended Data Fig. 5). Antibodies from donors CP-Pat-019 and -021 also increased before subsequently decreasing. The increases in EBOV antibodies from donor CP-Pat-045 occurred between mid-December 2015 and mid-January 2016. Two cases of Ebola were reported in mid-January 2016 in the northern districts, although Sierra Leone had been declared 'Ebola free' in November 2015 (https://www.theguardian.com/world/2015/sep/04/sierra-leone-village-in-quarantine-after-ebola-death). These donors and a control group of donors who did not demonstrate late rises in nAb reactivity were interviewed. All denied any intercurrent illness, known exposure to individuals with Ebola or participation in EBOV vaccine studies. It should also be borne in mind that, by definition, these convalescent donors had to individually meet the Sierra Leone National Safe Blood criteria for fitness to donate blood. Furthermore, interviews and physical examinations were undertaken at each attendance for plasmapheresis. Although re-exposure to EBOV cannot be excluded, we assume that the increase in antibody reactivity represents de novo antigenic stimulation at immune-privileged sites, boosting immunity. The presence and ongoing replication of EBOV at such sites has been described, with late clinical recrudescence and reporting of sporadic viral transmission [31][32][33][34][35][36] .
Given this high degree of intra-patient fluctuation in EBOV antibody responses, we used the available data to develop compartmental population pharmacodynamic models to quantify antibody stimulation and decay trends in this cohort. The strong association between nAbs and total antibody binding measured by DABA reactivity (Fig. 1f, Extended Data Fig. 1h) enabled us to use the more replete DABA dataset, which incorporates extensive longitudinal time-points (Supplementary Table 1f), to perform model selection for stimulation and decay trends. The best-fitting models for stimulation and decay were objectively identified by comparison of the log-likelihood-based Akaike information criterion (AIC) and Bayesian information criterion (BIC) metrics (see Methods, Supplementary Table 3a-c) as a one-compartment model with reduced stimulation at high antibody levels (Fig. 3a) and a two-compartment decay model with saturable recycling of antibody (Fig. 3b). The rate constant for stimulation of total antibody binding reactivity was 0.03 per day, equivalent to a doubling time of 23 days (Supplementary Table 4), whereas the decay model provided a variable, antibody-concentration-dependent rate constant equivalent to a half-life of 30 days at half the maximum antibody level measured (Supplementary Table 4). We then fitted the two best structural models, as selected using the DABA data, with the nAb titre values for the EBOV14-GP and EBOV95-GP strains and performed simulations (Fig. 3e-h). The calculated stimulation rate constants for the virus strain variants were 0.067 per day and 0.046 per day, respectively, possibly reflecting variation in epitope targets. The calculated endogenous nAb decay rates were similar for the different virus strains (0.025 per day for EBOV14-GP and 0.025 per day for EBOV95-GP) and matched the results initially found while modelling DABA reactivity (population mean of 0.028 per day; Supplementary Table 4). We calculated the resulting concentration-dependent half-lives at half-maximal observed antibody levels as 51 and 70 days for EBOV14-GP and EBOV95-GP, respectively. Notably, our findings are congruent with recent studies that have modelled endogenous antibody metabolism 37 . To our knowledge, this is the first population model of antibody level dynamics in EBOV survivors. We next simulated the stimulation and decay profiles for 1,000 survivors of EBOV using the developed population models (Fig. 3c-h). The interquartile range of total antibody levels varied widely for the simulated cohort when tracked longitudinally, indicating a wide-ranging array of doubling times and half-lives. The mean simulated doubling times were 18.93 days (interquartile range: 11.68-33.62), 10.36 days (9.96-10.81) and 13.76 days (9.52-23.56) for total binding antibody, nAbs against EBOV14-GP and nAbs against EBOV95-GP, respectively, indicating that overall the EBOV14-GP response was stimulated most quickly and with the least variability, which is reasonable given that this was the 2013-2016 epidemic strain. The median simulated endogenous decay half-lives were 20

Increasing antibody reactivity was shown in a high proportion of study participants, somewhere between 200 and 300 days after recovery, by longitudinal analysis of DABA (Fig. 2e, Extended Data Fig. 6), blocking EIA (Fig. 2e, Extended Data Fig. 4), IgG capture and competitive EIA (Extended Data Fig. 5), as well as by antibody neutralization measurements (Fig. 2d).
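For reference, the doubling times and half-lives quoted above follow from the fitted first-order rate constants via exponential kinetics:

$$t_{\text{double}} = \frac{\ln 2}{k_{\text{growth}}}, \qquad t_{1/2} = \frac{\ln 2}{k_{\text{decay}}}$$

so that, for example, the fitted stimulation rate constant of 0.03 per day corresponds to ln 2 / 0.03 ≈ 23 days, and 0.067 per day corresponds to ≈ 10 days, matching the values given above.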
This suggests that as antibody responses wane, antigen levels increase, resulting in a boost to the residual primary antibody response. When we compared the lowest observed antibody titres after decline with the highest antibody titres following stimulation (before further de novo decline), we observed a statistically significant difference in antibody levels (n = 18, P < 0.0014; Extended Data Fig. 7). We have simulated a typical decay-restimulation-decay profile based on population median parameters and starting levels, demonstrating the projected typical scenario for a substantial proportion of EBOV survivors (Fig. 3i).
Analyses of naturally occurring nAb responses in our Ebola-CP donor cohort revealed a high degree of variation in the strength and breadth of induced responses. Longitudinal analysis of B cell responses in EBOV-infected individuals has revealed stark changes in immunoglobulin subclass switching, increased somatic hypermutation and restimulation of naive B cells over time 38 . Our results suggest that EBOV antigen re-exposure contributes to these observed alterations in antibody phenotypes. It is encouraging to find strong neutralization cross-reactivity between EBOV strains representing outbreaks 20 years apart. This provides confidence that antibodies induced either through natural infection or via a vaccine should provide protection against future outbreaks. Our results indicate that the evolution of EBOV 39,40 , albeit slow, may result in altered neutralizing potential and therefore loss of vaccine efficacy (Fig. 1c, Extended Data Fig. 1f). Furthermore, if CP with broadly neutralizing activity were to be used in therapeutic protocols, then combining plasmas from several individuals may ensure a more successful outcome. The best option could be the preparation of a hyperimmune intravenous immunoglobulin blood product from a panel of donors, rather than relying on the use of individually sourced components as at present. The high frequency of de novo antigenic stimulation described within our cohort indicates a need for heightened surveillance of survivors to meet the potential clinical needs associated with virus recrudescence. Subclinical recrudescence may intensify the long-lasting post-Ebola sequelae suffered by most EBOV survivors 41,42 . The cohort of CP donors studied here, however, represents a highly selected group of healthy individuals, further chosen through the use of field testing to have plasma antibodies to EBOV in the upper quartiles of serological reactivity 21 . Therefore, they may represent the convalescent individuals who are least likely to suffer viral recrudescence. Occult virus persistence is therefore likely to be more frequent than previously predicted, supporting findings that the virus persists at sequestered sites in some individuals 2,43 . In a case study of an immunocompromised individual infected with HIV-1 (CD4 cell count 46 per μl), EBOV was detected in semen two years after the individual was discharged from the treatment unit 37 , further underlining the importance of immune competence for EBOV clearance. Longer and more frequent sampling would provide a more accurate indication of the extent of antibody restimulation in these survivors of Ebola.
The calculated mean half-life at median antibody levels allowed us to predict the time taken to reach 95% depletion of any given level after antigenic stimulation; given the exponential decay rate, we predict that the duration of six half-life periods (about 180-417 days) will result in depletion of antibody levels by more than 95%. As a result, protection of EBOV survivors from viral recrudescence mediated by acquired immunity is likely to last for 0.5-2 years after recovery unless boosted. Continued surveillance of EBOV survivors is warranted, considering the frequency of sub-clinical de novo antigenic stimulation we have described. Vaccination could be considered to boost protective antibody responses in survivors. This would also have a particular role if EBOV survivors are to be considered as plasma donors for use in future anti-Ebola passive immunotherapy.
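As a worked check of this projection, after n half-lives the remaining fraction of an exponentially decaying antibody level is 2⁻ⁿ:

$$\frac{X(t)}{X(0)} = 2^{-t/t_{1/2}}, \qquad 2^{-6} = \frac{1}{64} \approx 1.6\%$$

so six half-lives of roughly 30-70 days each (about 180-420 days in total) deplete antibody levels by well over 95%, consistent with the 0.5-2-year protection window stated above.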
Online content
Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-03146-y.
Ebola survivor cohort
Ebola virus disease survivors (n = 115), previously described 42 , holding certificates issued by Ebola treatment centres on discharge, were recruited as potential donors through 34 Military Hospital, Freetown, and the Sierra Leone Association of Ebola Survivors as participants in the study 'Convalescent plasma (CP) for early Ebola virus disease in Sierra Leone'. The study (ISRCTN13990511 & ACTR201602001355272) was approved by the Scientific Review Committee and Sierra Leone Ethics, authorized by the Pharmacy Board of Sierra Leone (PBSL/CTAN/MOHSCST001) and sponsored by the University of Liverpool. All participants provided written consent for data collected in this study.
Volunteers were considered suitable to donate plasma if they tested negative for blood-borne infections (hepatitis B, hepatitis C, HIV, malaria and syphilis), had had two documented negative EBOV PCR tests 72 h apart, had no acute febrile illness and had no comorbidity, such as heart failure, that might place them at increased risk of adverse events during apheresis. Volunteers were not excluded if they exhibited indications of post-Ebola syndrome (PES; for example, musculoskeletal pain, headache or ocular problems), although such complaints were noted and subsequently contributed to the characterization of PES 41,42,44 . The majority of the participants were male (n = 82), with ages ranging from 18 to 52 years (median 27 years). The female participants (n = 33) ranged in age from 18 to 42 years (median 27 years) (Supplementary Table 1a).
For transfusion safety reasons, donor identity numbers were not confidential to donors during the conduct of the study; for the avoidance of doubt, donor identity numbers have since been dissociated. All participants (n = 115) were tested using DABA, blocking EIA and IgG capture immunoassays 21 . PPV antibody neutralization assays were performed with a subset of participants not selected on any criteria other than sample availability (n = 52). The compartmental population pharmacodynamics model was developed on the more replete DABA dataset using those participants with longitudinal data (n = 51) (Supplementary Table 1f).
EBOV PPV construct design
Three viral strain glycoprotein genes were synthesized and cloned into pcDNA3.1 by GeneArt: a 2014 isolate (KP096421) 22 ; a variant carrying the A82V, T230A, I371V, P375T and T544I mutations (Fig. 1b), identified by analysis of EBOV strains sequenced between March and August 2014 39 ; and the 1995 Kikwit isolate (AY354458) 50 . The latter was used in ring vaccinations during the 2014 epidemic.
EBOV PPV production
We used the HIV-1 SG3 ΔEnv and EBOV-GP expression plasmids, co-transfected into HEK293T cells, to generate infectious PPV stocks 47,51,52 . The EBOV-GP-pseudotyped lentiviral system generates single-cycle infectious viral particles. HEK293T cells were plated at a density of 1.2 × 10⁶ in a 10-cm diameter tissue culture dish (Corning: 430167) in 8 ml complete DMEM and incubated overnight. The cells were transfected with 2 μg pSG3Δenv along with 0.285 μg of a plasmid expressing EBOV-GP using a cationic polymer transfection reagent (polyethylenimine, Polysciences: 23966-2), in the presence of OptiMEM (Invitrogen: 31985-070). OptiMEM was replaced 6 h after transfection with 8 ml complete DMEM. Seventy-two hours after transfection, supernatant containing the generated stock of single-cycle infectious EBOV-GP-pseudotyped virus particles was harvested, passed through a 0.45-μm filter and stored in aliquots at −80 °C. EBOV-GP plasmid (285 ng per 10-cm culture dish) was used to produce a large virus stock that was tested for infectivity (Fig. 1a), then pooled, aliquoted and stored at −80 °C.
EBOV PPV infection
EBOV infectivity was determined through infection of TZM-bl cell lines, in which luciferase activity (expressed from the LTR promoter) is under the control of Tat expressed from the HIV-1 backbone. We used 100 μl EBOV-GP virus to infect 1.5 × 10⁴ TZM-bl cells per well for 6 h in a white 96-well plate (Corning: CLS3595). Following infection, 150 μl per well of complete DMEM was added to the cells. Forty-eight hours after infection, medium was discarded from the wells, cells were washed with phosphate-buffered saline (PBS, ThermoFisher: 12899712) and lysed with 30 μl cell lysis buffer (Promega: E1531), and luciferase activity was determined by luciferase assay (Promega: E1501) using a BMG Labtech FLUOstar Omega luminometer. Negative controls included pseudotyped virus bearing no glycoproteins and TZM-bl cells alone, which routinely resulted in luminescence of 3,000-7,000 relative light units (RLU).
EBOV PPV neutralization
Plasma samples (n = 52) from Ebola convalescent plasma donors and healthy blood donors (n = 6) were heat treated at 56 °C for 30 min and centrifuged for 15 min at 13,000 rpm. Aliquots were then stored at −80 °C. Plasma samples were serially diluted twofold in complete DMEM; 13 μl of each plasma dilution was incubated with 200 μl EBOV-GP PPV for 1 h at room temperature. We used 100 μl of the virus/plasma dilution to infect TZM-bl cells as described above. Luciferase activity readings of neutralized virus were analysed (i) by taking 0% inhibition as the infection value of the virus in the absence of convalescent plasma, included in each experiment, or (ii) by taking 0% inhibition as the infection value of two consecutive high dilutions that did not inhibit virus entry. Both methods produced highly correlated results (Extended Data Fig. 2d) and the latter was used. The neutralization potential of a CP was expressed as the plasma dilution that reduced viral infectivity by 50% (IC50) or by 70% (IC70).
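The IC50 and IC70 values can be obtained by fitting an inhibition curve to the luciferase-derived percent-inhibition readings across the dilution series. The sketch below is one plausible way to do this with illustrative data, not the analysis pipeline actually used in the study; a two-parameter logistic curve is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

# Percent inhibition versus log2 plasma dilution (illustrative data, not study values)
log2_dilution = np.arange(4, 13)  # 1:16 ... 1:4096
inhibition = np.array([95, 93, 88, 75, 58, 40, 22, 10, 5], dtype=float)

def logistic(x, ic50, slope):
    """Two-parameter logistic: ~100% inhibition at low dilution, ~0% at high dilution."""
    return 100.0 / (1.0 + np.exp(slope * (x - ic50)))

(ic50, slope), _ = curve_fit(logistic, log2_dilution, inhibition, p0=(8.0, 1.0))

# Dilution giving 70% inhibition, by inverting the fitted curve
ic70 = ic50 + np.log(100.0 / 70.0 - 1.0) / slope
print(f"IC50 = {ic50:.2f} log2[plasma dilution], IC70 = {ic70:.2f} log2[plasma dilution]")
```

Because 70% inhibition requires more antibody than 50%, the fitted IC70 falls at a lower (less dilute) log2 dilution than the IC50, as expected.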
Double antigen bridging assay (DABA).
We measured EBOV GP targeting antibody present in Ebola survivor CP samples. EBOV GP antigen, Mayinga Zaire EBOV strain (IBT Bioservices: 0501-016) was pre-coated onto the 'solid phase', while a second antigen conjugated to horseradish peroxidase (HRP) acted as the detector, binding to EBOV antibodies captured on the solid-phase antigen in the first incubation step. Antibody reactivity was expressed as arbitrary units per ml (AU/ml) as compared to a standard comprising five reactive donor samples that were pooled and set as 1,000 AU/ml 21 .
Blocking EIA. Antibody levels in CP to EBOV GP (glycoprotein), VP40 and NP (nucleoprotein) were determined by blocking of the binding of specific rabbit EBOV anti-peptide (GP, VP40, NP) antibodies (IBT Bioservices) to EBOV Makona virion-coated microplates. Microplate wells were coated with a 10,000-fold dilution of concentrated Ebola virions. EBOV patient CP and negative control CP dilutions (1/100) were reacted on virion-coated microplates for 4-6 h. CP dilutions were removed and plates were then reacted with EBOV anti-peptide antibodies. Bound rabbit antibodies were detected by species-specific horseradish peroxidase conjugate (DAKO: P03991-2). Evidence of EBOV protein-specific human antibodies in CP was determined by blocking of the binding of the antipeptide antibody compared to the blocking of binding by the CP negative control. Results were expressed as a percentage of blocking of the CP negative control reactivity.
IgG capture assay. IgG antibodies present in CP were captured onto a solid phase coated with rabbit hyperimmune anti-human γ-Fc and interrogated in a second incubation with HRP-conjugated EBOV GP as above. Reactivity was expressed as binding ratios derived as sample OD/cut-off OD 21 .
Plaque reduction neutralization test
The wild-type strain used for assays was EBOV Makona (GenBank accession number KJ660347) 21 , isolated from a female Guinean patient in March 2014 (virus provided to PHE Porton by S. Günther, Bernhard-Nocht-Institute for Tropical Medicine, Hamburg, Germany). The virus was propagated in Vero E6 cells and culture supernatant virions were concentrated by ultracentrifugation through a 20% glycerol cushion; pellets were resuspended in sterile PBS at a titre of 10⁹ focus-forming units (FFU) per ml.
The wild-type virus neutralizing antibody titre in CP was determined by reacting serial dilutions of CP with 100 FFU of EBOV virions for 1 h at room temperature to allow antibody binding. The EBOV virion CP mixture was adsorbed to Vero E6 monolayers for 1 h and then overlaid with cell growth medium containing 1% (v/v) Avicel (Sigma-Aldrich). After 80-90 h, EBOV foci were visualized by immunostaining with anti-VLP (Zaire EBOV) antibodies (IBT Bioservices). All work was undertaken under ACDP containment level 4 conditions.
EBOV antibody decay and restimulation modelling
Compartmental population analysis was performed to model the stimulation and decay of antibody levels. All modelling and simulations were performed using Pmetrics version 1.4 53 within R version 3.2.2 54 . Antibody levels of EBOV survivors were sampled a different number of times, at varying intervals after convalescence, owing to limitations of follow-up adherence in the field. Different parts of decay-stimulation profiles were therefore captured, with only a few instances of contiguous decay-stimulation or stimulation-decay profiles being recorded. Stimulation and decay data were therefore modelled separately to make the most efficient use of the data. Antibody stimulation or decay trends with two or more data points were included in the population analysis, as this methodology has been proven to maximally use sparse clinical data for drug development 55,56 . All points were plotted and visualized. An 'ascend' or a 'descend' was defined according to the prevailing trend; a 20% alteration in direction was tolerated as part of the prevailing ascend or descend, as appropriate.
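One plausible reading of this segmentation rule, sketched with hypothetical data (the study's exact implementation is not specified), is:

```python
def split_trends(times, levels, tolerance=0.20):
    """Split a longitudinal antibody series into 'ascend'/'descend' segments.
    A move against the prevailing trend smaller than `tolerance` (as a fraction
    of the segment's running extreme) does not end the segment."""
    segments, start, direction, extreme = [], 0, None, levels[0]
    for i in range(1, len(levels)):
        step_up = levels[i] > levels[i - 1]
        if direction is None:
            direction = "ascend" if step_up else "descend"
        against = (direction == "ascend") != step_up
        if against and abs(levels[i] - extreme) > tolerance * extreme:
            # Counter-movement exceeds the tolerance: close the segment
            segments.append((direction, times[start], times[i - 1]))
            start, direction = i - 1, "ascend" if step_up else "descend"
            extreme = levels[i - 1]
        else:
            # Track the running extreme of the prevailing trend
            if (direction == "ascend" and levels[i] > extreme) or \
               (direction == "descend" and levels[i] < extreme):
                extreme = levels[i]
    segments.append((direction, times[start], times[-1]))
    return segments

# Hypothetical sampling days and antibody levels
print(split_trends([0, 30, 60, 90, 120], [100, 80, 70, 90, 130]))
# -> [('descend', 0, 60), ('ascend', 60, 120)]
```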
Structural model
Structural model selection was performed for the most replete dataset, the DABA data. Model fitting and selection were performed using previously published protocols for fitting clinical datasets, as described below 57,58 . In brief, linear regression (intercept close to 0, slope close to 1) was used to assess the goodness-of-fit of the observed versus predicted values; the coefficient of determination of the linear regression and minimization of the log-likelihood, AIC and BIC values were used for model selection.
A drop in BIC of more than 2 is generally considered significant, with 2-6 indicating positive-to-strong evidence, 6-10 indicating strong evidence and >10 indicating very strong evidence 59 .
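For concreteness, AIC and BIC are computed from a fitted model's log-likelihood as AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L, where k is the number of estimated parameters and n the number of observations. A minimal sketch with illustrative values (not the study's fitted log-likelihoods):

```python
import numpy as np

def aic_bic(log_likelihood, n_params, n_obs):
    """Akaike and Bayesian information criteria from a model's log-likelihood."""
    aic = 2 * n_params - 2 * log_likelihood
    bic = n_params * np.log(n_obs) - 2 * log_likelihood
    return aic, bic

# Compare two candidate decay models fitted to the same data (illustrative values)
for name, ll, k in [("one-compartment", -412.3, 3),
                    ("two-compartment + recycling", -398.7, 6)]:
    aic, bic = aic_bic(ll, k, n_obs=150)
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```

The model with the lower AIC/BIC is preferred, with the BIC difference interpreted against the evidence thresholds quoted above.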
Further details of this analysis, leading to the choice of models and analysis of the fit of the models to the data, can be found in Supplementary Tables 3, 4 and Extended Data Figs. 8, 9. All chosen structural models showed strong-to-very-strong evidence of describing the data best out of the compared models. Two structural models were tested for antibody stimulation: a one-compartment stimulation model and a one-compartment model with saturable stimulation, based on the logistic growth model. The logistic growth model framework allows for plateauing antibody levels, as observed for a subset of stimulation profiles. For antibody decay, four structural models were tested: a one-compartment decay model with first-order elimination, a two-compartment decay model with first-order elimination from the central compartment, and the above two structural models with saturable recycling offsetting the endogenous elimination rate.
Antibody stimulation was best modelled using the one-compartment model with saturable stimulation, as described by equation (1):

$$\frac{dX_1}{dt} = k_{\mathrm{growth}}\, X_1 \left(1 - \frac{X_1}{K_{\max}}\right) \qquad (1)$$

where X1, k_growth and K_max denote the antibody level in the compartment, the first-order rate constant for endogenous antibody stimulation and the maximal antibody level at which stimulation plateaus, respectively. For antibody decay, the two-compartment decay model with saturable FcRn-dependent recycling (equations (2)-(4)), as used to model antibody decay in multiple laboratory studies 37 , was found to best describe the data:

$$\frac{dX_1}{dt} = -k_{\mathrm{decay}}\, X_1 - k_{cp}\, X_1 + k_{pc}\, X_2 \qquad (2)$$

$$\frac{dX_2}{dt} = k_{cp}\, X_1 - k_{pc}\, X_2 \qquad (3)$$

$$k_{\mathrm{decay}} = k_{\mathrm{end}} - \frac{V_{\max}}{K_m + X_1} \qquad (4)$$

where X1 and X2 are the antibody levels in the central and peripheral compartments. The rate constants k_decay, k_cp and k_pc denote the empirically observed antibody-level-dependent rate constant and the first-order rate constants to and from the peripheral compartment, respectively. k_decay is in turn dependent on the endogenous decay rate k_end, which is offset by an antibody-dependent saturable recycling rate described by a Michaelis-Menten term with parameters V_max and K_m, denoting the maximal recycling rate and the antibody level at which half the maximal recycling rate occurs, respectively. The optimal structural models above were then used to model the sparser nAb assay datasets, allowing for comparability between DABA and nAb model parameters.
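A minimal numerical sketch of these two models, assuming the equation forms reconstructed above and purely illustrative parameter values (not the fitted population estimates), could look like:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-in parameters, not the fitted Pmetrics estimates
k_growth, K_max = 0.05, 2000.0            # stimulation model (eq. 1)
k_end, k_cp, k_pc = 0.03, 0.01, 0.02      # decay model (eqs. 2-3)
V_max, K_m = 10.0, 500.0                  # saturable recycling offset (eq. 4)

def stimulation(t, y):
    x1 = y[0]
    return [k_growth * x1 * (1.0 - x1 / K_max)]

def decay(t, y):
    x1, x2 = y
    k_decay = k_end - V_max / (K_m + x1)  # recycling slows decay at low levels
    return [-k_decay * x1 - k_cp * x1 + k_pc * x2,
            k_cp * x1 - k_pc * x2]

t = np.linspace(0, 300, 301)
rise = solve_ivp(stimulation, (0, 300), [100.0], t_eval=t)
fall = solve_ivp(decay, (0, 300), [rise.y[0, -1], 0.0], t_eval=t)
print(f"peak ~{rise.y[0, -1]:.0f} AU/ml; level after 300 d of decay ~{fall.y[0, -1]:.0f} AU/ml")
```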
Generally, individual predicted versus observed value correlations were excellent (R² > 0.8) and population predictions versus observed values were good (R² > 0.6). Monte Carlo simulations were performed using Pmetrics as previously described 57,58 . In brief, 1,000 individuals were randomly sampled from the parameter distributions defined in the population models of antibody stimulation and decay. The interquartile range of modelled antibody levels was then plotted longitudinally for average starting antibody levels for decay and stimulation profiles (Fig. 3c-h).
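As a simplified illustration of such a simulation (first-order decay only, with an assumed log-normal population distribution rather than the full Pmetrics model):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 1000

# Sample decay rate constants from a log-normal population distribution;
# the median and spread are illustrative, not the fitted estimates.
k_end = rng.lognormal(mean=np.log(0.028), sigma=0.4, size=n_sim)
start_level = 1000.0
t = np.arange(0, 501)  # days

# Simple first-order projection for each simulated survivor
levels = start_level * np.exp(-np.outer(k_end, t))
q25, q50, q75 = np.percentile(levels, [25, 50, 75], axis=0)
print(f"day 200 level, median (IQR): {q50[200]:.0f} ({q25[200]:.0f}-{q75[200]:.0f}) AU/ml")
```

Plotting q25 and q75 against time reproduces the kind of longitudinal interquartile band shown in Fig. 3c-h.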
With regard to the choice to model the stimulation and decay data separately: in principle, an immune response followed by a gradual return to baseline post-stimulus could be characterized by a single pharmacodynamic model. In the simplest form, the dynamics can be described by a single-compartment model with the stimulus placed on the input rate and first-order elimination, although more mechanistic models based on known pharmacology may also be appropriate if the data are of sufficient quality to estimate the unknown model components. In a controlled trial setting, the onset of a stimulus event would be controlled and the subsequent immune response measured relative to this origin with sufficient frequency to capture the dynamics over time. By contrast, this study was observational, with plasma samples taken intermittently that captured only part of the changing levels in the nAbs: either the growth or decay phase in most cases, but on occasion both. Given the lack of detectable viral load and the observational nature of the nAb response data, the ability to fit a single, integrated pharmacodynamic model to the data is limited. The most tractable solution in this case was to split the data into two groups and model them separately: the first model quantifying the rate of increase in nAbs and the second model describing the subsequent decay. The antibody decay model was based on ref. 37 . While this two-stage approach did not allow data from the 'stimulation' phase to inform the model fit of the 'decay' phase, and vice versa, it did enable accurate and quantitative characterization of both the stimulation and decay dynamics, which had not been characterized for EBOV disease before this study, and which may be used to inform future work in this area and on other impactful viral diseases such as COVID-19.
Statistical analysis
Statistical analyses of data were implemented using GraphPad Prism 6.0 software. Unpaired sample comparisons were conducted for all data; individual figure legends state the corresponding statistical tests performed. These include parametric and non-parametric t-tests (Student's t-test and Mann-Whitney U-test); parametric and non-parametric ANOVAs (ordinary ANOVA and Kruskal-Wallis test). *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
Data availability
All datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request. Source data are provided with this paper.

Extended Data Fig. 1 (caption) a, The resulting pseudotyped virus, quantified by an HIV-1 p24 capsid ELISA (squares), was tested for infectivity in TZM-bl cells as measured by luciferase activity (data are mean ± s.d.). The red marked square identifies the glycoprotein concentrations that can be used in the assay. b, Inhibition profiles with negative plasma donated by six individuals (grey squares), indicating no specific plasma inhibition during the neutralization assay. All negative assays and plasmas were combined to define the range within which negative plasma controls were acceptable (red squares), thus defining a valid assay. The blue line shows the lack of reactivity against the HIV-1-enveloped pseudotyped virus by EBOV-neutralizing convalescent plasma (CP) (squares and circles indicate the median and the vertical lines the standard error). c, Neutralization profiles of pEBOV14-GP by the WHO reference panel of anti-EBOV CP. The standard identifiers are shown. d, Reproducibility of the neutralization assay determined by measuring the IC50 of CP on the three EBOV isolates (yellow, pEBOV14-GP; purple, pEBOV95-GP; green, pEBOV14m-GP). The two-tailed parametric paired t-test was used. e, Neutralization potential of CPs against three virus strains (pEBOV14-GP, n = 83; pEBOV95-GP, n = 69; pEBOV14m-GP, n = 77) expressed as IC70 (data are presented as mean values ± s.d.; Kruskal-Wallis test was performed). f, Delta-IC70 neutralization titres between virus strain pairs for each post-cure study participant. g, Positive association between PPV IC70 titres and the live virus plaque reduction neutralization test (PRNT). h, Positive association between PPV IC70 neutralization titres and the double antigen bridging assay (DABA).

Extended Data Fig. 5 (caption) The longitudinal G-capture (pink) and competitive (green) EIAs were performed against the glycoprotein as previously described 21 . The antibody reactivities were overlaid with pseudotyped virus particle IC50 neutralization values against EBOV14-GP (light blue) and EBOV95-GP (dark blue). | 2021-01-29T05:26:46.819Z | 2021-01-27T00:00:00.000 | {
"year": 2021,
"sha1": "cc1d031f4922b1fae2add70213eaa345c4b84caa",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/s41586-020-03146-y.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc1d031f4922b1fae2add70213eaa345c4b84caa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268983764 | pes2o/s2orc | v3-fos-license | Alterations of non-motor symptoms in Parkinson's disease, after of subthalamic deep brain stimulation
The effect of subthalamic deep brain stimulation (STN DBS) on motor symptoms of Parkinson's disease (PD) has been thoroughly analyzed. The influence of STN DBS on non-motor symptoms (NMS) is still debatable. We analyzed the effect of STN DBS on NMS in PD. Materials and methods: 17 PD patients were qualified for STN DBS according to CAPSIT-PD criteria. Demographic data and clinical status according to the Hoehn–Yahr (H–Y) Scale were recorded. The efficacy of STN DBS on NMS was measured with the NMS Scale before surgery and twelve months after surgery. Results: The global NMS Scale score decreased by 1–75 points (mean 25.67) in 12 patients. No improvement, or deterioration, was reported in 5 patients (29%). The mean age of the improved group was 56 years, versus 59.8 years in the non-improved group. The mean duration of PD in the improved group was 11 years, versus 21 years in the non-improved group. In the non-improved group, four patients were rated 4 and one patient 3 on the H–Y Scale. In the improved group, two patients were rated 4, six patients 3 and four patients 2 on the H–Y Scale. The most significant improvement in the NMS Scale was recorded in domain IV (Perceptual problems/Hallucinations; by 77%), domain I (Cardiovascular including falls; by 68%) and domain III (Mood/Cognition; by 58%). Deterioration in the NMS Scale was reported in domain IX (Miscellaneous; by 10%) and domain VII (Urinary; by 6%). Conclusions: STN DBS has a positive impact on NMS among PD patients. The most important factors influencing improvement are young age, short disease duration and good clinical status measured with the H–Y Scale. The NMS Scale domains that tend to respond best are domains I, III and IV. The NMS Scale domains that may deteriorate after STN DBS are domains VII and IX.
Parkinson's disease (PD) is one of the most common movement disorders and the second most common progressive neurodegenerative disease, with age-dependent increasing prevalence (1-3% in the population aged over 65 years). 1 PD is a serious medical and socio-economic problem and remains incurable to this day. Loss of dopaminergic neurons in the substantia nigra is considered a hallmark of PD. 1 It is believed that reduced dopaminergic input is responsible for the main motor symptoms of PD (bradykinesia, rigidity, resting tremor and postural instability) and explains the remarkable clinical response to dopamine replacement therapy. 2 Progression of PD symptoms despite initially effective conservative treatment (the pharmacological "honeymoon") led to a renaissance of neurosurgical neuromodulation, including deep brain stimulation (DBS). The introduction and popularization of DBS was the next milestone in the understanding and treatment of PD. 1 Until now, the treatment of movement disorders has focused on developing treatment algorithms that can extend patients' lives and positively influence their quality of life by alleviating motor symptoms of PD. 3,4,5,1,2 The development of the animal laboratory model of PD in 1976 allowed the identification of alterations in the direct and indirect cortical-basal-thalamic-cortical loops that are responsible for the development of PD symptoms. Recently it has become apparent that the neuropathological changes of PD extend beyond the basal ganglia system, also affecting the olfactory, limbic and autonomic systems, with morphological changes in the brain stem and cortex.

Abbreviations: DBS, deep brain stimulation; LEDD, levodopa equivalent daily dose; NMS, non-motor symptoms; PD, Parkinson's disease; STN, subthalamic nucleus.
Even though non-motor symptoms (NMS) of PD, such as pain, fatigue, changes in blood pressure, restless legs, bladder and bowel problems, skin sweating, sleep alterations, swallowing and saliva control, communication issues and eye control, remain essential, until recently they were not a main focus of research. 6,7,8,9,10 NMS can be measured in a repeatable manner with the NMS Scale provided by the Movement Disorders Society, in which cardiovascular symptoms, alertness, mood, cognition, hallucinations, attention, memory, gastrointestinal tract symptoms, urinary and sexual functions, pain, taste and smell, weight changes and excessive sweating are evaluated. 11 The extra-basal ganglia pathological changes of the brain are considered to be responsible for the NMS that influence the quality of PD patients' lives.
Objectives
To evaluate the influence of STN DBS for PD on NMS. To identify the group of PD patients that will benefit the most from STN DBS with respect to NMS. To identify the domains of the NMS Scale that tend to respond best and worst to STN DBS for PD.
Materials and methods
Seventeen eligible PD patients 12 qualified by movement disorders specialists for STN DBS according to the CAPSIT-PD criteria 13 entered the study. Consent forms for the study were obtained before each interview. Ethics committee approval was not required, as the NMS Scale provided by the Movement Disorders Society belongs to the standard test package of PD evaluation. 4,1,2 Demographic data were collected: initials, age, gender, disease duration, clinical status according to the H-Y Scale and date of birth of eight female and nine male patients were analyzed. The mean age was 57.17 years (range 30–75 years; standard deviation, SD = 12.084). The mean H-Y Scale score was 3.1 (four patients were rated 2, seven patients were rated 3 and six were rated 4; SD = 0.781). The mean PD duration before STN DBS was 12.76 years (range 7–22 years; SD = 4.789). The 30-question NMS Scale evaluates nine domains of NMS in PD, with a total score from 0 (no impairment) to 360. The presurgical interviews for the NMS Scale were carried out at the in-patient clinic before implantation. On the day before surgery, patients underwent MRI and CT. The stereotactic frame was placed under local anesthesia. MRI and CT images were fused with a neuronavigation system and the coordinates of the STN were calculated using direct and indirect methods. During surgery, the neurophysiological evaluation was conducted by a neurophysiologist and the neurological state of the subjects during macrostimulation was evaluated by a neurologist in the operating theater. The characteristic pattern of the STN was recorded bilaterally in each patient. Macrostimulation was performed later and permanent DBS electrodes were implanted. The internal pulse generators were implanted in the subjects' chests. On the day following the surgery, a control brain CT scan was performed. The stimulation was initialized after implantation. The main outcome measure was the NMS Scale score. The interviews for the NMS Scale and adverse effects were carried out at the out-patient clinic six to twelve months after implantation. Two subgroups were identified:
- improved group: patients with a decreased global NMS Scale score;
- non-improved group: patients with an unchanged or increased global NMS Scale score.
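For transparency about how these quantities are computed, a minimal sketch is given below using hypothetical NMSS scores; the arrays, values and the per-domain percentage formula are illustrative assumptions, not the study data or the authors' code.

```python
import numpy as np

# Hypothetical NMSS scores (rows = patients, columns = the nine domains);
# illustrative values only, NOT the study data.
pre = np.array([[12, 8, 10, 4, 3, 5, 6, 2, 9],
                [20, 5, 14, 9, 2, 3, 8, 1, 6]])
post = np.array([[5, 6, 4, 1, 2, 5, 7, 2, 10],
                 [22, 6, 15, 9, 3, 3, 9, 1, 7]])

# Global NMSS score = sum over the nine domains (total range 0-360).
global_pre, global_post = pre.sum(axis=1), post.sum(axis=1)

# Subgroup rule from above: decreased global score -> 'improved';
# unchanged or increased -> 'non-improved'.
improved = global_post < global_pre

# One natural way to compute the domain-level mean percentage change
# (negative values indicate improvement), as summarized in Fig. 4.
domain_change = 100.0 * (post.sum(axis=0) - pre.sum(axis=0)) / pre.sum(axis=0)

print(improved)                 # e.g. [ True False]
print(domain_change.round(1))
```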
Results
Seventeen patients completed evaluation before implantation and after STN DBS. All patients underwent a standard battery of tests following surgery that included UPDRS part III and psychological evaluation. LEDD (levodopa equivalent daily dose) and its changes were recorded before and after surgery. In the analyzed group of patients, the LEDD changed after surgery. The global NMS Scale score of the analyzed group decreased by 29%. The global NMS Scale score decreased by 1–75 points (mean 25.67, SD = 24.889) in 12 patients (71%) (Table 1, Fig. 1).
The non-improved group included four patients with a global NMS Scale score increased by 1–20 points (mean 14.25) and one patient with an unchanged NMS Scale score. The mean age of the non-improved group (comprising four females and one male) was 59.8 years (SD = 6.685). The mean age of the group with reported improvement (which included four females and eight males) was 56 years (SD = 13.846) (Table 1, Fig. 2). In the non-improved group, four patients were rated 4 and one patient 3 on the H-Y Scale. In the improved group, two patients were rated 4, six patients 3 and four patients 2 on the H-Y Scale (Table 1). The mean duration of PD in the non-improved group was 17 years (SD = 4). The mean duration of PD in the group that reported improvement was 11 years (SD = 4) (Table 1, Fig. 3). The most significant improvement in the analyzed group was measured in domain IV (Perceptual problems/hallucinations), by 77%, in domain I (Cardiovascular including falls), by 68%, and in domain III (Mood/Cognition), by 58%. Deterioration was reported in domain IX (Miscellaneous), by 10%, and in domain VII (Urinary), by 6% (Fig. 4). No adverse effects related to the therapy were reported in the analyzed group.
Discussion
The progressive degeneration of the dopaminergic system in PD is responsible for the appearance of side effects caused by long-term dopaminergic therapy, such as motor fluctuations and dyskinesias. Those motor symptoms are poorly managed by oral therapy, and more than 10% of PD patients should be qualified for DBS. 13,1 The best candidates are those with severe motor fluctuations (severe off-medication conditions and substantial benefit from L-dopa therapy). The main exclusion criteria are suspicion of an atypical parkinsonian syndrome or the presence of psychiatric (depression, hallucinations) or cognitive alterations. 1 During the last three decades, functional neurosurgery has developed rapidly, mainly due to the introduction of DBS. Previously published studies have confirmed the significant improvement of motor symptoms observed in PD after STN DBS. 3,4,5,2 Long-term studies provided evidence that DBS-induced motor improvement was still evident at 8-year follow-up. 15,8 However, it has to be kept in mind that DBS does not modify the speed of PD progression. With time, patients can develop disabling motor symptoms and NMS. 7,16,17,9 NMS significantly impair the quality of PD patients' lives. NMS in PD have been analyzed more attentively in recent years, but the influence of DBS on those symptoms is not thoroughly evaluated and understood. 18,19,20 The whole group of patients qualified for the study met the CAPSIT-PD criteria and suffered from motor and non-motor symptoms. No subject in the analyzed group had psychiatric or cognitive alterations.
The DBS effect is mainly based on inhibiting the target structure (STN), which is excessively active and responsible for the symptoms observed in PD. The main mechanisms of STN DBS are the depolarization blockade of neurons and axons (the inactivation of sodium ion channels), synaptic depolarization, antidromic release of GABA within the basal ganglia network, and activation of local inhibitory mechanisms within the STN. The most significant feature of DBS is the reversible power of inhibition of the hyperactive target structure. This explains why it is reasonable to implant DBS in the portion of the STN that is hyperactive. Neuroimaging methods of visualization of the STN use a high-field, 1.5 T or 3 T MRI. Direct visual STN identification is mainly based on high-resolution T2 axial scans. Indirect identification is based on the position of the anterior and posterior commissures and the walls of the third ventricle. The MRI-based determination of the target point is in the majority of cases the same as the one identified by intrasurgical neurophysiological evaluation. Selected reports indicate MRI alone to be sufficient for STN identification, although the majority of centers find neurophysiological evaluation during surgery (microrecording and macrostimulation) to be necessary to achieve maximal clinical benefit. 1 In dedicated DBS centers for movement disorders, specific neuroimaging techniques and intraoperative neurophysiological evaluation are used routinely to maximize the therapeutic effect of the treatment and minimize the risk of adverse events. During the surgeries of patients from the analyzed group, all indicated and available techniques were used to maximize potential benefits and minimize the risk of adverse events. The mechanism of DBS action is not completely understood. By inhibiting the STN with DBS, the harmony between the basal ganglia, thalamus and brain cortex is restored (the cortical-basal-thalamic-cortical circuit). The presented mechanism of DBS explains the improvement of motor symptoms in PD. 1 The influence of DBS on NMS seems to be more complex. The STN is a small structure of almond shape and size. In this deeply located structure, three portions are defined: the motor portion (the target for DBS treatment), the limbic portion (responsible for mood and its alterations) and the associative portion (responsible for cognitive functions). A DBS electrode is the size of a match (1.3 mm in diameter, integrating four contacts of 1.5 mm length each, spaced with 0.5 or 1.5 mm gaps and connected to the internal pulse generator implanted on the chest) and, depending on the voltage amplitude, might inhibit solely the motor portion of the STN or the surrounding structures as well (for instance the limbic or associative portion of the STN), influencing the appearance or disappearance of NMS. 16,1,21 The standardized anatomical target for the electrode in the presented study was the dorso-lateral (motor) portion of the STN. Microrecording and macrostimulation performed during surgery allowed the placement of the permanent electrodes to be optimized in depth (20 mm) and within a radius of 2 mm (anterior, posterior, central, lateral and medial paths).
According to the CAPSIT-PD criteria 13 , patients qualified for DBS should be younger than 65 years of age. Recently the age regime has been widened, making biological age more important than chronological age. 18 In the presented study, six patients were 65 years old or older. Undoubtedly, more advanced age carries a higher risk of adverse events and gives fewer chances for improvement in the motor and non-motor aspects of PD. In the presented study, deterioration following DBS measured with the NMS Scale was reported in the advanced-age group of patients (above 65 years of age). Disease duration was also indicated in the CAPSIT-PD criteria as one of the main elements of the qualification protocol. Because PD is a progressive disorder, with time the patient's condition measured with the H-Y Scale worsens. It has been reported by Schuepbach 22 that qualification for DBS at an early stage of the disease gives better results with respect to the motor symptoms of PD. In the presented study, better results measured with the NMS Scale were reported in patients with a shorter history of PD and in better clinical condition measured with the H-Y Scale. 18,22,5 The impact of STN DBS on the global score of the NMS Scale has been previously reported. This study confirms the positive effect of STN DBS on NMS. 23,7,8,9,10 The results for individual domains of the NMS Scale vary. In contrast to Dafsari et al., who reported no significant difference in domain I (Cardiovascular including falls) at a three-year follow-up, in the short-term study (three to six months) presented here an improvement was registered in this domain, mainly due to a decreased number of falls. 7 In the presented study, improvement in domain II (Sleep/fatigue) was less significant, and this result is in line with the study by Choi et al. 15 Improvement in domain II is mainly related to improved quality of sleep. Lilleeng, in contrast, reported a decreased domain II score as a result of the worsening of fatigue. 19 However, this result might be limited by the fact that conservative treatment remained high at postoperative follow-up in that study, and sleep alterations and fatigue are common side effects of pharmacological treatment. On the other hand, DBS, by influencing the cortical-basal-thalamic-cortical loops, might directly reduce nocturnal motor symptoms of PD. In contrast to previous evidence, 2 in the presented study an improvement in domain III (Mood/Cognition) has been reported. The subjective improvement of those aspects in a short-term follow-up might be related to the motor improvement of the patients and their positive, high expectations regarding another DBS-related "honeymoon" (mood improvement has not been confirmed by objective, dedicated psychological tests). 2 The most significant improvement in this study, in domain IV (Perceptual problems/hallucinations), which has been previously observed and reported by Yoshida et al., might be related to the reduction of conservative treatment, as hallucinations are reported to be one of its adverse effects. 24 Additional analysis needs to be undertaken to establish the relationship between hallucination outcome and conservative dopaminergic and psychotropic treatment, and its dependency on other neuropsychiatric aspects of PD. The slight improvement in domain V (Attention/Memory) in the presented study further supports Zangaglia's results; however, in the same study Zangaglia also reported verbal fluency performance deterioration after DBS. 20 Zangaglia indicated that logical executive function tasks might be impaired transiently after DBS as well. The absence of significant changes in domain VI (Gastrointestinal tract) in this short-term study stands in opposition to Lilleeng's study, which reported a lower prevalence of constipation at 24-month follow-up. 17,19 Lilleeng, however, did not employ validated scales in his study. In the presented study, the deterioration in domain VII (Urinary) stands in opposition to Herzog's study. Herzog et al. analyzed ameliorations of bladder function in a short-term follow-up, along with modulation of blood flow of the thalamus and brain cortex. 25 The absence of significant improvement in domain VIII (Sexual function) is in line with the results of Kurcova et al. 26,17 It is assumed that the impact of STN DBS on sexual function mainly depends on demographic parameters such as the sex and age of the subjects. 17 In the presented study, the most significant deterioration was recorded in domain IX (Miscellaneous). No pain reduction, and no changes in smell or taste, were reported. A group of patients from the presented study reported increased sweating and decreased body weight, and those might be related to the increased involuntary movements (dyskinesias) observed in the short-term follow-up as a result of the microlesion effect after surgery, which fades away after several weeks. 6,7,4,10 Adverse events related to the implantation of DBS are primarily intracranial bleeding (the average risk is estimated at 2% in most reports) and infection (4%). STN DBS can harm speech and gait in a group of PD patients, requiring an adjustment of stimulation. STN DBS can affect mood, especially if mood alterations were reported before surgery; the depression tends to worsen. 1 Several authors report that neuropsychiatric symptoms might appear after STN DBS (hypomania, pathological gambling); however, those symptoms are usually transient if managed appropriately. While keeping those adverse events in mind, STN DBS is believed to give great symptomatic benefits in cognitively and psychiatrically intact PD patients. 1
Conclusions
The study confirms that STN DBS has a positive effect on non-motor symptoms in Parkinson's disease measured with the Non-Motor Symptoms Scale. The most important factors that influence improvement measured with the Non-Motor Symptoms Scale after subthalamic deep brain stimulation among Parkinson's disease patients are young age, short disease duration, and good clinical state measured with the Hoehn–Yahr Scale. The Non-Motor Symptoms Scale domains that tend to respond best are domain IV (Perceptual problems/hallucinations), domain I (Cardiovascular including falls) and domain III (Mood/Cognition). The domains that might deteriorate are domain IX (Miscellaneous) and domain VII (Urinary).
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 2. Correlations between the Non-Motor Symptoms Scale (NMSS) global score changes (a higher number indicates more significant improvement) and the age of the 17 patients. Better results were recorded among younger patients.
Fig. 3. Correlations between the Non-Motor Symptoms Scale (NMSS) score changes (a higher number indicates more significant improvement) and the duration of Parkinson's disease of the 17 patients. Better results were recorded among patients with shorter disease duration.
Fig. 4. Mean percentage changes following surgery in the nine domains of the Non-Motor Symptoms Scale (NMSS) among the 17 patients. The highest improvement was recorded in domain IV (Perceptual problems/hallucinations), by 77%. The highest deterioration was recorded in domain IX (Miscellaneous), by 10%.
CRediT authorship contribution statement

Victor H. Mandat: Writing – original draft, Software, Resources, Methodology, Formal analysis, Data curation, Conceptualization. Paweł R. Zdunek: Resources, Data curation. Bartosz Krolicki: Resources, Methodology. Tomasz Mandat: Writing – review & editing, Validation, Methodology, Formal analysis, Conceptualization.
Table 1
Patient number, age, sex, duration of the disease (years), clinical status on the Hoehn–Yahr Scale, and Non-Motor Symptoms Scale score before and after surgery.
"year": 2024,
"sha1": "f4f442498e158a3c75ac30cd17cfc5d372578017",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dc8e2eefd60a105a66c2e80ddc79ab0ae4a3e40d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
“It is like a mind attack”: stress and coping among urban school-going adolescents in India
Background Mental health problems are leading contributors to the global disease burden in adolescents. This study aims to highlight (1) salient context-specific factors that influence stress and coping among school-going adolescents across two urban sites in India; and (2) potential targets for preventing mental health difficulties. Methods Focus group discussions were undertaken with a large sample of 191 school-going adolescent boys and girls aged 11–17 years (mean = 14 years), recruited from low- and middle-income communities in the predominantly urban states of Goa and Delhi. Framework analysis was used to identify themes related to causes of stress, stress reactions, impacts and coping strategies. Results Proximal social environments (home, school, peers and neighborhood) played a major role in causing stress in adolescents’ daily lives. Salient social stressors included academic pressure, difficulties in romantic relationships, negotiating parental and peer influences, and exposure to violence and other threats to personal safety. Additionally, girls highlighted stress from having to conform to normative gender roles and in managing the risk of sexual harassment, especially in Delhi. Anger, rumination and loss of concentration were commonly experienced stress reactions. Adolescents primarily used emotion-focused coping strategies (e.g., distraction, escape-avoidance, emotional support seeking). Problem-focused coping (e.g., instrumental support seeking) was less common. Examples of harmful coping (e.g., substance use) were also reported. Conclusions The development of culturally sensitive and age-appropriate psychosocial interventions for distressed adolescents should attend to the challenges posed by home, school, peer and neighborhood environments. Enhancements to problem- and emotion-focused strategies are needed in order to bolster adolescents’ repertoire of adaptive coping skills in stressful social environments. Electronic supplementary material The online version of this article (10.1186/s40359-019-0306-z) contains supplementary material, which is available to authorized users.
Background
Adolescence is often described as a period of "storm and stress" [1], marked by increased susceptibility to mental disorders. Early identification and successful management of mental health problems in the adolescent years can improve long-term health outcomes and social adjustment [2]. Such efforts require an in-depth understanding of environmental risks, signs and idioms of psychological distress, and coping strategies for vulnerable youth across different contexts.
The psychological outcomes of an individual's interactions with his or her environment can be understood through Lazarus and Folkman's "stress-coping" theory [3]. In particular, an imbalance between internal/external demands and the perceived resources to deal with these challenges leads to negative emotional responses. Specific outcomes are mediated by appraisals of events in terms of perceived threat, control and access to coping resources. A persistent imbalance in this transactional stress-coping system contributes to the development and maintenance of a range of mental disorders, including both internalizing and externalizing difficulties [4,5].
The majority of the world's adolescents live in low- and middle-income countries (LMICs), where they are exposed to a range of psychosocial adversities [6]. India alone is home to more than 250 million adolescents aged 10-19 years, or 20% of the global adolescent population [7]. The National Mental Health Survey (2016) estimated that 13.3% of all adolescents residing in metropolitan areas have "mental morbidity," double the prevalence in rural areas [8]. Correspondingly, studies conducted among school-going adolescents in urban India indicate that at least one in five adolescents endure high stress levels in their daily lives [9][10][11][12][13]. Although the relative importance of stressors differs across studies, commonly identified examples include academic pressure, adverse family events, educational/career concerns, challenges in romantic and sexual encounters, and navigating peer group dynamics [9,[14][15][16]. Adolescents reportedly adopt a wide range of coping strategies including problem solving, seeking support from parents and friends, praying, positive reframing, distraction, and avoidance [9,14,17].
Much of this surveyed literature from India is based on small and non-representative samples. The available studies provide little by way of in-depth exploration of key environmental stressors, impacts and mitigating strategies across different ages, genders and localities. A nuanced understanding of such contextual factors is essential for identifying intervention components that are culturally relevant and acceptable. In addition, in-depth knowledge of the local ecological context is needed for cultural adaptation of treatments proven to be effective elsewhere (e.g. through the inclusion of local metaphors). This is especially important in low-and middle-income countries such as India, where there is a relatively scarce local evidence base on adolescent mental health interventions.
The current study attempted to address this knowledge gap by using qualitative methods to explore: 1) common ecological stressors faced by adolescents in two predominantly urban states in India; 2) adolescents' subjective experiences of stress; and 3) strategies used by adolescents to manage stress reactions across age, gender and sites. The ultimate aim was to provide contextually relevant insights for developing mental health interventions in Indian schools. A pragmatic approach was adopted to match the methods to study objectives, guided by principles of interpretivism and reflexivity [18,19]. We used semi-structured focus group discussions with a large sample, allowing for variation in age, gender and geographic location. This permitted sensitive inquiry across diverse perspectives. For analysis, we employed a structured framework approach for thematic analysis, which has been widely used in other applied health and psychology research [20,21]. The study is part of a larger research program (PRIDE), which seeks to develop and evaluate a suite of psychological interventions for common mental health problems in school-going adolescents in India [22].
Design and setting
This exploratory qualitative study was conducted in Delhi (India's capital) and Goa, the country's most highly urbanized state [7]. The methods have been reported in line with the consolidated criteria for reporting qualitative studies -COREQ [23]. A completed COREQ checklist for this study has been provided among the supplementary materials (Additional file 1 -COREQ checklist).
Participating students in Delhi were drawn from eight Hindi-medium high schools, run by the Delhi Government, and one English-medium private sector school. The Government schools were relatively large (with an average population of 2800 students across grades 6-12), providing single-gender education in low-income areas. The private-sector school provided co-education in a middle-class locality. In Goa, participating students were drawn from seven high schools (classes 5-10), run by the Archdiocese Board of Education. These schools were relatively small (with an average population of 500 students) and provided co-education in Konkani and English in middle-class localities.
Sample
We conducted 22 focus group discussions (FGDs; Delhi = 12 and Goa = 10) with N = 191 adolescents (n = 112 girls, n = 79 boys; n = 108 in Delhi, n = 83 in Goa). Each focus group included 5-16 participants (median = 9), purposively sampled to maximize variation across age, gender and sites (Table 1). Participants ranged in age from 11 to 17 years, with students of similar age grouped together. Separate boys, girls and mixed groups were organized and participants within a given group often knew each other. Adolescents were invited to participate through classroom announcements by researchers and visits by researchers to community-based youth organizations working with adolescents from the participating schools. Representativeness was addressed by continuously monitoring participation rates across age, gender and site. Rates of non-participation were not systematically assessed, since recruitment activities focused on classrooms rather than individuals. Adolescents who expressed an interest in participating were provided with a printed information sheet containing details about study aims and methods. A parallel parent version of the information sheet was distributed when adolescents were aged under 18 years. Prior written informed consent was obtained from all adolescents, and additional passive parental consent (active opting out of research) was obtained for all participating adolescents. The consent process and other study procedures were conducted in accordance with protocols approved by the Institutional Review Boards at the Public Health Foundation of India.
Data collection
A semi-structured interview guide was developed specifically for this study, including open-ended questions on causes/experiences of stress and use of coping strategies (see supplementary materials, Additional File 2). Additional questions explored preferences for counselling and self-help interventions, findings for which are reported elsewhere [24]. Two researchers (usually RP and MS; both females and holding postgraduate degrees in public health) co-facilitated each FGD over 45-60 min. One researcher moderated the discussion, while the second researcher maintained notes and asked clarifying questions. Other interviewers (see Acknowledgments) included both males and females. FGDs were conducted in Hindi (12), English (9) and Konkani (1). All but two FGDs were audio-recorded, as administrators at the private-sector school denied permission for audio-recording. All audio-recordings were transcribed verbatim. The sole Konkani FGD was further translated into English, as none of the coders were Konkani speakers. We analyzed detailed notes from the two FGDs which were not audio-recorded. Data saturation was discussed within the team on an ongoing basis. Interim FGD summaries were continuously monitored for emergent themes by the lead researcher (RP) in consultation with co-authors. FGDs were concluded when saturation was reached within each subsample (boys/ girls, older/younger adolescents across the two sites). Overall, 22 FGDs were conducted: 19 in schools and three at local community sites.
Analysis
Thematic analysis was undertaken using a framework approach [20,21]. Transcripts were coded using Nvivo 11 software. Development of the analytical framework began with a set of deductive codes derived from the research questions and background literature. The framework was refined to include codes emergent from the data. Initial codes were assigned to discrete responses comprising phrases, sentences or paragraphs communicating a relevant idea. These were ordered into categories conveying inter-related ideas. The transcripts were distributed among three authors (RP, MS, MK) for coding. RP and MS organised the data in a matrix containing codes and categories in columns, and FGDs in rows. Themes were generated by comparing and contrasting data within and across the FGDs according to age, gender and site attributes. Data triangulation was achieved initially by comparing and contrasting assignment of codes horizontally (i.e. between codes/categories) and vertically (i.e. between FGDs) within our analytic matrix. Higher-order triangulation was undertaken by scrutinizing themes across different sub-groups. Areas of agreement and disagreement have been highlighted in the narrative summary of results.
Results
Themes have been organized into three broad categories: 1) descriptions of stress in relation to the ecological context ('common ecological stressors'); 2) experienced reactions to stress ('stress reactions'); and 3) commonly employed methods for coping ('coping strategies'). A number of distinct and interrelated sub-themes have been used to elaborate differences across site, age and gender. Quotes from Hindi and Konkani have been translated into English and highlighted with an asterisk (*). Table 2 presents an overview of ecological stressors across family, peer, school, and community/neighborhood domains, with key developmental challenges organized as cross-cutting themes and described under sub-themes below.
Academic pressure
Academic pressure was the most commonly identified stressor across the sample, irrespective of age, gender and site. This was largely driven by parental and teacher expectations, as well as personal ambitions. Adolescents expressed that parents were embarrassed, disappointed and would "hate" them due to academic underperformance. Teachers were seen as providing excessive homework, which added to the pressure. Parents and teachers often resorted to shouting, beating, and restriction of extra-curricular and recreational activities in a bid to improve adolescents' focus on academic performance and thereby boost future career prospects. The pressure was often counterproductive, establishing a vicious cycle of guilt, low self-confidence, lack of productivity and poor performance, even driving some students to contemplate suicide.
"Suppose [a student] studies well, and because of depression and tension he also loses his marks, and then parents shout on him why did you get less marks, then all the tension comes and the child is now in more tension, and then sometimes he makes suicide." (Boy, 12-15 years, Goa)
Romantic relationships
Adolescents frequently described emotional distress caused by challenges in forming, maintaining and ending romantic relationships, such as romantic rejection, one-sided attractions, arguments with partners, lack of money to buy gifts, break-ups and infidelity. These stressors seemed to be more pronounced in Delhi and were compounded by poor social acceptability for pre-marital relationships, especially for girls. Many girls considered romantic relationships "bad", and reflected that it caused "loss of personal reputation", "shame and embarrassment to parents"* and suggested "poor upbringing". Girls also anticipated coercive responses from parents such as shouting, grounding and initiation of early marriage.
"Where boys and girls go around together like boyfriend and girlfriend… this is not right. This will affect your parents." (Girl, 13-16 years, Delhi)
Negotiating autonomy
Older adolescents described stress stemming from limited personal freedoms, such that parents seemed to prescribe their life choices and decisions in areas such as education, employment and partners, especially in Goa.
"In my opinion, some parents come in the group of peer pressure because they tell the students to go to a particular school, so after they get the job they would get more money." (Boy, 13-17 years, Goa) Prevalent sexism and parental expectations to follow gender roles led girls, particularly in Delhi, to feel even more restricted, compounded by the additional burden of household chores. Younger adolescents were more accepting of parental influences, yet felt anxious about peer acceptance and described conflicts with friends as being particularly stressful. Older boys additionally discussed peer pressure for smoking, chewing 'gutka' (an inexpensive mixture of tobacco, areca nut and slaked lime), drinking alcohol and using other substances. Self-assertion was identified as key to dealing with peer pressure.
"They (peers) provoke him, taunt him that he is not capable enough to do it (take drugs), and then, if he is not mentally strong, he goes for it, and although he regrets it, he keeps doing it." (Boy, 15-17 years, Delhi)* Restrictions on selection of subjects and limits on choices for vocational growth, especially in 'non-academic' fields such as sports and arts.
Restrictive social norms requiring adolescents to abide by family and school expectations.
Safety / victimization
Harsh/physical discipline directed at adolescents; exposure to domestic violence between parents (linked to paternal alcohol use); sexism and gender discrimination against girls, including lower access to material and financial resources and greater burden of household chores.
Bullying. Corporeal punishment from teachers; lack of support to deal with bullying from peers.
Violence and sexual harassment (of females by males).
Safety
Adolescents across both sites faced actual and threatened violence and/or victimization in their daily lives. Girls in Delhi experienced a high risk of public sexual harassment, known colloquially as 'eve teasing, ' including both verbal and physical encounters in their neighborhoods.
"If let's say that a guy (in the bus) attacks you… then they (parents) will not send us to school. And no one supports us in this problem, neither friends nor teachers." (Girl, 13-16 years, Delhi) Younger boys discussed being teased and bullied by older students. Boys also experienced physical punishments at home and school more often than girls. Common reasons for physical punishment were failure to complete homework, poor exam performance and disruptive classroom behavior. Further threats to safety included witnessing domestic violence and the closely related problem of alcoholism among male family members.
"I get tensed when my dad is fighting at home. I feel like doing something to myself." (Girl, 13-16 years, Delhi)* Additionally, younger adolescents in Delhi highlighted poverty and consequent hopelessness as stressors.
"Poor people's financial situation is quite bad. Parents do not have a salary that can cover rent, groceries, and everything… and because of that the child also becomes depressed. He worries what would happen… because of this he doesn't feel interested in home or school." (Girl, 14-16 years, Delhi)*
Stress reactions
The English terms "tension" and "stress" were used almost universally across the sample to describe everyday experiences of emotional distress. More pronounced stress reactions were also evident from the use of terms like "mind attack", "depressed", "suffering", "fear" and "sadness".
"Firstly, we have to face family problems at home, and we feel bad, and then we can't even concentrate on studies (in school) … It is like a mind attack." (Boy, 17 years, Delhi)* "You cannot express to another person. Means you cannot feel well and you cannot tell anyone and then you feel depressed. You feel suffocated and also cry." (Boy, 13-15 years, Goa) Sudden and explosive anger, associated with shouting, throwing and breaking things, was also commonly described. Some adolescentsmore often boysresorted to hurting themselves or others when angry. Stress was also associated with irritability, arguments and fights following minor provocations, as well as loneliness and social withdrawal, which were more commonly reported by girls.
"When I get angry, I hit my brother and sister." (Boy, 14 years, Goa)* "When angry, we hit ourselves in front of the mirror." (Boy, 13-15 years, Delhi)* "Sometimes we get angry suddenly, we can't control on ourselves. We can't concentrate on one thing. We get confused… Some of them, they say that I don't want life fully, say I want to die." (Girl, 13-15 years, Goa) Both boys and girls also experienced physiological reactions like loss of appetite and sleep, fever, sweating, headaches and nausea, and cognitive changes such as confusion, poor concentration, forgetfulness and intrusive ruminative thoughts.
"So I can't sleep properly because all the tension comes in the night." (Girl, 11-13 years, Delhi) "I can't concentrate on studies. I study, but can't remember anything… There are many thoughts that keep coming from all sides." (Boy, 17 years, Delhi)*
Coping strategies
Adolescents described a range of coping mechanisms, depending on the type and intensity of stressors, perceived resources and socio-cultural norms.
Support seeking
Across both sites, younger adolescents and girls were more likely to seek advice and instrumental support from parents and teachers, particularly for academic difficulties and 'ragging' (referring to junior students being harassed, humiliated or abused by senior students [25]). Friends were generally preferred for emotional support, particularly in situations where adults were considered not to be "open minded" about the stressor (e.g., romantic relationships, sexual harassment).
"Depends on how big the problem is actually. Big problem like ragging or some problem with the teachers, studies, I prefer I should tell my parents about it." (Boy, 12-16 years, Goa)
Distraction
Distraction was widely used for immediate relief from negative affect and preoccupying thoughts.
Escape and avoidance
Many adolescents, especially boys, took active steps to avoid confrontations with parents and teachers about academic issues. This included avoiding discussion of exam results with parents, withdrawing from other family interactions, truancy when school work was incomplete, and staying away from particular teachers.
"When schools are to declare exam results, I often go to my aunt's place to avoid my parents." (Boy, 12-13 years, Delhi)*
Self-soothing
Girls were more likely than boys to describe self-soothing strategies like yoga, meditation, deep breathing and private expressions of affect (e.g., through diary entries and crying). Students also comforted themselves through eating and sleeping.
"And to get away from that bad feeling I cry, because, when my tears come and I cry I feel light inside." (Girl, 11-13 years, Delhi)
Problem solving
Active problem solving was relatively uncommon overall and was largely confined to older adolescents. This included a handful of instances where adolescents described specific steps of problem solving.
Prayer
In desperate times, when support was not available from other sources, some adolescents turned to prayer.
"Sometimes… in these problems, no one is there to decide on us, then we are left very lonely… Then who will listen to us? Then we starting asking God." (Boy, 14-15 years, Goa)
Substance use
A minority of boys used substances, including tobacco, cannabis and alcohol, as a means to "forget about the stress" and "reduce tension." However, almost all groups suggested that substance use may lead to temporary relief but would ultimately cause harm.
"Some stress they have, they will go drink or smoke, they will think that everything is ok now I'm free from this world, and no pressure is there in their mind… They say that after drinking all our problems are solved, but instead, because of drinking they are getting more pressure, they are spoiling their health." (Boy, 11-14 years, Goa)
Suicide
Suicide was considered a last resort to find relief from severe stressors like sexual assault and rape, and severe and sustained academic pressure. Some adolescents identified depression as part of a pathway from stress to suicide.
"So first they go in depression… and then they say that no one is talking to me at all and what will I do… no one will help me… so they then do suicide." (Girl, 11-14 years, Delhi)
Discussion
We have reported one of the largest ever qualitative studies on stress and coping among adolescents in India or globally. The large sample size and inclusion of two diverse urban sites enabled us to explore commonalities and differences in adolescents' experiences of stress and coping in depth. The findings have direct implications for developing and adapting interventions that are responsive to the dynamic interplay of age-related changes in thinking, behaviour and emotional reactivity, and the wider social ecology of adolescents' lives.
Participating adolescents were drawn from low-and middle-income communities and experienced a variety of stressors related to family, peers, school and their wider communities/ neighborhoods. Broad terms like "tension" and "stress" and specific reactions like explosive anger, irritability and rumination were frequently used to describe stress reactions. Adolescents generally favoured emotion-focused over problem-focused coping strategies; avoidance was employed more widely than active coping. Maladaptive strategies such as substance use and attempted suicide were also mentioned to manage intense emotional reactions.
Notwithstanding differences across age, gender and sites in the relative frequency and salience afforded to different types of stressors, a common thread appeared to be the broad developmental challenge of establishing an independent social identity. This struggle is characteristic of adolescence across cultures, as adolescents attempt to establish autonomy in their romantic and other peer relationships, educational/employment transitions and other life choices [26,27]. Extensive research from the field of developmental psychopathology has shown that social challenges in adolescence operate within interacting ecological systems, which render differences in the experience of stress and coping according to an individual's intrinsic characteristics, the immediate physical and social environment, and broader social, political and economic conditions [28]. Within this transactional framework, stress reactions may be amplified by neurobiological processes that affect adolescents' general predisposition to emotional reactivity [1,29].
Our study has highlighted a number of areas in which contextual factors have a particular bearing on stress and coping for adolescents in urban India. First, adolescents experienced persistent academic pressure, notably around exam performance, which was closely related to parental aspirations for adolescents to attain high-status occupations. This is corroborated by other contemporary studies from across India, indicating how rapid social changes are causing growing differences between familial expectations and adolescents' priorities [16,[30][31][32]. Relatedly, the cultural proscription against pre-marital romantic relationships was reflected in the social derogation experienced by adolescents around dating and other pre-marital relations. This was especially pronounced for girls and in Delhi, with violations feared to result in severe punishments from parents. Girls also encountered restrictive gender norms that placed a high burden on their involvement in household chores, while outside the home they faced a high risk of sexual harassment. These are further indications of how contemporary trends in Indian society may be exacerbating intergenerational stresses for adolescents [33,34]. Boys, on the other hand, appeared to be particularly vulnerable to corporal punishment at home and in school, a practice which remains common in India despite legal prohibitions [35].
We observed a general preference for emotion-focused and avoidant coping across our sample. Studies from other countries have observed a similar tendency towards emotion-focused coping among adolescents, related to perceived lack of control over everyday stressors, especially in family and school domains [36,37]. Although avoidant coping is generally associated with worse mental health outcomes [38], approaches such as behavioral disengagement and focused distraction may be adaptive when the result is to limit exposure to harmful stressors or to re-direct attention away from negative thoughts without direct suppression [39]. However, it is otherwise notable that predominant emotion-focused and avoidant coping have been linked with self-harm [40] and substance use [9,41]; this was also borne out in the current study.
Our findings suggest a need for interventions that focus on development of a healthy repertoire of coping skills among adolescents, and which can be applied to mitigate ecological stressors and corresponding stress reactions. Risks for suicide and substance use also require assessment and appropriate interventions. The credibility of alternative coping strategies should be accounted for while developing these interventions, especially given previous research showing significant areas of mismatch between practice elements in evidence-based psychotherapies and adolescents' habitual coping strategies [42]. Accordingly, there is significant scope for strengthening and streamlining interventions such that constituent elements are more reflective of adolescents' own preferences and priorities [43].
For example, efforts may be needed to balance the observed dependence on emotion-focused coping with potential enhancements in problem-focused coping. When considering how to bolster adolescents' coping repertoire, it is notable that problem solving is one of the most common elements of evidence-based psychological interventions for a range of internalizing and externalizing problems among adolescents worldwide [44,45], suggesting the global relevance of this core practice element. Problem solving has been widely applied using self-care and other 'low-intensity' modalities, which is significant in terms of designing scalable psychological interventions at low-cost [46].
In addition, systemic interventions may be required to address contextual factors that are typically beyond adolescents' individual control, such as coercive and restrictive parenting practices [47], bullying and corporal punishment in schools, and repressive gender norms [48]. Sustaining change at an ecological level would require the committed involvement of key sectors beyond health. As such, schools have been recommended as a promising platform for delivering mental health interventions, and healthy school environments have been shown to promote mental health and well-being among adolescents [49]. In India, a recently concluded study successfully used a multi-component whole-school intervention to improve aspects of the school environment that are linked with important health and well-being outcomes in adolescents [48].
We note some limitations of our study. First, we did not include participants from rural areas. On the other hand, the large sample size enabled us to explore common and divergent themes across age, gender and different urban localities within India, allowing us to reflect more confidently on the relevance to the vast and growing population of urban adolescents [7]. Second, use of FGDs for data collection may have prevented in-depth exploration of sensitive issues related to sexuality, self-harm and substance use. Third, we were unable to explore variation across socio-economic groups due to relative homogeneity in SES at each site. Finally, although detailed summaries were used, audio-recording was not permitted for two FGDs; some loss of data cannot be ruled out.
Conclusions
This large qualitative study from India has elucidated the interplay between developmental challenges and contextual factors related to home, school, peers and socio-cultural norms in shaping adolescents' experiences of stress and coping. The findings have direct implications for preventing adolescent mental health problems, insofar as interventions should equip adolescents with age-appropriate and ecologically valid strategies for coping with key stressors and concomitant stress reactions. Efforts to design suitable interventions should balance contextually relevant considerations with broadly applicable evidence from developmental science and the global evidence base on psychotherapies, in order to ensure optimal fit for the target demographic, locality and service resources.

The funding agency and sponsor of the study had no role in study design, data collection, analysis, interpretation, writing up, or the decision to submit the manuscript for publication.
Availability of data and materials
Qualitative study data are available from the corresponding author on reasonable request.
Authors' contributions

RP: developed the study concept and design, drafted the study protocol and data collection tools, collected qualitative data, and led the qualitative analysis and writing up. MS: drafted data collection tools, collected qualitative data and contributed to qualitative analysis and drafting of the manuscript. MK: contributed to qualitative analysis and drafting of the manuscript. PC: developed the study concept and design, and made critical revisions to the study protocol and manuscript drafts. VP: developed the study concept and design, and made critical revisions to the study protocol, analytic framework and manuscript drafts. DM: developed the study concept and design, supervised data collection, and made critical revisions to the study protocol, data collection tools, analytic framework and manuscript drafts. All authors have read and approved the final manuscript.
Ethics approval and consent to participate

Prior written informed consent was obtained from all adolescents. We also obtained passive parental consent (active opting out of the research) for adolescents aged under 18 years prior to the adolescents' participation in the study. The consent process and other study procedures were approved by the Institutional Review Boards at the Public Health Foundation of India.
Consent for publication
Not applicable.
"year": 2019,
"sha1": "10d8455559e9994f6ac83069ac12baaab4388a4a",
"oa_license": "CCBY",
"oa_url": "https://bmcpsychology.biomedcentral.com/track/pdf/10.1186/s40359-019-0306-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "10d8455559e9994f6ac83069ac12baaab4388a4a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Bootstrap Inference for Hawkes and General Point Processes
Inference and testing in general point process models such as the Hawkes model are predominantly based on asymptotic approximations for likelihood-based estimators and tests. As an alternative, and to improve finite sample performance, this paper considers bootstrap-based inference for interval estimation and testing. Specifically, for a wide class of point process models we consider a novel bootstrap scheme labeled 'fixed intensity bootstrap' (FIB), where the conditional intensity is kept fixed across bootstrap repetitions. The FIB, which is very simple to implement and fast in practice, extends previous ideas from the bootstrap literature on time series in discrete time, where the so-called 'fixed design' and 'fixed volatility' bootstrap schemes have been shown to be particularly useful and effective. We compare the FIB with the classic recursive bootstrap, which is here labeled 'recursive intensity bootstrap' (RIB). In RIB algorithms, the intensity is stochastic in the bootstrap world and implementation of the bootstrap is more involved, due to its sequential structure. For both bootstrap schemes, we provide new bootstrap (asymptotic) theory which allows us to assess bootstrap validity, and propose a 'non-parametric' approach based on resampling time-changed transformations of the original waiting times. We also establish the link between the proposed bootstraps for point process models and the related autoregressive conditional duration (ACD) models. Lastly, we show the effectiveness of the different bootstrap schemes in finite samples through a set of detailed Monte Carlo experiments, and provide applications to both financial data and social media data to illustrate the proposed methodology.
Introduction
Point processes are well known to be useful tools to characterize the dynamics of event occurrence times. This includes the homogeneous Poisson process, where the intensity process is constant over time, the inhomogeneous Poisson process, where the intensity is a deterministic (or strictly exogenous) time-varying function, as well as the class of 'self-exciting' point processes, such as the well-known and much applied Hawkes process. In particular, for the Hawkes process, the conditional intensity process depends on all past history of the events and thereby allows for (exponential or fractional) memory features, similar to autoregressive or fractional time-series processes in discrete time series econometrics. The self-exciting class of models, which are the focus of this paper, were originally proposed for modelling earthquake sequences; see Ogata (1988) and the references therein. Later, they have been put to use in a wide range of applications such as financial transactions (Bowsher, 2007; Bauwens and Hautsch, 2009), financial contagion (Aït-Sahalia et al., 2015), monetary policy (Dolado and María-Dolores, 2002), criminal fights and relations (Mohler et al., 2011), forecasting electricity price spikes (Clements et al., 2015) and the rich literature on social network information diffusion (Rizoiu et al., 2017), among others. Hawkes processes are also closely related to the class of autoregressive conditional duration [ACD] models of Engle and Russell (1998), which are well known and much used in financial economics; see Sections 2 and 6 below for the relation between the two classes of processes.
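As background for the discussion that follows, recall the canonical exponential-kernel form of the self-exciting intensity (standard in this literature, though the paper's own parameterization may differ): given the history F_{t-}, the conditional intensity is

```latex
\lambda(t \mid \mathcal{F}_{t-}) \;=\; \mu + \alpha \sum_{t_i < t} e^{-\beta (t - t_i)},
\qquad \mu > 0,\; \alpha \ge 0,\; \beta > 0,
```

so each past event t_i raises the intensity by α, with influence decaying at rate β; stationarity requires the branching ratio α/β to be below one, and setting α = 0 recovers the homogeneous Poisson case.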
Inference for self-exciting point process models is generally performed through classic, likelihood-based asymptotic inference and testing 1 , as originally discussed in Ogata (1978). However, as discussed in e.g. Reinhart (2018) and Wang et al. (2010), the finite sample performance of asymptotic inference is not always satisfactory. This is in general the case because the finite sample distributions of the estimators are often very skewed and far from the Gaussian asymptotic distribution.
In this framework, a key motivation for the results presented in the paper is to provide a simple to implement, and theoretically well-grounded, bootstrap approach to inference in self-exciting point process models. We do this by providing six main contributions.
The first contribution is to propose a novel (non-)parametric bootstrap scheme for such point process models, which we label as 'fixed intensity bootstrap' (FIB). The FIB is simple and fast to implement in practice, particularly so when compared to existing (recursive) applications of the bootstrap. The key difference between the new and the classic bootstrap schemes is how to generate the sequence of waiting times in the bootstrap world. Specifically, while for standard, recursive bootstrap schemes, the bootstrap event times are generated recursively through the past bootstrap events, for the novel bootstrap scheme the bootstrap event times are generated using a 'fixed' conditional intensity function, which entirely depends on the event times in the original world. Therefore, the FIB contrasts with existing implementations of the bootstrap, see e.g. Embrechts et al. (2011) and Sarma et al. (2011), which utilize a (possibly highly complex and time-consuming) sequential update of the bootstrap conditional intensities.
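To make the 'fixed intensity' idea concrete, here is a minimal sketch for an exponential-kernel Hawkes model, in which bootstrap event times are drawn by thinning against an intensity built only from the original event times. The parameters (mu, alpha, beta) stand in for estimates from the original sample, and the thinning construction is an illustrative assumption rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def lam_hat(t, events, mu, alpha, beta):
    """Estimated exponential-kernel Hawkes intensity at time t, built
    ONLY from the ORIGINAL event times: this is what the FIB keeps fixed.
    `events` is a 1-D numpy array of original event times in (0, T]."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def fib_sample(events, T, mu, alpha, beta):
    """One fixed-intensity bootstrap sample on (0, T] via Ogata-style
    thinning. Since lam_hat only jumps upward at the ORIGINAL event times
    and decays in between, its maximum over [0, T] is attained just after
    one of those events, which gives a valid dominating rate."""
    jumps = [lam_hat(s + 1e-12, events, mu, alpha, beta) for s in events]
    lam_bar = max(jumps) if jumps else mu
    boot, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_bar)   # proposal from a Poisson(lam_bar) flow
        if t >= T:
            return np.array(boot)
        if rng.uniform() <= lam_hat(t, events, mu, alpha, beta) / lam_bar:
            boot.append(t)                    # accept with probability lam_hat(t) / lam_bar
```

The crucial point is that lam_hat never updates with the accepted bootstrap events; under a recursive (RIB-type) scheme, each accepted event would enter the conditioning set, making the simulation sequential.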
The second contribution is to provide bootstrap (first-order) asymptotic theory, including establishing bootstrap validity for inference and testing in point process models for both the novel (FIB) bootstrap and for the classic recursive bootstrap (for which no theory exists in the literature). We show that the bootstrap based on the FIB is valid under regularity conditions which are milder than those required for validity of recursive bootstrap schemes (hereafter, RIB).
The third contribution is to introduce novel 'non-parametric' implementations of the FIB and RIB schemes, which are based on resampling time-changed transformations of the original waiting times, rather than generating the (transformed) waiting times through a parametric model (usually the exponential distribution), as is done in the literature. These implementations are likely to be robust to model misspecifications which generate non-exponential transformed waiting times. We show how to properly scale and resample the original time-changed waiting times; we also show that, in the homogeneous case, validity of the implied bootstrap follows from a time-change functional central limit theorem derived by Billingsley (1968) which, as far as we are aware, has never been applied to the bootstrap of self-exciting point process models.
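A compact sketch of the time-change construction, under the same exponential-kernel specification as above: the estimated compensator increments between consecutive events are approximately i.i.d. unit exponential when the model is correctly specified, and the non-parametric schemes resample them. The rescaling to unit mean shown here is an illustrative assumption; the paper's own scaling rule is developed in Section 5.

```python
import numpy as np

rng = np.random.default_rng(1)

def compensator(t, events, mu, alpha, beta):
    """Estimated integrated intensity Lambda(t) for an exponential-kernel
    Hawkes model; (mu, alpha, beta) are assumed fitted parameters."""
    past = events[events < t]
    return mu * t + (alpha / beta) * (1.0 - np.exp(-beta * (t - past))).sum()

def time_changed_waits(events, mu, alpha, beta):
    """u_i = Lambda(t_i) - Lambda(t_{i-1}): approximately i.i.d. Exp(1)
    under correct specification (random time-change theorem)."""
    lam = np.array([compensator(t, events, mu, alpha, beta) for t in events])
    return np.diff(np.concatenate(([0.0], lam)))

def resample_waits(events, mu, alpha, beta):
    """Non-parametric draw: resample the rescaled u_i with replacement,
    instead of drawing them from a fitted exponential distribution."""
    u = time_changed_waits(events, mu, alpha, beta)
    return rng.choice(u / u.mean(), size=u.size, replace=True)
```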
The fourth contribution of the paper is a detailed Monte Carlo simulation study on the performance of the bootstrap for self-exciting Hawkes processes. Possibly due to the high computational costs involved in implementing a simulation study for the bootstrap in this framework, to the best of our knowledge studies like ours have not been attempted in the literature. We show that for Hawkes processes with exponential kernels, the coverage probabilities of confidence intervals based on the Gaussian asymptotic approximation may be well below the nominal level. In contrast, the bootstrap is able to correct this; in particular, FIB implementations perform especially well in terms of coverage probabilities.
The fifth contribution is to provide two real data examples where we illustrate the key differences between asymptotic and the various bootstrap inference methods in applications. The first refers to the problem of modeling and predicting extreme financial returns, see Embrechts et al. (2011). We use this example, including the data sample considered in Embrechts et al. (2011), to compare the outcomes of the four different bootstrap schemes discussed in the paper. The second example is based on social media data and considers the flows of tweets and re-tweets preceding and following a political announcement. Specifically, using recent tweets related to the COVID-19 pandemic in Denmark, we show how bootstrap-based inference is able to detect structural breaks (in the mean intensity as well as in the decay rate of the intensity) induced by the announcement, which may not be detected based on asymptotic inference.
Sixth, we discuss the link between the proposed bootstraps based on the point process representation and bootstrap inference for autoregressive conditional duration (ACD) models. We establish the relation between our proposed bootstrap schemes and bootstrap algorithms based on the ACD representation. Specifically, we show that our recursive bootstrap corresponds to a residual-based bootstrap in the ACD world (as discussed e.g. in Perera and Silvapulle, 2021), with the crucial difference that the number of events generated through our scheme is random, rather than fixed. This is a key improvement, as our bootstrap ensures that the sum of the bootstrap waiting times always covers the original time interval. We also show that the residual-based implementation of the bootstrap in the ACD world corresponds to our proposed non-parametric bootstrap with resampling based on the (estimated) transformed waiting times. Finally, we discuss the relation between our proposed fixed intensity bootstrap and a bootstrap in the ACD framework, novel in the literature, where the conditional duration in the bootstrap world is fixed at the estimated conditional duration from the original sample.
Structure of the paper
The paper is organized as follows. In Section 2, likelihood-based inference for point processes is presented, and in Section 3 the novel fixed intensity bootstrap, as well as the recursive intensity bootstrap, are discussed, with theory and validity results in Section 4. The non-parametric bootstrap is discussed in Section 5, and the relation between our bootstraps and bootstraps for ACD models is discussed in Section 6. Section 7 provides a Monte Carlo study of the different schemes. Section 8 contains two empirical illustrations, and Section 9 concludes. All proofs are contained in the Appendix.
Notation
We use the counting process N(t) to characterize the total number of events occurring before and including time t, with N(s, t] and N[s, t) the numbers of events in the intervals (s, t] and [s, t), respectively, for s < t. For a right-continuous natural filtration (F_t)_{t∈R} of a continuous-time stochastic process, we denote by F_{t−} the left limit of F_t, which contains all the information before, but not including, time t. We use I(·) to denote the indicator function, and define R₊ := (0, ∞) and R̄₊ := [0, ∞). For x ∈ R, ⌊x⌋ := max_{z∈Z}{z ≤ x}. For the bootstrap, as is standard, we denote by P* the probability measure induced by the bootstrap; expectation and variance computed under P* are denoted by E* and V*, respectively. For a sequence X*_T computed on the bootstrap data, X*_T →p* X, in probability, means that, for any ε > 0, P*(|X*_T − X| > ε) →p 0, while X*_T →d* X, in probability, means that E*g(X*_T) →p E g(X) for all continuous bounded functions g, in each case as T → ∞. Finally, N denotes a Gaussian random variable and, for µ > 0, E(µ) denotes an exponential random variable with mean 1/µ.
Likelihood-based analysis of the point process
We discuss here likelihood-based estimation for a general class of point process models. For later use when establishing asymptotic validity of the bootstrap, we state explicit sufficient conditions for classic likelihood-based asymptotic theory. Precisely, and as in Ogata (1978), we establish consistency and limiting distributions of likelihood-based estimators, as well as the related (likelihood ratio) test statistics.
The model
By assumption, the observed event times are realizations from a univariate point process, i.e. a collection {t_i}_{i=1}^∞, t_i > 0, of stochastic event times with associated waiting times (or durations) w_i := t_i − t_{i−1}, for i = 1, 2, ..., with t_0 := 0; see e.g. Daley and Vere-Jones (2003) for an introduction to point processes. The point process can be equivalently characterized by the continuous-time counting process

N(t) := Σ_{i≥1} I(t_i ≤ t).    (2.1)

In addition, and as used here predominantly, a regular point process is uniquely defined by its conditional intensity process λ(t), t ≥ 0, which captures the instantaneous conditional probability of event occurrences and is defined as

λ(t) := lim_{Δ↓0} Δ⁻¹ E[N(t, t + Δ] | F_{t−}].    (2.2)

Observe that, as the point process is assumed to be regular and orderly, λ(t) essentially captures the instantaneous conditional probability of observing a single event at each time t.
A key example used throughout is the 'self-exciting' Hawkes point process, where the conditional intensity is given by

λ(t) = µ + Σ_{t_i < t} γ(t − t_i),    (2.3)

where µ > 0 is the baseline intensity and γ(t) is the so-called kernel function, which typically is either exponential,

γ(t) = αe^{−βt},    (2.4)

or following a power law,

γ(t) = α(t + δ)^{−(β+1)},    (2.5)

where α, β, δ ≥ 0. Note that, as the sum in (2.3) is over all events t_i prior to t, the Hawkes process has infinite (or long) memory. In contrast, if γ(t) = 0, then λ(t) = µ > 0 and the point process reduces to a homogeneous Poisson process, which has i.i.d. exponentially distributed waiting times w_i with rate µ; that is, the w_i's are i.i.d. E(µ) distributed. Likewise, an example of a counting process with finite memory (or q 'lags'), sometimes referred to as a 'Wold process', is given by

λ(t) = γ(t − t_{N(t−)}, ..., t − t_{N(t−)−q+1}),

where γ(·) is a mapping from R₊^q to R₊. A specific example is given by

λ(t) = µ + Σ_{i=1}^q γ_i(t − t_{N(t−)−i+1}),

with γ_i(·) being exponential or power law kernel functions as in (2.4) and (2.5) for i = 1, 2, ..., q. Notice that for q = 1, this is an example of a renewal process, with associated i.i.d. waiting times w_i which are not exponentially distributed.
The class of self-exciting point process models is also linked to the ACD model of Engle and Russell (1998), which is based on the following dynamic equation for the waiting times w_i := t_i − t_{i−1} between events:

w_i = ψ_i ε_i,    (2.7)

where the ε_i's are strictly positive i.i.d. random variables with mean one and ψ_i denotes the conditional expected duration; for the ACD(1), ψ_i = ω + αw_{i−1}. The ACD model can be given a point process representation; specifically, the conditional intensity associated with the model, see Engle and Russell (1998), takes the form

λ(t) = λ_ε((t − t_{N(t−)})/ψ_{N(t−)+1}) ψ_{N(t−)+1}⁻¹,    (2.8)

where λ_ε(·) = p_ε(·)/S_ε(·), with p_ε and S_ε denoting the pdf and the survival function of ε_i, respectively. A simple example is the ACD(1) with exponential errors, with intensity given by the piecewise constant function

λ(t) = ψ_{N(t−)+1}⁻¹ = (ω + αw_{N(t−)})⁻¹,

which is a special case of a Wold process with two 'lags'. Should ε_i be a continuous, non-exponentially distributed random variable, it follows from (2.8) that the intensity of the ACD(1) takes the form

λ(t) = λ_ε((t − t_{N(t−)})/(ω + αw_{N(t−)})) (ω + αw_{N(t−)})⁻¹.

Finally, with α = 0 the ACD reduces to a renewal process with intensity λ(t) = λ_ε(ω⁻¹(t − t_{N(t−)}))ω⁻¹.
Likelihood-based estimation
For the statistical analysis we assume that the conditional intensity λ(t) in (2.2) is parameterized by a finite-dimensional vector of unknown parameters θ ∈ Θ ⊆ R^d, d := dim θ. To emphasize the dependence of the intensity on θ, we write λ(t; θ) and, for the associated counting process, N(t; θ). For notational convenience, when evaluated at the true value, we write λ(t; θ₀) =: λ(t) and N(t; θ₀) =: N(t). Consider a sample of event times t_1, t_2, ..., t_{n_T} observed in a time interval [0, T], with n_T = N(T) the total number of events in the interval. Standard arguments as in Daley and Vere-Jones (2003) imply that the joint log-likelihood function ℓ_T(θ) can be written as

ℓ_T(θ) = ∫_0^T log λ(t; θ) dN(t) − ∫_0^T λ(t; θ) dt = Σ_{i=1}^{n_T} log λ(t_i; θ) − Λ(T; θ),    (2.9)

where Λ(·; θ) is the so-called integrated intensity, given by

Λ(t; θ) := ∫_0^t λ(s; θ) ds,    (2.10)

and we assume t_{n_T} = T (such that T coincides with the last event time) in deducing the second equality in (2.9). The maximum likelihood estimator (MLE) θ̂_T is defined by θ̂_T := arg max_{θ∈Θ} ℓ_T(θ).
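To make the estimation step concrete, the following is a minimal Python sketch of the log-likelihood (2.9) for the exponential-kernel Hawkes process in (2.3)-(2.4). It is not code from the paper: the O(n_T) recursion for the self-excitation sum is a standard computational device (cf. the Markov property noted in Section 7), and the function and variable names are our own.

```python
import numpy as np

def hawkes_exp_loglik(theta, times, T):
    """Log-likelihood (2.9) for lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).

    `times` are the event times in [0, T]. The self-excitation sum at event i,
    A_i = sum_{j < i} exp(-beta*(t_i - t_j)), obeys A_i = exp(-beta*w_i)*(1 + A_{i-1}),
    so the whole likelihood costs O(n_T) operations.
    """
    mu, alpha, beta = theta
    if min(mu, alpha, beta) <= 0:
        return -np.inf
    times = np.asarray(times, dtype=float)
    loglik, A = 0.0, 0.0
    for i, t in enumerate(times):
        if i > 0:
            A = np.exp(-beta * (t - times[i - 1])) * (1.0 + A)
        loglik += np.log(mu + alpha * A)
    # compensator Lambda(T; theta), available in closed form for this kernel
    loglik -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return loglik
```

The MLE θ̂_T can then be obtained by passing the negative of this function to a generic numerical optimizer, e.g. scipy.optimize.minimize.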
Asymptotic theory
For the asymptotic theory of θ̂_T we assume that the information set F_t is defined as the σ-field generated by {N(s, t], −∞ < s ≤ t}. Under mild requirements (e.g. Ogata, 1978, Assumption C), the analysis presented below extends to the case where F_t is generated by {N(s), 0 ≤ s ≤ t}. Likewise, we assume for simplicity that t_{n_T} = T.
A key role in the asymptotic analysis here, as well as for the novel bootstrap asymptotics below, is played by the Doob-Meyer decomposition of N(t) in (2.1), which is given by

N(t) = M(t) + A(t).

Here M is a square integrable continuous-time F_t-local martingale and A(t) is the compensator of N(t), which in this case is given by the integrated intensity Λ(t; θ₀) = Λ(t) in (2.10); that is, A(t) = ∫_0^t λ(s) ds = Λ(t). By definition, M(t) = N(t) − Λ(t) is a continuous-time martingale, and we may write

dM(t) = dN(t) − λ(t) dt

(cf. Ogata, 1978, p. 250), which will be used repeatedly throughout for both the standard and the bootstrap asymptotic analyses. Furthermore, we make the following technical assumptions.
Consistency of the MLE is given in the next theorem from Ogata (1978).
For the analysis of the score and the information, and for establishing the asymptotic normality of the MLE, we make use of Assumption 2 below, where we use the following notation: for any function f(t; θ) of θ (and t), f(t) := f(t; θ₀), ∂_θ f(t; θ) := ∂f(t; θ)/∂θ and ∂_{θ₀} f(t) := ∂f(t; θ)/∂θ|_{θ=θ₀} (and similarly for higher-order and partial derivatives).
Note that Assumption 2(c) differs from standard requirements as in Ogata (1978) which address uniformity of the Hessian.
The bootstrap
We discuss here two bootstrap schemes. The first bootstrap, which is novel, is denoted the 'fixed intensity bootstrap' (FIB). The FIB as proposed here builds on ideas from the 'fixed design bootstrap' in regression (and time series) models (see e.g. Wu, 1986; Gonçalves and Kilian, 2004), as well as the so-called 'fixed volatility bootstrap' in conditional volatility modelling (see Cavaliere et al., 2018), in the sense that the bootstrap intensity function is fixed across bootstrap repetitions. The second scheme, which has been applied in e.g. Embrechts et al. (2011) and Sarma et al. (2011), is here denoted the 'recursive intensity bootstrap' (RIB). As will be discussed later, in practice the FIB is simpler and faster to implement than the RIB, in addition to being valid under milder regularity conditions. Since no theory exists for either the FIB or the RIB scheme, in Section 4 we establish validity of the bootstrap for both.
Random time change
A key property we employ in defining our bootstrap algorithms is that using the integrated intensity to transform the original event times {t_i} to another sequence of event times {s_i} gives a homogeneous Poisson process with unit intensity. Equivalently, the original, non-i.i.d. waiting times {w_i} are transformed into i.i.d. E(1) waiting times; see also Daley and Vere-Jones (2003). The time change transformation t_i → s_i is given by

s_i(θ) := Λ(t_i; θ),

where the integrated intensity Λ(t; θ) is defined in (2.10). Moreover, with s_0(θ) := 0, the associated transformed waiting times v_i(θ) are given by

v_i(θ) := s_i(θ) − s_{i−1}(θ) = ∫_{t_{i−1}}^{t_i} λ(s; θ) ds,    (3.3)

for i = 1, 2, .... By definition, at the true value θ₀ the transformed waiting times v_i := v_i(θ₀) are i.i.d. E(1), such that the transformed event times, s_i := s_i(θ₀), form a homogeneous Poisson process with unit intensity. For the Hawkes process in (2.3), λ(t) = µ + Σ_{t_j < t} γ(t − t_j), and hence

v_i(θ) = µ(t_i − t_{i−1}) + Σ_{t_j ≤ t_{i−1}} ∫_{t_{i−1}}^{t_i} γ(s − t_j) ds,    (3.4)

which, for the exponential kernel in (2.4), reduces to

v_i(θ) = µ(t_i − t_{i−1}) + (α/β) Σ_{t_j ≤ t_{i−1}} (e^{−β(t_{i−1}−t_j)} − e^{−β(t_i−t_j)}).

For the implementation of the bootstrap, the reverse time transformation s_i → t_i is of key interest. Specifically, consider initially a sequence {v_i} of waiting times in the transformed time scale, generated as i.i.d. and E(1)-distributed. Then, under the true model, we can (numerically) invert the mapping (3.4) and generate the i-th waiting time w_i (or, equivalently, the i-th event time) recursively in terms of the i-th waiting time in transformed time scale, v_i, and the past event times {t_j, j = 1, ..., i − 1}. The recursion is initiated by generating the first waiting time w_1 = t_1 as the solution of Λ(t_1; θ) = v_1. As detailed below, for both the FIB and RIB schemes, we generate i.i.d. random event times in the transformed time scale, which are next transformed to the original time scale using the intensity dynamics estimated from the data. The key difference between the two algorithms is whether the transformation from transformed to original event times is fixed or sequential (and hence random) across bootstrap samples.
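As an illustration of the time change for the exponential kernel, the following Python sketch computes the transformed waiting times v_i(θ); under the true parameter these should be (approximately) i.i.d. E(1). The one-pass recursion for the decaying sum is our own implementation detail, not taken from the paper.

```python
import numpy as np

def exp_hawkes_time_change(theta, times):
    """Transformed waiting times v_i = Lambda(t_i) - Lambda(t_{i-1}) for the
    exponential-kernel Hawkes process (see (3.3) and the display above)."""
    mu, alpha, beta = theta
    v = np.empty(len(times))
    S = 0.0      # S = sum_{t_j <= previous event} exp(-beta*(t_prev - t_j))
    prev = 0.0
    for i, t in enumerate(np.asarray(times, dtype=float)):
        w = t - prev
        v[i] = mu * w + (alpha / beta) * (1.0 - np.exp(-beta * w)) * S
        S = np.exp(-beta * w) * S + 1.0   # decay the sum to t, then add the event at t
        prev = t
    return v
```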
Fixed intensity bootstrap
Given a sample of event times {t_i}_{i=1}^{n_T} observed on [0, T], let θ*_T denote the 'bootstrap true value', i.e. the parameter value used to generate the bootstrap samples. As is standard, one may for example set θ*_T = θ̂_T, the unrestricted MLE based on {t_i}_{i=1}^{n_T}; for hypothesis testing, one may also set θ*_T = θ̃_T, the MLE restricted by the null hypothesis.
For the FIB, where the intensity is kept fixed across replications, denote the intensity process implied by the bootstrap true value as λ̂(t) := λ(t; θ*_T), and the corresponding integrated intensity process as

Λ̂(t) := Λ(t; θ*_T) = ∫_0^t λ̂(s) ds.    (3.5)

By definition, λ̂(t) and Λ̂(t) depend on the original data through the observed event times {t_i}_{i=1}^{n_T} and the bootstrap true value θ*_T. Therefore, by construction, λ̂(t) and Λ̂(t) are known and fixed conditionally on the data.
The bootstrap event times are then obtained as t*_i := Λ̂⁻¹(s*_i), for i = 1, ..., n*_T, where Λ̂ is defined in (3.5), s*_i := Σ_{j=1}^i v*_j with {v*_i} i.i.d. E(1), and the number of bootstrap events is n*_T := max{k : s*_k ≤ Λ̂(T)}. Some remarks are in order.
Remark 3.1 (i) As is standard, the distribution of T^{1/2}(θ̂_T − θ₀) is approximated by the empirical distribution (conditionally on the original data) of T^{1/2}(θ̂*_T − θ*_T), where θ*_T = θ̃_T for the restricted bootstrap and θ*_T = θ̂_T for the unrestricted bootstrap. Moreover, the bootstrap analog of the LR statistic in (2.16) is given by LR*_T := 2(ℓ*_T(θ̂*_T) − ℓ*_T(θ̃*_T)). (ii) Notice that in the FIB log-likelihood (3.6) the last term, ∫_0^T λ(t; θ) dt, depends only on the original data and hence is non-random upon conditioning on the original data.
(iii) A key feature of the FIB is that, since the bootstrap waiting times in transformed time are i.i.d. E(1)-distributed, conditionally on the original data the bootstrap counting process N*(t) is an inhomogeneous Poisson process with time-varying intensity λ̂(t), t ∈ [0, T]. Bootstrap algorithms specifically designed for inhomogeneous Poisson processes have been proposed in Cowling et al. (1996). In contrast, despite the fact that (conditionally on the original data) the bootstrap sample follows an inhomogeneous Poisson process, our FIB allows inference in a much more general class of point processes.
(iv) One of the main features of the FIB is that its implementation is straightforward and fast. Specifically, draws of the bootstrap sample are obtained easily, since it is only required to invert the observed (strictly increasing) function Λ̂. Similarly, computation of the bootstrap likelihood and estimator is straightforward, as λ(t; θ) is a function of the original data only.
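To fix ideas, here is a minimal Python sketch of the FIB for the exponential-kernel Hawkes model: i.i.d. E(1) waiting times are cumulated in transformed time and mapped back through the inverse of the fixed integrated intensity Λ̂, which depends only on the original event times. The numerical inversion via scipy.optimize.brentq is one possible implementation choice; it is not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import brentq

def Lambda_hat(theta_star, times, t):
    """Integrated intensity Lambda(t; theta*) for the exponential kernel,
    evaluated on the ORIGINAL event times (hence fixed across replications)."""
    mu, alpha, beta = theta_star
    past = times[times < t]
    return mu * t + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (t - past)))

def fib_sample(theta_star, times, T, rng):
    """One FIB bootstrap sample of event times on [0, T] (cf. Algorithm 1)."""
    times = np.asarray(times, dtype=float)
    budget = Lambda_hat(theta_star, times, T)    # Lambda-hat(T)
    boot_times, s = [], 0.0
    while True:
        s += rng.exponential(1.0)                # cumulate i.i.d. E(1) draws
        if s > budget:                           # n*_T is random: stop once s > Lambda-hat(T)
            break
        # Lambda-hat is strictly increasing, so the root is unique in (0, T]
        boot_times.append(brentq(lambda u: Lambda_hat(theta_star, times, u) - s, 0.0, T))
    return np.array(boot_times)
```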
Recursive intensity bootstrap
The RIB resembles the recursive bootstrap in time series models; see e.g. Cavaliere and Rahbek (2021) for a review. Thus, and in contrast to the FIB, the RIB conditional intensity, denoted here by λ*(t; θ), is constructed using the functional form of the original intensity λ(t; θ), but in terms of the recursively obtained bootstrap event times t*_i. This entails that, for any θ ∈ Θ, λ*(t; θ) is a random process even conditionally on the original data, and hence differs from the FIB intensity, which is fixed across bootstrap repetitions. Note also that the recursively obtained bootstrap intensity process λ*(t; θ) inherits the same properties, in terms of e.g. differentiability with respect to θ, as the original intensity process λ(t; θ).
(ii) In the second step of Algorithm 2, the t*_i's are generated recursively using the bootstrap event times in transformed time, s*_1, ..., s*_{n*_T}, obtained in the first step. Specifically, the first bootstrap event time t*_1 is obtained as the solution of Λ(t*_1; θ*_T) = s*_1; for i ≥ 2, t*_i is obtained as the solution of Λ*(t*_i; θ*_T) = s*_i, where Λ*(·; θ*_T) denotes the bootstrap integrated intensity computed from the previously generated bootstrap event times t*_1, ..., t*_{i−1}.
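For comparison, a corresponding Python sketch of the RIB for the exponential-kernel Hawkes model is given below. In contrast to the FIB, the integrated intensity is rebuilt at every step from the previously generated bootstrap events, which is why the scheme is slower in practice; again, the root-finding implementation is our own choice.

```python
import numpy as np
from scipy.optimize import brentq

def rib_sample(theta_star, T, rng):
    """One RIB bootstrap sample on [0, T] (cf. Algorithm 2): t*_i solves
    Lambda*(t*_i) = s*_i, with Lambda* computed from past bootstrap events."""
    mu, alpha, beta = theta_star
    boot_times, s = [], 0.0
    while True:
        s += rng.exponential(1.0)
        past = np.asarray(boot_times)

        def Lam_star(t):  # integrated intensity built from bootstrap events only
            return mu * t + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (t - past[past < t])))

        if Lam_star(T) < s:      # the next event would fall outside [0, T]
            break
        lo = boot_times[-1] if boot_times else 0.0
        boot_times.append(brentq(lambda u: Lam_star(u) - s, lo, T))
    return np.array(boot_times)
```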
Validity of bootstrap inference
In this section, we establish bootstrap asymptotic validity for the FIB and RIB bootstrap schemes outlined above. As emphasized, the bootstrap true parameter is assumed to be consistent, θ*_T →p θ₀, which holds, e.g., for the particular choices θ*_T = θ̂_T (unrestricted bootstrap) or θ*_T = θ̃_T (restricted bootstrap, under the null).
Throughout, we let F * t denote the σ-field generated by {N * (s), 0 ≤ s ≤ t} and F * t− be its left limit. Notice that, since the distribution of N * depends on T , formally we have an array F * T,t := {N * T (s), 0 ≤ s ≤ t ≤ T , T ≥ 0}; for simplicity, in the following we suppress the dependence on T and write N * T (t) and F * T,t simply as N * (t) and F * t .
Preliminaries
As for the non-bootstrap asymptotic analysis, define the bootstrap martingale

M*(t) := N*(t) − Λ_{N*}(t).

Here Λ_{N*}(t) is the integrated conditional intensity of either the FIB or the RIB bootstrap process N*(t) (see also Remark 4.1(i)) and hence corresponds to the bootstrap compensator of N*(t) conditionally on the data. Consequently, M*(t) is a continuous-time F*_t local martingale conditionally on the data. Moreover, for any process ξ*(t) which (conditionally on the original data) is predictable with respect to F*_t, the (Stieltjes) stochastic integral process

∫_0^t ξ*(s) dM*(s)    (4.1)

is also (conditionally on the original data) a continuous-time martingale.
Remark 4.1 (i) For both bootstrap algorithms, the bootstrap waiting times {v*_i} in the transformed time scale are i.i.d. E(1), and the transformation to the original time scale is continuous. Therefore, the conditional distributions of the bootstrap waiting times are absolutely continuous, and hence the bootstrap process N*(t) has a well-defined integrated intensity function, given by Λ_{N*}(t) = Λ̂(t) for the FIB, and Λ_{N*}(t) = Λ*(t) := ∫_0^t λ*(s; θ*_T) ds for the RIB. Like the conditional intensity process, the integrated intensity Λ_{N*}(t) depends on the original sample; it is non-random in the bootstrap world for the FIB, while it depends on the past bootstrap event times t*_1, ..., t*_{N*(t−)} for the RIB; see also Remark 3.2.
(ii) In some cases, the theoretical arguments are simplified by working in transformed time rather than in the original time. Specifically, consider the bootstrap counting process in transformed time, given by Q*(s) := Σ_{i≥1} I(s*_i ≤ s). Since the bootstrap waiting times v*_i are i.i.d. E(1) random variables, the cdf of each event time s*_i, i = 1, 2, ..., conditionally on the past event times is given by

P*(s*_i ≤ s | s*_1, ..., s*_{i−1}) = 1 − e^{−(s − s*_{i−1})},

which is a continuous function for s > s*_{i−1}.
(iii) For both the fixed intensity and recursive intensity bootstraps, Q* is a homogeneous Poisson process with unit intensity, and the probability measure induced by Q* is independent of the original data. For the FIB, the process Q* is related to the bootstrap counting process N* through the relation N*(t) = Q*(Λ̂(t)) and, equivalently, Q*(s) = N*(Λ̂⁻¹(s)). Using Q*, we can write the integral in (4.1) as

∫_0^t ξ*(s) dM*(s) = ∫_0^{Λ̂(t)} ξ*(Λ̂⁻¹(u)) dM*_Q(u),

where M*_Q(s) := Q*(s) − s is a continuous-time martingale independent of the original data. For the RIB the formulas above are similar, with Λ̂(·) replaced by Λ*(·).
Notice that S*_T(θ) and H*_T(θ) depend on the bootstrap data only through N*(t) which, conditionally on the original data, is an inhomogeneous Poisson point process with fixed conditional intensity given by λ̂(t). In terms of the bootstrap martingale M*(t) = N*(t) − Λ̂(t), the score and the Hessian evaluated at the bootstrap true value θ*_T can be rewritten as martingale integrals; in particular,

S*_T(θ*_T) = ∫_0^T ∂_θ log λ(t; θ)|_{θ=θ*_T} dM*(t),

and similarly for the Hessian. Using the fact that M* is a martingale, we prove in the appendix the following lemma, which requires only a mild strengthening of the assumptions in Theorem 2.
Lemma 1 Under the assumptions of Theorem 2, provided that, additionally, (i) where I(θ 0 ) is defined in Assumption 2.
The following theorem shows the first-order validity of the FIB and of the associated likelihood ratio test.
Theorem 3 Under the conditions of Lemma 1, as T → ∞, it holds that

T^{1/2}(θ̂*_T − θ*_T) →d* N(0, I(θ₀)⁻¹), in probability.

Moreover, for the bootstrap likelihood-ratio statistic it holds that

LR*_T →d* χ²(d), in probability.
Validity of the RIB
For the RIB, the bootstrap score and Hessian at the bootstrap true value θ*_T mimic their counterparts on the original data, see (2.14)-(2.15). Specifically, with λ*(t) := λ*(t; θ*_T), the bootstrap score and Hessian take the same martingale-integral form as on the original sample, where h*(t) := h*(t; θ*_T). The next lemma shows that the RIB score and Hessian mimic the large sample properties of the original score and Hessian. It requires an additional assumption, see (4.11) below, which is not required for the FIB. In order to introduce it, we emphasize that the quantity h(t; θ) in Assumption 2 depends on the data generating process, and hence on the true parameter θ₀. The proof is based on the fact that, for any fixed T and conditionally on the data, the bootstrap sample can be made stationary.
For bootstrap consistency, we modify Assumption 2(c) as follows.
The modification is necessary in order to bound the third order derivatives of the RIB likelihood.
Remark 4.2 Condition (4.11) is required to show convergence of the bootstrap score in a neighborhood of the true value θ₀. This is specific to the bootstrap and is not necessary for showing convergence of the original score at the true value. For proving convergence of the bootstrap Hessian in a neighborhood of θ₀, no extra conditions are needed: under the bounds on the terms entering the third derivative of the likelihood function, see Assumption 2(c), such convergence is already implied, as shown in Ogata (1978, proof of Theorem 3).
Non-parametric FIB and RIB
In the parametric bootstrap presented above, bootstrap event times are obtained in the transformed time scale by cumulating randomly generated i.i.d. E(1) waiting times. This was motivated by the fact that the waiting times v_i = v_i(θ₀) in (3.3) are i.i.d. E(1)-distributed for i = 1, ..., n_T and that, with θ*_T consistent for θ₀, the estimated transformed waiting times v̂_i := v_i(θ*_T) are approximately so. However, in the case of a misspecified model, it may be the case that the transformed waiting times v̂_i are not exponentially distributed (asymptotically). Therefore, we consider here the point process bootstrap equivalent of the well-known residual-based i.i.d. bootstrap in discrete time series models. Specifically, after the point process model is fit to the data, the residuals to resample from can be taken as the waiting times in the transformed time scale, i.e. v̂_i, i = 1, ..., n_T. Then, the bootstrap waiting times in transformed time can be generated as an i.i.d. sample from {v̂_i}_{i=1}^{n_T}. This algorithm is denoted here as the 'non-parametric bootstrap', and can be implemented for both the FIB and RIB schemes, see below.
For the bootstrap in conditional mean and variance time series models, the residuals are typically centered and/or scaled prior to the implementation of the bootstrap. Similarly, here the waiting times v̂_i need to be properly standardized, such that the bootstrap transformed waiting times v*_i match (as a minimum) the mean of the E(1) distribution, i.e. E*(v*_i) = 1. This is achieved by sampling from the rescaled waiting times v̂^c_i, given by

v̂^c_i := v̂_i / (n_T⁻¹ Σ_{j=1}^{n_T} v̂_j),    (5.2)

that is, by setting v*_i := v̂^c_{u*_i}, where u*_i is an i.i.d. discrete uniformly distributed sequence on {1, ..., n_T}. The bootstrap transformed event times are then given by s*_i = Σ_{j=1}^i v*_j. Steps (ii)-(iii) are as in Algorithm 1 or Algorithm 2, depending on whether a fixed intensity or a recursive intensity bootstrap is used.
Remark 5.1 (i) As mentioned, a crucial step of the non-parametric bootstrap is the rescaling of the waiting times in the transformed time scale. By rescaling as above, it holds that E*(v*_i) = 1 and, moreover, V*(v*_i) →p 1. Apart from matching the mean and, asymptotically, the variance of the E(1) distribution, scaling is a key ingredient to center the bootstrap score around 0. Additionally, the convergence of the variance of the bootstrap waiting times to unity guarantees that, in large samples, the variance of the bootstrap score matches the inverse of the bootstrap information.
(ii) Without rescaling, it holds that E*(v*_i) →p 1 and V*(v*_i) →p 1. However, this is not enough for the bootstrap score to be centered around 0: unless E*(v*_i − 1) = o_p(T^{−1/2}), the bootstrap score will have a non-zero (and random) mean driven by the term T^{1/2}(E*(v*_i) − 1). This is well known for the bootstrap in time series models, where if the residuals are not centered, their O_p(T^{−1/2}) sample mean will induce randomness in the limit distribution of the bootstrap statistics (Cavaliere et al., 2015; Cavaliere and Georgiev, 2020).
To provide intuition about the validity of this bootstrap and about the importance of rescaling, consider a simple Poisson process model with intensity λ(t) = θ, where interest is in inference on θ using the (unrestricted) bootstrap. Recall that the log-likelihood for the original sample is

ℓ_T(θ) = ∫_0^T log θ dN(t) − ∫_0^T θ dt = n_T log θ − Tθ,

with associated score S_T(θ) = θ⁻¹ ∫_0^T dN(t) − T = θ⁻¹ n_T − T, which leads to the unique MLE, θ̂_T = n_T/T. To implement the non-parametric bootstrap, consider the transformed waiting times, see Section 3.1, which in this case are given by v̂_i = θ̂_T w_i, with w_i = t_i − t_{i−1} the original observed waiting times. The non-parametric bootstrap generates the v*_i's by initially resampling from the rescaled v̂^c_i defined in (5.2); next, the v*_i's are transformed back to the original time scale using the inverse mapping w*_i = v*_i/θ̂_T. This leads to the bootstrap event times t*_i := Σ_{j=1}^i w*_j with associated bootstrap counting process N*(t) := Σ_{i≥1} I(t*_i ≤ t). The bootstrap likelihood and score are then given by

ℓ*_T(θ) = n*_T log θ − Tθ and S*_T(θ) = θ⁻¹ n*_T − T,

where, as earlier, n*_T denotes the total number of bootstrap events, n*_T = max{k : Σ_{i=1}^k w*_i ≤ T}. Consider next the bootstrap score at the true value θ*_T = θ̂_T,

S*_T(θ̂_T) = θ̂_T⁻¹ n*_T − T = θ̂_T⁻¹ Σ_{i=1}^{n*_T} (1 − v*_i) − (T − t*_{n*_T}).    (5.4)

Because of the rescaling in (5.2), E*(v*_i − 1) = 0. This is a key feature for the bootstrap score to mimic the large-sample behavior of the original score. In contrast, without rescaling, the bootstrap mean of v*_i − 1 would be of order O_p(n_T^{−1/2}) = O_p(T^{−1/2}) (the order being sharp), thereby introducing an asymptotically non-negligible (random) bias term in the distribution of the bootstrap score.
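The Poisson example can be written out in a few lines of Python; the sketch below implements the full non-parametric bootstrap of the MLE θ̂_T = n_T/T, including the rescaling in (5.2). The true parameter, time span, seed and number of replications are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, T, B = 2.0, 500.0, 999

# simulate a homogeneous Poisson sample on [0, T]
times = np.cumsum(rng.exponential(1.0 / theta0, size=int(3 * theta0 * T)))
times = times[times <= T]
theta_hat = len(times) / T                       # MLE: n_T / T

# transformed waiting times v_i = theta_hat * w_i, rescaled as in (5.2)
v_hat = theta_hat * np.diff(np.concatenate(([0.0], times)))
v_c = v_hat / v_hat.mean()                       # now E*(v*_i) = 1 exactly

theta_boot = np.empty(B)
for b in range(B):
    # resample (a generous number of) waiting times; map back via w* = v*/theta_hat
    w_star = rng.choice(v_c, size=2 * len(v_c), replace=True) / theta_hat
    t_star = np.cumsum(w_star)
    theta_boot[b] = np.sum(t_star <= T) / T      # bootstrap MLE: n*_T / T (random n*_T)

ci = np.percentile(theta_boot, [2.5, 97.5])      # naive 95% percentile interval
```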
In order to analyze the large sample properties of the non-parametric bootstrap score, it is important to observe that a standard (bootstrap version of the) CLT cannot be applied to (5.4), because the number of terms in the sum is itself random. That is, S*_T(θ̂_T) is a randomly selected partial sum. Its behavior can, however, be analyzed by considering the following FCLT for i.i.d. waiting times, which for non-bootstrap sequences is due to Billingsley (1968) (the extension to bootstrap random variables is straightforward and is omitted for brevity).
Theorem 5 Let u*_1, u*_2, ... be bootstrap random variables which, conditionally on the original data, are i.i.d. with mean 1, variance κ̂_T (a function of the original data) and a.s. positive. For T > 0 and s ∈ [0, 1], define the càdlàg process

X*_T(s) := T^{−1/2} Σ_{i=1}^{⌊sT⌋} (u*_i − 1).

Assume that, as T → ∞, κ̂_T →p κ > 0 and that a bootstrap FCLT holds for {u*_i}, i.e. X*_T →w* κ^{1/2} B(·), in probability, with B(·) a standard Brownian motion. It then holds that, as T → ∞, for any sequence of (possibly random) indices η*_T with η*_T/T →p* c > 0, in probability,

T^{−1/2} Σ_{i=1}^{η*_T} (u*_i − 1) →d* N(0, cκ), in probability.

By using the fact that θ̂_T is consistent and that the sample variance of the transformed waiting times converges to one, an immediate application of Theorem 5 yields that

T^{−1/2} S*_T(θ̂_T) →d* N(0, θ₀⁻¹), in probability,

which matches the asymptotic distribution of the original score. For the Hessian, T⁻¹ H*_T(θ̂_T) →p* −θ₀⁻¹, in probability, applying again Theorem 5. By standard arguments this implies that

T^{1/2}(θ̂*_T − θ̂_T) →d* N(0, θ₀), in probability.

The general (non-Poisson) case is more involved due to the fact that, conditionally on the data, the bootstrap waiting times in the transformed time scale have a discrete distribution. Although this feature is not crucial in the Poisson case, the general case involves the analysis of random terms of the form ∫ ξ(t) dN*(t) and an explicit calculation of the compensator of N*(t).
We conclude by noticing that, as shown in the next section, the non-parametric bootstrap performs as well as the parametric bootstrap.
Relation with the bootstrap for ACD models
In this section we discuss the relation between our proposed bootstrap algorithms and theory and extant results on the bootstrap for ACD models; see in particular Fernandes and Gramming (2005), Gao et al. (2015), Perera et al. (2016) and Perera and Silvapulle (2017) for the related class of multiplicative error models [MEM]. Consider, initially, the exponential ACD process [EACD] which, by (2.7) and (2.8), has intensity λ(t) = ψ_{N(t−)+1}⁻¹ and associated integrated intensity

Λ(t) = Σ_{i=1}^{N(t−)} w_i/ψ_i + (t − t_{N(t−)})/ψ_{N(t−)+1}.    (6.1)

Note that, to simplify notation, we omit here the dependence on θ parametrizing the intensity function and hence ψ_i. It follows that our proposed RIB algorithms are related to recursive bootstraps in the ACD framework. To see this, recall that for the parametric RIB we first generate the sequence of transformed waiting times {v*_i} as i.i.d. E(1), while for the non-parametric RIB we resample from the original (standardized) transformed waiting times v_i, i = 1, ..., n_T, which, using (6.1), are given by v_i = w_i/ψ_i in the case where θ*_T = θ₀, without loss of generality. Next, the bootstrap waiting times w*_i are generated recursively as

w*_i = ψ*_i v*_i,    (6.2)

where ψ*_i is the bootstrap conditional duration obtained from the ACD recursion applied to the past bootstrap waiting times; this is equivalent to a recursive bootstrap for the EACD model (either parametric or non-parametric). Therefore, the theory we develop in this paper can also be used to establish bootstrap validity for EACD models. A crucial difference with respect to the recursive bootstrap for MEMs is that in (6.2) the number of event times n*_T is random, such that the event times fall within the interval [0, T]. In contrast, in the recursive MEM case n*_T = n_T, which implies that Σ_{i=1}^{n_T} w*_i can be much smaller or even larger than T.
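A minimal Python sketch of the recursive EACD bootstrap in (6.2) is given below, for the ACD(1) specification ψ_i = ω + αw_{i−1} used in Section 2. Initializing the recursion with w*_0 = 0 (so that ψ*_1 = ω) is our own simplifying assumption.

```python
import numpy as np

def eacd1_recursive_bootstrap(omega, alpha, T, rng):
    """Recursive bootstrap for the exponential ACD(1): w*_i = psi*_i v*_i with
    v*_i i.i.d. E(1); generation stops once the bootstrap times leave [0, T],
    so the number of bootstrap events n*_T is random (cf. (6.2))."""
    t, w_prev, boot_times = 0.0, 0.0, []   # w*_0 = 0 by assumption
    while True:
        psi = omega + alpha * w_prev       # conditional duration from past bootstrap data
        w = psi * rng.exponential(1.0)     # parametric draw; for the non-parametric
                                           # version, resample the estimated v-hats instead
        if t + w > T:
            break
        t += w
        boot_times.append(t)
        w_prev = w
    return np.array(boot_times)
```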
For the case of the ACD with non-exponentially distributed errors ε_i, the intensity is (2.8), with corresponding integrated intensity

Λ(t) = −Σ_{i=1}^{N(t−)} log S_ε(w_i/ψ_i) − log S_ε((t − t_{N(t−)})/ψ_{N(t−)+1}),    (6.3)

where S_ε is one minus the cdf of ε_i. In the parametric case, with v*_i drawn as E(1), using (6.3) in the bootstrap world we recursively obtain

ε*_i = S_ε⁻¹(e^{−v*_i})    (6.4)

and

w*_i = ψ*_i ε*_i.    (6.5)

For the non-parametric case, existing bootstrap algorithms for ACD generate the errors ε*_i by resampling the residuals ε̂_i = w_i/ψ̂_i, while the RIB first generates the bootstrap waiting times in transformed scale, v*_i, by resampling the estimated v̂_i; these are later used to generate the bootstrap errors ε*_i and then w*_i, see (6.5) and (6.4) above.
Finally, consider our (either parametric or non-parametric) FIB applied to the ACD. It would be tempting to think that our FIB corresponds to a 'fixed conditional expected duration' bootstrap in the ACD world, where the bootstrap waiting times are generated as

w*_i = ψ̂_i ε*_i,    (6.6)

where ψ̂_i is the i-th estimated conditional expected duration on the original data. Although this algorithm, which resembles the fixed volatility bootstrap for ARCH processes proposed in Cavaliere et al. (2018) and has not been investigated previously in the literature, seems to be an interesting development, it does not correspond to our FIB algorithm. In particular, the FIB uses the inverse of the estimated (integrated) intensity, say Λ̂⁻¹, to transform the bootstrap v*_i into the bootstrap waiting times w*_i, and generates a number of event times which is random in the bootstrap world; in contrast, a bootstrap based on (6.6) generates a number of events, given by n_T, which is fixed in the bootstrap world.
Remark 6.1 In terms of validity of our FIB and RIB when applied to the ACD models, the regularity conditions in Assumptions 1 and 2 are straightforward to verify. In terms of the condition (iii) in Lemma 1, while λ T (t) cannot be bounded from below, it trivially holds that the log-derivative of λ T (t) is bounded for classic ACD(p, q) models.
Monte Carlo Simulations
In this section we consider the finite sample properties of asymptotic and bootstrap-based confidence intervals and hypothesis tests for the well-known and much used case of a Hawkes process. Through a detailed simulation study based on the exponential kernel, we analyze how the bootstrap compares to asymptotic inference for different values of key quantities such as the 'branching ratio' (defined below) and the decay rate of the memory of past events. We consider both the RIB and the proposed FIB schemes, parametric as well as non-parametric.
Model and implementation
In the simulations, we consider the Hawkes process with exponential kernel function, γ(x; α, β) = αe^{−βx}, and conditional intensity

λ(t; θ) = µ + Σ_{t_i < t} αe^{−β(t−t_i)},

with θ = (µ, α, β)′, µ, α, β > 0; see also (2.4). Here µ is the baseline intensity; α is the jump size of the intensity when a new event occurs; β is the exponential decay rate, which determines how fast the memory of past events declines to zero. In terms of α and β, a key quantity is the branching ratio

a := α/β,

which describes how quickly the number of events increases. Moreover, with µ, α, β > 0, stationarity of the Hawkes process requires the branching ratio to satisfy 0 < a < 1, in which case the mean intensity m is well defined and given by m = µ/(1 − a). Hence, the stationary region is given by {θ = (µ, α, β)′ : µ > 0, 0 < α < β}. A few remarks about the simulation scheme are as follows.
Remark 7.1 (i) We simulate the event times {t_i}_{i=1}^{n_T} of the Hawkes process using the 'thinning algorithm' of Lewis and Shedler (1979) and Ogata (1981), which allows one to simulate a general regular point process characterized by any conditional intensity. Other options, such as the time-change method described in Section 3.1 (see also Ozaki, 1979), the efficient sampling algorithm exploiting the Markov property of the exponential kernel (Dassios and Zhao, 2013), and the 'stochastic reconstruction' method (Zhuang et al., 2004), are also available in the literature.
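A compact Python version of the thinning algorithm for the exponential-kernel case is sketched below. It relies on the fact that, with an exponential kernel, the intensity decays between events, so the current intensity is a valid upper bound until the next event; this simplification is specific to monotonically decaying kernels and is our own, the general algorithm in Ogata (1981) being more elaborate.

```python
import numpy as np

def simulate_hawkes_thinning(mu, alpha, beta, T, rng):
    """Ogata-style thinning for lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i))."""
    times, t = [], 0.0
    while True:
        hist = np.asarray(times)
        lam_bar = mu + alpha * np.sum(np.exp(-beta * (t - hist)))  # bound on [t, next event)
        t += rng.exponential(1.0 / lam_bar)   # candidate from the dominating Poisson process
        if t >= T:
            break
        lam_t = mu + alpha * np.sum(np.exp(-beta * (t - hist)))    # true intensity at t
        if rng.uniform() <= lam_t / lam_bar:  # accept ('thin') the candidate
            times.append(t)
    return np.array(times)

# example: times = simulate_hawkes_thinning(0.5, 0.8, 1.6, 200.0, np.random.default_rng(1))
```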
(ii) One important issue in simulating data in the time interval [0, T] (as well as in likelihood estimation) is how to treat the events before and at time t_0 = 0, due to the 'infinite memory' of the simulated exponential intensity. In our simulations, we make use of a burn-in period [−M, 0), with M > 0 arbitrarily large (and no events prior to time −M), and assume that data prior to t_0 = 0 are available for estimation. Accordingly, in the bootstrap world, the bootstrap event times prior to time t_0 are fixed at the original event times. We anticipate that the results do not substantially change without a burn-in period, provided the time span T is large enough; see also Ozaki (1979), Rasmussen (2013) and Rizoiu et al. (2017).
(iii) As is well known, see e.g. Embrechts et al. (2011), to avoid numerical issues in estimation it is advisable to reparameterize the kernel function as γ(x; a, β) = aβe^{−βx}, where a = α/β is the branching ratio defined above, such that

λ(t; θ̄) = µ + Σ_{t_i < t} aβe^{−β(t−t_i)},

where θ̄ = (µ, a, β)′. The associated likelihood function of n_T event times observed in [0, T] is given by

ℓ_T(θ̄) = Σ_{i=1}^{n_T} log λ(t_i; θ̄) − µT − a Σ_{i=1}^{n_T} (1 − e^{−β(T−t_i)}).

We employ this parameterization in our simulations.
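In code, the reparameterized likelihood takes the same form as before with α replaced by aβ; a Python sketch, together with an unconstrained numerical maximization (as in (iv) below), is given here. The optimizer and starting values are arbitrary illustrations.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_branching(params, times, T):
    """Negative log-likelihood under theta-bar = (mu, a, beta),
    i.e. gamma(x) = a*beta*exp(-beta*x)."""
    mu, a, beta = params
    if min(mu, a, beta) <= 0:
        return np.inf
    times = np.asarray(times, dtype=float)
    ll, S, prev = 0.0, 0.0, 0.0
    for t in times:
        S *= np.exp(-beta * (t - prev))     # decayed sum of past exponentials
        ll += np.log(mu + a * beta * S)
        S += 1.0                            # add the event just observed
        prev = t
    ll -= mu * T + a * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -ll

# res = minimize(neg_loglik_branching, x0=np.array([0.5, 0.5, 1.0]),
#                args=(times, T), method="Nelder-Mead")
```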
(iv) The MLE θ̂_T is obtained by maximizing the likelihood function over the set R × R × R, i.e., without imposing the stationarity assumption in estimation. Therefore, it can be the case that for certain samples θ̂_T falls outside the stationarity region (e.g., the estimated branching ratio â exceeds unity). In such a case, recursive versions of the bootstrap based on θ̂_T would generate non-stationary bootstrap samples. Therefore, as in Cavaliere et al. (2012) and Swensen (2006), prior to the implementation of the bootstrap we check whether θ̂_T is within the stationarity region. It is also checked whether the Hessian evaluated at θ̂_T is negative definite. We refer to this step as the 'sanity check' [SC] and report statistics on it below. In our Monte Carlo experiment, samples for which the SC fails are discarded, and the total number of Monte Carlo samples reported corresponds to the number of valid samples. We simulate three stationary Hawkes processes (denoted Models 1-3) with true parameters θ₀ set as follows. For all simulated processes, the mean intensity is set to unity (m₀ = 1), while different levels of the branching ratio a₀ = α₀/β₀ are considered; specifically, we set a₀ ∈ {0.2, 0.5, 0.8}. For each simulation, we consider three parameterizations (see A-C below) to allow different jump sizes and decay behavior of the intensity. In all cases, we consider samples over the time spans T ∈ {50, 100, 200}. The parameter configurations are summarized in Table 1, along with the (Monte Carlo) probabilities that the SC fails. It can be noticed that the probabilities of SC failure are severely high only for Model 1A when T = 50. This is because the number of events generated for T = 50 is extremely volatile, and the likelihood of observing samples with a small number of events (hence, not informative enough for estimating the model reasonably well) is indeed high. Another reason is that, as is known, it is hard to precisely estimate the parameters when the true parameters α₀ and β₀ are close to the zero boundary and T is small. The reparameterization by the branching ratio helps to resolve some numerical issues in estimation, as discussed in Remark 7.1(iii), but the improvement is not sufficient when the branching ratio itself is also low, as in the case of Model 1A. Nevertheless, despite the quite extreme parameter setting of Model 1A, we decided to keep it in our Monte Carlo simulation for completeness.
For each parameter configuration and sample size, we report the coverage probabilities (estimated over the Monte Carlo replications) of confidence intervals at the 95% nominal level, using both asymptotic and bootstrap methods. Asymptotic confidence intervals for the individual parameters as well as the (joint) confidence ellipsoid are based on the sample Hessian. We also report the coverage of (asymptotic and bootstrap) confidence intervals for the branching ratio, a = α/β. For bootstrap confidence intervals we consider the naive percentile interval method.
Finally, we also report the (null) empirical rejection probabilities of likelihood ratio tests for the hypothesis H 0 : θ = θ 0 . For the bootstrap tests, we implement the unrestricted bootstrap (i.e., without the null imposed on the bootstrap sample); results for the restricted bootstrap (i.e., with the null imposed on the bootstrap sample) do not differ substantially.
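For reference, the two bootstrap summaries used throughout this section can be computed as follows; this is a generic Python sketch, with `theta_boot` a B × d array of bootstrap estimates and `lr_boot` the bootstrap LR statistics, both hypothetical inputs produced by the algorithms above.

```python
import numpy as np

def percentile_ci(theta_boot, level=0.95):
    """Naive percentile interval: empirical alpha/2 and 1-alpha/2 quantiles,
    computed separately for each parameter (column)."""
    a = 100 * (1.0 - level) / 2.0
    return np.percentile(theta_boot, [a, 100.0 - a], axis=0)

def bootstrap_pvalue(lr_obs, lr_boot):
    """Bootstrap p-value of the LR test: the share of bootstrap LR statistics
    at least as large as the one observed on the original sample."""
    return np.mean(np.asarray(lr_boot) >= lr_obs)
```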
Results
The coverage probabilities of the asymptotic and bootstrap confidence intervals [CI] for individual parameters are presented in Table 2. We can see that, in general, the asymptotic CIs suffer from undercoverage for almost all models and sample spans, and this is particularly severe in some of the cases. In contrast, the bootstrap methods, especially the FIB, largely correct these distortions.
Below we provide a summary of the problems related to the asymptotic CIs for each individual parameter (branching ratio a, baseline intensity µ, intensity jump size α and decay rate β).
(i) The undercoverage of the asymptotic CI for the branching ratio is severe in finite samples for all Models 1-3. The coverage deteriorates as the true value of the branching ratio increases (moving from Model 1 to 3), and as the true values of α and β decrease (moving from Model C to A). Accordingly, the performance of the asymptotic CI for the branching ratio is worst for Model 3A, where the coverage probability is 86.5% for T = 50. Larger α₀ and β₀ seem to improve the coverage rate for the branching ratio, and this improvement is most significant for Model 1, where the branching ratio is low.
(ii) The asymptotic CI for the baseline intensity µ performs poorly in finite samples when µ₀ is low. Note that for Model 3, where µ₀ = 0.2, the empirical coverage probabilities are 90.5%, 89.5%, and 88.1% for Models 3A, 3B and 3C, respectively, when T = 50. In contrast, these probabilities are all above 90% for Models 1 and 2.
(iii) The undercoverage for α worsens as α₀ becomes larger (moving from Model A to C). There are no significant changes in the coverage of α over different values of the branching ratio. Improvements in the coverage of α seem to come only from increasing the sample span T. In general, the coverage is acceptable.
(iv) The undercoverage for β is severe for Model 1, with its small branching ratio, but the coverage rate improves noticeably as the branching ratio increases and as the sample span T increases. In particular, the asymptotic CI coverage for β is almost perfect for Models 3A-C even when T = 50. The performance is independent of the value of β.
In contrast to the coverage of asymptotic CIs, which show evident finite sample distortions, the empirical coverage probabilities of the bootstrap percentile intervals based on the fixed intensity scheme (for both the parametric and non-parametric methods, labelled 'PRFB' and 'NPFB' in Table 2) are very close to the nominal level, for almost all simulation models and even when the sample span is very short (T = 50).

Note (Tables 2-4): the nominal coverage rate is 95% and the bootstrap is based on unrestricted parameter estimation; PRFB, NPFB, PRRB, and NPRB refer to the parametric fixed intensity, non-parametric fixed intensity, parametric recursive intensity and non-parametric recursive intensity bootstraps.

The only exceptions are the coverage of the branching ratio in Model 3, where the coverage probabilities of the parametric FIB and the non-parametric FIB are slightly below the 95% nominal level. Nevertheless, the CIs of the two recursive intensity bootstraps, although performing generally better than the asymptotic CIs for the coverage of the parameters α and β, share similar finite sample distortions with the asymptotic CIs. For instance, the coverage of µ deteriorates as µ₀ decreases (for both the parametric and non-parametric RIBs); the coverage of β is much below the nominal level for Model 1, where the branching ratio is low, while it converges to the nominal level as the branching ratio increases; finally, we observe that the coverage of the branching ratio deteriorates as the branching ratio increases. Unreported simulations of the average lengths of the 95% asymptotic and bootstrap confidence intervals for each parameter show that the bootstrap confidence intervals are not significantly wider than the asymptotic ones, except for Models 1A and 1B, in which the parameter settings are relatively more extreme, or when the sample span is short (T = 50). The wider bootstrap confidence intervals reveal the higher uncertainty associated with parameter estimation, in line with the existing literature on the bootstrap. Table 3 presents the joint coverage rates of the asymptotic and bootstrap confidence ellipsoids [CE], for both parameterizations θ = (µ, α, β)′ and θ̄ = (µ, a, β)′. Noticeably, here the benefit of using bootstrap methods to improve finite sample joint coverage is even more evident. The performance of the asymptotic CEs is clearly unsatisfactory: for all nine models, the empirical coverage probabilities of the asymptotic CEs are below 89% when T = 50; despite a gradual improvement of the coverage rates as the sample span T increases, the coverage rates when T = 200 are still below the nominal level for all models (the joint coverage probabilities of Model 1B are even less than 88% when T = 200). On the contrary, all bootstrap methods produce joint CEs that cover the true parameters with probabilities very close to the nominal level, across different models and different sample spans. Finally, in Table 4 we report the empirical rejection probabilities of the asymptotic and unrestricted bootstrap likelihood-ratio tests for the null hypothesis H₀: θ = θ₀. In general, both the asymptotic and bootstrap tests perform satisfactorily in terms of size, especially when T = 100 and 200. Nevertheless, we do notice that the asymptotic test tends to be oversized for larger values of the branching ratio. This can be seen by inspecting the rejection probabilities of the asymptotic test of H₀ for Model 3 (which has the largest branching ratio, a = 0.8) for T = 50, 100. In particular, the asymptotic test is severely oversized for all three sub-models of Model 3, particularly so when T = 50. In contrast, we do not see much variability of the bootstrap empirical rejection probabilities across different models or sample spans: they are all very close to the nominal level (slightly conservative in some cases).
Empirical illustrations
To illustrate how the proposed bootstrap schemes work in applications, we consider two empirical examples. The first consists of 'extreme occurrences' in US stock market data, as measured by empirical quantiles of the Dow Jones Index, see Embrechts et al. (2011). We use this application to compare the four different bootstrap schemes discussed in the paper. Next, we analyze recent Danish COVID-19 tweets using the non-parametric FIB. We illustrate how bootstrap confidence intervals reveal the presence of a structural break in the parameters, whereas confidence intervals based on the asymptotic Gaussian approximation do not.
Dow Jones Index
As in Embrechts et al. (2011), we consider Dow Jones Index (DJI) daily (log) returns observed over the period January 1, 1994 to December 31, 2010. The event times corresponding to extreme returns are given by the trading days where the corresponding daily return is below the 10% empirical quantile (negative occurrences). To analyze the data, we consider a Hawkes model with intensity reparameterized as

λ(t; θ) = µ + a Σ_{t_i < t} γ(t − t_i),    (8.1)

where a is the branching ratio and γ is the (exponential) kernel; that is, γ(t; θ) = β exp(−βt), see also (2.4) and Section 7. With parameter vector θ = (µ, a, β)′, the MLE θ̂ is obtained by maximizing the log-likelihood in (2.9) subject to µ, β > 0 and 0 < a < 1, with initial values from Embrechts et al. (2011). Estimation results are reported in Table 5; the estimated intensity is portrayed in the bottom panel of Figure 1. The MLE θ̂ is very similar to that of Embrechts et al. (2011), and we observe in particular that the estimated branching ratio â appears to be well inside the stationary region.
As previously emphasized, if the model is correctly specified, the transformed waiting times should be i.i.d. E(1). Therefore, the model fit can be evaluated by considering the estimated transformed waiting times

v̂_i := Λ(t_i; θ̂) − Λ(t_{i−1}; θ̂),

with Λ defined in (3.2). Figure 2 contains QQ-plots and Kolmogorov-Smirnov (KS) plots, as well as sample autocorrelograms and related tests. Based on these, we see no clear signs of model misspecification. Precisely, the QQ plot of the v̂_i against a unit exponential distribution shows no significant deviations from the identity line, except for a few quantiles in the extreme upper tail, as also confirmed by the p-value of the KS statistic (0.147). Moreover, while the observed waiting times w_i are autocorrelated, this is not the case for the transformed waiting times v̂_i (or their squares, v̂²_i). We next compare the different bootstrap algorithms in terms of confidence intervals for the parameters, and compare these with the asymptotic CIs. With {θ̂*_{T,i:b}}_{b=1}^B the i.i.d. bootstrap realizations of the i-th element of θ̂*_T, the bootstrap CIs reported are based on the α/2 and 1 − α/2 quantiles of the empirical distribution function of the θ̂*_{T,i:b}'s. In Table 5, while we find no noticeable difference between the parametric and non-parametric bootstraps, the bootstrap CIs based on the FIB are narrower than the asymptotic and RIB CIs (recall also from the Monte Carlo results that, in general, the bootstrap coverage probabilities are better than those associated with the asymptotic CIs). The observed difference between the FIB and RIB CIs is likely caused by the added randomness in the sequential computation of the RIB. Interestingly, the FIB and RIB bootstrap CIs are further away from the non-stationary region (a ≥ 1) than the asymptotic CIs.
COVID-19 Tweets
We consider the arrival times of tweets related to the COVID-19 pandemic, recorded on March 11, 2020 (06:00-00:00), the day on which, during a press briefing at 20:30, the Danish Prime Minister announced the first lockdown of Denmark. In total, there are n_T = 1822 events from 1166 unique individuals, with each event time {t_i}_{i=0}^{n_T} (t_0 = 0) measured with a time resolution of 1 second within the T = 18 hours considered. In order to analyze the effects of the announcement, we analyze the full sample, as well as the pre-press-briefing sample (06:00-20:30) and the post-press-briefing sample (20:30-00:00). In Figure 3, we show the observed counting process N(t) for t ∈ [0, T] as well as an initial proxy for the intensity given by the number of events per 15-minute interval. It is worth noticing that there is a surge in activity after 20:30, visible both in the counting process and in the increased intensity.
As for the DJI data, we consider the Hawkes model with exponential kernel. Based on the diagnostics (see Figure 4), the model seems to be well specified in all three (sub)samples. However, we observe a large difference between the estimates reported for the first subsample and for the second subsample, see Table 6. In particular, the effect of the response to the announcement is a substantial increase in the intensity. One may also note that the estimated memory parameter β for the full period lies between the estimates for the pre-announcement and post-announcement periods. Table 6 also reports asymptotic CIs and FIB CIs. As can clearly be seen, the bootstrap CIs indicate non-overlapping parameter estimates for the samples before and after the announcement. This possibly reflects different types of dynamics in the two samples, and indicates a structural break around the press briefing. We note that this is not detectable by the standard misspecification tests for the full sample, and is much less pronounced from the reported asymptotic CIs for the three samples (in particular so for the baseline µ).

Figure 4: Danish COVID-19 tweets data. 'Full', 'Pre' and 'Post' refer to the full sample and the samples pre- and post-announcement on March 11, 2020. The top row presents KS plots for these three time periods, the middle row QQ plots, and the bottom row autocorrelations of the time-transformed waiting times.
In addition, we have also considered the power law kernel, where γ(t; θ) in (8.1) is replaced by a power law, see (2.5). Interestingly, unreported results show that, in terms of model misspecification, one is unable to discriminate between the two models and, moreover, that the estimates of the baseline µ and branching ratio a are virtually indistinguishable from those obtained using the exponential kernel. Finally, estimation based on the power law kernel (unlike the exponential kernel) is highly sensitive to initial values, which may reflect the strong correlation among the parameter estimators for power law kernels.
Conclusions
In this paper we have discussed the theoretical foundations and practical implementations of bootstrap inference for self-exciting point process models. Applications of the bootstrap to improve upon the poor quality of asymptotic approximations are scarce in this literature. Classic 'recursive intensity bootstrap' (RIB) schemes have been proposed in the recent literature, although without proof of their first-order validity. RIB schemes can also be quite involved to implement in practice, as they generally require numerical integration for the recursive computation of the intensity in each bootstrap repetition. As an improvement, we have introduced a new bootstrap scheme, the 'fixed intensity bootstrap' (FIB), where the conditional intensity is kept fixed across bootstrap repetitions. By doing so, conditionally on the original data the bootstrap data generating process follows a simple inhomogeneous point process with known intensity; it is therefore very simple to implement and to use in practice. For both bootstrap schemes, we have provided a new bootstrap (asymptotic) theory, which allows us to assess bootstrap validity in both cases. Monte Carlo evidence supports the idea that the bootstrap is a valid inference method when applied to point process models.
The results in the paper could be extended in several directions. On top of the obvious extension to multivariate point process models, an interesting one is how to deal with marked point process models. Marked (self-exciting) processes are particularly useful in applications, as the intensity function can be made dependent on a set of 'marks' associated with past events (for financial returns, the trading volumes; for energy prices, the magnitude of price spikes; for tweets, the number of followers; for earthquake modelling, the magnitudes of the earthquakes). In this context the proposed FIB seems very powerful, as resampling with a fixed intensity, even as a function of marks, is feasible and easy to implement. As an example, consider briefly an extension of the Hawkes model with exponential kernel in (2.4). One may include real-valued marks, or covariates, y_t ∈ R^d in the conditional intensity λ(t; θ) as, for example,

λ(t; θ) = µ(y_t) + Σ_{t_i < t} α e^{−β(y_{t_i})(t − t_i)},

where µ, β : R^d → R₊; see e.g. Clements et al. (2015) for an application to price spikes in electricity markets. Under the assumption of 'strongly exogenous' (or ancillary) marks, similar to exogenous covariates in discrete time Poisson autoregressions (see Agosto et al., 2016), and with θ the parameters of the extended Hawkes intensity, estimation and inference based on the FIB utilize the original event times and marks, {t_i, y_{t_i}}_{i=1}^{N(T)}. Thus, in contrast to the RIB and other existing recursive bootstraps, bootstrap inference based on the FIB would not require further assumptions (apart from stationarity) on the covariates.
A further extension is to develop model misspecification-robust bootstrap methods. In particular, throughout the paper we have assumed that the model is correctly specified. This assumption implies that the bootstrap can be implemented parametrically by constructing bootstrap waiting times from an i.i.d. sequence of mean one exponential random variables (the waiting times in transformed time scale), as discussed in Sections 3.2 and 3.3. However, misspecification of the model (in the simplest case, data are modelled as a Poisson process, but the waiting times form a renewal process) may result in i.i.d., but non-exponential (transformed) waiting times. Although in this case the parametric bootstraps could fail, we believe that the non-parametric bootstrap algorithms discussed in Section 5 could serve as the basis of novel misspecification-robust bootstrap methods. All these extensions are left for future research.
We have also benefited from discussions and feedback from seminar participants at Saint-Petersburg State University (CEBA talks), Singapore Management University, Macquarie University, as well as participants of the 9th Italian Congress of Econometrics and Empirical Economics (University of Cagliari) and the 2021 Virtual Workshop on Financial Econometrics (Durham University).
This research was supported by the Danish Council for Independent Research (DSF Grant 015-00028B), the Center for Information and Bubble Studies, University of Copenhagen, the Italian Ministry of University and Research (PRIN 2017 grant) and the University of Sydney (Faculty Research Future Fix 2020 Grant). Part of this paper was written while Giuseppe Cavaliere was visiting the School of Economics of the University of Sydney; financial support and hospitality are gratefully acknowledged. Finally, the authors acknowledge the technical assistance provided by the Sydney Informatics Hub of the University of Sydney for high-performance computing and cloud services.
B.2 Proof of Theorem 3
The theorem follows by a straightforward application of Lemma A.1. Specifically, Assumption A.0 is satisfied with θ† = θ₀ since, by assumption, θ*_T →p θ₀. Assumptions A.1 and A.2 follow from Lemma 1 with Ω_I = Ω_S = I(θ₀). Finally, Assumption A.3 follows from Assumption 2(c), which holds conditionally on the original data; this is shown in the Supplement, Lemma D.2.
C Proofs for the recursive intensity bootstrap
C.1 Proof of Lemma 2
Recall that, conditionally on the original sample, only N*(t) and the associated event times t*_1, ..., t*_{n*_T} are random. Moreover, conditionally on the original sample, N*(t) has conditional intensity given by λ*(t; θ*_T), which, in contrast to the FIB, is now a stochastic process even upon conditioning on the original data. As a consequence, at the bootstrap true value θ*_T, and with F*_t denoting the filtration associated to {N*(s), s ≤ t}, it holds that E*(dN*(t)|F*_{t−}) = λ*(t; θ*_T)dt. Notice that N* as well as the intensity λ* depend on the original data through θ*_T only. Notice, finally, that P(θ*_T ∈ Θ₀) can be made arbitrarily close to one by picking T large enough (such that the bootstrap process can be made stationary upon proper choice of the distribution of the initial values).
"year": 2022,
"sha1": "0758a86910f2ba8e22a5b3f5af269b987ca3e5f1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.03122",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0758a86910f2ba8e22a5b3f5af269b987ca3e5f1",
"s2fieldsofstudy": [
"Mathematics",
"Economics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
235784801 | pes2o/s2orc | v3-fos-license | Bilateral Serous Retinal Detachment in Lymphoid Blast Crisis of Chronic Myeloid Leukemia
We herein report a patient with Philadelphia chromosome-positive lymphoid blast crisis of chronic myeloid leukemia (CML), who presented with bilateral serous retinal detachment (SRD). A 36-year-old Asian male presented with the symptoms of decreased vision and was found to have bilateral SRD involving the fovea. There was no inflammation in the anterior chamber or vitreous. Physical examination showed hepatomegaly and splenomegaly. A blood count revealed a white blood cell count of 38.2 × 10⁹/L with 51.5% blast cells. Bone marrow aspirate showed a total cell count of 145 × 10³/μL with 80.6% blast cells and negative neutrophil myeloperoxidase staining. Cytogenetic analysis using fluorescence in situ hybridization confirmed a 9;22 chromosomal translocation, indicating the presence of the Philadelphia chromosome. Flow cytometry analysis demonstrated expression of CD10, CD19, and positive TdT. According to morphology, immunology, cytogenetics, and molecular criteria, the patient was diagnosed as having Philadelphia chromosome-positive lymphoid blast crisis of CML. Based on the ocular findings and hematological abnormalities, the SRD was considered to be ocular involvement secondary to the blast crisis of leukemia. Two months after starting induction therapy, fundus examination and optical coherence tomography showed complete resolution of bilateral SRD and improved vision. Prompt diagnosis of the disease leads to early systemic chemotherapy and may help restore visual function and improve survival.
Introduction
Ophthalmic manifestations of leukemia are frequently encountered in the clinical setting. Although ocular manifestations may occur in all types of leukemia, ocular involvement in lymphoid blast crisis of chronic myeloid leukemia (CML) is rarely reported. We herein report a patient with Philadelphia chromosome-positive lymphoid blast crisis of CML, who presented with bilateral serous retinal detachment (SRD).
Case Report
A 36-year-old Asian male was admitted to our hospital with severe general fatigue. Two months before presentation, he had been admitted to a local clinic with fever, was diagnosed with anemia, and received blood transfusions. Despite this treatment, his symptoms worsened, and he consulted our hospital. Physical examination showed hepatomegaly and splenomegaly. Complete blood count revealed a red blood cell count of 1.81 × 10¹²/L and a hemoglobin level of 54 g/L, confirming anemia; a platelet count of 214 × 10⁹/L; and severe leukocytosis with a white blood cell count of 38.2 × 10⁹/L. The differential count was 15% neutrophils, 24.5% lymphocytes, 4.5% monocytes, 1.0% eosinophils, 0% basophils, 1% promyelocytes, 2% myelocytes, and 51.5% blast cells. Bone marrow aspirate showed a total cell count of 145 × 10³/μL with 80.6% blast cells and negative neutrophil myeloperoxidase staining. Cytogenetic analysis using fluorescence in situ hybridization confirmed a 9;22 chromosomal translocation, indicating the presence of the Philadelphia chromosome. Flow cytometry analysis demonstrated expression of CD10, CD19, and positive TdT. Examination of cerebrospinal fluid was negative for malignant cells. According to morphology, immunology, cytogenetics, and molecular criteria, the patient was diagnosed as having Philadelphia chromosome-positive lymphoid blast crisis of CML.
Additionally, he had also complained of decreased vision 1 month before presentation at our hospital. He was seen at the ophthalmology outpatient clinic of our hospital one day after starting oral corticosteroids as treatment for leukemia. On ocular examination, decimal best-corrected visual acuity was 1.2 in the right eye (OD) and 0.8 in the left eye (OS), and intraocular pressure was within normal limits bilaterally (OU). Slit-lamp examination revealed no anterior chamber cells or flare and no vitreous cells. Fundus examination revealed bilateral SRD involving the fovea, with some retinal hemorrhages and Roth spots in the peripheral retina OU (Fig. 1). Optical coherence tomography (OCT) showed subretinal fluid accumulation and a thickened choroid OU. Choroidal thickness could not be measured because the choroid-scleral interface was not visible due to shadowing OU. No subretinal septa were observed OU. Fluorescein angiography (FA) showed multiple pinpoint hyperfluorescent spots in the early phase, with pooling of the fluorescein dye in the subretinal space and leakage from the optic disc in the late phase. The differential diagnosis included Vogt-Koyanagi-Harada (VKH) disease and central serous chorioretinopathy. Although the patient's FA findings were almost the same as those of VKH disease, he showed no inflammatory signs, prodromal meningeal irritation, or inner ear disturbance, and his OCT images did not show subretinal septa. Based on the ocular findings and hematological abnormalities, the SRD was considered to be ocular involvement secondary to the blast crisis of leukemia.
After receiving oral prednisolone (starting at 95 mg/day) for 7 days prior to induction chemotherapy, the patient started induction therapy with the standard protocol, consisting of imatinib, daunorubicin, vincristine, and cyclophosphamide. Two months after starting induction therapy, fundus examination and OCT showed complete resolution of bilateral SRD; the choroid-scleral interface was visible, and the choroidal thickness at the fovea was 458 μm OD and 448 μm OS (Fig. 2). Best-corrected visual acuity improved to 1.2 OD and 1.0 OS. Eight months after the first course of chemotherapy, SRD remained resolved, and no sunset glow appearance of the fundus was observed OU (Fig. 3).
Discussion/Conclusion
Ophthalmic involvement of leukemia can occur either primarily, by direct infiltration of leukemic cells into ocular tissues, or secondarily, due to hematologic abnormalities, central nervous system (CNS) involvement, or the toxicity of various chemotherapeutic drugs [1]. In the present case, SRD occurred as an extramedullary manifestation. Although SRD may occur in any type of leukemia, ocular manifestations occurring during lymphoid blast crisis of CML are rarely reported [2][3][4][5][6]. Previous reports of ocular involvement in blast crisis of CML included masquerade syndrome [2]; infiltration of leukemic cells into the iris and choroid [3]; infiltration of the vitreous [4]; and choroidal infiltration and panuveitis with leukemic cells in either the anterior chamber or the vitreous, retinal hemorrhages, and SRD [5]. Reports of SRD alone as an ocular manifestation are even rarer [6]. The reason for the variety of ocular manifestations in blast crisis of CML is unclear. Several previous reports have also described SRD in leukemia [7][8][9][10][11][12]. Hypotheses for the pathogenesis of SRD in leukemia include sequestration of infiltrated leukemic cells in the choroid and occlusion of the choriocapillaris; through the latter, interference with the blood supply to the retinal pigment epithelium may cause secondary retinal pigment epithelium dysfunction [7][8][9][10][11][12].
Leukemia presenting as SRD needs to be differentiated from other etiologies that present as SRD, such as VKH disease [11,12]. Our case did not show inflammatory signs in the anterior chamber or vitreous; OCT findings showed no subretinal septa, which are generally observed in VKH disease; and there were no prodromal symptoms. Therefore, VKH disease was less probable.
In our case, leakage from the optic nerve head was observed on FA. We speculate that involvement of the optic nerve head may be caused by direct optic nerve infiltration or increased intracranial pressure [13,14]. In the present case, no blast cells were detected on cytological analysis of the cerebrospinal fluid. However, there remains a possibility that blast cells had infiltrated the optic nerve head; therefore, ophthalmologists should keep in mind to examine this structure and to consider infiltrative causes in the differential diagnosis of disc leakage. In conclusion, ocular involvement of leukemia in the posterior segment of the eye may be the first sign of extramedullary spread, and prompt diagnosis of the disease leads to early systemic chemotherapy and may help restore visual function and improve survival.
Acknowledgement
We would like to thank Prof. Akiyoshi Takami and Dr. Osami Daimaru for comments and advice. We also thank Dr. Teresa Nakatani for the English language review.
Statement of Ethics
The Ethics Committee of Aichi Medical University Hospital waived the need for approval of this study that involved a retrospective review of medical records. This report adhered to the tenets of the Declaration of Helsinki 1964. Because of the difficulty to obtain written | 2021-07-11T05:28:08.038Z | 2021-06-17T00:00:00.000 | {
"year": 2021,
"sha1": "4f022885c3fd3246e5affb90165c6f4e2d069c55",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/516861",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3019215e67cf5f5c07f1b151d088256a8699e1a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235397246 | pes2o/s2orc | v3-fos-license | Usefulness of High-Frequency Ultrasonography in the Diagnosis of Melanoma: Mini Review
High-frequency equipment is characterized by ultrasound probes with frequencies above 10 MHz. At higher frequencies the wavelength decreases, which reduces penetration of the ultrasound beam but allows better evaluation of superficial structures. This explains the growing interest in ultrasound in dermatology. This review examines the state of the art of high-frequency ultrasound (HFUS) in the assessment of skin cancer, to support sound clinical practice and provide the best standard of evidence on which to base clinical and policy decisions.
INTRODUCTION
Cutaneous melanoma (CM) has a high incidence rate, even among young people, and this incidence has steadily increased over the last several decades (1,2). The incidence is 1.5 times higher in males (3); however, this depends on the age of onset: melanoma tends to affect young women and older men. The main risk factors implicated in melanoma development are exposure to ultraviolet (UV) radiation, owing to its genotoxic effect, the number of melanocytic nevi, family history, and genetic susceptibility (3). It has been noted that patients with a previous history of melanoma have a 1% to 8% risk of developing other primary melanomas (4). These numbers highlight the health and socio-economic implications of this skin cancer. Melanoma is related to a poor prognosis in the general population. The most important prognostic factors for survival are Breslow's index and the presence of ulceration. In the eighth edition, the AJCC melanoma expert panel described the impact of tumor thickness by subcategorizing T1 melanomas (5). The main prognostic factors for survival are still primary tumor (Breslow) thickness and ulceration; they are also used to define T-category strata in cutaneous melanoma. As in prior editions, in the eighth edition tumor thickness is measured to the nearest 0.1 mm, not 0.01 mm. In this edition, melanoma thickness thresholds of 1.0, 2.0, and 4.0 mm continue to define the T category. Consequently, tumors measuring from 0.95 to 1.04 mm are rounded to 1.0 mm, whereas in the seventh edition a subset of these melanomas, measuring 1.01 to 1.04 mm, would have been staged as T2 (a: without ulceration, b: with ulceration). The clinical implication, if any, for this small group of patients mentioned in the eighth edition has not yet been formally explored. Previous studies have detected a clinically significant threshold in the region of 0.7 to 0.8 mm in patients with T1 melanoma. In the eighth edition AJCC analysis of the T1 melanoma patient cohort, multivariable analysis of factors that predict melanoma-specific survival (MSS) [i.e., tumor thickness, ulceration, and mitotic rate as a dichotomous variable (<1 mitosis/mm² vs ≥1 mitosis/mm²)] revealed that tumor thickness dichotomized as <0.8 mm vs 0.8 to 1.0 mm, together with ulceration, predicted MSS more efficiently than mitotic rate (as a dichotomous variable).
The subcategorization of T1 melanomas (0.8 mm threshold) is important for the role of sentinel lymph node biopsy (SLNB), considering that SLN metastases are very infrequent (<5%) in patients whose melanoma is <0.8 mm in thickness and nonulcerated (i.e., AJCC eighth edition T1a), but occur in approximately 5% to 12% of patients with primary melanomas 0.8 to 1.0 mm in thickness. SLN biopsy can be performed in patients with a primary tumor thickness of 0.8-1.0 mm and also in patients with thinner ulcerated tumors (i.e., all patients with AJCC eighth edition T1b melanomas). SLN biopsy should be performed in patients with T2 and thicker melanomas and, when performed in patients with a T1 melanoma, the status of the SLN is used for staging (5).
The thickness of the melanoma also determines an increased risk of lymph node involvement. Patients with melanoma spread to nearby lymph nodes have a 5-year survival rate of 65% (6). Sentinel lymph node biopsy is indicated for all patients with primary melanoma with a Breslow's index > 0.8 mm. This procedure allows the detection of metastatic involvement of the lymph nodes and of nodal disease with no clinical or radiographic evidence. The outcome of SLNB may change subsequent therapeutic management, including the choice of performing a complete lymph node dissection or adjuvant therapy, and may also define different programs of clinical and imaging follow-up. Advanced imaging techniques, such as computed tomography (CT), magnetic resonance (MR) and positron emission tomography-CT (PET-CT), are used for whole-body staging (7). There is no consensus regarding surveillance imaging in melanoma patients: according to the National Comprehensive Cancer Network (NCCN), CT or PET scanning is recommended every 3 to 12 months for patients with stage IIB-IV asymptomatic melanoma, while the European Society of Medical Oncology recommends only physical examination every three months (8). However, ultrasound is the first diagnostic approach used to monitor regional lymph node basins for recurrence. Ultrasound has been shown to have the highest sensitivity and specificity, 96% and 99% respectively, for lymph node surveillance (9)(10)(11), as well as for the evaluation of nodal disease. Thanks to the use of high-frequency probes, it has proved useful for determining the ultrasound Breslow index, i.e., for evaluating the depth of tumor invasion (Figure 1). Moreover, color Doppler is an additional tool that can improve diagnostic accuracy through the identification of intra-tumor vessels and the characterization of their distribution (12) (Figure 2).
Highly accurate pre-treatment evaluation of melanoma is a useful tool for choosing the correct therapeutic approach and improving the survival rate and follow-up (13).
HFUS, and even more so ultra-HFUS, provides important information previously obtainable only from biopsy samples.
Further information can be obtained with strain elastography (SE). This technique estimates tissue elasticity under the assumption that tissues affected by tumor invasion are less deformable than normal tissues (14). An evaluation is achieved by comparing the elasticity of the target lesion with that of the surrounding tissues. The data obtained on relative stiffness are converted into a color-coded image that overlaps the two-dimensional images (15)(16)(17) (Figure 3).
This review examines the state of the art of HFUS in the assessment of melanoma, to ensure the best clinical evaluation for correct therapeutic strategies.
METHODS
Using the Medline, Embase and ISI Web of Science (Science Citation Index Expanded) databases, we searched for articles with the keywords "melanoma", "melanoma ultrasound" and "skin cancer melanoma diagnosis" (18).
The reference lists of all retrieved studies were used as additional sources of pertinent documents (18). We evaluated the title and abstract of the selected articles; if the abstract was eligible, the article was downloaded and read by two of the authors (MB and AR). We included human observational studies published from 1997 to 2020 that reported melanoma thickness measured with ultrasound (US), as well as the ability of HFUS to identify skip lesions and lymph nodes, using 95% confidence intervals or other measures of statistical uncertainty. The studies included in the meta-analysis consider different epidemiological data: many relied on specific reference incidence rates based on gender and age, and provided a relative standardized incidence ratio as a risk measure (Table 1).
We excluded case reports, editorials, non-independent studies, and cohort or case-control studies.
When two articles had overlapping numbers of melanoma cases, we chose the study with the highest number of total patients (18) (Figure 4).
DATA EXTRACTION
Only one co-author (MB) pulled the data into a predefined database.
The following information was considered valid for the analysis: study year, country, type of melanoma, number of patients, average age, gender and, lastly, median person-years accumulated by patients (18).
DISCUSSION
The application of new imaging techniques has also changed the staging work-up of patients with cutaneous melanoma. Chest and abdominal computed tomography (CT) scanning should be restricted to patients with high-risk melanoma (stage IIIA with a macroscopic lymph node, IIIB, IIIC) and used to evaluate potential metastatic sites. Magnetic resonance imaging (MRI) of the brain is used in patients with stage IV disease, is optional in stage III, and is not used in patients with stage I and II disease. The diagnosis of metastases is evaluated by positron emission tomography (PET)/CT; this technique complements conventional CT/MRI imaging in the staging of patients who have solitary or oligometastatic disease, where surgical resection is most relevant. Lesions suspected of being cutaneous melanoma undergo dermoscopic examination and, if the dermatologist deems it necessary, excisional biopsy. After excision of the lesion, histological examination and correct staging are mandatory to decide whether a further surgical excision and an SLNB should be performed and to choose the subsequent treatment (19,20). Ultrasonography is widely used in medicine (21)(22)(23). In recent years, US and especially HFUS have become popular among dermatologists. Skin US offers essential information for the diagnosis, therapeutic management and follow-up of tumoral and non-tumoral cutaneous pathology. HFUS examination may be useful in the pre-operative evaluation of CM, and it may correlate with histology (24). Modern HFUS equipment allows highly accurate visualization of the skin layers and appendages down to histological detail (25)(26)(27)(28). Probes ranging from 15 to 22 MHz allow visualization of the epidermis and dermis, including adjacent tissues 1 to 2 cm deep from the basal dermal layer (16).
50-70 MHz
A favorable agreement between HFUS and Breslow thickness was found in the 7 lesions examined. Moreover, ultra-HFUS uses ultrasound frequencies higher than 30 MHz, which allow submillimeter resolution of superficial anatomical structures (29).
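The frequency-resolution trade-off follows directly from λ = c/f. A quick back-of-the-envelope calculation in Python (assuming the conventional average speed of sound in soft tissue, about 1540 m/s) illustrates the orders of magnitude involved:

```python
C_SOFT_TISSUE = 1540.0  # m/s, conventional average speed of sound in soft tissue

for f_mhz in (10, 20, 30, 50, 70, 100):
    wavelength_um = C_SOFT_TISSUE / (f_mhz * 1e6) * 1e6  # metres -> micrometres
    print(f"{f_mhz:>3} MHz -> wavelength ~ {wavelength_um:.0f} um")
# 10 MHz gives ~154 um, while 70 MHz gives ~22 um: a higher frequency means a
# shorter wavelength (finer axial detail) at the cost of shallower penetration.
```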
Image quality is influenced by resolution, the key element in measuring the thickness and depth of skin changes (30). The typical ultrasound image of healthy skin is composed of three elements: the epidermis (also known as the epidermal echo), the dermis and the subcutaneous tissue (30).
HFUS cannot detect pigments such as melanin, but it allows a noninvasive evaluation of the primary tumor and is already able to provide a Breslow index in a large number of patients with CM (1).
As far as these studies are concerned, it remains unclear how the authors obtained the resolution values. Parameters such as dynamic signal range and signal-to-noise ratio were not reported, and the diagnostic information provided on the lesions often appeared poorly detailed (37).
Lassau et al. (1997), who evaluated hypoechoic, homogeneous, well-defined and vascularized lesions, found no difference in the sensitivity and specificity achieved using HFUS alone for the discrimination of invasive melanoma (n = 19) from all other included lesions (n = 44) (39).
Kaikaris et al. (2011) described the use of HFUS (14 MHz) and the association between US and morphological findings in measuring melanoma thickness.
They found a low correlation between US and the Breslow index for thin melanomas (1-2 mm) and a significant correlation for thicker melanomas (>2 mm). Measurements made with ultra-HFUS (20 MHz) correlated well with the depth of thick melanomas but were not accurate enough for thinner melanomas.
Evidence suggests that HFUS (20 MHz) may be a better tool than 2D-US for estimating tumor volume (40). The first significant US reports of melanoma were performed using fixed HF probes ranging from 20 to 100 MHz. Solivetti et al. (2014) define HFUS as a useful technique for the detection of melanoma in-transit metastases (41). Their study was performed on 600 patients with melanoma (thickness > 1 mm) who were negative on objective examination at clinical follow-up; US detected in-transit metastases in 63 patients, with a total of 95 lesions (41). None of these lesions was a false positive or a false negative (41).
Botar et al. (2015) document the positive correlation of the Breslow index with lymph node involvement and the risk of distant metastasis. This study characterized the lesions with elastography but used a 40-MHz probe for the semiquantitative analysis. The information obtained with HFUS showed a good correlation between sonometry and histometry (r = 0.88), with an average difference of 0.39 mm (relative difference 28%) (35,42). Tumors with a thickness between 0.55 and 0.95 mm were incorrectly classified according to histology in 34% of cases, and tumors with a thickness between 1.30 and 1.70 mm in 50% of cases. These last results are due to the low penetration of ultrasound with fixed-frequency equipment (about 6 mm at 20 MHz, 3 mm at 75 MHz, and 1 mm at 100 MHz).
On the other hand, probes with variable frequency from 10 to 15 MHz and multi-channel color Doppler evaluation allow melanomas measuring less than or more than 1 mm in thickness to be differentiated (43). This evaluation is essential when choosing to perform an SLN biopsy, which is indicated in melanomas measuring more than 1 mm in thickness (42).
Gambichler et al. found an almost identical relationship to histology, with a correlation coefficient of 0.99 with both 20- and 100-MHz transducers (44); the 100-MHz transducer was more accurate than the 20-MHz one. They included only lesions ≤1 mm thick, limiting the evaluation of lesions >1 mm thick. Machet et al., Gambichler et al. and Pellacani et al. found that US measurements were slightly overestimated compared with histological size, but concluded that US correlates strongly with melanoma thickness (10,45,46). Reginelli et al. were the first to describe the HFUS analysis of CM using probes ranging from 50 to 70 MHz. In this study, 14 CMs were analyzed; they presented an oval, fusiform shape and were inhomogeneous and hypoechoic, with smooth edges and variable vascularization (1,47,48).
After several studies on small animals, the first HFUS devices could be introduced into clinical use. Frequencies between 50 and 70 MHz are much higher than those of conventional US systems, providing a resolution of up to 30 microns and a penetration of about 15 mm (1). US performed with HF probes is considered more accurate because the result corresponds to in vivo tissue without dehydration or fixation. The thickness obtained from US evaluation was compared with that obtained on the biopsy specimen, and favorable agreement with Breslow thickness was seen (39,(49)(50)(51).
CONCLUSIONS
The application of ultrasound in dermatology is becoming more and more frequent. The ultrasound examination offers significant advantages and, being minimally invasive, is easily repeatable. In particular, equipment with high-frequency probes provides important information, especially pre-operatively, allowing a broader diagnostic-therapeutic evaluation as well as later follow-up.
AUTHOR CONTRIBUTIONS
All the authors contributed equally to this work. All authors contributed to the article and approved the submitted version. | 2021-06-11T13:28:53.715Z | 2021-06-11T00:00:00.000 | {
"year": 2021,
"sha1": "e8c93caed4114a822b031d12c1f4b1bb5bb5a8a8",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.673026/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8c93caed4114a822b031d12c1f4b1bb5bb5a8a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19851560 | pes2o/s2orc | v3-fos-license | The role of thromboelastometry in the assessment and treatment of coagulopathy in liver transplant patients.
Perioperative monitoring of coagulation is vital to assess bleeding risks, diagnose deficiencies associated with hemorrhage, and guide hemostatic therapy in major surgical procedures, such as liver transplantation. Routine static tests demand long turnaround times and do not assess platelet function; they are determined on plasma at a standard temperature of 37°C; hence these tests are ill-suited for intraoperative use. In contrast, methods that evaluate the viscoelastic properties of whole blood, such as the thromboelastogram and rotational thromboelastometry, provide rapid qualitative coagulation assessment and appropriate guidance for transfusion therapy. These are promising tools for the assessment and treatment of hyper- and hypocoagulable states associated with bleeding in liver transplantation. When combined with traditional tests and objective assessment of the surgical field, this information provides ideal guidance for transfusion strategies, with potential improvement of patient outcomes.
INTRODUCTION
Perioperative monitoring of blood coagulation is vital to assess bleeding risks, diagnose deficiencies associated with hemorrhage and guide hemostatic therapy during major surgical procedures, such as liver transplantation. (1) Routine tests performed under static conditions (prothrombin time, international normalized ratio, activated partial thromboplastin time, fibrinogen levels and platelet count) have long turnaround times, do not assess platelet function and are conducted on plasma rather than whole blood, at 37°C, which often does not reflect the actual temperature of the patient; (2) such tests are therefore ill-suited for dynamic intraoperative work settings. (2) Methods that measure the viscoelastic properties of whole blood, such as the thromboelastogram (TEG®) and rotational thromboelastometry (ROTEM®), can be used to guide transfusion therapy in cirrhotic patients; these methods provide fast qualitative assessment of coagulation and can overcome the limitations of traditional static tests. (3) Coagulation protein synthesis is often impaired in patients with liver disease. These changes can be counteracted by mechanisms leading to a new hemostatic equilibrium. (4) The major mechanisms include (1) dysfunction and impaired production of pro- and anticoagulant factors, leading to bleeding, thrombosis or relative hemostatic equilibrium in patients with end-stage liver disease; (5) and (2) a decrease in circulating platelets due to splenic sequestration, faster platelet turnover, shortened platelet half-life or decreased platelet production (low thrombopoietin levels), which is counteracted by increased secretion of von Willebrand factor, a platelet adhesion mediator.
Patients suffering from cirrhosis are deficient in the procoagulant factors II, V, VII, IX, X and XI. These deficiencies affect routine laboratory-based coagulation tests, particularly prothrombin time, international normalized ratio and activated partial thromboplastin time. However, in spite of decreased procoagulant factor levels, cirrhotic patients may still have normal thrombin-generating capacity due to decreased production of protein C (a potent anticoagulant) and increased levels of the endothelium-derived factor VIII. (6)
VISCOELASTIC METHODS OF COAGULATION ASSESSMENT
Thromboelastography was originally described in 1948 as a method for the global assessment of hemostatic function using a sample of blood. In contrast to conventional tests, TEG® is a whole-blood assay run at the temperature of the patient, and it therefore allows the assessment of platelet function and platelet-erythrocyte interactions. The thromboelastogram (Haemoscope/Haemonetics®, Niles, Ill) enables a comprehensive dynamic assessment of the coagulation process: in this test, a blood sample is loaded into a cup that oscillates through 4°45′ in 10-second cycles. Movement is then monitored via a pin suspended in the blood sample following addition of a coagulation activator (Figure 1).
The ROTEM® device is a modification of the TEG® technology based on the same working principles: the signal generated by the suspended pin is transmitted via an optical detection system instead of a torsion wire, and movement is generated by the pin, not the cup (Figure 2). Both TEG® and ROTEM® measure and provide a graphic representation of viscoelastic changes across all stages of clot formation, persistence and resolution (Figure 3). Reagents specific to ROTEM® enable the assessment of different aspects of the coagulation process and thus provide guidance for the correction of potential disorders, (7) a major advantage over TEG®. The following variations of the assays are used to assist transfusion decision-making in clinical practice: (1) EXTEM (tissue factor activation; rapid assessment of clot formation and fibrinolysis via the extrinsic coagulation pathway); (2) INTEM (contact activation; assessment of clot formation and fibrin polymerization via the intrinsic coagulation pathway); (3) FIBTEM (tissue factor activation combined with the platelet inhibitor cytochalasin D; qualitative assessment of fibrinogen levels); (4) APTEM (tissue factor activation combined with aprotinin; assessment of the fibrinolytic pathway; rapid detection when combined with EXTEM); and (5) HEPTEM (contact activation combined with heparinase; detection of heparin or heparinoids in the sample).
Assuming optimal conditions for hemostasis (temperature, blood pH and calcium serum levels within normal ranges) and clinical diagnosis of coagulopathy based on objective assessment of the surgical field, viscoelastic methods can be used to guide clotting factor or platelet replacement.
VISCOELASTIC METHODS IN LIVER TRANSPLANTATION
Increased rates of infection and hepatic artery thrombosis after liver transplantation have been associated with red blood cell transfusion. All blood products (cryoprecipitate, fresh frozen plasma and platelets) were shown to have negative impacts on 1- and 5-year graft survival. Also, transfusion of cryoprecipitate, fresh frozen plasma and/or platelets is associated with transfusion-related acute lung injury. (8) In contrast with viscoelastic tests, conventional coagulation tests do not reflect the actual hemostatic status of cirrhotic patients. The limitation of ROTEM® is that the test is run on blood outside the endothelium under no-flow conditions, so abnormal results should not be interpreted as indicative of coagulopathy in patients without evident bleeding. (9) Hence, ROTEM® is used as a guide for blood product replacement in patients with signs of coagulopathy and bleeding of non-surgical origin, provided temperature, blood pH and serum calcium levels are within normal values. (2) Rotational thromboelastometry is thought to be a promising tool for the assessment and treatment of hyper- and hypocoagulable states associated with bleeding during major surgical procedures, such as liver transplantation. (10)

CONCLUSION

Thromboelastogram and rotational thromboelastometry deliver valuable real-time information for perioperative coagulopathy management across the different phases of liver transplantation. When combined with traditional tests and objective assessment of the surgical field, these tools provide ideal guidance for transfusion strategies and have the potential to improve patient outcomes. Further studies are warranted to determine which parameters to target and which transfusion triggers to use for improved perioperative management of patients undergoing liver transplantation. | 2017-08-15T21:15:33.280Z | 2017-04-04T00:00:00.000 | {
"year": 2017,
"sha1": "2d52712ee56ac49b1b2387dbaf4b9d3607b6d031",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/eins/v15n2/1679-4508-eins-S1679-45082017MD3903.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0424cbd26bacb434e2bbf70a3300e5f14d405ba7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249450584 | pes2o/s2orc | v3-fos-license | Recurrent Neural Network and Auto-Regressive Recurrent Neural Network for trend prediction of COVID-19 in India
On 31 December 2019, the first case of Covid-19 was reported in Wuhan, Hubei province, China. Soon afterwards, in March 2020, the World Health Organization declared the contagious coronavirus disease (COVID-19) a global pandemic. Since then, researchers have focused on using machine learning and deep learning techniques to predict future cases of Covid-19. Despite all this research, we still lack good and accurate predictions, owing to the complex and non-linear nature of Covid-19 data. In this study, we implement RNN and Auto-Regressive RNN models: first, we implement LSTM and GRU independently; then, we implement DeepAR with LSTM and GRU cells. To evaluate the results, we use the MAPE and RMSE metrics. Abbreviations. ARIMA, Autoregressive Integrated Moving Average; Bi-GRU, Bidirectional Gated Recurrent Unit; Bi-LSTM, Bidirectional Long Short-Term Memory; Bi-Conv-LSTM, Bidirectional Convolutional Long Short-Term Memory; CNN, Convolutional Neural Network; Conv-LSTM, Convolutional Long Short-Term Memory; Covid-19, Coronavirus
Introduction
Coronavirus disease (COVID-19) is a respiratory disease causing severe acute respiratory syndrome. It was first identified in December 2019, when a group of patients presented a new form of viral pneumonia in Wuhan, China. On March 11, 2020, the WHO declared the new coronavirus (2019-nCoV) outbreak a global pandemic. Since then, several preventive measures, such as containment, rapid testing, the wearing of masks, self-quarantine and social distancing, have been applied by countries to stop the spread of the COVID-19 pandemic. The countries most affected by COVID-19 are the USA, India, Brazil, France, Germany, UK, Russia, S. Korea, Italy, Turkey, Spain and Vietnam [1]. The first case of Covid-19 in India was confirmed on 30 January 2020 in Kerala. A few months later, the number of confirmed cases was increasing daily; 97,847 was the maximum number of daily confirmed cases in 2020, recorded in September. From 5 April, the number of cases began to reach 400,000 [2,3]. During this pandemic, machine learning and deep learning techniques have gained immense interest from researchers [4]. Most of the research has focused on predicting and forecasting future Covid-19 cases in the short term by implementing machine learning and deep learning techniques. These studies aim to support proper future decisions, whether on travel restrictions, confinement or other measures. In parallel, researchers have studied the impact of this pandemic on different sectors, such as health and tourism. Shahid et al. [5] used a COVID-19 dataset and modelled it using various regressors, including ARIMA, LSTM, GRU and Bi-LSTM, for future predictions of confirmed cases, deaths and recovered cases for ten countries around the world (Brazil, China, Germany, India, Israel, Italy, Russia, Spain, UK, USA). The metrics used to assess performance were MAE, RMSE and r2_score. The results showed that the ARIMA and SVR models were unable to track the trend of the features, with higher prediction error and negative r2_score values, whereas LSTM, GRU and Bi-LSTM proved robust, with a higher accuracy rate.
Verma et al. [6] conducted a comparative study of recurrent and convolutional neural network models (vanilla LSTM, stacked LSTM, ED_LSTM, Bi-LSTM, CNN and a hybrid CNN+LSTM model) to capture the complex trend of the COVID-19 epidemic. This study was done to predict future cases of Covid-19 in India and the United States. To evaluate these models, they used the root mean square error (RMSE) and the mean absolute percentage error (MAPE) as metrics. The results showed the robustness of the LSTM model and of the hybrid CNN+LSTM model compared with the other models.
Bhangu et al. [6] focused on the monthly analysis of time series of confirmed, cured and deceased COVID-19 cases, which makes it possible to identify the trend and seasonality of the data. They used the ARIMA (autoregressive integrated moving average) and SARIMAX (seasonal autoregressive integrated moving average with exogenous regressors) models, which were optimized to obtain good results.
Tiwari et al. [7] implemented machine learning techniques, namely Naïve Bayes, SVM and linear regression, to predict the future growth and effects of the epidemic. Their demonstration showed that Naïve Bayes gives better results and better predicts confirmed Covid-19 cases, with minimum MAE and MSE values; the results predicted by Naïve Bayes were almost identical to the actual confirmed coronavirus cases.
Rustam et al. [8] used four standard prediction models, namely linear regression (LR), least absolute shrinkage and selection operator (LASSO), support vector machine (SVM) and exponential smoothing (ES), to predict COVID-19 threat factors such as the number of new infections, the number of deaths and the number of recoveries in the next 10 days. The results showed that ES performed best of all the models used, followed by LR and LASSO, while SVM performed poorly.
Ayoubi et al. [10] used the deep learning methods LSTM, GRU and Conv-LSTM and their bidirectional extensions Bi-LSTM, Bi-GRU and Bi-Conv-LSTM to predict future Covid-19 cases. The results showed that the bidirectional models have lower errors than the other models according to the EV, MAPE, MSLE and RMSLE metrics.
The structure of this paper is as follows: Section 2 describes the deep learning models that we used and the evaluation metrics. In the third section, we present the Covid-19 dataset, the experimental results, and the discussion. Finally, conclusions are given in the fourth section.

2 Methodology and data
Experimental setup
The trend of the COVID-19 epidemic is very dynamic and complex to capture. To capture this complex trend, we perform the following steps during training, testing and forecasting. -In the first stage, we tested the deep learning models LSTM, GRU, DeepAR with LSTM cells and DeepAR with GRU cells. The experimental work is shown in Fig. 2.
-The models have been implemented using Python. In order to evaluate these models, we used the following metrics: RMSE and MAPE.
Mathematical modelling
Deep learning methods are used to make predictions on data that can be linear, non-linear, or both. For predictions with time series, recurrent neural networks (RNNs) are often used, as they are known for their robustness in prediction. What makes RNNs robust is the way information flows between the cells: the input from the current time step and the output from the previous time step are fed into the RNN cells, so that the current state of the model is impacted by its previous states. Plain RNN models, however, cannot remember past information that is far away in time; LSTM and GRU cells were designed to address this. A GRU cell uses an update gate and a reset gate; these two gates are trained to filter out any irrelevant information.

Internal architecture of GRU cell [12]

DeepAR is a forecasting method based on autoregressive RNNs, which learns a global model from the historical data of all series in the data set. It is a methodology for producing accurate probabilistic forecasts, based on training a recurrent autoregressive neural network model on a large number of related time series. DeepAR uses a recurrent neural network (RNN) as its basic component and accepts past sequences and their covariates as input [13][14][15]; the RNN consists of Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) cells that take the previous time points and covariates as input [16].
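To make the idea concrete, here is a minimal PyTorch sketch in the spirit of DeepAR, not the original implementation: an LSTM reads lagged values, a linear head outputs the mean and scale of a Gaussian over the next value, training maximizes the likelihood, and forecasting feeds sampled values back in autoregressively. All layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepARLike(nn.Module):
    """Autoregressive RNN: p(x_t | x_{<t}) modelled as a Gaussian whose
    parameters are produced by an LSTM over the lagged series."""
    def __init__(self, hidden=32, layers=2):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # -> (mean, raw scale)

    def forward(self, x, state=None):               # x: (batch, time, 1)
        out, state = self.rnn(x, state)
        mu, raw = self.head(out).chunk(2, dim=-1)
        return mu, F.softplus(raw) + 1e-6, state    # strictly positive scale

def nll(model, series):                             # series: (batch, time, 1)
    mu, sigma, _ = model(series[:, :-1])            # predict step t from steps < t
    return -torch.distributions.Normal(mu, sigma).log_prob(series[:, 1:]).mean()

@torch.no_grad()
def sample_forecast(model, context, horizon):       # context length must be >= 2
    _, _, state = model(context[:, :-1])            # warm up the hidden state
    x, draws = context[:, -1:], []
    for _ in range(horizon):                        # ancestral sampling
        mu, sigma, state = model(x, state)
        x = torch.distributions.Normal(mu, sigma).sample()
        draws.append(x)
    return torch.cat(draws, dim=1)                  # one simulated sample path
```

Repeating sample_forecast many times yields an empirical predictive distribution, which is what makes this family of models probabilistic rather than point forecasters.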
LSTM & GRU
The evaluation metrics used to assess the performance of the proposed models were RMSE and MAPE, which are mathematically represented by the following equations:

RMSE = sqrt( (1/n) * Σ_{i=1}^{n} (y_i − ŷ_i)² )    (1)

MAPE = (1/n) * Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i| × 100    (2)

where ŷ_i is the model predicted value and y_i is the actual value.
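An equivalent NumPy implementation of these two metrics (a sketch; note that MAPE is undefined when an actual value is zero):

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))           # Eq. (1)

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # Eq. (2)
```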
Fig. 5. Internal architecture of LSTM cell [11]

We first calculated the descriptive statistics of our data on confirmed cases; these statistics are presented in Figure 7. The models were implemented in Kaggle using Python 3.0. We then consider the individual models LSTM and GRU. The structure of the LSTM is as follows: LSTM layer, dropout layer, dense layer. The structure of the GRU is as follows: GRU layer and dense layer. GRU and LSTM can have the same structures and parameters. We divided the input data into 80% for training and 20% for testing and normalized them using MinMaxScaler, whose mathematical representation is given in Eq. (3):

Z_n = (Z − Z_min) / (Z_max − Z_min)    (3)

where Z is the original time-series data, Z_n is the normalized time series, Z_min is the minimum value in the time series, and Z_max is the maximum value of the time series.

Fig. 9. DeepAR with LSTM cells, "testing data"
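A minimal sketch of this split-and-scale step with scikit-learn; the synthetic series stands in for the actual confirmed-case counts, and fitting the scaler on the training portion only avoids look-ahead leakage:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the daily confirmed-case series.
series = np.random.default_rng(0).poisson(100, 500).astype(float).reshape(-1, 1)

split = int(0.8 * len(series))                   # 80% train / 20% test
train, test = series[:split], series[split:]

scaler = MinMaxScaler()                          # implements Eq. (3)
train_n = scaler.fit_transform(train)            # fit on the training data only
test_n = scaler.transform(test)
# Model predictions are mapped back to the original scale with
# scaler.inverse_transform(predictions).
```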
At first, we implemented the models individually; GRU and LSTM each had 2 layers. We added a dense layer to the model to connect each neuron to the next. To overcome the exploding or vanishing gradient problem, we used the ReLU activation function. In DeepAR we kept the same parameters selected for the individual models; that is, DeepAR with LSTM cells and DeepAR with GRU cells were implemented with two layers for each cell type. As optimizer we used Adam, and RMSE to evaluate the models. From the figures we can see that the DeepAR model performs well in the training phase, where the data are linear. On the contrary, in the test phase the model does not give results as good as in the training phase. This is due to the complexity of the data, which contain both linear and non-linear segments: at the beginning the case counts were increasing slowly, but from December 2021 they increased rapidly. The problem we are confronted with when working with Covid data is to find a model that can detect these sudden and unexpected changes; this is where the data become complex and the model no longer performs well. Analyzing the results, we can see that the DeepAR model with LSTM cells gave good results, whereas GRU, whether used individually or as the cell type for DeepAR, remains less accurate.
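A sketch of the individual LSTM model as described above (two LSTM layers, dropout, a dense head, ReLU activations and the Adam optimizer); the window length, layer widths and dropout rate are illustrative guesses, and RMSE is tracked as a metric since Keras has no built-in RMSE loss:

```python
import tensorflow as tf

def build_lstm(window=30):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(64, activation="relu", return_sequences=True),
        tf.keras.layers.LSTM(64, activation="relu"),
        tf.keras.layers.Dropout(0.2),            # the dropout ("exclusion") layer
        tf.keras.layers.Dense(1),                # dense head for the next value
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# The GRU variant swaps tf.keras.layers.GRU in for the two LSTM layers.
```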
Conclusion
The COVID-19 data of some countries are not linear, which is the case for India. From the results obtained, we can see that the Auto-Regressive Recurrent Neural Network gives better results than the plain Recurrent Neural Networks. From our research and the studies cited in the literature, we observe that no model has yet succeeded in capturing the complex trend of Covid-19. This study has several limitations: deep learning and machine learning models remain weak in detecting the complexity of this trend. As future work, we propose working with hybrid models, such as RNN combined with CNN, to detect the complex trend of Covid-19 case counts. | 2022-06-08T15:16:19.131Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "736d04b3022bb19854bde5578c422e9b25be03e5",
"oa_license": "CCBY",
"oa_url": "https://www.itm-conferences.org/articles/itmconf/pdf/2022/06/itmconf_iceas2022_02007.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "091d91875031ded870f0d01d3f1daa59c9325efa",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |