TITLE: Temporal and spatial variability of shallow soil moisture across four planar hillslopes on a tropical ocean island, San Cristóbal, Galápagos
AUTHORS:
- Percy, Madelyn S.
- Riveros-Iregui, Diego A.
- Mirus, Benjamin B.
- Benninger, Larry K.
ABSTRACT:
Study region
This paper provides a summary of findings from temporal and spatial studies of soil water content on planar hillslopes across the equatorial island of San Cristóbal, Galápagos (Ecuador).
Study focus
Soil water content (SWC) was measured to generate temporal and spatial records to determine seasonal variation and to investigate how the behavior of surface and near-surface root-zone soil water may support island-wide hydrogeology models. SWC probes were installed at four weather stations in a climosequence to generate a temporal record, and spatial surveys of shallow SWC across the selected sites were completed during wet and dry seasons. Temporal differences in SWC were driven by seasonal variations in rainfall and evapotranspiration, while spatial variability remained high during both wet and dry seasons. Unsaturated hydraulic conductivity determined by mini-disk infiltrometers was highly variable across the slopes, as were other hydrologic variables.
New hydrological insights for the region
The high heterogeneity of soil water and hydrologic characteristics provides a means to explain why little runoff is observed at the study sites: soils do not saturate uniformly across hillslopes, allowing for runoff generated in one part of the hillslope to be conducted into the soil in adjacent parts of the hillslope. The lack of connected surface runoff helps explain how water enters the groundwater system of the island.
BODY:
1 Introduction

Knowledge of near-surface soil water content, the amount of water in pore spaces in the top of a soil (0–15 cm), contributes to the understanding of climate, energy balance, and plant health. Soil water content distributions across hillslopes can affect energy fluxes by: (1) controlling the partitioning of rainfall into evapotranspiration, runoff, and recharge (Mirus and Loague, 2013; Rasmussen et al., 2011); (2) facilitating soil development and changes in chemistry by serving as a means of solute transport and changing the oxidation state of soils (Bailey et al., 2014); and (3) serving as the source of water for soil microbiota (Graham et al., 2010). Soil water content distributions also affect hydrologic processes at varying scales (Western and Blöschl, 1999), ranging from centimeters (e.g., capillary flow versus macropore flow, solute transport) to kilometers (e.g., surface-atmosphere interactions, large-scale flooding). The distribution of soil water across hillslopes is affected by a number of factors, including season (Martinez et al., 2008; Western and Grayson, 2000), topography (Beven and Kirkby, 1979), flora (Chandler et al., 2018; Metzger et al., 2017), land use (Foster et al., 2003), and physical properties of the soil (Dong and Ochsner, 2018; Zhu and Mohanty, 2003).

We investigated the effect of seasonal variations in rainfall and evapotranspiration on the temporal and spatial distributions of soil water on San Cristóbal, a tropical ocean island located in the Galápagos archipelago in the equatorial Pacific (Fig. 1). Previous work on soil moisture patterns and dynamics has largely focused on subtropical or temperate landscapes, where a hillslope's aspect affects the soil water content not just through evapotranspiration but through fundamental changes to the hillslope's hydrology. Slope aspect affects the types of plants growing on the hillslope, the amount of solar radiation that each side of the slope receives during the summer and the winter, and the shape of the slopes (Pelletier et al., 2018). Authors have noted that polar-facing slopes (south-facing in the southern hemisphere and north-facing in the northern hemisphere) are usually cooler, wetter, and steeper, while equator-facing slopes are drier, less steep, and more likely to approach the wilting point (Ebel, 2013). Low-elevation equatorial sites, like those on San Cristóbal, are largely unaffected by aspect because solar radiation is direction-independent, providing an opportunity to explore the seasonal change, at constant aspect, in the distribution of soil water.

The relationship between temporal and spatial variability in soil water content based on season has been extensively studied in subtropical and temperate climates. The Shale Hills Critical Zone Observatory (Pennsylvania, USA) has served as the site of numerous spatial and temporal soil water surveys. The results of the temporal studies show that while root-zone and shallow soil water probes (less than 0.3 m deep) recorded strong seasonal differences in the distributions of soil moisture, deeper soil moisture probes (inserted to depths between 0.3 and 1.1 m) showed greater temporal persistence (Takagi and Lin, 2012).
At the Wüstebach catchment in Germany, authors noted that seasonal, and even event-driven, variations in precipitation affected the topsoil's soil moisture content, while deeper soil water content was affected more by the depth to the water table than by local precipitation (Rosenbaum et al., 2012). A study of soils in France, Spain, and Tunisia found that while precipitation caused the mean soil water content across a field to change, the distribution of soil water, especially minima and maxima, remained constant through time (Vachaud et al., 1985). Other authors have noted similar examples of spatially heterogeneous soil water distributions that are stable inter- or intra-annually (Brocca et al., 2009, and references therein; Grayson et al., 1997). At all these sites, an extensive network of soil moisture monitoring probes, weather stations, and groundwater observation wells and piezometers was in place, facilitating detailed quantitative understanding of the temporal and spatial dynamics of soil moisture.

Tropical sites like the Galápagos often lack the scientific infrastructure to support studies like those carried out in subtropical and temperate climates, despite the importance of the temporal behavior and spatial distribution of soil water to island-wide hydrology. Previous studies have used electromagnetic resistivity (D'Ozouville et al., 2008a), seismic refraction (Adelinet et al., 2018), numerical modeling (Domínguez et al., 2016, 2017), and noble gas and stable isotope geochemistry (Warrier et al., 2012) to create a conceptual model for the behavior of water as it enters the groundwater system (summarized in Percy et al., 2016). In general, the current conceptual models of the island's hydrogeological system require that water infiltrate through soils and into perched aquifers that either drain via springs to the surface or drain deeper into the island's groundwater system. However, little work has been done to understand the behavior of shallow soil water across the island's planar hillslopes. San Cristóbal's seasonal variability in precipitation amounts and type (Percy et al., 2016; Schmitt et al., 2018) makes it a prime study site to understand how seasonal variability affects the temporal and spatial behavior and distribution of soil water across hillslopes on a tropical island.

This study uses new data collected from soil moisture and weather monitoring stations at four different altitudes on San Cristóbal, coupled with intensive spatial sampling of shallow soil water content distributions across these four hillslopes during two different seasons, to address three primary questions:

1. Does the soil water saturation differ seasonally on the studied hillslopes?
2. Do we observe differences in the spatial patterns of near-surface soil water content across these hillslopes based on seasonal differences in stochastic variables like rainfall and evapotranspiration?
3. Could spatial patterns of shallow soil water affect runoff generation?

Through geostatistical analysis of spatial patterns, time-series analysis of the hydrologic response monitoring across the hillslopes, and characterization of soil hydraulic properties, we provide further insights into variations in the hillslope-scale water balance across a climosequence in a tropical equatorial island setting.

2 Methods and materials

2.1 Site selection and description

San Cristóbal Island, a basaltic volcanic island that is part of the Galápagos Archipelago (Fig. 1), is estimated to have emerged from the Pacific around 2.4 Ma (Geist et al., 2014), and the youngest dated eruption on the southwest side of the island, where our study sites are located, occurred at 0.65 Ma. The bedrock across the archipelago is basaltic and ranges from tholeiitic to alkaline in nature; across San Cristóbal, at least six mineralogically and chemically distinct basalts have been identified, ranging in age from 2.35 ± 0.03 to 0.7 Ma on the southwest side of the island to nearly modern on the northeast side (Geist et al., 1986).

Despite its equatorial latitude, San Cristóbal experiences seasons due to the latitudinal migration of the Intertropical Convergence Zone (Trueman and D'Ozouville, 2010). From January through May, San Cristóbal generally experiences weather typical of the tropics, with large convective rainstorms across the entire island. From June until December, southeast trade winds blow across cool upwelled ocean water to create an inversion layer, resulting in a dry and sunny coastal zone while the summit, at roughly 700 m above sea level, receives rainfall and is often cloudy (see Schmitt et al., 2018, for details of upland climate). The seasonal shift from highly variable, island-wide precipitation to consistent, highland-only precipitation has produced a climosequence that is strongly dependent on elevation, ranging from very arid at the coastline to very humid at the summit (Colinvaux, 1972). Vegetation ranges from native scrub and cacti at the coastline to grassy pastures, broadleaf shrubs, and trees at middle elevations, and native herbs and shrubs at the highest elevations (Huttel, 1986).

We selected four hillslope sites across the island, co-located with weather stations deployed in 2015 (Fig. 1), two on the windward side and two on the leeward side. The weather stations were installed to capture as many climate zones as possible while facilitating routine maintenance, instrument security, and data downloads. El Junco (EJ) is the highest site on the windward side of the island and is located in the very humid zone. The hillslope is convex-linear (see the classification of Schoeneberger et al., 2012, for an explanation of slope shapes) and is bounded on the east side by a bedrock outcrop. The studied portion of the slope covers an area of 48 × 50 m². Sitio Mirador (SM) and Cerro Alto (CA) are located on the leeward side of the island and span the transition zone between dry (SM) and humid (CA) climates. CA's slope shape is convex-concave, with cow paths that we observed to affect runoff at the site; the studied portion of the slope is 54 × 40 m². SM, with a study area of 96 × 40 m², is the most gently sloping of the sites and is bounded on three sides by bedrock cliffs. Finca Merceditas (FM) is the lowest site, but because it is on the windward side of the island, it remains in the transition zone between the dry and humid climate zones. The slope shape at FM is linear-linear and was partially covered by a study area of 75 × 50 m². With these four sites, we cover hillslopes in most of San Cristóbal's climate zones except the arid zone. The arid coastal zone's soils are thin and unevenly distributed, found mostly in cracks in lava flows, and so are not included in this study.
We visited the sites during the Galápagos wet season of 2015 and the dry season of 2016. The 2015 wet season coincided with an El Niño and was unusually wet (161 mm of precipitation measured at the EJ weather station over the five weeks of the study period), while the 2016 campaign occurred during an eight-month drought (46 mm of precipitation measured at EJ over the eight-week study period). Summary details about each site are provided in Table 1. Between the field seasons, a wildfire affected CA and SM, probably in July according to farmers. At CA, the majority of the sampling points were unaffected by the wildfire and the site remained a pasture; only the downslope points experienced the fire. SM's land use changed following the fire from a carefully maintained natural area that served as a nursery for native plants to a garden with little ground cover.

2.2 Measurement of climate variables and temporal soil water data

The deployed weather stations collected data on rainfall, wind speed and direction, relative humidity, temperature, and solar radiation, and were co-located with soil moisture probes at three depths. Data gaps at the stations were the result of inclement weather and biological activity that caused instruments to fail. To record the volumetric soil water content, we installed soil water sensors (Model CS616, Campbell Scientific Inc., Utah, United States) horizontally at depths of 10, 20, and 40 cm at EJ, and 10, 20, and 30 cm at the other three sites. However, because of equipment failure after deployment, continuous soil moisture data at SM are not available. The data were recorded every 15 min, with the exception of the station at FM, which recorded data every 5 min for the first 300 days. A complete description of the equipment used to measure climate variables is found in Schmitt et al. (2018). To calculate reference evapotranspiration rates, we used the FAO Penman-Monteith method (Zotarelli et al., 2010) because of the available weather station data. We used the grass reference surface in our calculations because none of the weather stations were installed under trees and because three of the four sites were grassy, while the last site (EJ) is characterized by low herbs.
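For concreteness, the daily FAO Penman-Monteith computation can be sketched in R, the software we used for our other analyses. This is a minimal illustration rather than the exact script used in the study: the variable names are ours, net radiation is taken as a direct input rather than derived from the measured solar radiation, and the soil heat flux is assumed negligible at the daily time step.

```r
# Minimal sketch of the FAO-56 Penman-Monteith reference ET for a grass
# surface (after Zotarelli et al., 2010). Inputs are assumed daily values:
#   tmean: mean air temperature (deg C)
#   rn:    net radiation (MJ m-2 day-1); deriving rn from measured solar
#          radiation requires additional site-specific steps not shown here
#   u2:    wind speed at 2 m (m s-1)
#   rh:    mean relative humidity (%)
#   elev:  station elevation (m)
et0_fao56 <- function(tmean, rn, u2, rh, elev, g = 0) {
  p     <- 101.3 * ((293 - 0.0065 * elev) / 293)^5.26    # atmospheric pressure (kPa)
  gamma <- 0.000665 * p                                  # psychrometric constant (kPa/degC)
  es    <- 0.6108 * exp(17.27 * tmean / (tmean + 237.3)) # saturation vapor pressure (kPa)
  ea    <- es * rh / 100                                 # actual vapor pressure (kPa)
  delta <- 4098 * es / (tmean + 237.3)^2                 # slope of the vapor pressure curve
  (0.408 * delta * (rn - g) + gamma * (900 / (tmean + 273)) * u2 * (es - ea)) /
    (delta + gamma * (1 + 0.34 * u2))                    # reference ET (mm/day)
}

# Example with illustrative values: a warm, humid day at a mid-elevation site
et0_fao56(tmean = 24, rn = 12, u2 = 1.5, rh = 80, elev = 300)
```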
2.3 Physical properties and site characterization

We made hillslope-scale and point measurements to describe the physical properties of the studied hillslopes. We recorded physical variables at each point for which near-surface soil water content was collected, as described below. Digital elevation models available for San Cristóbal were insufficient because their resolution was too coarse for this study (20 m from ENVISAT; D'Ozouville et al., 2008b), so we surveyed the four hillslopes across a 2-m grid via tape-and-compass measurements (Fig. S1). From the generated 2-m resolution DEM, we calculated the slope and upslope accumulation area for each site using ArcGIS (Esri, 2011). We noted the overlying plant type (bramble, fern, herb, grass, tree, or bare soil) at the point at which we inserted the soil water content probe, and recorded the presence of trees, tree roots, or rocky outcrops within 2 m of each measurement point. We present maps of the plants at each measurement point for both seasons in Fig. S2. We used a soil step probe (33″ Plated Step Probe with Handle, AMS, Idaho, United States) to measure the depth of soil (hereafter, depth to refusal) after measuring the soil water content at each point. Excavations show that the depth to refusal is the depth to a clay-rich horizon or rock and provides an adequate proxy for the depth to which soils are heavily rooted. Maps of the depth to refusal are presented in Fig. S3.

2.4 Soil water content and hydrologic properties

During the field seasons, we measured the near-surface soil water content using a Hydrosense II (HS2) CS659 portable soil water content probe (12-cm rods, Campbell Scientific Inc., Utah, United States) with a support volume (the volume of soil over which the soil water content is measured) of approximately 460 cm³. The output from the HS2 is an electrical period (microseconds, μs), and a conversion is used to calculate volumetric water content (volume %). The period is strongly related to the dielectric permittivity of the material around the probe rods (Campbell Scientific Hydrosense II User Guide). Tropical soils on volcanic parent material can have a range of clay and amorphous material contents that affect the calibration of soil water content probes (Noborio, 2001; Regalado et al., 2003), so we recorded both the period and the calculated soil water content. We calibrated the HS2 in the laboratory using minimally disturbed soil core samples collected in 130 cm³ metal cylinders.

In the 2015 wet season, we took at least 100 shallow soil moisture measurements within a 20 × 20 m² grid to provide adequate spatial coverage on the small hillslopes. We made additional measurements down the slopes in transects extending from grid lines. The dry season campaign in 2016 included the same 100 points measured during the 2015 campaign. We measured additional points during both field campaigns that are included in the geospatial analysis. We collected measurements over several days at each site; during the wet season campaign, the weather stations were not yet operational at EJ, CA, and FM, but field notes indicate that it rained every day at EJ and on one of the six measurement days at CA. During the dry season, only EJ received rain during data collection (Fig. S4).

We also measured hydraulic properties of the soils to support the temporal and spatial observations of the soil's volumetric water content. To measure the hydraulic properties of the soils on the four hillslopes, we used in situ experiments to quantify the unsaturated hydraulic conductivity, and laboratory analyses to measure bulk density, grain size distributions, pH, and the weight percentage of amorphous material in the soils. In the field, we measured the sorptivity and unsaturated hydraulic conductivity using Meter Group mini-disk infiltrometers (MDI) with an applied suction of −2 cm at sites CA, SM, and FM, and −0.5 cm at site EJ (due to slow infiltration across the site). We applied the MDI to the soil after clearing the leaf litter and made at least two measurements of unsaturated hydraulic conductivity within 1 m of each other at SM and FM. We used the Excel macro provided by the manufacturer to calculate the sorptivity and unsaturated hydraulic conductivity. We measured the bulk density of the soils using the saran method (Blake and Hartge, 1986) to aid in the calibration of the Hydrosense II, as well as to estimate the maximum possible volumetric water content. We calculated the bulk porosity of samples from the sites using an average particle density of 2.60 g/cm³, the average density of kaolinite, gibbsite, halloysite, and illite, all of which are expected to be important components of the soil (porosity = 1 − [bulk density/particle density]).
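The calculation performed by the manufacturer's macro follows the Zhang (1997) two-term infiltration model, and an equivalent fit is straightforward to sketch in R. The sketch below is illustrative rather than a reproduction of the macro: the coefficient A, which depends on soil texture, the applied suction, and the disk radius, must be taken from the infiltrometer manual, and the value used here is a placeholder.

```r
# Cumulative infiltration from a mini-disk infiltrometer is modeled as
#   I(t) = C1 * t + C2 * sqrt(t)
# where K_unsat = C1 / A and C2 is proportional to the sorptivity
# (Zhang, 1997). A = 2.4 is an illustrative placeholder; the correct value
# comes from the manufacturer's tables for the soil and suction used.
mdi_fit <- function(t, infil, A = 2.4) {
  fit <- lm(infil ~ 0 + t + sqrt(t))   # no-intercept two-term fit
  c1  <- coef(fit)[["t"]]              # slope term (cm/s)
  c2  <- coef(fit)[["sqrt(t)"]]        # sqrt-time term (cm/s^0.5)
  list(K_unsat = c1 / A, C2 = c2)
}

# Example with made-up readings: elapsed time (s) and cumulative infiltration (cm)
t_obs <- c(30, 60, 90, 120, 150, 180)
i_obs <- c(0.15, 0.26, 0.35, 0.44, 0.52, 0.60)
mdi_fit(t_obs, i_obs)
```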
In the laboratory, we measured grain size distributions for each site by dry sieving the fine earth fraction (<2 mm particles) on a shaker, followed by gravimetric determination to yield the percent of the total sample weight, with chemical dispersion of clays accomplished using sodium hexametaphosphate (Soil Survey Staff, 2014). The pH of the mineral soil was measured by mixing 5 g of fine earth in 20 mL of distilled water, allowing the mixture to equilibrate for at least one hour, letting the particles settle, and measuring the pH of the supernatant (Pansu and Gautheyrou, 2006). Naturally occurring amorphous materials in soils are noncrystalline and poorly crystalline materials, usually composed of compounds of Si, Fe, and Al that coordinate with free oxides and hydroxides. We measured the weight percentage of amorphous material in the bulk soil using the ammonium oxalate extraction method (Jackson et al., 1986) at least once per site because amorphous materials can adsorb water and affect the hydrophobicity of soil.

2.5 Data processing

We analyzed the soil water content and climate time-series data using R computing software (R Core Team, 2017). To examine relative temporal changes in soil moisture, we normalized the measured volumetric water content to the maximum observed values during saturation to calculate relative soil saturations. To examine the observations spatially, we used semivariogram analysis to calculate the autocorrelation distance across the four sites during each season; semivariograms show the autocorrelation over distance across hillslopes and can be used to understand the distributions of soil water (Western et al., 2004). Understanding the autocorrelation distance at the four sites provides a quantitative metric for the degree of hillslope connectivity, which is important to the understanding of runoff generation on the hillslopes. We input our gridded data into the geoR package in R (Ribeiro and Diggle, 2001) to test omnidirectional exponential (Western et al., 2004) and spherical models (Bi et al., 2009). Different studies have found that the exponential or spherical model fits data better in different landscape types: the exponential model approaches its sill asymptotically, whereas the spherical model reaches its sill, where the spatial covariance drops to zero, at a finite range. We compared the raw output data from the HS2 (period) and the calculated soil water content as inputs to the semivariogram analysis with the different models. We tested correlations between soil water content and topographic measures, including slope and upslope accumulation area, as well as the depth to refusal. We also analyzed the relationship between hillslope-averaged variables, like bulk density, pH, fraction of amorphous material, unsaturated infiltration rates, and soil texture, and the mean and standard deviation of the soil water content for the wet and dry seasons.

3 Results

3.1 Temporal record of soil moisture

The temporal record of rainfall, soil moisture, and evapotranspiration is presented in Fig. 2 for sites EJ, CA, and FM. Summary data from the weather stations at all four sites are in Table 2. The wet season on San Cristóbal usually falls between January and May, and the dry season falls between June and December, with interannual variability in the beginning and end of the seasons.
However, the beginning of the study fell within the strong 2015–2016 El Niño, which ended in May–June 2016 (Hu and Fedorov, 2019), resulting in very wet conditions throughout the first part of the study. As shown in Fig. 2a, almost 65% of the precipitation at EJ fell between October 2015 and February 2016. During this period, the wind was at its most variable, and the solar radiation was lower than during drier parts of the year. On the island of Santa Cruz, researchers found that fog can account for approximately 20% of the net precipitation at high-elevation sites at elevations comparable to EJ (Pryet et al., 2012b). Because of the configuration of the weather station, we did not measure the amount of occult precipitation at EJ, which means that an important source of water to the hillslope is not included in the precipitation data. Within the soil, the deepest soil moisture probe (40 cm) recorded soil moisture levels that regularly approached saturation; the data show that soil moisture declined from saturation more slowly at this depth than in the shallower soil. The dry period (February 2016 until the end of the data record, January 2017) had fewer rainstorms, and the reference evapotranspiration was higher due to both higher windspeeds and greater solar radiation. This corresponds with a decline in the recorded soil moisture through the season, with rainstorms resulting in sudden increases in the shallower soil moisture records (10 and 20 cm) and much smaller increases in the deep soil moisture record (40 cm).

CA (Fig. 2b), located on the drier leeward side of the island, receives precipitation from large convective storms rather than the fog that characterizes the “dry” season at EJ. In calculating the reference ET for site CA, the solar radiation term was consistent across the year, but the wind term was highly variable and was the dominant control on the calculated reference evapotranspiration for the site. The data recorded by the soil moisture probes at CA show that the deepest probe (30 cm) most often approached saturation, and the 20-cm deep probe was only slightly drier. The shallow probe (10 cm) recorded the greatest percent change in measured soil water content after precipitation events compared to the deeper probes and also indicated that the soil dried most quickly near the surface.

At the low-elevation windward site FM, the average reference ET remained around 2.40 mm/day throughout the seasons, in part because the windspeed remained consistent; changes in the daily reference ET are due to changes in radiation fluxes. The soil moisture data show that the shallowest probe (10 cm) recorded the highest saturation across the study period, while the two deeper probes (20 and 30 cm) recorded similar saturation values through time.

At all of the sites, the temporal soil moisture record reflects different responses to precipitation events. For example, at EJ (Fig. 3a), the surface soil moisture probe at 10 cm showed a daily decline in percent saturation until a large event, at which point the record reflects an increase to 100% saturation. The 10- and 20-cm probes first responded to the storm event at the same time; however, the 20-cm deep probe did not record as steep an increase in saturation. The 40-cm probe had the highest percent saturation prior to the storm event and did not respond to the storm event until hours after the surface probes.
Several days after the precipitation event, the 40-cm deep probe reached saturation during smaller precipitation events, while the 10- and 20-cm soils had lower percent saturation values. Fig. 3b shows the response in percent saturation at CA to two storm events. The first storm event resulted in a nearly simultaneous increase in the percent saturation at all three probe depths within 30 min of the start of the event, indicative of preferential flow in the surface soil, which may be an important mechanism at this site. The second storm event was recorded by all three soil probes simultaneously (within the same 15-minute measurement period). At CA, the surface soil dried most quickly. At FM (Fig. 3c), the 10-cm deep probe recorded the highest percent saturation throughout the considered time period, while the 20- and 30-cm deep probes recorded very similar percent saturations. The shallowest probe recorded an increase in the percent saturation approximately 50 min before the 20- and 30-cm root-zone probes began to record an increase. The second, smaller event did not affect the 30-cm deep probe and only marginally increased the percent saturation for the 20-cm deep probe, but it resulted in a 3% increase in the percent saturation for the surface probe, from 92% to 94.8% saturation.

3.2 Spatial soil water patterns and statistics

The number of sampling points and summary statistics of the spatial soil water surveys (Table 3) show that the mean, maximum, and minimum measured values of soil water content during the wet season are, not surprisingly, higher than during the dry season. The standard deviation and coefficient of variation (a measure of the relative variability of the soil water content) are higher for the dry period than for the wet period at all four sites. While changes in soil water between field seasons are associated with precipitation, the cross-site relationship between site-wide precipitation and mean soil water content is not linear, likely because of differences in the physical properties of soils at the sites. For example, despite its location in the very humid highlands and having the highest annual precipitation record, EJ has the lowest mean spatial soil water content for the wet season and the second highest for the dry season. CA's mean spatial soil water content is the second highest during the wet season but low during the dry season. SM and FM display similar mean spatial soil water values during the wet season, but because SM is on the leeward side of the island and FM is on the windward side, the mean spatial soil water content during the dry season is higher at FM than at SM.

There are no easily discernible spatial patterns in the distribution of near-surface soil moisture across the hillslopes (Fig. 4). At EJ, the most humid site, neither of the spatial surveys shows a relationship between soil moisture distribution and topography across the hillslope, although during the dry season the lower portion of the slope was slightly wetter than the upper portion (surveyed the day before) because of a rainstorm that occurred on the day we made the measurements on the lower slope segment (Fig. S4). Any possible pattern present at CA is difficult to identify because of the rock faces and thin soils that dominate the middle of the hillslope at this site.
In the areas with spatially continuous data, soil moisture increased slightly in the middle of the saddle at the top of the hill at CA, the point of greatest convergence at the study site. At SM, the distribution of soil moisture appears random during both the wet and dry seasons, although fewer data were collected during the wet season. The noticeably wet area at SM during the wet season occurred where taller trees and thick herbs shaded the ground. At FM, the distribution of soil moisture appears random during both the wet and dry seasons.

The distribution of the soil water content data varies from site to site, with the Shapiro-Wilk test of normality indicating that none of the sites' soil water content distributions are normal. Previous authors used the gamma distribution (e.g., Kaiser and McGlynn, 2018) to describe the probability distribution of such data because it resembles the positive skewness often observed in soil pore size distributions (Tuller and Or, 2005) and it has been experimentally tested in a variety of catchments (e.g., McGuire et al., 2005). We examined histograms of the soil water content for each site and applied gamma distributions (Fig. 5). For all sites but FM, the dry season's distributions are positively skewed, while for all of the sites, the wet season's distributions are negatively skewed. Other studies note that negative skewness is observed when the bounded function approaches the upper bound of porosity and the soil water content has reached saturation (Kaiser and McGlynn, 2018; Western et al., 2002), whereas positive skewness corresponds to the soil water content approaching the wilting point.

We tested the temporal stability of the soil moisture spatial distributions by ranking the soil moisture values and comparing the ranks of the points that we measured during both seasons. Across the four sites, there was no temporal stability in the spatial distribution of soil moisture. However, it is crucial to note that other studies that have observed temporal stability of soil moisture distribution patterns incorporated significantly more time points, including studies that collected soil moisture data over years (Dari et al., 2019; Gao et al., 2019; Starks et al., 2006; Western et al., 1999). Additional interpretation of the temporal stability of soil moisture distributions across our hillslopes would require additional spatial surveys of the slopes to capture not just the wet and dry seasons, but the transition between the seasons.

3.3 Geostatistical analysis

We used semivariogram analysis at each site to evaluate spatial correlations in the measured soil moisture. We built the semivariance models using the directly measured period (μs) rather than the calculated soil water content because the Hydrosense II does not report volumetric water content values above 52.2% due to internal calibration within the sensor, while the measured period is reported for every point. The soil's electrical response is affected by soil moisture, but also by the clay content, organic material content, and soil salinity (Topp et al., 1980). While the use of period rather than volumetric water content affected the autocorrelation distances calculated with the semivariogram analysis, it introduced less error into the interpretation of the results because of the greater spatial coverage across the hillslopes.
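The fitting workflow can be summarized as a minimal R sketch using the geoR functions named in Section 2.5. The data frame pts, its column order (x, y, period), the maximum lag, and the initial covariance parameters are illustrative assumptions, not values from the study.

```r
library(geoR)

# Build a geodata object from gridded survey points: columns 1-2 hold the
# x and y coordinates (m) and column 3 holds the HS2 period (microseconds).
gd <- as.geodata(pts, coords.col = 1:2, data.col = 3)

# Omnidirectional empirical semivariogram out to an assumed maximum lag
emp <- variog(gd, max.dist = 50)

# Fit spherical and exponential models by least squares;
# ini.cov.pars = c(partial sill, range), with the nugget estimated freely
m_sph <- variofit(emp, ini.cov.pars = c(2, 20), cov.model = "spherical",
                  fix.nugget = FALSE, nugget = 0.5)
m_exp <- variofit(emp, ini.cov.pars = c(2, 20), cov.model = "exponential",
                  fix.nugget = FALSE, nugget = 0.5)

# Compare the minimized sum-of-squares of the two fits and plot them
c(spherical = m_sph$value, exponential = m_exp$value)
plot(emp)
lines(m_sph, col = "blue")
lines(m_exp, col = "red")
```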
We tested a range of input values for the nugget, the partial sill, and the range parameter, selected based on the measured data for each model semivariogram, and reported the model with the lowest sum-of-squares fit. The values used for each of the best-fit models are in Table 4. The spherical model type provided a lower sum-of-squares fit than did the exponential model for all but four of the models. For three models (CA during the dry season and FM during the wet and dry seasons), the type of model did not affect the sum-of-squares fit, and the model is listed as Sph/Exp. For SM during the dry season, the exponential model had a lower sum-of-squares fit. We also tested whether the model required a nugget; for all of the models, setting a nugget provided a better model fit than reducing the nugget to zero. Beyond the presence of a nugget, the semivariogram models (lines) show no consistent trends in autocorrelation related to either season or site wetness and do not fit the calculated semivariance of the measured data well (Fig. 6). In the wet season (right panel), the wet sites' semivariance does not reach a plateau, suggesting that the autocorrelation distance is longer than the measured length of the hillslopes. For the dry season (left panel), the models of the semivariance (lines) fail to capture the calculated semivariance at CA and FM because there is no spatial correlation across the hillslopes.

3.4 Soil hydrologic properties

The results of the MDI experiments, measured over the course of one day at each site, show that the soils' sorptivity and measured unsaturated hydraulic conductivity (hereafter, K_unsat) were heterogeneous during both wet and dry field seasons. We report both values in Table 5 because sorptivity, which represents a soil's ability to draw in water (Stewart et al., 2013), does not depend on the soil wetness at the time of measurement, unlike unsaturated hydraulic conductivity measurements. Because the MDI experiments are completed over very small areas (the diameter of the steel disk applied to the soil surface is 4.5 cm), this heterogeneity is expected; but even in settings with massive spatial variability in hydraulic conductivity, enough measurements with a mini disk can provide useful estimates of effective infiltration capacity at the catchment scale (McGuire et al., 2018). Regardless of the mean soil water content for each hillslope, the unsaturated hydraulic conductivity did not exhibit patterns based on distance downslope from the datum at each site, suggesting that shallow soil water was not accumulating downslope. We measured the soil saturation at each point prior to running the MDI experiments and saw no relationship between shallow soil saturation, sorptivity, and unsaturated hydraulic conductivity. Other hydrologic properties at the four sites, including bulk density and calculated porosity, grain size distributions, pH, and the weight percentage of amorphous material, are reported along with the results of the MDI experiments in Table 5. Because we made all of the soil moisture measurements in the upper soil layer, only topsoil (0–15 cm, samples from the O or Ah horizons; Lasso and Espinosa, 2018) properties are included. During the dry field season, we collected surface bulk density samples at the same locations where we made the MDI measurements at FM and EJ; however, there is no apparent relationship between the two properties.
The other measured hydrologic properties at FM and EJ show no correlation with the MDI-measured K_unsat values at those sites. We compared the K_unsat data to the intensity of rainfall during the field seasons for all of the sites (Fig. 7). At all of the sites, the range of measured unsaturated hydraulic conductivity spanned orders of magnitude. The smallest values measured at each site are within the same order of magnitude as the rainfall intensity (Table 5), while the largest values are much higher than the measured rainfall intensity. However, because we avoided obvious macropores (observed at Sitio Mirador, the only site at which surficial expression of macropores was clear) and adjusted the suction of the MDI to exclude macropore flow, the calculated K_unsat values apply only to the soil matrix. Because we did not account for macropore flow in our measurements of K_unsat, the precipitation and K_unsat values may not overlap at all if macropore flow is important and rapid infiltration through macropores dominates infiltration. Additionally, we calculated the rainfall intensities from the total precipitation collected in a tipping bucket rain gauge, integrated over the data logger's recording interval (5–15 min). Some of the instantaneous rainfall intensities were likely somewhat higher than the values integrated over these relatively short measurement periods.

While in the field, we excavated soil pits across the hillslopes. At EJ, we observed water exfiltrating along the O and A horizon boundary in the profile wall (between 8 and 15 cm deep across the entire hillslope) when there was heavy rain (every day during the wet season, one day during the dry season). At CA, we observed water moving along a boundary between less clay-rich and clay-rich layers only during the wet season. At SM and FM, we did not observe water exfiltration from the exposed faces of the soil pits. Although numerous studies indicate that the saprolite that underlies many tropical sites may have bimodal pore size distributions (Navarre-Sitchler et al., 2013), we observed no evidence of such a distribution within the saprolites in the soil pits.

4 Discussion

4.1 Do the soil water saturation states differ seasonally?

At the three study sites with temporally continuous records of soil saturation (EJ, CA, and FM), we observed changes in the percent saturation at all depths between the wet season and the dry season (Table 6). This shift in the seasonal percent saturation was, unsurprisingly, supported by changes in the mean soil saturation measured by the two areal surveys. EJ had one rain event during the dry season for which there are both temporal and spatial data. Over the course of the event, the soil water saturation of the areal survey increased by 18.5%, and the soil water saturation measured by the weather station increased by 20.3%. With additional spatial surveys and in situ soil moisture probes, it would be possible to determine whether the weather stations provide a representative measurement of the change in percent saturation. Because this study fell during an El Niño, the seasonal changes that we describe represent extreme cases of wet and dry. The highest intensity and amount of precipitation measured at EJ corresponds to the record of the greatest saturation of soils at all depths.
When rainfall declined during the dry season, the calculated reference evapotranspiration was higher and more variable, reflecting an increase in solar radiation and windspeed. El Niño usually results in more convective rainfall and less fog in the Galápagos (Trueman and D'Ozouville, 2010), so it is possible that during non-El Niño years the soils at EJ do not dry as much as they did during our study period, which occurred during an intense El Niño year. At CA, the rainy periods corresponded to the highest measured soil saturation, while the reference evapotranspiration was variable throughout the year, reflecting the site's leeward position and the fact that it receives less consistent rainfall and fog during dry seasons compared to the windward sites. Because island-wide convective storms are more common during El Niño years, the high soil saturation observed at CA during this study may be uncharacteristic of the site. FM also has distinct rainy and dry seasons, although it received more dry season precipitation than the leeward CA. Higher soil saturation was associated with the rainy seasons, but the reference evapotranspiration stayed consistent throughout the year, reflecting the fact that the site receives less precipitation than EJ, but more than CA.

We examined the change in variance in the temporal soil water content record and observed shifts through time. The sites showed different relationships between the saturation state and the variance, with higher measured variance during the dry season at EJ and lower variance during the dry season at CA. The variance around the mean percent saturation at FM remained nearly the same in both seasons. Many authors have observed that the highest variance in soil moisture is associated with mid-range mean soil moisture in near-surface soils (Choi et al., 2007; Famiglietti et al., 2008), although researchers have also noted that variance can either increase or decrease with changes in the mean soil moisture (Pan and Peters-Lidard, 2008). For example, at a temperate site in Virginia (USA), the lowest variance around mean soil moisture was associated with extreme wet and dry periods (Lawrence and Hornberger, 2007). The authors determined that during dry periods, the variance is controlled by the wilting point of plants; during temperate periods, the variance is dictated by the hydraulic conductivity of soils; and during wet periods, the variance is controlled by the soil porosity. Authors working in sodic soils observed that while the highest variance was associated with mid-range soil moisture at depths below 20 cm, the shallowest soil moisture measurements (0–6 cm) exhibited the highest variance during the extreme wet and dry soil moisture periods (Peterson et al., 2019). Their observation of high variance associated with extreme wet and dry values in deeper soils is likely the result of the chemical properties of the soils affecting the conductivity of clay pans at depths greater than 20 cm. At our sites, the depth to the point of refusal, which is generally the depth to a clay-rich horizon, is deeper than the shallowest soil moisture probe at each weather station, and the chemistry of the soils does not display sodic characteristics (Percy, 2020), so we assume that the sites' variance decreases as conditions approach extreme wet and dry states.
To this end, we compared the wet and dry seasons to understand what factors may affect the percent saturation at the sites, using Lawrence and Hornberger's (2007) drivers of variance for different saturation states. At EJ, we observed a shift from lower variance in the wet season to higher variance in the dry season, suggesting a possible shift from control of soil saturation by maximum soil porosity to control by hydraulic conductivity and other hydrologic properties. EJ's surface soil remained more saturated than at any of the other sites throughout the year (Table 2), making it unlikely that plants on the planar hillslope experienced water stress during any part of the study period. Conversely, the variance at CA declined between the wet and dry periods, suggesting that CA shifted from a state where soil saturation was controlled by hydrologic factors to a point where the plants' wilting points were being reached. Field observations of dried grasses and herbs support this hypothesis, although matric pressure was not measured, so the hypothesis cannot be further tested. The lack of a statistically significant change in the variance at FM leads us to conclude that there was no major shift in the factors controlling soil saturation at this site. Alternatively, it is possible that the control of soil saturation at FM shifted from porosity to the wilting point of plants, but we find this unlikely based on field observations at FM during the dry season, when grasses and herbs were still green, and because the deeper probes rarely recorded full saturation. Further field campaigns would benefit from a longer temporal record and the installation of additional in situ soil moisture probes to account for variable changes in the percent saturation across the sites.

4.2 Do the spatial patterns of near-surface soil water shift seasonally?

With the spatial surveys of soil moisture, we intended to test whether observable differences in the spatial patterns of near-surface soil moisture (top 12 cm) across the hillslopes depend on the season. Our findings indicate that the heterogeneity of soil water across the hillslopes remained high during both wet and dry seasons, but based on only two spatial surveys, determining the temporal stability of the heterogeneity is not yet possible. The semivariogram analysis indicates that the autocorrelation distance was longer than the length of the studied hillslopes. The lack of spatial autocorrelation at our measured scales may reflect the size of the hillslopes, their planar morphology, or the heterogeneity of soil properties across the hillslopes. We used simple regressions to test whether topography, depth to refusal, or slope could be used to predict the percent saturation, but no relationship was observed between any of these variables and the spatial soil moisture measurements. We used Wilcoxon signed-rank tests to test whether plant type led to distinguishable differences in percent saturation, but differences between plant types were not statistically significant at any of the sites. This suggests that at the scales of the plots in which we worked, a combination of these variables affects the surface soil moisture, in addition to the microclimatic conditions associated with each measurement point.
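As a minimal sketch of these tests in R, assuming a data frame pts with one row per survey point (all column names are ours): simple linear regressions against each candidate predictor, and a rank-based comparison across plant types. R's wilcox.test() computes the signed-rank test when paired = TRUE and the rank-sum test otherwise; pairwise.wilcox.test() is shown here as one convenient way to compare all plant-type pairs, not necessarily the exact procedure used in the study.

```r
# pts is assumed to contain, per measurement point:
#   sat     - percent saturation from the HS2 survey
#   slope   - local slope from the 2-m DEM
#   uaa     - upslope accumulation area from the 2-m DEM
#   refusal - depth to refusal (cm)
#   plant   - overlying plant type (bramble, fern, herb, grass, tree, bare)

# Simple regressions of saturation on each candidate predictor
summary(lm(sat ~ slope,   data = pts))
summary(lm(sat ~ uaa,     data = pts))
summary(lm(sat ~ refusal, data = pts))

# Rank-based comparison of saturation between plant types, with a
# correction for multiple pairwise comparisons
pairwise.wilcox.test(pts$sat, pts$plant, p.adjust.method = "holm")
```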
Previous studies have noted that plot-scale variability in soil moisture can be very high compared to landscape-scale variability (Kaiser and McGlynn, 2018); because this study was limited to the hillslopes at which weather stations were installed, we did not measure catchment-scale variability in soil water content. We noted from the temporal soil moisture record that EJ's variance in percent saturation was higher in the dry season than in the wet season, while CA's variance was lower in the dry season than in the wet season, and FM's variance was the same in both seasons (Section 4.1). However, the variance in the spatial soil moisture measurements from the seasonal surveys was consistently higher during the dry season than during the wet season. Although the temporal and spatial variances are not directly comparable because of the nature of the study design, the dry season's higher spatial variance may be due to surface soils drying and approaching wilting points.

There are numerous possible reasons why the spatial patterns of soil water remained highly heterogeneous throughout both seasons, despite differences in stochastic variables like precipitation and evapotranspiration. Previous work completed across a much larger catchment at Tarrawarra (Victoria, Australia) noted a change in the heterogeneity of spatial soil moisture observations between the wet and dry seasons, with soil water content increasing in the convergent portions of the catchment during wet periods (Western and Grayson, 1998, 2000). The planar nature of the studied hillslopes, coupled with their small size, makes it unlikely that we captured sufficient data in convergent areas where flowlines meet, possibly explaining the lack of recorded seasonal reorganization of soil water. Additionally, the interpretation of the spatial distribution of soil water is complicated by the fact that each site's soil water survey took several days to complete. The weather stations were installed after we completed the spatial surveys in the wet season, so we are unable to show the change in soil water content through time for the first spatial survey; however, we can show how the soil moisture changed during the dry season over the course of the surveys (Fig. S4). At FM and CA, the weather stations recorded mean volumetric water contents of 29.7% and 25.4%, respectively, with a standard deviation of 0.2% over the period during which we collected spatial data. However, a rainstorm at EJ on one of our measurement days caused the soil water content to rise, and the resulting standard deviation of the soil water measured at the weather station during the four days of the spatial survey is an order of magnitude higher at EJ (2.3%) than at FM and CA. Further surveys of spatial distributions of soil water would benefit from single-day collection, which was not possible with the limited resources and narrow scope of this project.

4.3 Do spatial patterns of soil water affect runoff generation?

Despite intense rainfall at all of the sites during the rainy season, we did not observe evidence of connected surface runoff across the studied hillslopes at EJ, SM, and FM, including little evidence of gully formation at the agricultural sites or bottom-slope wetness. The only runoff that we observed was at CA along a cow trail, and then only when the trail passed beneath a basalt bluff.
Based on the spatial distribution of soil moisture across the hillslopes and the semivariogram analyses of the spatial distributions, we propose that the lack of runoff at the four sites is due to low spatial connectivity across each hillslope and variable responses to wetting and drying at different depths of the soil, coupled with generally high hydraulic conductivity in surface soils. The low spatial connectivity of the surface soil water, identified using semivariogram analysis, affected runoff because the amount of water in soils varied spatially. From the temporal record of soil water content, we noted variable responses to wetting and drying between the surface soil probes (10 cm) and the deeper soil moisture measurements made within the root zone. The delays between the arrival of water at the surface soil probe and at the deeper probes suggest that vertical infiltration is largely matrix-flow dominated. We also observed a large range of sorptivity and unsaturated hydraulic conductivity (Fig. 7, Table 5). The range of the unsaturated hydraulic conductivities (Table 5) suggests that some parts of the hillslopes were more conductive than others; from the MDI experiments, we know these areas may be very close to one another (within 1 m).

Incorporating the observations of spatial and vertical heterogeneity, we propose two possible explanations for the lack of runoff observed at the sites. The first is that highly permeable, unsealed surface material distributed across the hillslopes allows water to infiltrate vertically into the soil column. Previous authors modeling arid and semiarid catchments found that the sealing status of the shallowest portions of a soil profile was a more important factor in generating runoff than spatial heterogeneity (Assouline and Mualem, 2006). They showed that simulations of catchments with no surface seal formation generated little cumulative runoff. However, this explanation does not fit our observations, as we did not observe soil sealing on any of the hillslopes, and the hydrologic properties and soil water distributions are highly heterogeneous at each site. Instead of uniform, highly permeable soil surfaces, we suggest that surface soil that becomes saturated simply becomes a source of soil water to adjacent, less saturated soil that can accommodate the excess water by serving as a sink. The high variability in unsaturated hydraulic conductivity provides further evidence that intense precipitation does not saturate a soil and start flowing overland, because areas with lower soil moisture and higher hydraulic conductivity are nearby and can serve as sinks for excess water. This supports the hypothesis of more “transmissive” and “retentive” parts of the hillslopes that prevent the accumulation of observable and measurable overland flow across hillslopes. Thus, any patches of runoff are not connected with one another downslope, limiting the occurrence and impacts of overland flow. Transmissive soils conduct water more easily when wet (higher hydraulic conductivities), while retentive soils conduct water poorly when wet (lower hydraulic conductivities) but can retain soil moisture for longer periods of time. High-intensity rainfall may produce localized runoff from retentive soils that infiltrates into adjacent parcels of more transmissive soil (Nimmo et al., 2009).
On San Cristóbal, this model may explain the lack of runoff on the planar hillslopes: because of the high heterogeneity in hillslope-scale characteristics, like soil texture and the amount of amorphous material, both of which affect hydraulic conductivity, parcels of transmissive and retentive soils may be adjacent to each other. The lack of autocorrelation across small distances (less than 100 m) on the hillslopes supports this model of a patchwork of transmissive and retentive soils. Runoff generation at temperate sites may depend on whether lateral or vertical flow paths are dominant along the hillslopes, where the presence of lateral flow paths depends on the morphology of the landscape (Elsenbeer, 2001; Fitzjohn et al., 1998). In our study, none of the studied slopes featured gullies or other areas of high convergence (Fig. S1), so it is unlikely that large amounts of surface lateral flow were driven by hillslope morphology. At CA, the uppermost portion of the site was saddle shaped and crisscrossed by cow paths. Soil water may have accumulated due to lateral flow in the slightly convergent portion of the saddle (Fig. 4), but the remainder of the data from the spatial survey at the site demonstrated no similar accumulation. Most soil water evaporates directly from the soil surface, is used by plants, or infiltrates vertically until the water reaches an aquitard and begins to flow laterally. In soil pits excavated during the wet season, we observed water seeping out of the upslope walls along horizon boundaries, supporting lateral redistribution of water at depth. At CA, a “water meadow” at the base of a nearby hillslope also wetted after small rain events, despite no evidence of runoff along the hillslope, supporting the idea that water may be laterally redistributed deeper within the soil.

Our hypothesis to explain the lack of observed surface runoff on the hillslopes corroborates work completed on other tropical ocean islands. On the island of Santa Cruz, Galápagos (to the west of San Cristóbal), soil profiles were instrumented with tensiometers to measure the pressure head within soil profiles under two different land cover types (forest and pasture; Domínguez et al., 2016). Those data were then incorporated into a soil water transfer model, which showed that runoff was negligible because the soil water input never exceeded the infiltration capacity of the soils, as we observed at the sites on San Cristóbal. Based on the model, deep percolation into the groundwater system occurred in both land use regimes, suggesting that vertical flow paths through the soil result in groundwater recharge. Previous work on Hawaii observed a similar phenomenon: the surface soil had little connectivity, resulting in negligible runoff, but after vertical infiltration, lateral redistribution of soil water was observed at horizon boundaries due to the change in hydraulic conductivity between different horizons (Lohse and Dietrich, 2005). Future work on San Cristóbal should include the measurement of hydraulic conductivity across broader areas and in different horizons to better characterize the spatial variations in conductivity that control runoff and groundwater recharge.

4.4 Soil water and island-wide hydrogeology

Previous work on the hydrogeology of San Cristóbal has related the island's groundwater system to that of Hawaii (Violette et al., 2014).
Consecutive lava flows, separated by baked paleosols or tuffs, have led to a complicated groundwater system of perched aquifers, some of which drain to a basalt aquifer and some of which emerge as springs. This is similar to the hydrogeology of many other basalt systems, including continental basalts like those of the Snake River Plain (Mirus et al., 2011) and other ocean islands, like Piton de la Fournaise on La Réunion (Violette et al., 1997). San Cristóbal is the only island in the archipelago with multiple perennial streams, maintained by springs that drain the perched aquifers. This understanding has come through geophysical (Adelinet et al., 2018; Auken et al., 2009; Pryet et al., 2012a) and geochemical (Warrier et al., 2012) observations. Little research has previously connected inputs into the island's hydrologic system with the groundwater system. From this study, we propose that water that has been transmitted through the soil's root zone and into the deeper horizons of the soil will either flow laterally along horizons of low hydraulic conductivity, resulting in high soil water contents and seeps that emerge in areas of convergence (like the water meadow at the base of CA), or will infiltrate further through the unconsolidated saprolite to enter the deeper groundwater system. Due to the lack of monitoring wells on San Cristóbal, the exact amount of infiltrated water that enters the groundwater system is unknown, but this research on the behavior of surface and root-zone soil water provides an important step toward a fuller understanding of water cycling on San Cristóbal. This study is far from comprehensive, but the data provide ample opportunities for further testing of conceptual and numerical models of hillslope hydrologic response on San Cristóbal and in similar tropical island settings. Additional efforts must also be made to improve the instrumentation of hillslopes in diverse climates across San Cristóbal to track how seasonality affects the distribution and behavior of soil water.

5 Conclusions

We have compared the temporal and spatial records of soil water on four planar hillslopes on San Cristóbal, Galápagos, to develop a qualitative model of water in near-surface soils in different climate zones. This work aims to provide a component of our understanding of the hydrogeology of San Cristóbal and, possibly, other ocean islands. We initially sought to answer three questions: (1) does soil water differ seasonally; (2) does the spatial distribution of soil water differ through time; and (3) could spatial patterns of soil water affect runoff generation? The findings of this study confirm that temporal differences in saturation states on the studied hillslopes are driven by seasonal differences in precipitation and evapotranspiration, and by the position of the hillslopes on San Cristóbal's climosequence and their windward or leeward position. Thus, our dataset is a valuable resource for future hydrologic and soil-atmosphere modeling efforts. We complemented the temporal differences in saturation state with surveys of the spatial variability of soil water across the studied hillslopes, showing that during both wet and dry seasons, the soil water content was highly variable spatially and randomly distributed across the hillslopes. This spatial variability is likely the result of the heterogeneity of a number of factors, including physical and hydrologic characteristics of the soil and the microclimate and microtopography around each of the measurement points.
While in the field, we did not observe evidence of runoff at most of the sites except along the cow path at CA, despite numerous heavy rain events. Because near-surface soil moisture was not spatially connected and hydraulic conductivity generally exceeded rainfall intensity, we conclude that retentive patches of soil, which can generate runoff, are adjacent to transmissive patches of soil. Transmissive soils have higher hydraulic conductivities and can transmit potential runoff into deeper horizons of the soil system. This may explain observations that uncultivated hillslopes on San Cristóbal have little surface-runoff connectivity, and it helps explain how water enters the shallow groundwater system that is drained by numerous springs. Fieldwork in the Galápagos remains challenging, and our measurement and monitoring campaigns were restricted by access and resources. Our results confirm what might be presumed based on initial site assessments and field trips, namely that seasonal changes in precipitation and elevation-dependent precipitation inputs affect soil saturation across the sites. However, our study provides concrete evidence to support the conceptual model for hillslope hydrology on San Cristóbal. These baseline data can be expanded to improve our understanding of the behavior of water through the surface soil across San Cristóbal's climate gradient. Additional fieldwork on the hillslopes can capture the spatial distribution of soil water under different precipitation inputs to accurately model the temporal stability of soil moisture distributions. The installation of more research infrastructure, especially piezometers and additional soil water probes at different points across the currently instrumented sites, will yield valuable information to help track the fate of soil water that infiltrates beneath the clay layer at each site. Testing soil-water models using the data presented in this paper can also facilitate improved quantitative understanding of factors affecting the distribution of soil water across the hillslopes, as well as predict the fate of the soil water through the water cycle on the island. Acknowledgments This work was supported by a NSF Graduate Research Fellowship grant ( DGE-1650116 ), a Geological Society of America Student Research Grant ( 10804-15 ), the NSF Science Across Virtual Institutions International Critical Zone Observatory Grant, and UNC's Department of Geological Sciences Martin Fund awarded to M.S. Percy. All the data described in this work can be obtained by emailing the corresponding author ( madelynp@live.unc.edu ). The authors appreciate logistical collaboration with the UNC-USFQ Galápagos Science Center, particularly Leandro Vaca and Juan Pablo Muñoz. We thank Juliana Borja, Rebecca Chaisson, Sara Guevara, Claris Orellana, Sarah Schmitt, Kayla Seiffert, and Erin VanderJeudgt for field assistance. The manuscript benefited enormously from feedback provided by Kim Perkins of the U.S. Geological Survey and two anonymous reviewers. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. Appendix A Supplementary data Supplementary material related to this article can be found, in the online version, at doi: https://doi.org/10.1016/j.ejrh.2020.100692 .
REFERENCES:
1. ADELINET M (2018)
2. ASSOULINE S (2006)
3. AUKEN E (2009)
4. BAILEY S (2014)
5. BEVEN K (1979)
6. BI H (2009)
7. BLAKE G (1986)
8. BROCCA L (2009)
9. CHANDLER K (2018)
10. CHOI M (2007)
11. COLINVAUX P (1972)
12. D'OZOUVILLE N (2008)
13. D'OZOUVILLE N (2008)
14. DARI J (2019)
15. DOMINGUEZ C (2016)
16. DOMINGUEZ C (2017)
17. DONG J (2018)
18. EBEL B (2013)
19. ELSENBEER H (2001)
20. ESRI (2011)
21. FAMIGLIETTI J (2008)
22. FITZJOHN C (1998)
23. FOSTER D (2003)
24. GAO L (2019)
25. GEIST D (1986)
26. GEIST D (2014)
27. GRAHAM R (2010)
28. GRAYSON R (1997)
29. HU S (2019)
30. HUTTEL C (1986)
31. JACKSON M (1986)
32. KAISER K (2018)
33. LASSO L (2018)
34. LAWRENCE J (2007)
35. LOHSE K (2005)
36. MARTINEZ C (2008)
37. MCGUIRE K (2005)
38. MCGUIRE L (2018)
39. METZGER J (2017)
40. MIRUS B (2013)
41. MIRUS B (2011)
42. NAVARRE-SITCHLER A (2013)
43. NIMMO J (2009)
44. NOBORIO K (2001)
45. PAN F (2008)
46. PANSU M (2006)
47. PELLETIER J (2018)
48. PERCY M (2020)
49. PERCY M (2016)
50. PETERSON A (2019)
51. PRYET A (2012)
52. PRYET A (2012)
53. R CORE TEAM (2017)
54. RASMUSSEN C (2011)
55. REGALADO C (2003)
56. RIBEIRO P (2001)
57. ROSENBAUM U (2012)
58. SCHMITT S (2018)
59. SCHOENEBERGER P (2012)
60. SOIL SURVEY STAFF (2014)
61. STARKS P (2006)
62. STEWART R (2013)
63. TAKAGI K (2012)
64. TOPP G (1980)
65. TRUEMAN M (2010)
66. TULLER M (2005)
67. VACHAUD G (1985)
68. VIOLETTE S (1997)
69. VIOLETTE S (2014)
70. WARRIER R (2012)
71. WESTERN A (1999)
72. WESTERN A (1998)
73. WESTERN A (2000)
74. WESTERN A (1999)
75. WESTERN A (2002)
76. WESTERN A (2004)
77. ZHU J (2003)
78. ZOTARELLI L (2010)
|
10.1016_j.heliyon.2024.e39709.txt
|
TITLE: A study of intergenerational support and the health relationship of the elderly population: Evidence from CHARLS
AUTHORS:
- Wang, Jinzhen
- Peng, Wenjia
- Miao, Changjun
- Bao, Yuekui
- Yang, Danhong
ABSTRACT:
Background
Currently, studies of the relationship between intergenerational support and health have focused mainly on physical health and mental health, and there are fewer studies on the relationship between intergenerational support and self-assessed health. Therefore, this paper uses data from the China Health and Retirement Longitudinal Study (CHARLS) to discuss and analyze the possible relationship between the bidirectional financial, living, and emotional support exchanged between older parents and their adult children and the self-assessed health of older adults.
Methods
This study was conducted on participants aged 65 and over, using the CHARLS 2020 survey, involving 6359 (3069 men and 3290 women) elderly people.
Health was measured in three dimensions: physical, psychological, and self-assessment; intergenerational support was measured in three dimensions: financial, life care, and spiritual comfort; and regression models analysed the relationship between intergenerational support and the health of the elderly population.
Results
Children's life care support (p < 0.05) had a significant positive association with physiological health but a significant negative association with the elderly's psychological and self-assessed health; children's spiritual comfort support (p < 0.05) had a positive association with the elderly's physiological, psychological, and self-assessed health; and children's financial support (p < 0.05) was positively associated with the elderly's psychological health but was not related to their physical health or self-rated health.
Conclusions
The relationship between intergenerational support and the health of the older population is complex, with life-care support significantly contributing to physical health but negatively affecting mental health and self-assessed health; spiritual comfort support positively affecting physical, mental and self-assessed health; and financial support having a primarily positive effect on mental health. Therefore, children should fully respect and utilise the ‘autonomy’ of the elderly when providing intergenerational support to the elderly in their families. It is also important to improve intergenerational communication abilities and skills, which will help to better understand and fulfil the needs of older people, thereby promoting their physical and mental health.
BODY:
1 Introduction Globally, population ageing has become a common and increasingly serious social phenomenon [ 1–3 ]. With the advancement of medical technology and socio-economic development, human life expectancy continues to increase, the proportion of elderly people is growing rapidly, and the issue of elderly health has gradually become a prominent field of social science research around the world [ 4–6 ]. Most studies have examined the impact of intergenerational support on health in old age from a single dimension, and there are differences between urban and rural areas. With the shift to smaller family structures, the role of intergenerational support in protecting the health of the elderly has become more prominent, emphasizing the need to pay attention to the needs of different groups of elderly people and to develop differentiated support strategies. Elderly health is not only related to the quality of life of individuals, but also has a far-reaching impact on socio-economic development, the healthcare system, and the social security system. However, the elderly currently face various health threats: depression and other mental health problems are highly prevalent among the elderly population, with a prevalence rate of 25.6 % and rising [ 7 , 8 ], while disabled and semi-disabled elderly people account for 18.3 % of the elderly population, a figure that is also worrying [ 9 ]. These health problems place tremendous life pressures and financial burdens on older persons and their families. In order to address this global challenge, governments and international organizations have promoted healthy ageing as an important strategy, such as the introduction of China's "Healthy China 2030" planning document [ 10 ], the Japanese Government's implementation of a long-term care insurance system [ 11 ], and the European Union's development of a healthy ageing strategy [ 12 ]. These policy documents not only emphasize the need to sustain the development and maintenance of healthy lives for older persons, but also propose specific goals and measures to promote healthy ageing. Within the framework of health ecology theory, the factors influencing the health of older persons involve the individual, family, and social levels. However, with changes in family structure and the diversification of residence patterns globally, the intergenerational supportive role of the family in providing life care, economic support, and spiritual comfort is also changing. Although a large number of studies have examined the relationship between child support and the health of the elderly population [ 13–16 ], there is considerable inconsistency in the findings due to differences in study locations, study populations, and measurement indicators. This inconsistency may stem from sample selection bias, one-sidedness of indicators, and neglect of the complexity and mutual causality of social factors. Therefore, in order to gain a deeper understanding of the relationship between intergenerational support and the health of the elderly population and to overcome the limitations of existing studies, this study analyzes data from a national survey in China. The health status of older adults was comprehensively assessed in three dimensions: physical, psychological, and self-assessed; intergenerational support was comprehensively measured in three dimensions: economic, life care, and spiritual comfort.
We speculated that life-care support significantly promoted physical health but had a negative impact on mental health and self-rated health; spiritual comfort support had a positive impact on physical, mental, and self-rated health; and financial support had a positive effect mainly on mental health. Through this research design, we aim to reveal the specific relationship between intergenerational support and the health of the elderly population in different dimensions and its mechanism of action, so as to provide a scientific basis for the formulation of more effective health promotion policies. 2 Methods 2.1 Study population The current study is based on the China Health and Retirement Longitudinal Study (CHARLS), an ongoing, nationally representative longitudinal program. Inclusion criteria: 1) age ≥45 years; 2) residence in China; and 3) informed consent for this study. Residents are aged 45 years and older, including more than 19,000 people in 150 county-level units, 450 village-level units, and about 10,000 households. A baseline survey was conducted in 2011, and waves 2, 3, 4, and 5 followed in 2013, 2015, 2018, and 2020, respectively ( Fig. 1 ). The questionnaire included the basic personal and family status of the elderly, their social and economic background and family structure, self-evaluation of health and quality of life, personality and psychological characteristics, cognitive functioning, lifestyle, ability to perform daily activities, economic sources, financial status, life care, caregivers when sick, whether they could get timely treatment, and medical bill payers, among other topics—some ninety questions with more than 180 items. Details of the research design and sampling methodology of CHARLS have been described by Zhao et al. [ 17 ]. This study uses data from the 2020 cross-sectional survey with a total valid sample of 6359 participants. Participants were excluded based on the following exclusion criteria: 1) age <65 years; 2) missing values for key indicators; and 3) health conditions such as memory-related disorders or psychiatric problems. A valid sample of 6359 participants remained. The CHARLS survey was approved by the Biomedical Ethics Committee of Peking University (IRB00001052-11015), and all participants were required to sign an informed consent form. 2.2 Variable selection 2.2.1 Independent variables Intergenerational support was used as the independent variable in this study and was measured in three separate areas: life care, spiritual comfort, and financial support. The corresponding measurement questions were: "Is life care support received?", "Is spiritual comfort support received?", and "Is financial support received?" A value of 1 was assigned to a "yes" answer and a value of 0 to a "no" answer. 2.2.2 Dependent variables In this study, the health status of older adults was used as the dependent variable, and three dimensions of physical health, mental health, and self-assessed health were comprehensively assessed. In terms of physical health, the Activities of Daily Living (ADL) scale was used to measure the basic mobility of older adults, which is a relatively objective measure.
The questions in the scale cover six daily activities: bathing, dressing, eating, toileting, bowel control, and getting in and out of bed, and the ability of the elderly to take care of themselves is assessed by asking whether they need help from others in these activities. According to the level of assistance needed, the answers were categorized into three options: "No difficulty, no help needed", "Some difficulty, need help", and "Can't do it, must get help". Based on the study by Yen Yueh-Ping, if an elderly person can complete all six activities in the ADL scale on his/her own without assistance, he/she is considered not disabled (ADL = 0); if he/she has difficulties in one or more activities or is unable to complete them, he/she is considered disabled (ADL = 1). In terms of mental health, this study used the Depression Scale, a relatively subjective measure, to assess the psychological state of the elderly. The scale consists of 10 items with response options including "always", "often", "sometimes", "seldom", "never", and "refused to answer". The total score ranges from 10 to 50, with higher scores indicating stronger depressive tendencies and poorer mental health. In this study, we refer to the depression evaluation criteria of the scale and consider older adults with a score of 30 or more as having a tendency to depression, while those with a score of less than 30 are considered psychologically healthy. In terms of self-assessed health, the question "What do you think about your health condition?" was used to elicit the elderly's subjective assessment of their health. Answer options included "very good", "good", "fair", "not good", and "very bad". Based on previous research, self-assessed health was set as a dichotomous variable: it took the value of 1 when the respondent answered "very good" or "good", and 0 when the respondent answered "fair", "not good", or "very bad". 2.2.3 Covariates A total of six covariates were included in this study based on the results of previous studies and the data provided by CHARLS 2020, including gender, age, marital status, alcohol consumption, pension status, and nature of residence of the elderly. The details are shown in Table 1 . 2.3 Statistical methods The qualitative data in this study are presented as counts and proportions, and the Chi-square test is employed to analyze the correlation between intergenerational support and the health of the elderly population. After considering endogeneity, a logistic regression model is used to analyze the relationship between intergenerational support and the physical health, mental health, and self-rated health of the elderly population. All statistical analyses were performed using R software, version 4.1.0 (University of Auckland, New Zealand). A p-value <0.05 was considered statistically significant.
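As a concrete sketch of how the dichotomous variables and the regression described above could be constructed, the fragment below reproduces the stated coding rules (the ADL disability indicator, the depression cutoff of 30, and the self-assessed health split) and fits one of the logistic models. This is an illustrative reconstruction only: the column names (adl_bathing, dep_score, srh, the three support indicators, and the covariates) are hypothetical placeholders, not actual CHARLS field names, and the original analysis was carried out in R.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("charls_2020_extract.csv")  # hypothetical prepared extract

# ADL: disabled (1) if any of the six activities requires help, else 0
adl_items = ["adl_bathing", "adl_dressing", "adl_eating",
             "adl_toileting", "adl_continence", "adl_transfer"]
df["disabled"] = (df[adl_items].max(axis=1) > 0).astype(int)

# Depression: total score of 30 or more coded as depressive tendency
df["depressed"] = (df["dep_score"] >= 30).astype(int)

# Self-rated health: 1 for "very good"/"good", 0 otherwise
df["srh_good"] = df["srh"].isin(["very good", "good"]).astype(int)

# Analogue of Model 2: depression regressed on the three support types
# plus the six covariates listed in section 2.2.3
model = smf.logit(
    "depressed ~ life_care + spiritual_comfort + financial_support"
    " + gender + age + marital + alcohol + pension + residence",
    data=df,
).fit()
print(model.summary())

Exponentiating a fitted coefficient gives the corresponding odds ratio, which is how the signed coefficients reported in section 3.2.1 can be read.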
3 Results 3.1 Basic characteristics of the study population Finally, a total of 6359 participants were enrolled; the detailed selection process is presented in Fig. 2 . In this study, a total of 6359 elderly people were surveyed, of whom 3069 (48.26 %) were male and 3290 (51.74 %) were female; their ages were mainly concentrated in the range of 65–74 years, with a total of 4722 (74.26 %). In terms of life support, 63.58 % of elderly people did not receive life support, 78.35 % received spiritual comfort, and 66.80 % received financial support. In terms of health, 24.19 % of the elderly were disabled, 12.96 % were psychologically depressed, and 79.59 % self-assessed as unhealthy. Marriage and residence show that 72.95 % of older persons have a spouse, 84.84 % receive a pension, and 72.62 % live in rural areas. In terms of living habits, 76.33 % of older persons do not drink alcohol and 76.08 % do not smoke, as detailed in Table 2 . 3.2 Correlation analysis between intergenerational support and the health of the elderly population According to the results in Table 3 , intergenerational support is significantly correlated with the psychological health of the elderly population, while life care support and spiritual comfort support are also significantly correlated with both physiological health and self-assessed health. In terms of physiological health, 60.96 % of the non-disabled elderly did not receive life care support, 16.59 % did not receive mental comfort support, and 33.64 % did not receive financial support; among the disabled elderly, 71.78 % did not receive life care support, 37.52 % did not receive mental comfort support, and 31.79 % did not receive financial support. In terms of mental health, 62.89 % of the elderly without a tendency to depression did not receive life care support, 20.43 % did not receive mental comfort support, and 33.75 % did not receive financial support; among the elderly with a tendency to depression, 68.20 % did not receive life care support, 29.85 % did not receive mental comfort support, and 29.49 % did not receive financial support. With regard to self-assessed health, 64.43 % of the elderly who considered themselves unhealthy did not receive life care support, 22.43 % did not receive spiritual comfort support, and 32.76 % did not receive financial support; among the elderly who considered themselves healthy, 60.25 % did not receive life care support, 18.64 % did not receive spiritual comfort support, and 34.90 % did not receive financial support, as shown in Table 3 . 3.2.1 Logistic regression model analysis of intergenerational support and the health of the elderly population Analysis of the regression results shows that in Model 1, there is a significant positive correlation between life care support, spiritual comfort support, and the physical health of the elderly population, with coefficients of 1.533 and 2.280, respectively. However, there is no significant correlation between financial support and the physical health of the elderly population.
In Model 2, life care support and financial support are significantly negatively associated with the tendency to depression in the elderly population, with a coefficient of −1.198; conversely, spiritual comfort support is significantly positively associated with the tendency to depression, with a coefficient of 1.226. In Model 3, life care support has a significant negative correlation with the self-assessed health status of the elderly population, with a coefficient of −0.842, while spiritual comfort support has a significant positive correlation with self-assessed health status, with a coefficient of 0.814. It is worth noting that in this model, there is no significant relationship between financial support and either the physical health or the self-assessed health status of the elderly population, as detailed in Table 4 . 4 Discussion 4.1 Life care has a significant positive association with the physical health of older people, but a negative association with their mental health and self-assessed health status After in-depth exploration, this study found that the positive association between life care support and the physical health of older adults was particularly significant, a finding that coincides with previous research by Zheng Zhidan and several other scholars [ 18–20 ]. Analysing the reasons for this, we hypothesize that it is mainly attributable to the meticulous care provided by children to the elderly. Such care not only effectively reduces the mobility stress of the elderly in their daily lives and frees them from heavy housework, but, more importantly, largely reduces the risk of potential physical damage to the elderly [ 21 ]. In addition, drawing on Hu Chenpei's research findings, we note that the caregiving support provided by children also gives the elderly more leisure time and energy to engage in appropriate physical exercise, which undoubtedly has a profound positive association with their physical health [ 22 ]. However, caregiving support had a negative association with older people's mental health and self-rated health. This may be because excessive caregiving leads to dependency among older people, reducing their sense of autonomy and self-efficacy, which in turn negatively affects psychological well-being. At the same time, over-care may also make older people feel less well, which may affect their self-rated health [ 23 ]. This finding suggests the need for moderation and attention to the psychological needs of older people when providing them with life care support, in order to promote their overall health [ 24 ]. 4.2 Children's spiritual comfort support has a significant positive association with the physical health, psychological health, and self-rated health of older people This study further found that spiritual comfort makes a significant positive contribution to the physical health and mental health of older adults, a result that is in line with previous studies [ 25–27 ]. Spiritual comfort also makes a significant positive contribution to the self-rated health of older adults, which is consistent with the findings of Wang Dan et al. [ 28 ].
On the one hand, this may be because, as they grow older, older adults may face challenges such as changes in physical health, adjustments to their living environment, and a narrowing social circle, which bring anxiety and loneliness; children's care, listening, and understanding provide older adults with an outlet for emotional catharsis, enabling them to release their inner repression and dissatisfaction, thus effectively relieving psychological stress and maintaining a good psychological state. On the other hand, children can make the elderly feel the warmth and care of the family through companionship, communication, and the provision of emotional support. This kind of support not only makes the elderly feel that they are still needed and cherished, but also stimulates their love of and confidence in life [ 29 , 30 ]. At the same time, the spiritual comfort of their children can also help the elderly to better cope with the difficulties and challenges in their lives and improve their mental resilience, thus positively contributing to their physical and mental health. 4.3 Children's financial support has a positive association with the psychological health of the elderly, but is not related to their physical health and self-rated health The results of the study show that children's financial support can positively influence the mental health of older adults, which is consistent with the findings of Sun Jingkai et al. [ 31–33 ]. On the one hand, this may be because, as they age, older adults may face financial problems such as insufficient pensions and increased medical costs. Financial support from their children, whether regular living allowances or occasional financial assistance, can improve the financial situation of the elderly to a certain extent, so that they do not have to worry about basic living expenses and medical costs; this sense of financial stability helps to reduce anxiety and stress among the elderly, which in turn positively affects their mental health [ 34 ]. On the other hand, when the elderly receive financial help from their children, they also feel the care and love of their children, which makes them feel that they are still valuable to their families, thus enhancing their self-esteem and sense of self-worth [ 35 ]. 4.4 Strengths and limitations of this study The innovation of this study is that it is the first to comprehensively assess the impact of intergenerational support on elderly health from the three dimensions of physiological, psychological, and self-assessed health, filling the gap left by single-dimension research; at the same time, it utilises the nationally representative CHARLS dataset, which enhances the representativeness and reliability of the study; and it explores in depth the specific mechanisms of action of the different types of intergenerational support, providing a scientific basis for precise elderly health policy. However, this study has limitations: the cross-sectional design cannot capture the long-term cumulative impact of intergenerational support; endogeneity problems may be difficult to avoid completely, because the study population is a special group with large health disparities; and a single point-in-time observation can hardly reflect the dynamic changes of intergenerational support.
In summary, with the gradual trend toward empty-nesting and smaller elderly families in China, the relationship between intergenerational support and the health of the elderly has become increasingly complex. There is a bidirectional relationship between intergenerational support and health in old age: on the one hand, intergenerational support affects the health status of the elderly; on the other hand, the health status of the elderly leads families to adjust the way intergenerational support is provided. At the same time, different types of intergenerational support have different associations with the health of the elderly. When providing support to the elderly in the family, children should focus on the appropriateness and reasonableness of the approach, respect the autonomy of the elderly, and support the elderly in remaining self-reliant, avoiding excessive intervention in their lives. It is recommended that a health management information system integrating health monitoring, emergency contact, personalized advice, interactive learning, and social functions be established to enhance the self-management capacity of older persons and to promote healthy ageing in conjunction with the government, the community, and the family. Elderly people generally do not have enough knowledge about health, and their health behaviours need to be improved [ 36 ]. Through communication with their children, older people can acquire more health knowledge and improve their capacity for self-care while strengthening interaction with their children. The government and society should pay attention to interpersonal communication training and counselling for elderly families, enhance children's ability to provide mental comfort, promptly resolve negative emotions arising from children's support, and correct erroneous health perceptions and lifestyles in order to maintain the mental health of the elderly. At the same time, family support policies should be improved, social resources should be integrated, and the development of community-based home care services should be promoted [ 37 ], so as to raise children's awareness of their responsibility to care for the elderly. Comprehensive and differentiated interventions should also be adopted to address the characteristics of different elderly people, strengthening elderly health care and enhancing health literacy, so as to strengthen the supportive role of family elderly care and promote healthy ageing [ 38 ]. Future research could further explore: the use of longitudinal studies to track the cumulative effects of changes in intergenerational support on the health of older persons over time; in-depth analyses of the mechanisms of action and interaction effects of different types of intergenerational support; and the expansion of the scope of research to explore the similarities and differences in the relationship between intergenerational support and the health of older persons in different regions and cultural contexts. CRediT authorship contribution statement Jinzhen Wang: Writing – original draft, Formal analysis, Data curation. Wenjia Peng: Writing – review & editing, Resources. Changjun Miao: Project administration, Methodology. Yuekui Bao: Writing – review & editing, Software. Danhong Yang: Writing – review & editing, Supervision. Contributors PW and YD were responsible for drafting and revising the manuscript.
WJ analysed and interpreted the data and prepared the figures and tables. BY and MC revised the manuscript, while WJ refined the language. The final version of the manuscript was read and approved by all authors. Patient consent to publication Not applicable. Ethical approval The CHARLS survey project received approval from the Peking University Biomedical Ethics Committee (IRB00001052-11015), and all participants provided written informed consent prior to the survey. Data availability statement The CHARLS datasets generated and analysed during the current study are available on the CHARLS website at http://charls.pku.edu.cn/en . CHARLS data are de-identified. Respondents are identified by a unique ID number. Funding This research did not receive any specific grants from any funding bodies in the public, commercial, or not-for-profit sectors. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements We express gratitude to the CHARLS team for providing the datasets and training. This study was supported by the Health Committee of Jinshan District, Shanghai (JSKJ-KTMS-2021-10, 2022-WS-59).
REFERENCES:
1. ROSEMARY G (2023)
2. TODA K (2023)
3. S M (2023)
4. XIAOMENG W (2022)
5. PEJMAN S (2022)
6. BO T (2021)
7. PEYMAN M (2023)
8. RONG J (2020)
9. LI C (2023)
10. ZHANG X (2020)
11. YOICHIRO Y (2022)
12. CRISTEA M (2020)
13. LANLAN C (2023)
14. ZHAO L (2023)
15. PATAPORN S (2023)
16. LIANLIAN L (2022)
17. YAOHUI Z (2014)
18. ZHIDAN Z (2017)
19. ZUO D (2011)
20. HOU J (2021)
21. HU P (2024)
22. HU C (2016)
23. IOST P (2023)
24. SBIRAKOS V (2021)
25. AJROUCH K (2007)
26. FENGJEN T (2013)
27. QINGYAN S (2022)
28. WANG D (2023)
29. WILLIAMS W (2018)
30. KEENOY J (2023)
31. HAN S (2024)
32. SUN J (2021)
33. ROMERO F (2024)
34. ROSTAN Z (2022)
35. ALVES S (2022)
36. YIMIN Q (2023)
37. PIERLUC T (2020)
38. LUO L (2021)
|
10.1016_j.chpulm.2024.100119.txt
|
TITLE: A 77-Year-Old Man With Unexplained Hypoxemia
AUTHORS:
- Pasqualicchio, Patrick
- Siow, Maria
- Peiris, Shanaka
- Arif, Imran
- Sukhija, Rishi
- Elwing, Jean
- Jose, Arun
ABSTRACT: No abstract available
BODY:
Case Presentation A 77-year-old male with small cell lung cancer and recently diagnosed radiation pneumonitis presented to the emergency department after he was found to be hypoxemic at an outpatient appointment, with pulse oximetry demonstrating an oxygen saturation (SpO2) of 60% to 70%, refractory to supplemental oxygen up to 6 L/min. Before admission, he did not require supplemental oxygen. His medical history was notable for hypertension and hyperlipidemia with no known history of intracardiac defects. On arrival to the emergency department, he was escalated to 15 L/min, with improvement in SpO2 to 80%. He demonstrated worsening hypoxemia when sitting upright, which was relieved when recumbent in the left lateral decubitus position. An expedited workup for hypoxemic respiratory failure was pursued. CT imaging was negative for acute pulmonary thromboembolism, but demonstrated widespread bilateral ground glass opacities ( Fig 1 ). Review of current CT imaging compared with 1 month prior demonstrated interval progression of radiographic abnormalities. A transthoracic echocardiogram with agitated saline demonstrated dyssynergic interventricular septal motion, with large right-to-left atrial shunting evidenced by early appearance of bubbles within 3 to 5 cardiac cycles. There was no dilation of the right-sided heart chambers, and function of the left and right ventricles was subjectively normal. No tricuspid regurgitant jet was visualized, and pulmonary artery systolic pressure was unable to be calculated. Laboratory testing (CBC count, comprehensive metabolic panel, and coagulation studies) was unremarkable, with the exception of an A-a gradient of 351.3 mm Hg on arterial blood gas. Given the findings on chest imaging, the patient was given a diagnosis of radiation pneumonitis resulting in acute hypoxemic respiratory failure, thought to precipitate right-to-left intraatrial shunting due to an underlying intracardiac defect and concomitant elevated pulmonary vascular pressures. As a result, the patient was treated with supplemental oxygen, systemic glucocorticoids, and diuretics. After 14 days of 500-mg daily IV methylprednisolone therapy, an unusually high dose and duration because of the patient's severity on presentation and do-not-intubate code status limiting therapeutic options in the event of deterioration, repeat CT imaging demonstrated a marked improvement in the extent of parenchymal abnormalities ( Fig 2 ). However, the patient's acute hypoxemic respiratory failure had paradoxically worsened, with an SpO2 of 75% to 85% while on 100% FiO2 and 70 L/min via high-flow oxygen therapy, prompting a more extensive workup. Right heart catheterization performed 16 days after admission demonstrated low filling pressures in the absence of pulmonary hypertension or a step-up in oxygenation ( Tables 1, 2 ). An incomplete shunt run and the absence of pulmonary vein or wedge position saturation precluded calculation of the pulmonary to systemic blood flow (Qp/Qs) ratio for shunt quantification. The cardiac output listed in Table 1 refers to the Qs systemic values. A transesophageal echocardiogram (TEE) was also performed, identifying a large patent foramen ovale (PFO) and a small fenestrated atrial septal defect (ASD) measuring 1.0 cm, composed of a bidirectional atrial level shunt with a prominent eustachian valve and concomitant atrial septal aneurysm, which we will refer to as a complex PFO ( Fig 3 ). What is the diagnosis?
Diagnosis: Platypnea-orthodeoxia syndrome secondary to a complex PFO Given the patient's refractory hypoxemia, closure of the atrial shunt was attempted. Intracardiac echocardiography (ICE) evaluation of the interatrial septum confirmed the presence of a complex PFO ( Fig 4 A). A 37-mm Gore Cardioform ASD Occluder was deployed under fluoroscopic and echocardiographic guidance, allowing for closure of the communication with no visible shunting on Doppler. Immediately after device deployment, the patient's oxygen saturation improved from 84% to 100%, and he was rapidly weaned off supplemental oxygen. Discussion Clinical Discussion Platypnea-orthodeoxia syndrome (POS) is a rare disorder characterized by hypoxemia that worsens when upright and improves with recumbency. First described in 1949, its prevalence is unknown; however, the incidence is increasing, mainly because of increased awareness and improvements in diagnosis. It can be caused by intracardiac shunts, pulmonary arteriovenous shunts, or ventilation/perfusion mismatch in the lungs, and can lead to significant hypoxemia in the absence of elevated right-sided pressures, which often complicates diagnosis. 1 Intracardiac shunts constitute 87% of POS cases and can consist of either a PFO or an ASD, with or without a concomitant atrial septal aneurysm. 1 , 2 In POS due to intracardiac shunts, postural changes are thought to create a pressure gradient favoring blood flow from right to left across the atrial septum, resulting in hypoxemic respiratory failure. The mechanisms behind these postural hemodynamic abnormalities are not well understood, but stretching and horizontalization of the atrial septum, creating a jet-like flow directly from the vena cava through a (low-resistance) interatrial communication, are thought to play a role. This process is also thought to contribute to the positional variation in hypoxemia observed in POS: the upright position more closely aligns flow from the vena cava with the atrial septum as the heart is pulled down by gravity, widening the defect, while simultaneously decreasing venous return to the heart and reducing right atrial pressure. Calculating the shunt ratio of pulmonary to systemic blood flow (Qp/Qs) can be helpful in quantifying the presence and directionality of an intracardiac shunt; however, this requires oxygen saturation recorded from the pulmonary artery and pulmonary vein/wedge position (for Qp), and from the aorta and both inferior and superior vena cava (to calculate true mixed venous saturation for Qs); the relation is written out after this discussion.
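For reference, the Fick-based shunt quantification described above reduces to a quotient of saturation differences:

$\dfrac{Q_p}{Q_s} = \dfrac{S_{aO_2} - S_{\bar{v}O_2}}{S_{pvO_2} - S_{paO_2}}$, with the true mixed venous saturation commonly estimated by the Flamm formula $S_{\bar{v}O_2} = (3 \times S_{SVC} + S_{IVC})/4$.

The worked numbers that follow are illustrative only, not this patient's values, because the incomplete shunt run precluded the calculation here: with an aortic saturation of 85%, an estimated mixed venous saturation of 65%, a pulmonary vein saturation of 98%, and a pulmonary artery saturation of 65%, Qp/Qs = (85 − 65)/(98 − 65) ≈ 0.61; a ratio < 1 indicates net right-to-left shunting, as occurs in POS.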
In this case, the lack of a full shunt run prevented calculation of the Qp:Qs ratio; however, we were able to support the diagnosis of POS secondary to a complex PFO by relying on advanced imaging (TEE and ICE) and the postural oxygenation changes seen in the patient. Positional pulse oximetry (not performed in this case) can be useful in characterizing platypnea-orthodeoxia when evaluating challenging cases. Though present in this case, dyssynergic interventricular septal motion is not thought to contribute to the pathophysiology of POS-related hypoxemic respiratory failure and may have been a spurious finding that further complicated the diagnosis. 1 It is unclear exactly why the patient suddenly manifested acute hypoxemic respiratory failure from shunting through a preexisting (undiagnosed) complex PFO, or what process acutely changed the position of the atrial septum relative to the PFO to result in POS. Although we did not have direct radiographic evidence confirming an acute shift in atrial position, this sequence of events is supported by the persistence of hypoxemic respiratory failure despite improvement in the burden of pulmonary parenchymal abnormalities with empirical treatment for radiation pneumonitis, and by resolution of hypoxemia only after closure of the cardiac defect. When an intracardiac shunt is identified in the setting of POS, definitive management involves closure of the defect. Percutaneous PFO closure is preferred to a surgical approach because it has lower morbidity and has been proven to be effective with minimal long-term complications. 2 The 2008 American College of Cardiology/American Heart Association guidelines 3 recommend closure of an ASD in the setting of documented POS (class IIa). Given the patient's refractory hypoxemia and complex PFO, closure of the atrial defect was attempted using a 37-mm Gore Cardioform ASD Occluder under fluoroscopic and ICE guidance 4 ( Fig 4 C). Although a 27-mm device would be typical for a defect of 1.0 cm, a larger device was selected given the patient's hypermobile and redundant atrial septum, to maximize stability and cover both defects with one device. After device placement, ICE confirmed correction of interatrial shunting ( Fig 4 D), and the patient's oxygen saturation immediately improved from 84% to 100%. In the days after the procedure, the patient was rapidly weaned off supplemental oxygen, maintaining oxygen saturations > 90% on ambient air. Radiologic Discussion Plain chest radiography was deferred on admission in lieu of dedicated CT imaging of the chest, which demonstrated interval worsening of pulmonary parenchymal disease, showing widespread bilateral ground glass opacities with areas of consolidation, supporting the initial diagnosis of radiation pneumonitis ( Fig 1 ) precipitating a secondary right-to-left interatrial shunt and acute hypoxemic respiratory failure. However, the presence of refractory acute hypoxemic respiratory failure despite improvement in the burden of pulmonary parenchymal disease appreciated on CT imaging ( Fig 2 ) raised the possibility of an alternate diagnosis, supporting more aggressive investigation of the underlying interatrial shunting and justifying the invasiveness of TEE imaging. Subsequent TEE investigation revealed a bidirectional atrial level shunt concerning for a large PFO involving a fenestrated ASD 5 ( Figs 3 A, 3B), prompting ICE evaluation. ICE is an imaging modality that provides the same quality images as TEE for diagnostic purposes, but also simultaneously guides catheter-based interventions. In this patient, ICE was able to confirm the presence of bidirectional interatrial shunting 6 ( Fig 4 A) due to a complex PFO ( Fig 4 B). Additionally, a prominent eustachian valve was observed where the inferior vena cava met the right atrium, which may have played a role in channeling blood through the atrial defect ( Fig 3 C). A single closure device, rather than 2 separate devices for the ASD and PFO components, was used given the characteristics of the defect. The use of ICE allowed for simultaneous confirmation of a complex cardiac defect and real-time visualization of corrective intervention ( Fig 4 C) that completely corrected the patient's interatrial shunting ( Fig 4 D) and almost immediately resulted in resolution of the patient's hypoxemic respiratory failure without the need for general anesthesia.
Conclusions • In patients with refractory hypoxemia, particularly those with positional variation of their hypoxemia and a diagnostic workup incongruent with their clinical course, the appropriate application of advanced diagnostics (eg, TEE, ICE) can be helpful to reconcile discordant data and identify unusual etiologies of hypoxemia (eg, POS). • Although rare, POS caused by intracardiac shunt responds well to closure, with 95% of patients experiencing symptomatic and clinical improvement. • ICE is a powerful tool that can simultaneously characterize a patient’s physiology and guide intervention in cases of POS with intracardiac shunting without the need for TEE and general anesthesia. Financial/Nonfinancial Disclosures None declared (P. P., M. S., I. A., and R. S.). A. J. reports investigator-initiated research supported by United Therapeutics, and has served on the Consultant or Advisory board for Merck and Janssen. J. M. E. has received research grant support from Janssen, United Therapeutics, Liquidia, Phase Bio, Gossamer Bio, Bayer, Merck, Altavant, Aerovate, Tenax, and Pharmosa, and serves on the consultant or advisory board of United Therapeutics, Altavant, Aerovate, Bayer, Gossamer Bio, Liquidia, Merck, and Janssen. Acknowledgments Author contributions: P. P. is the guarantor of this case and had full access to all the data in the case and takes responsibility for the integrity of the data and the accuracy of its analysis. P. P. aided in writing and revising the paper and coordinated discussions between coauthors. M. S. aided in interpreting data and writing and revising the paper. S. P. aided in selecting and analyzing TEE and ICE images, and in responding to reviewer requests. I. A. and R. S. aided in selecting TEE and ICE images, labeling and interpreting them and incorporating them into the case. J. E. supervised the writing and revision of the case and provided guidance on key components of the paper. A. J. supervised the writing and revision of the case and provided guidance on key components of the paper. Other contributions: CHEST Pulmonary worked with the authors to ensure that the Journal policies on patient consent to report information were met.
REFERENCES:
1. RODRIGUES P (2012)
2. AGRAWAL A (2017)
3. DELGADO G (2004)
4. WARNES C (2008)
5. DEBELDER M (1992)
6. VITULANO N (2015)
|
10.1016_j.nbd.2012.06.008.txt
|
TITLE: Activation of the γ-secretase complex and presence of γ-secretase-activating protein may contribute to Aβ42 production in sporadic inclusion-body myositis muscle fibers
AUTHORS:
- Nogalska, Anna
- D'Agostino, Carla
- Engel, W. King
- Askanas, Valerie
ABSTRACT:
The muscle-fiber phenotype of sporadic inclusion-body myositis (s-IBM), the most common muscle disease associated with aging, shares several pathological abnormalities with Alzheimer disease (AD) brain, including accumulation of amyloid-β 42 (Aβ42) and its cytotoxic oligomers. The exact mechanisms leading to Aβ42 production within s-IBM muscle fibers are not known.
Aβ42 and Aβ40 are generated after the amyloid-precursor protein (AβPP) is cleaved by β-secretase and the γ-secretase complex. Aβ42 is considered more cytotoxic than Aβ40, and it has a higher propensity to oligomerize, form amyloid fibrils, and aggregate. Recently, we have demonstrated in cultured human muscle fibers that experimental inhibition of lysosomal enzyme activities leads to Aβ42 oligomerization.
In s-IBM muscle, we here demonstrate prominent abnormalities of the γ-secretase complex, as evidenced by: a) increase of γ-secretase components, namely active presenilin 1, presenilin enhancer 2, nicastrin, and presence of its mature, glycosylated form; b) increase of mRNAs of these γ-secretase components; c) increase of γ-secretase activity; d) presence of an active form of a newly-discovered γ-secretase activating protein (GSAP); and e) increase of GSAP mRNA. Furthermore, we demonstrate that experimental inhibition of lysosomal autophagic enzymes in cultured human muscle fibers a) activates γ-secretase, and b) leads to posttranslational modifications of AβPP and increase of Aβ42. Since autophagy is impaired in biopsied s-IBM muscle, the same mechanism might be responsible for its having increased γ-secretase activity and Aβ42 production. Accordingly, improving lysosomal function might be a therapeutic strategy for s-IBM patients.
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.nbd.2008.01.002.txt
|
TITLE: Cerebellar granule cells transplanted in vivo can follow physiological and unusual migratory routes to integrate into the recipient cortex
AUTHORS:
- Williams, Ian Martin
- Carletti, Barbara
- Leto, Ketty
- Magrassi, Lorenzo
- Rossi, Ferdinando
ABSTRACT:
CNS repair by cell transplantation requires new neurons to integrate into complex recipient networks. We assessed how the migratory route of transplanted granule neurons and the developmental stage of the host rat cerebellum influence engraftment. In both embryonic and postnatal hosts, granule cells can enter the cerebellar cortex and achieve correct placement along their natural migratory pathway. Donor neurons can also reach the internal granular layer from the white matter and integrate following an unusual developmental pattern. Although the frequency of correct positioning declines in parallel with cortical development, in mature recipients correct homing is more frequent through the unusual path. Following depletion of granule cell precursors in the host, more granule neurons engraft, but their ability for achieving correct placement is unchanged. Therefore, while the cerebellar environment remains receptive for granule cells even after the end of development, their full integration is partially hindered by the mature cortical architecture.
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.rineng.2025.105468.txt
|
TITLE: Chaos game optimization for extracting global MPP of PV System based high-efficiency triple-junction solar cell
AUTHORS:
- Ghadbane, Houssam Eddine
- Rezk, Hegazy
- Benhammou, Aissa
- Mohamed, Ahmed F.
ABSTRACT:
High-efficiency triple-junction solar cells (TJSC) have received increasing attention in concentrated solar photovoltaic (PV) systems because of their outstanding features. In this paper, chaos game optimization (CGO) is used for extracting the global maximum power point (MPP) of a high-efficiency InGaP/InGaAs/Ge TJSC-based PV system considering partial shading (PS). Under PS, the power-versus-voltage characteristic comprises several local MPPs and a unique global one. Consequently, traditional MPP tracking approaches cannot discriminate between global and local points and often become stuck at a local MPP. In this case, the PV power is reduced, and hence an optimization algorithm such as CGO is highly desirable to mitigate the PSC of the PV system. To prove the dependability of CGO in extracting the global MPP, four PS patterns are used to test the superiority of CGO when the position of the global MPP changes. A comparison with other optimization algorithms is also considered, and the results confirm the superiority of CGO over the other algorithms. As an example, in the first PS pattern, the average PV power levels were between 865.67 W and 993.89 W: CGO achieves the maximum power of 993.89 W, whereas COOT yields the lowest power output of 865.67 W. In addition, CGO achieves the lowest STD of 0.032.
BODY:
1 Introduction A multi-junction solar cell (MJSC) combines materials with different energy band-gaps into one PV solar cell (SC) [ 1 , 2 ]. This permits promising high-efficiency solar cells because the MJSC extends the usable spectrum of solar energy, making the SC efficiency far higher than the best efficiencies achievable by traditional single-junction SCs [ 3 , 4 ]. The efficiency of MJSCs has reached 44 % with a concentration ratio of 947 suns [ 3 ]. Generally, the triple-junction solar cell (TJSC) uses GaAs as the middle sub-cell because of its near-perfect material quality, despite its bandgap being higher than optimal for the global spectrum [ 5 ]. A new mathematical model to simulate the MJSC was proposed by Ferhati et al. [ 1 ]; the model is based on the concept of interfacial potential. Zilong et al. examined the characterization of the InGaP/InGaAs/Ge TJSC with a two-stage dish-style concentrator [ 3 ]. The results demonstrated that with a solar irradiance level of 450 W/m2 and a temperature lower than 64.9 °C, the output power density and efficiency are 1.52 W/cm2 and 29.3 %, respectively. In this case, the output power and efficiency increased by 23.3 % and 9.1 %, respectively, compared to a single-stage concentrating system [ 6 ]. The relation between TJSC temperature and efficiency was investigated by Almonacid et al. [ 6 ]. The results demonstrated that the InGaP/InGaAs/Ge TJSC efficiency diminished from 39 % to 31 % as the temperature increased from 25 °C to 100 °C under a geometric concentration ratio of 200. The characteristics of the TJSC are non-linear; under a uniform distribution of solar radiation, there is one maximum power point (MPP), which can be easily extracted using any conventional MPP tracker such as perturb and observe, hill climbing, or fuzzy logic [ 7 ]. Samavat et al. used modern optimization algorithms to optimize the membership functions and rules of the fuzzy logic [ 7 ]. At the same time, the partial shading condition (PSC) has a significant impact on PV systems [ 8 ]. Under PSCs, several MPPs appear on the power-versus-voltage curve [ 9 , 10 ]. This poses a challenge for conventional MPP methods to precisely extract the global MPP, because conventional MPP methods cannot distinguish between global and local MPPs. Accordingly, a conventional MPP method frequently converges to a local MPP, leading to reduced harvested energy from the PV system [ 11 ]. To mitigate the shading condition and ensure extraction of the global MPP, optimization algorithms such as the red-tailed hawk algorithm [ 12 ], modified tunicate swarm algorithm [ 13 ], musical chairs algorithm [ 14 ], grey wolf optimizer [ 15 ], Archimedes optimization algorithm [ 16 ], and marine predator algorithm [ 17 ] are used. In the current research work, CGO is used to extract the global MPP of a high-efficiency InGaP/InGaAs/Ge TJSC-based PV system while considering PSC. To demonstrate the reliability of CGO in tracking the global MPP, four different shading scenarios are taken into consideration. The idea is to use different shading scenarios to change the position of the global MPP.
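To make concrete why a conventional hill-climbing tracker stalls under partial shading, the fragment below sketches the classic perturb-and-observe update rule on a hypothetical two-peak power curve; the curve shape, step size, and starting voltage are illustrative assumptions, not values from this study.

import math

def pv_power(v):
    # Hypothetical two-peak P-V curve under partial shading:
    # local MPP near 12 V (~400 W), global MPP near 30 V (~900 W)
    return (400 * math.exp(-((v - 12) / 4) ** 2)
            + 900 * math.exp(-((v - 30) / 5) ** 2))

def perturb_and_observe(v0, step=0.5, iters=200):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:             # power fell: reverse the perturbation
            direction = -direction
        v, p = v_next, p_next
    return v, p

print(perturb_and_observe(10.0))   # settles near the 12 V local peak, ~400 W

Because the update rule only compares consecutive power samples, it climbs whichever peak it starts on; a population-based search such as CGO samples the whole voltage (or current) range and can therefore land on the global peak instead.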
A comparison with recent optimization algorithms was also carried out, including the eel & grouper optimizer (EGO) [ 18 ], COOT algorithm [ 19 ], puzzle optimization algorithm (POA) [ 20 ], northern goshawk optimization (NGO), leaders Harris hawk optimization (LHHO), Harris hawk optimization (HHO), ant lion optimizer (ALO), red kite optimization (ROA), gradient-based optimizer (GBO), equilibrium optimizer (EO), marine predators algorithm (MPA) and particle swarm optimization (PSO) [ 21 ]. The contributions of the paper can be summarised as follows. • For the first time, CGO is applied to extract the global MPP of a high-efficiency InGaP/InGaAs/Ge TJSC-based PV system. • A comprehensive comparison with recent optimization algorithms is carried out. • The superiority of the proposed global MPPT is validated. The rest of the paper is organized as follows. The model of the InGaP/InGaAs/Ge TJSC and related formulas are explained in Section 2 . Section 3 presents the methodology, which contains two parts: the chaos game optimization algorithm and the problem formulation. The obtained results are discussed in Section 4 , and finally the conclusions are highlighted in Section 5 . 2 Model of InGaP/InGaAs/Ge TJSC The TJSC model includes the parameters of each sub-cell, accounting for the diode reverse saturation currents and for the effect of temperature changes on the energy gap of each sub-cell. One light current, one parallel resistance, and one series resistance are all part of the single-diode model of the TJSC, as shown in Fig. 1 . Three sub-cells—top, middle, and bottom—make up the TJSC model, with bandgap energies decreasing from top to bottom. The current extracted from the TJSC is described by

(1) $I_{cell} = I_{ph_i} - I_{d_i} - I_{p_i}, \quad \forall i = 1, 2, 3$

The photocurrent is defined in terms of the solar irradiance level as

(2) $I_{ph_i} = G_s X \left[ I_{sc_i} + \alpha \left( T - T_{stc} \right) \right]$

where $T_{stc}$ is the standard temperature, $\alpha$ is the short-circuit current temperature coefficient, $X$ is the concentration ratio, and $G_s$ is the solar irradiance level in W/m2. The diode current, voltage drop, and saturation current are expressed by the following relations:

(3) $I_{d_i} = I_{O_i} \left[ \exp\left( \frac{q V_{d_i}}{A_i K_B T} \right) - 1 \right]$

(4) $V_{d_i} = V_i + I_{cell} \, r_i$

(5) $I_{O_i} = K_i \, T^{(3 + \gamma_i / 2)} \exp\left( \frac{-E_{g_i}}{A_i K_B T} \right), \quad \forall i = 1, 2, 3$

The terminal voltage of the triple-junction solar cell can be expressed as

(6) $V_{cell} = \frac{n_1 K_B T}{q} \ln\left[ \frac{I_{ph_1} - I_{cell}}{I_{O_1}} + 1 \right] + \frac{n_2 K_B T}{q} \ln\left[ \frac{I_{ph_2} - I_{cell}}{I_{O_2}} + 1 \right] + \frac{n_3 K_B T}{q} \ln\left[ \frac{I_{ph_3} - I_{cell}}{I_{O_3}} + 1 \right] - I_{cell} \, R$

where

(7) $R = r_1 + r_2 + r_3$

Here $q$ is the electron charge, $n_i$ is the diode ideality factor, $K_B$ is Boltzmann's constant, $E_g$ is the bandgap energy, $K$ and $\gamma$ are constants, $T$ is the absolute temperature, and $R$ is the cell series resistance.
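As a numerical sketch of Section 2, the fragment below evaluates the terminal voltage of Eq. (6) and the resulting power over a sweep of cell currents. All parameter values are placeholders chosen only so that the script runs, not the fitted InGaP/InGaAs/Ge values of Table 1, and the irradiance scaling is a simplified analogue of Eq. (2).

import numpy as np

q, kB = 1.602e-19, 1.381e-23            # electron charge, Boltzmann constant
T, Tstc, Gs, X = 298.0, 298.0, 1000.0, 1.0

# Hypothetical per-junction parameters (top, middle, bottom)
Isc   = np.array([0.12, 0.13, 0.20])    # short-circuit currents, A
alpha = 1e-4                            # temperature coefficient, A/K
n     = np.array([1.8, 1.6, 1.4])       # diode ideality factors
Io    = np.array([1e-22, 1e-18, 1e-9])  # saturation currents, A
R     = 0.02                            # lumped series resistance, ohm

Iph = (Gs / 1000.0) * X * (Isc + alpha * (T - Tstc))  # simplified Eq. (2)

def v_cell(I):
    # Terminal voltage from Eq. (6): sum of junction voltages minus I*R
    Vt = n * kB * T / q
    return np.sum(Vt * np.log((Iph - I) / Io + 1.0)) - I * R

I_sweep = np.linspace(0.0, Iph.min() * 0.999, 500)  # series stack limits I
P = np.array([i * v_cell(i) for i in I_sweep])
print(f"unshaded MPP: {P.max():.3f} W at I = {I_sweep[P.argmax()]:.4f} A")

Sweeping the current in this way reproduces the single, well-defined MPP of the uniform-irradiance case; under partial shading, bypass-diode conduction would make the power curve multimodal, which motivates the global search described next.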
Each solution candidate $X_i$ ($i = 1, 2, \ldots, n$) comprises decision variables that denote the position of an eligible seed, where $n$ is the population size. The mathematical model aims to create eligible seeds within the search space so as to complete the Sierpinski-triangle shape. New seeds are created within the Sierpinski triangle by drawing, for each eligible seed, a temporary triangle with three seeds; the purpose of these temporary triangles is to generate new eligible seeds in the search space, and four approaches are developed to achieve this. The temporary triangle for the $i$th seed is built from three seeds: the seed itself ($X_i$), the best seed ($X_{best}$), and the mean of all seeds ($X_{mean}$).
– For the 1st temporary seed, a dice with green and red faces is rolled, and the seed is moved based on the color. Random integer generation is used to produce the two outcomes, and randomly generated factors control the seed movement in the search space. This process can be modelled as follows: (8) $X_{T1} = X_i + \alpha_i \left( \beta_i X_{best} - \gamma_i X_{mean} \right)$ where $\alpha_i$ is a random factor generated for the $i$th seed expressing its motion limitations, and $\beta_i$ and $\gamma_i$ are random gains in (0, 1) expressing the outcome of rolling the dice. $\alpha_i$ can take one of the following forms: (9) $\alpha_i = \begin{cases} rand \\ 2 \times rand \\ (\delta \times rand) + 1 \\ (\varepsilon \times rand) + (\sim \varepsilon) \end{cases}$ where $\delta$ and $\varepsilon$ are random numbers in (0, 1).
– The 2nd temporary seed is obtained by rolling a dice with three blue and three red faces. Depending on the color, the seed moves toward $X_i$ (blue face) or $X_{mean}$ (red face); using randomly generated factors, the second seed can move toward a point on the lines connecting them. This process can be modelled as follows: (10) $X_{T2} = X_{best} + \alpha_i \left( \beta_i X_i - \gamma_i X_{mean} \right)$
– For the 3rd temporary seed, a dice with blue and green faces is rolled to determine the seed's direction. Random integer generation creates 0 and 1 for selecting the colors, and the seed can move along the lines connecting $X_i$ and $X_{best}$. This process can be modelled as follows: (11) $X_{T3} = X_{mean} + \alpha_i \left( \beta_i X_i - \gamma_i X_{best} \right)$
– An additional random 4th temporary seed ($X_{T4}$) is added to implement the mutation phase while updating the new seeds.
– The fitness of each temporarily generated seed is calculated, and the temporary best solution ($X_{Tbest}$) is assigned. If $X_{Tbest}$ is better than $X_{best}$, the best-solution list is updated.
The CGO can be described using the following pseudocode:
start the CGO
  random initialization
  fitness evaluation and X_best assignment
  for t = 1 : t_max
    for i = 1 : n
      calculate X_mean
      calculate alpha_i, beta_i and gamma_i
      calculate X_T1, X_T2, X_T3 and X_T4
      limit the temporary seeds within the search space limits
      fitness evaluation and X_Tbest assignment
      update X_best
  return X_best
end
The evolution of the optimization process is described by the flowchart in Fig. 2 . 3.2 Problem formulation The main target is extracting the global MPP from the InGaP/InGaAs/Ge TJSC-based PV system while considering PS. The extracted PV power defines the cost function to be maximized, and the array current is the suggested design variable. The output PV voltage of the TJSC-based array may be defined using the following relation: (12) $V_{array} = N_{sm} \times n_C \left( \sum_{i=1}^{3} \left( \frac{n_i K_B T}{q} \ln\left[ \frac{I_{ph_i} - I}{I_{O_i}} + 1 \right] \right) - I \times R \right)$ The variable $N_{sm}$ represents the number of solar PV modules linked in series, whereas $n_C$ represents the total number of cells per module.
The photovoltaic power generated by the array can be approximated using the following equation: (13) $P_{array} = V_{array} \times I = \left( N_{sm} \times n_C \left( \sum_{i=1}^{3} \left( \frac{n_i K_B T}{q} \ln\left[ \frac{I_{ph_i} - I}{I_{O_i}} + 1 \right] \right) - I \times R \right) \right) \times I$ The suggested objective function is to maximize the PV output power given in Eq. (13) by seeking the PV current at the global MPP. The steps of the solution methodology using CGO are given in Fig. 3 . The optimization process starts with random initial values of the PV current. At every value of the PV current, the PV system is operated and the corresponding PV power is recorded. After each iteration, the updating process is initiated if the obtained power exceeds that of the preceding run. Once all the iterations have been completed, the best power from each iteration is combined to yield the global power. 4 Results and discussion The analysis is performed on a PV module containing 20 series-connected solar cells that provides 480 W of nominal maximum power at 800 W/m² and 20 °C (STC). The InGaP/InGaAs/Ge TJSC parameters are presented in Table 1 . The InGaP/InGaAs/Ge TJSC is built in MATLAB, as demonstrated in Fig. 4 . The electrical characteristics of the TJSC-based PV module are presented in Table 2 and Fig. 5 . The PV power characteristics are non-linear and vary with solar irradiance and temperature. Fig. 6 a and 6 b illustrate the simulated output current-voltage and output power-voltage characteristics of the PV model at constant temperature and increasing levels of solar radiation, respectively. Varying the amount of sunlight has a significant impact on the maximum power point: as the solar radiation varies between 450 W/m² and 1000 W/m², the MPP shifts from 280 W to 570 W. Changing the solar radiation strongly affects the I-V characteristics of the cell up to the output voltage of the maximum power point, and has little effect beyond that. To prove the consistency of the proposed CGO-based global MPPT, four PS patterns are applied. The results are compared to EGO, COOT, POA, NGO, LHHO, HHO, ALO, ROA, GBO, EO, MPA and PSO. The performance of the algorithms is analyzed through the best value of the objective function, standard deviation, variance, minimum value and average value under the different PS patterns. Four arrangements of the PV array are employed: three, four, five and six series-connected PV modules. Four PS patterns with different global MPP locations (2nd from the right, 3rd from the right, 2nd from the left, and centre) are used during the evaluation process; the idea of changing the location of the global MPP is to test the reliability of the proposed tracker. Table 3 and Fig. 7 give the detailed specifications of the PS patterns. To keep the comparison fair, the number of populations (5) and iterations (10) are kept fixed for all algorithms. MATLAB software version R2022b has been used on an OMEN X by HP laptop with an Intel(R) Core(TM) i9-9880H CPU @ 2.30 GHz and 32.0 GB RAM. During the optimization process, the cost function to be maximized was the product of PV current and voltage, i.e., the PV power. Each optimizer was run for 30 independent trials to verify the reliability of the proposed CGO-based global MPPT.
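Before turning to the shading scenarios, the sketch below illustrates in Python how the CGO update rules of Eqs. (8)-(11) can drive the search for the PV current at the global MPP. It is a minimal illustration only: the multimodal power curve is a synthetic stand-in for a partially shaded P-V characteristic, not the MATLAB TJSC model of Eqs. (12)-(13), and all numerical values (peak positions, bounds, seed counts) are assumptions.

```python
import numpy as np

# Illustrative multimodal stand-in for a partially shaded P-V curve:
# several local peaks and one global peak (placeholder, not Eq. (13)).
def pv_power(current):
    i = np.clip(current, 0.0, 10.0)
    return (900.0 * np.exp(-0.5 * ((i - 9.6) / 0.8) ** 2)
            + 600.0 * np.exp(-0.5 * ((i - 6.0) / 0.7) ** 2)
            + 400.0 * np.exp(-0.5 * ((i - 3.0) / 0.6) ** 2))

def cgo_mppt(fitness, lo, hi, n_seeds=5, n_iter=10, rng=None):
    """Minimal Chaos Game Optimization sketch (maximization) following
    the seed-update rules of Eqs. (8)-(11)."""
    rng = np.random.default_rng(rng)
    seeds = rng.uniform(lo, hi, n_seeds)          # initial PV-current seeds
    best = seeds[np.argmax([fitness(s) for s in seeds])]
    for _ in range(n_iter):
        mean = seeds.mean()                        # X_mean over all seeds
        for i in range(n_seeds):
            beta, gamma = rng.random(2)            # random gains in (0, 1)
            delta, eps = rng.random(2)
            # alpha_i: one of the four random forms of Eq. (9)
            alpha = rng.choice([rng.random(),
                                2.0 * rng.random(),
                                delta * rng.random() + 1.0,
                                eps * rng.random() + (1.0 - eps)])
            cand = np.array([
                seeds[i] + alpha * (beta * best - gamma * mean),   # Eq. (8)
                best + alpha * (beta * seeds[i] - gamma * mean),   # Eq. (10)
                mean + alpha * (beta * seeds[i] - gamma * best),   # Eq. (11)
                rng.uniform(lo, hi),                               # mutation seed X_T4
            ])
            cand = np.clip(cand, lo, hi)           # keep seeds inside the search space
            cf = np.array([fitness(c) for c in cand])
            k = np.argmax(cf)
            if cf[k] > fitness(seeds[i]):          # keep the better temporary seed
                seeds[i] = cand[k]
            if cf[k] > fitness(best):              # update X_best
                best = cand[k]
    return best, fitness(best)

i_mpp, p_mpp = cgo_mppt(pv_power, 0.0, 10.0, rng=1)
print(f"global MPP estimate: I = {i_mpp:.2f} A, P = {p_mpp:.1f} W")
```

The per-seed selection step (keep the best of the four temporary seeds, then update the global best) mirrors the pseudocode of Section 3.1 and the flowchart of Fig. 2.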
As shown in Table 4 , all approaches were statistically evaluated. Table 5 gives the detailed results for the first shading pattern and Table 6 those for the third pattern, each over 30 runs. The first PV configuration, which uses three series-PV modules, follows the arrangement shown in Fig. 7 a and employs the first PS pattern. Solar irradiation ranges from 300 W/m² to 700 W/m². Under this PS pattern, the power-versus-voltage curve has two local peaks, at 524.93 W and 681 W, and a single global peak at 993.89 W. The global MPP lies in the middle of the curve; at this point, the PV voltage is 103.55 V and the current is 9.59 A. Table 4 shows that, compared with the other algorithms, the proposed CGO-based global MPPT performs best. In the first PS pattern, the mean PV power levels were between 865.67 W and 993.89 W. CGO attains the maximum power of 993.89 W, followed by POA (993.54 W); COOT yields the lowest power of 865.67 W. The standard deviations range from 0.032 to 134.28. CGO achieves the lowest STD (0.032), followed by POA (1.32); ALO yields the worst STD of 134.28. Fig. 8 shows the convergence during the optimization process; CGO finds the global MPP faster than the other algorithms. The final values of the mean cost function for EGO, COOT, POA, NGO, LHHO, HHO, ALO, ROA, GBO, EO, MPA, PSO, and CGO are 973.76 W, 865.67 W, 993.54 W, 976.8 W, 992.91 W, 992.19 W, 882.55 W, 980.73 W, 986.48 W, 983.42 W, 991.16 W, 992.4 W, and 993.89 W, respectively, as shown in Table 4 . This demonstrates that the proposed CGO outperforms the competing algorithms. However, one of the main challenges in integrating CGO into practical PV systems is its time-consuming nature, especially under complex shading patterns, which can delay real-time MPP tracking. To address this issue, the termination criterion of the optimization process should be adaptive, so that the optimizer stops the search as soon as the global MPP is obtained.
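One way to realize such an adaptive end criterion is sketched below, assuming a simple stagnation test; the tolerance and patience values are illustrative assumptions, not settings used in this work.

```python
def run_with_early_stop(step, max_iter=10, tol=1e-3, patience=3):
    """Generic adaptive stopping wrapper: `step(t)` returns the best power
    found up to iteration t; the loop ends once the improvement stays
    below `tol` watts for `patience` consecutive iterations."""
    best, stale = float("-inf"), 0
    for t in range(max_iter):
        power = step(t)
        stale = stale + 1 if power - best < tol else 0
        best = max(best, power)
        if stale >= patience:   # power has plateaued: assume the global MPP is reached
            break
    return best
```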
As shown in Fig. 7 b, the second PS pattern is applied to the second PV configuration, which uses four series-PV modules. The solar irradiance levels are 900 W/m², 700 W/m², 400 W/m², and 200 W/m². The power-versus-voltage curve of this PS pattern has four peaks: a single global peak at 993.89 W and three local peaks at 524.93 W, 903.58 W, and 614.3 W. The global MPP lies just left of the middle of the curve; at this point, the PV voltage is 103.55 V and the current is 9.59 A. Table 4 shows that, compared with the other algorithms under this PS pattern, the proposed CGO-based global MPPT performs best. The mean values of the PV power were between 896.64 W and 993.89 W. CGO attains the maximum power of 993.89 W, followed by PSO (989.52 W); ALO yields the lowest PV power of 896.64 W. The STD values range from 0.0009 to 129.69. CGO achieves the lowest STD of 0.0009, with PSO second at 7.19; ALO yields the worst STD of 129.69. The effect of the optimization process on the mean cost function is shown in Fig. 9 . With reference to Table 4 , the final mean cost function values for EGO, COOT, POA, NGO, LHHO, HHO, ALO, ROA, GBO, EO, MPA, PSO, and CGO are 966.89 W, 879.52 W, 985.76 W, 973.38 W, 986.27 W, 974.2 W, 896.64 W, 982.7 W, 980.41 W, 972.08 W, 985.19 W, 989.52 W, and 993.89 W, respectively. This demonstrates that the proposed CGO outperforms the competing algorithms. The third PS pattern is applied to the third PV configuration, which uses five series-PV modules, as shown in Fig. 7 c. The solar irradiance levels range from 1000 W/m² to 250 W/m². Under this PS pattern, the power-voltage curve has five peaks: a single global peak at 1692.34 W and four local peaks at 571.25 W, 1259.35 W, 1571.61 W, and 977.5 W. The global MPP is the second peak from the right on the curve; the corresponding PV current is 7.58 A and the voltage is 223.05 V. Table 4 shows that, compared with the other algorithms under this PS pattern, the proposed CGO-based global MPPT performs best. The mean PV power values varied from 1544.66 W to 1692.33 W. CGO attains the highest power of 1692.33 W, followed by PSO (1681.37 W); ALO yields the lowest PV power of 1544.66 W. The STD values range from 0.032 to 209.46. CGO achieves the lowest STD of 0.032, followed by LHHO (0.8); ALO yields the worst STD of 209.46. The fluctuation of the mean cost function during the optimization process is shown in Fig. 10 . The mean cost functions for EGO, COOT, POA, NGO, LHHO, HHO, ALO, ROA, GBO, EO, MPA, PSO, and CGO were determined to be 1630.47 W, 1565.72 W, 1684.05 W, 1676.02 W, 1691.93 W, 1675.38 W, 1544.66 W, 1659.81 W, 1677.12 W, 1680.12 W, 1675.05 W, 1681.37 W, and 1692.33 W, respectively. This demonstrates that the proposed CGO outperforms the competing algorithms. As shown in Fig. 7 d, the fourth PS pattern is then applied to the fourth PV configuration, which uses six series-PV modules. The solar irradiance levels are 950 W/m², 750 W/m², 600 W/m², 500 W/m², 350 W/m², and 250 W/m². The power-versus-voltage curve of this PS pattern has six peaks: five local peaks at 548.4 W, 1061.81 W, 1348.68 W, 1359.9 W, and 1171.1 W, and one global peak at 1534 W. The global MPP is the third peak from the right on the curve; at this point, the PV voltage is 223.08 V and the current is 6.91 A. Table 4 shows that, compared with the other algorithms under this PS pattern, the proposed CGO-based global MPPT performs best. The mean PV power values varied from 1372.99 W to 1534.03 W. CGO attains the highest power of 1534.03 W, followed by LHHO (1533.47 W); ALO yields the lowest PV power of 1372.99 W. The STD values range from 0.032 to 157.17. CGO achieves the lowest STD of 0.032, followed by PSO (9.9); ALO yields the worst STD of 157.17. The effect of the optimization process on the mean cost function is illustrated in Fig. 11 . From Table 4 , the final values of the mean cost function for EGO, COOT, POA, NGO, LHHO, HHO, ALO, ROA, GBO, EO, MPA, PSO, and CGO are 1466.44 W, 1412.97 W, 1530.44 W, 1503.28 W, 1533.47 W, 1524.51 W, 1372.99 W, 1524.46 W, 1520.26 W, 1523.04 W, 1530.83 W, 1527.53 W, and 1534.03 W, respectively. This demonstrates that the proposed CGO outperforms the competing algorithms. Table 7 displays the results of an analytical study that evaluated the algorithms under consideration using the ANOVA test. The variables used in this test are the degrees of freedom (DF), the sum of squares (SS), the mean squared error (MS = SS/DF), the ratio of mean squared errors (F), and the probability (P) of observing a test statistic at least as extreme as the one computed.
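For reference, entries of this kind can be generated with standard statistical tooling; the sketch below runs a one-way ANOVA and a follow-up Tukey HSD test (SciPy >= 1.8) on invented placeholder samples, not on the recorded powers behind Table 7.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder 30-run power samples (watts) for three hypothetical trackers;
# the real analysis would use the recorded powers behind Table 7.
cgo = rng.normal(993.9, 0.03, 30)
poa = rng.normal(993.5, 1.3, 30)
alo = rng.normal(882.6, 134.3, 30)

f_stat, p_val = stats.f_oneway(cgo, poa, alo)   # F = MS_between / MS_within
print(f"F = {f_stat:.2f}, P = {p_val:.3g}")     # small P => column means differ

# Tukey's HSD identifies which pairs of trackers differ significantly
print(stats.tukey_hsd(cgo, poa, alo))
```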
With P values far smaller than the 0.05 significance level in every instance, the column means differ significantly from one another. At the same time, the rankings are shown graphically in Fig. 12 , which confirms that the CGO algorithm is very stable and accurate. A Tukey test has been carried out to support the results obtained by ANOVA; its results are shown in Fig. 13 . For the 1st PS pattern, CGO has the best performance, followed by POA, whereas the worst performance is assigned to the original COOT and ALO. For the 2nd PS pattern, CGO has the best performance, followed by PSO, whereas the worst performance is assigned to COOT and ALO. For the 3rd PS pattern, CGO has the best performance, followed by LHHO, whereas the worst performance is assigned to COOT and ALO. Lastly, for the 4th PS pattern, CGO has the best performance, followed by LHHO, whereas the worst performance is assigned to EGO, COOT and ALO. 5 Conclusion Chaos game optimization (CGO) has been implemented to extract the global MPP of a high-efficiency InGaP/InGaAs/Ge triple-junction solar cell (TJSC) based PV system while considering the partial shading (PS) issue. Four different PS patterns were used throughout the assessment procedure, and the tracking performance of CGO was compared with other recent optimization methods. The results demonstrated the superiority of CGO; for the 1st PS pattern, for example, the mean PV power values ranged between 865.67 W and 993.89 W. CGO attains the maximum power of 993.89 W, followed by POA (993.54 W); COOT obtains the lowest power of 865.67 W. The STD values vary from 0.032 to 134.28; the lowest STD of 0.032 is attained by CGO, followed by POA (1.32), while ALO obtains the worst STD of 134.28. In sum, the comparison with recent optimization algorithms confirms the superiority of CGO over the other algorithms. Integrating the high-efficiency InGaP/InGaAs/Ge TJSC with a thermoelectric generator to build a hybrid system will be examined in future work, considering shading and heterogeneous heat-distribution conditions. CRediT authorship contribution statement Houssam Eddine Ghadbane: Writing – review & editing, Writing – original draft, Software, Methodology, Investigation, Formal analysis, Conceptualization. Hegazy Rezk: Writing – review & editing, Writing – original draft, Software, Methodology, Funding acquisition, Conceptualization. Aissa Benhammou: Writing – review & editing, Writing – original draft, Software, Formal analysis. Ahmed F. Mohamed: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Conceptualization. Declaration of competing interest The authors declare that they have no conflicts of interest to report regarding the present study. Funding This research work was funded by Umm Al-Qura University, Saudi Arabia, under grant number 25UQU4290444GSSR05. Acknowledgments The authors extend their appreciation to Umm Al-Qura University, Saudi Arabia, for funding this research work through grant number 25UQU4290444GSSR05.
REFERENCES:
1. FERHATI H (2020)
2. TODOROV T (2015)
3. WANG Z (2013)
4. FERNANDEZ E (2013)
5. FRANCE R (2022)
6. ALMONACID F (2012)
7. SAMAVAT T (2025)
8. CELIKEL R (2024)
9. QI P (2024)
10. ZAKI M (2023)
11. CHALH A (2022)
12. ALAMOSA M (2024)
13. FATHY A (2023)
14. ELTAMALY A (2021)
15. SILAA M (2023)
16. SAJID I (2023)
17. VANKADARA S (2022)
18. MOHAMMADZADEH A (2024)
19. ALBAKER A (2024)
20. WANG C (2024)
21. YANG S (2023)
10.1016_j.jctube.2015.11.003.txt
TITLE: Mycobacterium avium intracellulare complex causing olecranon bursitis and prosthetic joint infection in an immunocompromised host
AUTHORS:
- Tan, Eugene M.
- Marcelin, Jasmine R.
- Mason, Erin
- Virk, Abinash
ABSTRACT:
Case
A 73-year-old immunocompromised male presented with recurrent left elbow swelling due to Mycobacterium avium intracellulare complex (MAC) olecranon bursitis. 3 years after completing MAC treatment, he underwent right total knee arthroplasty (TKA). 1 year later, he developed TKA pain and swelling and was diagnosed with MAC prosthetic joint infection (PJI). He underwent TKA resection, reimplantation, and 12 months of anti-MAC therapy. This patient is the seventh case report of MAC olecranon bursitis and the third case report of MAC PJI. He is the only report of both MAC olecranon bursitis and PJI occurring in the same patient.
Informed consent
This patient was informed and agreed to the publication of this material.
BODY:
Introduction Nontuberculous mycobacteria (NTM) comprise over 125 species and are ubiquitous in soil, water, and animals. Mycobacterium avium intracellulare complex (MAC) is the most common pathogenic NTM species and consists of M. avium and M. intracellulare , which are indistinguishable based on traditional laboratory testing. MAC usually causes pulmonary disease but may also cause lymphatic, skin/soft tissue, skeletal, or disseminated disease. Water sources, such as recirculating hot-water systems, are the reservoir for most MAC infections. [1] We illustrate a rare case of MAC causing both olecranon bursitis and prosthetic joint infection (PJI) in an immunocompromised host. Case report A 73-year-old male with history of multiple myeloma (in remission for 3 years after thalidomide and dexamethasone treatment) and chronic cough, presented to the Emergency Department with 3 days of left elbow swelling, which was diagnosed as olecranon bursitis based on clinical presentation and X-ray ( Fig. 1 ). There was no history of elbow trauma. His bursitis improved with a steroid injection but recurred 2 months later and was treated with a repeat steroid injection. Unfortunately, his bursitis recurred again 2 months later, and bursa aspirate yielded a white blood cell (WBC) count of 45,708 with 98% neutrophils, suspicious for septic bursitis. Bacterial cultures were negative, and he had not been on antibiotics previously. He received a 14-day course of cephalexin. Because the elbow was still edematous after 8 days of cephalexin, he underwent elbow debridement, which revealed purulent fluid with erythematous grayish-brown tissue. Histology showed acutely inflamed synovium consistent with infection. 2 out of 3 samples were smear-positive for acid-fast bacilli. All 3 operative mycobacterial cultures and the initial aspiration grew MAC. On review of systems, he mentioned a chronic productive cough. The patient smoked a pipe for 10 years but quit 65 years ago. He had no formal diagnosis of COPD. Sputum cultures grew MAC. A chest X-ray showed basal atelectasis and multiple bilateral calcified pleural plaques, consistent with prior asbestos exposure ( Fig. 2 ). There was no significant change compared to a chest X-ray done 3 years prior to presentation, and tuberculosis skin testing was negative. Other exposures included gardening, hot-tub use, and a pet dog. The MAC isolate was susceptible to rifabutin, ethambutol, and clarithromycin; intermediate to rifampin, streptomycin, and moxifloxacin; and resistant to ciprofloxacin, kanamycin, cycloserine, ethionamide, and amikacin. His initial treatment regimen included clarithromycin 500 mg PO BID, ethambutol 1600 mg PO daily, and rifabutin 300 mg once daily. The rifabutin was discontinued 3 weeks into therapy due to neutropenia and transaminitis. His bursitis resolved after a 12-month course of ethambutol and clarithromycin. Repeat sputum cultures were not obtained after completion of therapy. One year later, his multiple myeloma recurred, and he was started on lenalidomide and dexamethasone. Around this time, he was also diagnosed with seronegative rheumatoid arthritis (RA) and initiated methotrexate. 3 years after completing his MAC treatment, he underwent an elective right total knee arthroplasty (TKA) for degenerative joint disease. This procedure was performed at an outside facility with presumed peri-operative prophylaxis. One year later, he developed right TKA pain, instability, and swelling. X-ray showed a well-seated right TKA with a large effusion ( Fig. 
3 ). C-reactive protein (CRP) was elevated at 13.5 mg/L (reference range < 8 mg/L), and erythrocyte sedimentation rate (ESR) was elevated at 65 mm/h. Synovial fluid aspiration yielded 4524 total nucleated cells; 57% neutrophils, and 39% monocytes. The synovial fluid aspirate grew MAC. He underwent TKA resection with placement of a vancomycin/tobramycin impregnated spacer. Histology was negative for acute inflammation or granulomas. All 4 surgical tissue samples grew MAC. His multiple myeloma chemotherapy and methotrexate for RA were held. His TKA MAC isolate was still susceptible to ethambutol and clarithromycin. Although it was previously susceptible to rifabutin, it now demonstrated intermediate susceptibility. He was initially started on clarithromycin, ethambutol and rifabutin, but due to recurrent transaminitis and neutropenia, he was continued on dual therapy with ethambutol and clarithromycin for 6 months prior to consideration for reimplantation. Pre-reimplantation CRP was normal at 4.8 mg/L, and ESR was 49 mm/h. Although still on therapy, the right knee synovial fluid was aspirated prior to reimplantation and found to be negative for mycobacterial growth. Six weeks later, he underwent reimplantation. All 4 surgical tissue samples from reimplantation remained negative for MAC. One of 4 cultures grew coagulase-negative staphylococci, which was considered a contaminant. Pathology was negative for acute inflammation. He continued ethambutol and clarithromycin for another 6 months, to complete a total of 12 months. Then, he was placed on chronic suppression with azithromycin 1200 mg orally once a week. Fifteen months later, he successfully underwent an elective contralateral TKA, again for degenerative joint disease. His multiple myeloma went into remission, precluding further chemotherapy. His methotrexate for RA was switched to hydroxychloroquine. He continues to do well at 7 years of follow-up after the 2-stage exchange of his MAC-infected right TKA. Discussion Nontuberculous mycobacteria (NTM) olecranon bursitis is very rare, with only 21 reported cases, 6 of which were due to MAC ( Table 1 ). 38% of cases occurred in immunocompromised patients. Minor elbow trauma or bursa injection can provide a portal of entry for NTM if contaminated with soil [2,3] . Most patients with NTM olecranon bursitis typically present with mild pain that improves with multiple corticosteroid injections, but bursal swelling may worsen. Diagnosis is delayed more than 6 months in most cases. 13 out of the 21 reported cases required surgical intervention followed by prolonged anti-mycobacterial therapy [2] . This patient's subsequent MAC infection of the TKA was also interesting. In general, the microorganisms causing PJI are S. aureus (31.0%), coagulase-negative Staphylococcus (20.2%), culture-negative (15.8%), polymicrobial (7.4%), Streptococcus (5.8%), Enterococcus (3.9%), fungi (2.3%), anaerobes (0.9%), and lastly mycobacteria (0.6%) [4] . Mycobacterial species previously reported to cause PJI include M. tuberculosis, M. bovis, and the rapidly growing mycobacteria: M. abscessus, M. chelonae, M. fortuitum, M. kansasii, M. smegmatis, and M. wolinskyi . However, MAC PJI is relatively rare [5] . The pathogenesis of his infection is unclear – whether it was contamination at surgery or subsequent hematogenous spread. He had chronic respiratory tract colonization, but there is no clear mechanism to explain how MAC respiratory colonization could potentially lead to joint infection in his case. 
Currently, there are 2 known case reports of MAC PJI ( Table 2 ) [5,6] . Because of the rarity of MAC PJI, optimal management is undefined. A 3-drug regimen of a macrolide, ethambutol, and a rifamycin is recommended for 6–12 months, but accepted treatment guidelines have not been established [1] . Regarding reimplantation, treatment decisions must be extrapolated from the available literature on other mycobacterial PJIs. In a 1998 case series, only 2 out of 7 patients with M. tuberculosis PJI underwent reimplantation, one at 20 months and the other at 30 months post-resection [7] . In a 2007 case series of PJI due to rapidly growing mycobacteria (RGM), 2 patients underwent reimplantation (one at 3.5 months and one at 7.5 months post-resection) and required chronic antibiotic suppression [8] . ESR and CRP are helpful to assess clearance of bacterial infections prior to reimplantation but may be unreliable in determining timing of second-stage reimplantation in NTM PJIs [9] . Our patient's ESR of 49 mm/h done one day prior to reimplantation may have been confounded by his multiple myeloma and RA; however, CRP had normalized. In a 2007 case series of RGM PJI, the median ESR at diagnosis was 70.5 mm/h, and the median CRP was 6 mg/dL. In the two patients who underwent reimplantation, ESR and CRP had normalized [8] . A synovial fluid WBC count less than 1102.5 cells/µL may be an adjunctive perioperative test but only has a sensitivity of 75% and specificity of 61% and has not been studied in NTM PJI [9] . Suppressive therapy is another important question. One patient with M. chelonae PJI underwent resection, reimplantation, and suppressive therapy with clarithromycin and moxifloxacin. Two patients with M. fortuitum PJI were able to retain their prosthesis and remained asymptomatic on suppressive regimens of moxifloxacin, trimethoprim-sulfamethoxazole and azithromycin and levofloxacin/trimethoprim-sulfamethoxazole, respectively. However, suppressive therapy may not be feasible with other species such as M. abscessus , which are often resistant to most oral antibiotics [8] . This patient's MAC olecranon bursitis and PJI were separated by approximately 4 years, suggesting long-term colonization. MAC has been known to colonize the respiratory and gastrointestinal tract of patients, especially those with immune compromise such as AIDS or those with structural lung disease, such as COPD or bronchiectasis. Our patient was immunocompromised by his multiple myeloma and by the treatment of seronegative RA [10–12] . Other risk factors for pulmonary NTM colonization include white race, age greater than or equal to 60 years, female sex, and birth and residency in Canada for at least 10 years. Our patient had the risk factors of white race, age greater than 60 years, and immunosuppression, consistent with his long-term colonization with MAC [12] . This patient's MAC olecranon bursitis, followed by a MAC PJI 4 years later, is a rare phenomenon and likely related to chronic, relatively asymptomatic respiratory MAC colonization. In summary, mycobacterial PJIs are very rare, and patients often experience a delay in diagnosis. Mycobacterial cultures should be obtained in immunocompromised patients with persistent symptoms and negative bacterial cultures [2] . This is the seventh case report of MAC olecranon bursitis and the third case report of a MAC PJI. This patient is unique, as he is the only known case of both MAC olecranon bursitis and PJI occurring in the same patient.
This case also highlights the rare risk of MAC PJI in immunocompromised patients who may be colonized with MAC. Learning points for clinicians 1. Obtain mycobacterial cultures for persistent joint pain or swelling despite empiric antibiotics, especially if the patient is immunocompromised. 2. ESR, CRP, and synovial WBC count may be helpful to determine the timing of reimplantation for patients with PJIs. 3. Discuss the option of long-term antibiotic suppression for patients with MAC PJI.
REFERENCES:
1. GRIFFITH D (2007)
2. GARRIGUES G (2009)
3. ZHIBANG Y (2002)
4. AGGARWAL V (2014)
5. GUPTA A (2009)
6. MCLAUGHLIN J (1994)
7. BERBARI E (1998)
8. EID A (2007)
9. KUSUMA S (2011)
10. WEINSTOCK D (2003)
11. BERMUDEZ L (1992)
12. HERNANDEZGARDUNO E (2010)
10.1016_j.asej.2014.04.014.txt
TITLE: Blood flow analysis of Prandtl fluid model in tapered stenosed arteries
AUTHORS:
- Akbar, Noreen Sher
ABSTRACT:
In the present article we discuss the blood flow analysis of the Prandtl fluid model in tapered stenosed arteries. The governing equations for the considered model are presented in cylindrical coordinates. Perturbation solutions are constructed for the velocity, impedance resistance, wall shear stress, and shearing stress at the stenosis throat. Attention is mainly focused on the analysis of the embedded parameters in converging, diverging, and non-tapered situations. Streamlines are plotted at the end of the article for the considered arteries. It is observed that the velocity profile decreases with increases in the Prandtl fluid parameters, the stenosis shape parameter, and the maximum height of the stenosis.
BODY:
Nomenclature $\bar{V}$ velocity vector; $n$ stenosis shape parameter; $Q$ flow rate; $\bar{T}$ temperature; $u$ velocity component in the $r$-direction; $w$ velocity component in the $z$-direction; $\bar{P}$ pressure; $\bar{S}$ Cauchy stress tensor; $A$, $C$ material constants of the Prandtl fluid model; $\xi$ tapering parameter; $b$ length of stenosis. Greek symbols: $\alpha$, $\beta$ Prandtl fluid parameters; $\mu$ dynamic viscosity; $\delta$ height of the stenosis; $\rho$ density of the fluid; $\nu$ kinematic viscosity; $\phi$ tapered angle. 1 Introduction Arteries, which are essentially living tissues, require a supply of metabolites, including oxygen, and removal of waste products. Aroesty and Gross [1] discussed the pulsatile flow of blood in small blood vessels. Blood flow in arteries has important aspects from both the engineering and the medical applications points of view. The hemodynamic behavior of the blood flow is influenced by the presence of an arterial stenosis; if a stenosis is present in an artery, normal blood flow is disturbed. Thurston [2] and Chien et al. [3] presented the viscoelastic properties of blood; according to them, the arterial configuration is closely connected with the blood flow. Blair and Spanner [4] discussed that treating blood as a Casson fluid is valid at moderate shear rates and that the Casson and Herschel-Bulkley models have similar validity for blood flow. Chaturani and Samy [5] applied the theory of Aroesty and Gross [1] to study the pulsatile flow of blood in stenosed arteries, modeling blood as a Casson fluid. Mandal [6] presented an unsteady analysis of non-Newtonian blood flow through tapered arteries with a stenosis. Unsteady flow and mass transfer in models of stenotic arteries considering fluid-structure interaction were discussed by Valencia and Villanueva [7] . Pulsatile flow of blood for a modified second-grade fluid model was presented by Massoudi and Phuoc [8] . In another article, Siddiqui et al. [9] discussed the Casson fluid in arterial stenosis. A blood flow analysis of the micropolar fluid model in a mildly stenosed tapered artery, non-symmetric axially but symmetric radially, was presented by Mekheimer and Kot [10] . According to their observation, the magnitude of the resistance impedance is higher for a micropolar fluid than for a Newtonian fluid model. In another article, Mekheimer and Kot [11] presented the same model with the influence of a magnetic field and Hall currents on blood flow through a stenotic artery and showed that the wall shear stress and the shearing stress on the wall at the maximum height of the stenosis possess an inverse characteristic to the resistance to flow with respect to any given value of the Hartmann number and the Hall parameter. Varshney et al. [12] numerically studied the influence of a magnetic field on blood flow in an artery having multiple stenoses. A simulation of heat and chemical reactions for the Reiner-Rivlin fluid model of blood flow through a tapered artery with a stenosis was presented by Akbar and Nadeem [13] . A numerical simulation of generalized Newtonian blood flow past a pair of irregular arterial stenoses was carried out by Mustafa et al. [14] ; they showed that, in comparison with the corresponding Newtonian model, the generalized Newtonian fluid experiences a higher pressure drop, a lower peak wall shear stress, and a smaller separation region. Recently, a mathematical model for blood flow through an elastic artery with multiple stenoses under the effect of a magnetic field in a porous medium was presented by Mekheimer et al. [15] . Some important articles describing the features of blood flow are cited in Refs. [16–34] .
The objective of the present study is to discuss the blood flow analysis of a Prandtl fluid in tapered stenosed arteries. The governing equations for the considered model are presented in cylindrical coordinates; this model has not been applied to the blood flow problem so far. Perturbation solutions are constructed for the velocity, impedance resistance, wall shear stress, and shearing stress at the stenosis throat. Attention is mainly focused on the analysis of the embedded parameters in converging, diverging, and non-tapered situations. Streamlines are plotted at the end of the article. 2 Mathematical model For an incompressible fluid the balance of mass and momentum are given by (1) $\operatorname{div} \bar{V} = 0$ (2) $\rho \frac{d\bar{V}}{dt} = -\nabla \bar{P} + \operatorname{div} \bar{S}$ The constitutive equation for the Prandtl fluid model is given by Akbar et al. [19] : (3) $\bar{S} = \dfrac{A \sin^{-1}\left[\frac{1}{C}\left\{\left(\frac{\partial \bar{u}}{\partial \bar{z}}\right)^{2} + \left(\frac{\partial \bar{w}}{\partial \bar{r}}\right)^{2}\right\}^{1/2}\right]}{\left\{\left(\frac{\partial \bar{u}}{\partial \bar{z}}\right)^{2} + \left(\frac{\partial \bar{w}}{\partial \bar{r}}\right)^{2}\right\}^{1/2}} \, \frac{\partial \bar{w}}{\partial \bar{r}}$ 3 Mathematical development We examine an incompressible flow of a Prandtl fluid with constant viscosity $\mu$ and density $\rho$ in a tube of length $L$. The cylindrical coordinate system $(\bar{r}, \bar{\theta}, \bar{z})$ is chosen such that $\bar{u}$ and $\bar{w}$ are the velocity components in the $\bar{r}$ and $\bar{z}$ directions respectively, and $\bar{r} = 0$ is selected as the axis of symmetry of the tube. The geometry of the stenosis is represented, following Mekheimer and Kot [10] , as (4) $h(z) = d(z)\left[1 - \eta\left(b^{\,n-1}(z-a) - (z-a)^{n}\right)\right]$ for $a \le z \le a+b$, and $h(z) = d(z)$ otherwise, with (5) $d(z) = d_0 + \xi z$ Here $d(z)$ is the radius of the tapered arterial segment in the stenotic region, $d_0$ is the radius of the non-tapered artery in the non-stenotic region, $\xi$ is the tapering parameter, $b$ is the length of the stenosis, $n$ ($\ge 2$) is a parameter determining the shape of the constriction profile, referred to as the shape parameter (the symmetric stenosis occurs for $n = 2$), and $a$ indicates its location (see Fig. 1 ). The parameter $\eta$ is defined by (6) $\eta = \dfrac{\delta^{*}\, n^{\,n/(n-1)}}{d_0\, b^{\,n}\,(n-1)}$ in which $\delta^{*}$ is the maximum height of the stenosis, located at $z = a + b/n^{1/(n-1)}$ (Mekheimer and Kot [10] ). The equations governing the flow are (7) $\dfrac{\partial \bar{u}}{\partial \bar{r}} + \dfrac{\bar{u}}{\bar{r}} + \dfrac{\partial \bar{w}}{\partial \bar{z}} = 0$ (8) $\rho\left(\bar{u}\dfrac{\partial}{\partial \bar{r}} + \bar{w}\dfrac{\partial}{\partial \bar{z}}\right)\bar{u} = -\dfrac{\partial \bar{p}}{\partial \bar{r}} + \dfrac{1}{\bar{r}}\dfrac{\partial}{\partial \bar{r}}\left(\bar{r}\,\bar{S}_{\bar{r}\bar{r}}\right) + \dfrac{\partial}{\partial \bar{z}}\left(\bar{S}_{\bar{r}\bar{z}}\right) - \dfrac{\bar{S}_{\bar{\theta}\bar{\theta}}}{\bar{r}}$ (9) $\rho\left(\bar{u}\dfrac{\partial}{\partial \bar{r}} + \bar{w}\dfrac{\partial}{\partial \bar{z}}\right)\bar{w} = -\dfrac{\partial \bar{p}}{\partial \bar{z}} + \dfrac{1}{\bar{r}}\dfrac{\partial}{\partial \bar{r}}\left(\bar{r}\,\bar{S}_{\bar{r}\bar{z}}\right) + \dfrac{\partial}{\partial \bar{z}}\left(\bar{S}_{\bar{z}\bar{z}}\right)$ Defining the dimensionless variables (10) $r = \dfrac{\bar{r}}{d_0},\; z = \dfrac{\bar{z}}{b},\; w = \dfrac{\bar{w}}{u_0},\; u = \dfrac{b\,\bar{u}}{u_0\,\delta^{*}},\; p = \dfrac{d_0^{2}\,\bar{p}}{u_0\, b\, \mu},\; h = \dfrac{\bar{h}}{d_0},\; Re = \dfrac{\rho\, b\, u_0}{\mu},\; S_{rr} = \dfrac{b\,\bar{S}_{rr}}{u_0 \mu},\; S_{rz} = \dfrac{d_0\,\bar{S}_{rz}}{u_0 \mu},\; S_{zz} = \dfrac{b\,\bar{S}_{zz}}{u_0 \mu},\; S_{\theta\theta} = \dfrac{b\,\bar{S}_{\theta\theta}}{u_0 \mu},\; \alpha = \dfrac{A}{\mu c},\; \beta = \dfrac{A u_0^{2}}{6 d_0^{2} \mu c^{3}}$ and using Eqs. (1)–(3) and (10) together with the additional conditions of Mekheimer and Kot [10] , (10a) $\dfrac{Re\,\delta^{*}\, n^{1/(n-1)}}{b} \ll 1$, (10b) $\dfrac{d_0\, n^{1/(n-1)}}{b} \sim O(1)$, and the mild stenosis assumption $\delta^{*}/d_0 \ll 1$, Eqs. (8) and (9) take the form (11) $\dfrac{\partial P}{\partial r} = 0$ (12) $\dfrac{\partial P}{\partial z} = \dfrac{1}{r}\dfrac{\partial}{\partial r}\left[ r\left( \alpha \dfrac{\partial w}{\partial r} + \beta \left( \dfrac{\partial w}{\partial r} \right)^{3} \right) \right]$ The boundary conditions are now given by (12a) $\dfrac{\partial w}{\partial r} = 0$ at $r = 0$, (12b) $w = 0$ at $r = h(z)$, and the dimensionless wall profile becomes (13) $h(z) = (1 + \xi' z)\left[1 - \eta_1\left((z - \sigma_1) - (z - \sigma_1)^{n}\right)\right]$ for $\sigma_1 \le z \le \sigma_1 + 1$, in which (13a) $\eta_1 = \dfrac{\delta\, n^{\,n/(n-1)}}{n-1},\quad \delta = \dfrac{\delta^{*}}{d_0},\quad \sigma_1 = \dfrac{a}{b},\quad \xi' = \dfrac{\xi b}{d_0},\quad \xi = \tan\phi$, where $\phi$ represents the tapered angle. Further, we consider three types of arteries: converging tapering ($\phi < 0$), a non-tapered artery ($\phi = 0$), and diverging tapering ($\phi > 0$).
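To make the geometry of Eqs. (13)–(13a) concrete, the short Python sketch below evaluates the dimensionless wall profile h(z) for the three tapering cases; all parameter values (n, delta, the tapered angles) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def wall_profile(z, n=2, delta=0.2, sigma1=0.0, phi_deg=0.1):
    """Dimensionless stenosed-wall radius h(z) of Eq. (13); phi < 0, = 0, > 0
    give converging, non-tapered, and diverging arteries, respectively."""
    eta1 = delta * n ** (n / (n - 1)) / (n - 1)   # Eq. (13a)
    xi_p = np.tan(np.radians(phi_deg))            # xi' = tan(phi), scaled taper
    s = z - sigma1
    bump = np.where((s >= 0) & (s <= 1), eta1 * (s - s ** n), 0.0)
    return (1 + xi_p * z) * (1 - bump)

z = np.linspace(0.0, 1.0, 101)
for phi, label in [(-0.1, "converging"), (0.0, "non-tapered"), (0.1, "diverging")]:
    h = wall_profile(z, phi_deg=phi)
    # the minimum radius marks the stenosis throat, h = 1 - delta when untapered
    print(f"{label:12s} min h = {h.min():.3f} at z = {z[np.argmin(h)]:.2f}")
```

With n = 2 the throat sits at z = sigma1 + 1/2 and the profile reproduces h = 1 - delta there, which is a quick self-consistency check on the reconstructed Eqs. (6) and (13a).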
4 Solution of the problem 4.1 Perturbation solution Eq. (12) is a non-linear equation; therefore we seek a perturbation solution. For the perturbation solution, we expand $w$, $P$ and $F_1$ in powers of the perturbation parameter $\beta$: (14) $w = w_0 + \beta w_1 + O(\beta^{2})$ (15) $P = P_0 + \beta P_1 + O(\beta^{2})$ (16) $F_1 = F_{10} + \beta F_{11} + O(\beta^{2})$ The perturbation results for the small parameter $\beta$, satisfying the conditions (12a) and (12b) , give the velocity field and pressure gradient directly as (17) $w = \dfrac{r^{2} - h^{2}}{4\alpha}\dfrac{dP}{dz} + \dfrac{\beta}{16}\,(2F_1)^{3}\,\dfrac{r^{4} - h^{4}}{\alpha h^{12}}$ (18) $\dfrac{dP}{dz} = -\dfrac{8\alpha\,(2F_1 + h^{2})}{h^{4}} - \beta\,\dfrac{256\,(2F_1 + h^{2})^{3}}{3 h^{10}}$ The pressure drop $\Delta p$ across the stenosis between the sections $z = 0$ and $z = L$ can be obtained using the expression (19) $\Delta p = \int_0^{L} \left(-\dfrac{dp}{dz}\right) dz$ 4.2 Resistance impedance The resistance impedance is given by (20) $\tilde{\lambda} = \dfrac{\Delta p}{Q} = \int_0^{a} F(z)\big|_{h=1}\, dz + \int_a^{a+b} F(z)\, dz + \int_{a+b}^{L} F(z)\big|_{h=1}\, dz$ in which $F(z) = \dfrac{16\alpha}{h^{4}} + \dfrac{2048\,\beta F_1^{2}}{3 h^{10}}$. On simplification, Eq. (20) yields (21) $\tilde{\lambda} = (L - b)\left(16\alpha + \dfrac{2048\,\beta F_1^{2}}{3}\right) + \int_a^{a+b} F(z)\, dz$ 4.3 Expression for the wall shear stress The dimensionless shear stress is (22) $S_{rz} = \alpha \dfrac{\partial w}{\partial r} + \beta \left(\dfrac{\partial w}{\partial r}\right)^{3}$ and the wall shear stress is of the form (23) $S_{rz}\big|_{r=h} = \left[\alpha \dfrac{\partial w}{\partial r} + \beta \left(\dfrac{\partial w}{\partial r}\right)^{3}\right]_{r=h}$ The shearing stress at the stenosis throat, i.e., the wall shear at the maximum height of the stenosis located at $z = \dfrac{a}{b} + \dfrac{1}{n^{1/(n-1)}}$, can be expressed as (24) $\tilde{\tau}_s = S_{rz}\big|_{h = 1 - \delta}$ The final expressions for the dimensionless resistance to flow $\lambda$, wall shear stress $\tau_{rz}$, and shearing stress at the throat $\tau_s$ are (25) $\lambda = \dfrac{\tilde{\lambda}}{\lambda_0} = \dfrac{1}{3}\left[\left(1 - \dfrac{b}{L}\right)\left(16\alpha + \dfrac{2048\,\beta F_1^{2}}{3}\right) + \dfrac{1}{L}\int_a^{a+b} F(z)\, dz\right]$ (26) $\tau_{rz} = \dfrac{S_{rz}}{\tau_0},\qquad \tau_s = \dfrac{\tilde{\tau}_s}{\tau_0}$ with (27) $\lambda_0 = 3L,\qquad \tau_0 = 4Q$ 5 Numerical solution To check the validity of the perturbation results, Eq. (12) has also been solved numerically using a shooting technique, and the results are presented through tables and graphs; see Table 1 and Fig. 8 (a) and (b). The results are in good agreement for converging tapering (CT), diverging tapering (DT), and non-tapered (NT) arteries (see Table 2 ).
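As a numerical companion to Eqs. (18), (20) and (21) as reconstructed above, the sketch below evaluates the throat pressure gradient and the resistance impedance under assumed parameter values, treating $F_1$ as a prescribed flow-rate parameter. It is an illustration only, not the shooting-technique solver used for Table 1.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (assumptions, not the paper's values); lengths in units of b
alpha, beta, F1 = 1.0, 0.05, 0.3
a, b, L, n, delta = 0.2, 1.0, 2.0, 2, 0.2

def h_wall(z):
    """Non-tapered wall profile of Eq. (13) with sigma1 = a/b."""
    eta1 = delta * n ** (n / (n - 1)) / (n - 1)
    s = (z - a) / b
    return 1 - eta1 * (s - s ** n) if 0 <= s <= 1 else 1.0

def big_F(z):
    """Integrand F(z) of the resistance impedance, Eq. (20)."""
    h = h_wall(z)
    return 16 * alpha / h ** 4 + 2048 * beta * F1 ** 2 / (3 * h ** 10)

# Resistance impedance, Eq. (21): straight segments plus the stenotic reach
lam = ((L - b) * (16 * alpha + 2048 * beta * F1 ** 2 / 3)
       + quad(big_F, a, a + b)[0])
print(f"resistance impedance (Eq. 21): {lam:.2f}")

# Pressure gradient of Eq. (18) at the stenosis throat, h = 1 - delta
h_t = 1 - delta
dpdz = (-8 * alpha * (2 * F1 + h_t ** 2) / h_t ** 4
        - 256 * beta * (2 * F1 + h_t ** 2) ** 3 / (3 * h_t ** 10))
print(f"dP/dz at throat (Eq. 18): {dpdz:.2f}")
```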
6 Graphical discussion Our interest in this section is to analyze the effects of the Prandtl fluid parameters $\alpha$ and $\beta$, the stenosis shape $n$, and the maximum height of the stenosis $\delta$ for converging tapering, diverging tapering, and non-tapered arteries in the Prandtl fluid. For that purpose we have plotted Figs. 2–7 . The variation of the axial velocity with $\alpha$, $\beta$, $n$, and $\delta$ in converging, diverging, and non-tapered arteries is displayed in Fig. 2 (a)–(d). We observe that the velocity profile decreases with an increase in $\alpha$, $\beta$, $n$, and $\delta$. It is also seen that for the case of converging tapering the velocity attains larger values as compared to the cases of diverging tapering and non-tapered arteries. Fig. 3 (a)–(d) show how converging, diverging, and non-tapered arteries influence the wall shear stress $S_{rz}$. Interestingly, the shear stress increases with an increase in $\alpha$ and $\beta$, and decreases with an increase in $\delta$ and $n$. It is also seen that the stress yields a diverging tapering with tapered angle $\phi > 0$, a converging tapering with tapered angle $\phi < 0$, and a non-tapered artery with tapered angle $\phi = 0$. In Fig. 4 (a)–(c) it is noticed that the impedance resistance increases for converging, diverging, and non-tapered arteries when we increase $\alpha$, $\beta$, and $n$. We also observed that the resistive impedance in a diverging tapering appears smaller than in a converging tapering because the flow rate is higher in the former case compared with the latter. The impedance resistance attains its maximum values in the symmetric stenosis case ($n = 2$). Figs. 5–7 show the streamlines for different values of $n$, $\alpha$, and $\beta$. Streamlines for different values of the Prandtl fluid parameter $\alpha$ are shown in Fig. 5 ; here it is noticed that the size of the trapping bolus increases while the number of boluses decreases as $\alpha$ increases. Fig. 6 is plotted to show the streamlines for different values of the Prandtl fluid parameter $\beta$; here the size of the trapping bolus decreases with an increase in $\beta$. Streamlines for different values of the stenosis shape $n$ are presented in Fig. 7 ; it is seen that the size of the trapping bolus increases while the number of boluses decreases as $n$ increases. 7 Conclusions The blood flow analysis of the Prandtl fluid model in tapered stenosed arteries has been discussed. Analytical solutions have been evaluated using the regular perturbation technique. The main points of the performed analysis are as follows: 1. The velocity profile decreases with an increase in the Prandtl fluid parameters $\alpha$ and $\beta$, the stenosis shape $n$, and the maximum height of the stenosis $\delta$. 2. For the case of converging tapering, the velocity attains larger values as compared to the cases of diverging tapering and non-tapered arteries. 3. The shear stress increases with an increase in $\alpha$ and $\beta$, and decreases with an increase in $\delta$ and $n$. 4. The stress yields a diverging tapering with tapered angle $\phi > 0$, a converging tapering with tapered angle $\phi < 0$, and a non-tapered artery with tapered angle $\phi = 0$. 5. The impedance resistance increases for converging, diverging, and non-tapered arteries when $\alpha$, $\beta$, and $n$ increase. 6. The resistive impedance in a diverging tapering appears smaller than in a converging tapering because the flow rate is higher in the former case compared with the latter. 7. The impedance resistance attains its maximum values in the symmetric stenosis case ($n = 2$). 8. The size of the trapping bolus increases while the number of boluses decreases as the Prandtl fluid parameter $\alpha$ increases. 9. The size of the trapping bolus increases while the number of boluses decreases as the stenosis shape $n$ increases. 10. The size of the trapping bolus decreases with an increase in the Prandtl fluid parameter $\beta$.
REFERENCES:
1. AROESTY J (1972)
2. THURSTON G (1972)
3. CHIEN S (1975)
4. SCOTTBLAIR G (1974)
5. CHATURANI P (1986)
6. MANDAL P (2005)
7. VALENCIA A (2006)
8. MASSOUDI M (2008)
9. SIDDIQUI S (2009)
10. MEKHEIMER K (2008)
11. MEKHEIMER K (2008)
12. VARSHNEY G (2010)
13. AKBAR N (2010)
14. MUSTAFA N (2011)
15. MEKHEIMER K (2011)
16. NADEEM S (2011)
17. NADEEM S (2011)
18. NADEEM S (2011)
19. AKBAR N (2012)
20. AKBAR N (2012)
21. AKBAR N (2012)
22. MEKHEIMER K (2010)
23. MEKHEIMER K (2008)
24. MEKHEIMER K (2011)
25. ELLAHI R (2013)
26. ELLAHI R (2014)
27. ELLAHI R (2013)
28. ELLAHI R (2014)
29. SHEIKHOLESLAMI M (2013)
30. SHEIKHOLESLAMI M (2013)
31. SHEIKHOLESLAMI M (2014)
32.
33. SHEIKHOLESLAMI M (2014)
34. SHEIKHOLESLAMI M (2013)
10.1016_j.therwi.2024.100114.txt
TITLE: Sperm storage in males of the Neotropical rattlesnake Crotalus durissus (Squamata: Viperidae): Structure and seasonal variation of the distal ductus deferens
AUTHORS:
- Carvalho, Leonardo
- Avelar, Gleide Fernandes de
- Resende, Flávia Cappuccio de
ABSTRACT:
In species with asynchronous reproductive cycles, where gamete production is not aligned with the mating season, either males or females must store sperm. This reproductive tactic is an obligatory feature of male rattlesnakes’ reproductive cycle due to asynchrony between spermatogenesis and mating. Given that the ductus deferens is the primary site of sperm storage in male snakes, we aimed to investigate the morphological and histochemical changes in the distal ductus deferens of C. durissus throughout its reproductive cycle. In this species, spermatogenesis begins in spring and peaks in summer, while testes regress during autumn and winter. The distal ductus deferens of 28 mature male specimens was evaluated using histomorphometric and histochemical methods. Spermatozoa were consistently observed within the lumen of the ductus deferens in almost all specimens. The principal cells of the distal region of ductus deferens reacted positively for Periodic Acid-Schiff and Bromophenol Blue. Secretions were observed in the apical region of the principal cells' cytoplasm and along the epithelium edge, which may be related to gamete maintenance. Increased secretory activity of the principal cells was observed during periods of testicular activity. A reduction in the lumen of ductus deferens occurs during testicular regression, indicating possible fluid resorption by epithelial cells. Fluid resorption might be one of the mechanisms to ensure stored sperm viability, as it provides an increase in the glycoprotein’s concentration.
BODY:
1 Introduction The male reproductive system of Squamata consists of a pair of testes, epididymis, ductus deferens, a sexual segment of the kidney, and hemipenes [1,2] . The testes of most snakes have an elongated and cylindrical shape, generally present bilateral asymmetry, and are located in the coelomic cavity [3] . Snakes and other squamates possess a system of efferent ducts that originate from the testis, and these ducts function not only in the passage of spermatozoa but also in their storage and as accessory sex glands [4] . After spermiation, spermatozoa pass through the seminiferous tubules, which are connected to the rete testis, and follow a sequential path to the ductuli efferentes, ductus epididymis, and ductus deferens [4] . The latter typically exhibits bilateral asymmetry, with the right duct being longer than the left, and develops tight coiling as it passes caudally toward the ampulla [5] . Studies on Lepidosauria and Testudines have shown that the ductus deferens presents pseudostratified epithelium, muscular tissue, and dense connective tissue containing blood vessels [6,7] . Sever [6] and Rojas et al. [8] , working with Seminatrix pygaea and Dipsas mikanii , respectively, reported two epithelial cell types in the ductus deferens: principal and basal cells, with the latter arranged between the principal cells along the basal lamina of the epithelium. The ductus deferens is the primary site for sperm storage in snakes [4,9] . Long- or short-term gamete storage in the ductus deferens may be related to seasonal patterns of spermatogenesis and mating periods [10] . The reproductive cycle of male snakes, as of other squamates, can be classified by considering the temporal relationship between spermatogenesis and mating [10,11] . In species with dissociated (or asynchronous) cycles, the gonadal activity does not occur in the period of copulation, requiring gamete storage in males [10,11] ; this type of cycle is also called postnuptial spermatogenesis [10,12] . On the other hand, in species with associated (or synchronous) cycles, the production of spermatozoa occurs in the mating period [10,11] ; this type of cycle is also called prenuptial spermatogenesis [10,12] . In snakes with asynchronous cycles, spermatozoa are observed in the lumen of the distal ductus deferens during the entire year [10] . Conversely, in snakes with synchronous cycles, spermatozoa are not seen in the ductus deferens after the mating season [10] . The Neotropical rattlesnake Crotalus durissus is a venomous species from the Viperidae family with a wide distribution in South America [13,14] . In Brazil, this species is found in the Cerrado, Caatinga, and Pampas biomes, as well as in open areas of the Amazon and the Atlantic Forest [13,14] . Previous studies have shown that C. durissus spermatogenic activity begins in spring and peaks in summer, while testicular regression occurs in autumn and winter [15,16] . Resende and Avelar [16] observed a peak in the hypertrophy of the sexual segment of the kidney and elevated testosterone levels in summer, when the species shows increased testicular activity. Additionally, individuals of C. durissus have been found to be more active during summer and autumn [17–20] . Combat rituals and mating have been reported in autumn for captive animals [21] , and in summer in the wild [22] . Vitellogenic females of C. durissus are observed from late summer to late autumn [23] .
After mating, females store spermatozoa in the uterus throughout the winter, with fertilization occurring in spring [24] . To date, most studies have focused on determining the reproductive cycle of Neotropical male snakes using only macroscopic morphological characteristics [10] . While these studies have provided valuable insights into overall reproductive timing and gross anatomical changes, they offer limited understanding of the finer microstructural and cellular processes involved. Notably, there remains a lack of information regarding the microstructural characteristics underlying reproductive tactics [21,25,26] . Although the ductus deferens is the primary site for gamete storage in male snakes, few studies have detailed the changes in this structure throughout the reproductive cycle [6,9,21,27] . Therefore, we aimed to investigate morphological and histochemical changes in the distal ductus deferens of C. durissus throughout the periods of testicular activity and regression. To our knowledge, this is the first detailed study of the distal ductus deferens of a Neotropical snake species. 2 Material and methods 2.1 Specimens and sampling area For this study, we used 28 mature males of C. durissus : 14 exhibiting active testes, collected during the austral summer and spring, and 14 exhibiting testicular regression, collected during the austral autumn and winter. The snakes were collected in the southeastern Brazilian state of Minas Gerais between July 2015 and February 2017 ( Table S1 ), under a license granted by the Chico Mendes Institute for Biodiversity Conservation (ICMBio, # 48897-1). The animals were collected in a Cerrado area, where the dry season corresponds to the period from May to September; the months from November to February present the highest rainfall in the collection area [28] . After inducing unconsciousness through hypoxia using carbon dioxide, animals were euthanized via intracoelomic injection of barbiturate (Thiopental® 100 mg/kg) [29] . All procedures involving animals were conducted in accordance with the institutional animal care protocols and guidelines approved by the Ethics Committee in Animal Experimentation of Federal University of Minas Gerais (CEUA/UFMG, # 130/2015) and Ezequiel Dias Foundation (CEUA/Funed, # 079/2015). After dissection and collection of organs, the specimens were fixed in formalin and transferred to the Scientific Collection of Snakes of the Ezequiel Dias Foundation, Brazil. After euthanasia, the snakes were weighed, and their snout-vent length (SVL) was measured. The testes of all specimens were weighed, and their gonadosomatic index (GSI) was calculated using the formula: [(testes mass/body mass) × 100]. 2.2 Tissue sampling and preparation Fragments of the distal region of the ductus deferens were removed and fixed by immersion in Bouin’s solution (left fragment) or 5 % buffered glutaraldehyde (right fragment) for 24 h. The fixatives were then replaced with 70 % ethanol and 0.05 M phosphate buffer (pH 7.3), respectively. Samples fixed in glutaraldehyde were dehydrated in a graded ethanol series and embedded in glycol methacrylate. Tissue fragments fixed in Bouin’s solution were processed following standard protocols for embedding in Paraplast® (Sigma-Aldrich Corporation, St. Louis, USA). Sections of 4 and 5 μm thickness were obtained from tissue embedded in glycol methacrylate and Paraplast®, respectively, using a Leica RM 2165 microtome (Leica Biosystems, Wetzlar, Germany).
Alternating slides were used for the different staining procedures and subsequent analyses. Tissues embedded in glycol methacrylate were used for histomorphometric evaluation and for histochemical analysis with Periodic Acid-Schiff (PAS) and Bromophenol Blue (BB). Tissues embedded in Paraplast® were used for Alcian Blue (AB) staining. The slides were observed under a compound light microscope Olympus BX40 (Olympus Corporation, Tokyo, Japan), and images were obtained using an Olympus DP25 microscope camera (Olympus Corporation, Tokyo, Japan). Morphometric analyses were performed using ImageJ 1.47t software [30] . 2.3 Ductus deferens morphometry To investigate seasonal variation and spermatozoa storage in the distal ductus deferens, we used fragments from the right side. These slides were stained with toluidine blue and 1 % sodium borate. To measure the diameter of the distal ductus deferens (DDD) and the diameter of the lumen of the distal ductus deferens (DDLD), we used 20 cross-sections with a circular shape. We obtained the ductus deferens epithelium height (DDEH) from 30 measures per specimen exhibiting either regressed or active testes. The synthetic activity of the ductus deferens cells was correlated with the nuclear volume of the principal cells, since cells with intense protein synthesis activity may have euchromatic nuclei, with less condensed chromatin, which can alter nuclear volume. Thus, the nuclear diameter was measured from 20 nuclei, considering only those with a spherical shape. The nuclear volume of principal cells (NVPC) was calculated using the formula V = (4/3)πr³, where r = nuclear diameter/2 [31] . The percentage of ductus deferens components during active and regressed testes was obtained through the volume density approach. A grid with 475 intersections was created in ImageJ 1.47t software and placed over images of the distal ductus deferens obtained at 200 × magnification. Fifteen randomly selected fields were analyzed per specimen, resulting in a total of 7125 points per snake. The components evaluated were connective tissue, muscular tunica, epithelium, lumen, lumen with spermatozoa, cytoplasmic vesicles in the lumen, secretion in the lumen, and detached cells in the lumen (testicular somatic cells and germ cells, except spermatozoa). 2.4 Ductus deferens histochemistry Histochemical methods assessed ductus deferens secretions, with BB staining detecting noncarbohydrate-conjugated proteins and AB staining identifying acidic glycoproteins. The protocols of BB and AB are described in detail in Resende and Avelar [16] . Neutral glycoprotein was evaluated through PAS staining. Slides were washed in distilled water for five minutes and then incubated in 0.5 % periodic acid for 20 min, followed by another five-minute wash in distilled water. After drying, the tissues were incubated in Schiff’s reagent for 45 min. Subsequently, the slides were washed in three baths of sulfurous water for three minutes each and then rinsed under running water for 30 min. We performed the counterstaining with hematoxylin for five minutes and washed the slides under running water for 10 min. Finally, after drying, Entellan (Merck KGaA, Darmstadt, Germany), a water-free mounting medium, was used for obtaining permanent slides from the three histochemical techniques.
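The derived quantities of Section 2.3 reduce to a few simple formulas; the Python sketch below collects them with invented example numbers (the function names and values are illustrative, not part of the original analysis pipeline).

```python
import numpy as np

def gonadosomatic_index(testes_mass_g, body_mass_g):
    """GSI = (testes mass / body mass) x 100, as in Section 2.1."""
    return testes_mass_g / body_mass_g * 100

def nuclear_volume(diameters_um):
    """Nuclear volume of principal cells, V = (4/3) * pi * r^3, computed
    from measured nuclear diameters (spherical nuclei only)."""
    r = np.asarray(diameters_um) / 2
    return (4 / 3) * np.pi * r ** 3

def volume_density(point_counts):
    """Volume density (%) of each ductus component from grid-point counts,
    e.g. counts over the 7125 points scored per snake."""
    total = sum(point_counts.values())
    return {k: 100 * v / total for k, v in point_counts.items()}

print(f"GSI: {gonadosomatic_index(4.2, 850):.2f} %")            # made-up masses
print(f"mean NVPC: {nuclear_volume([5.1, 4.8, 5.4]).mean():.1f} um^3")
print(volume_density({"epithelium": 1900, "muscle": 1200, "lumen": 4025}))
```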
Areas of interest (secretory granules) were marked, and artifacts or non-relevant structures were removed using Image-Pro Plus 4.5 software (Media Cybernetics, Silver Spring, USA). The software automatically counted the number of selected objects per field [30]. This approach was used to compare the composition of the secretions present in the epithelium and lumen of the ductus deferens between snakes exhibiting active and regressed testes. 2.6 Statistical analysis We examined both distal ductus deferens to assess structural changes in animals with either active or regressed testes. The number and distribution of animals used for each evaluation in the present study are shown in Table S2. The normality of the data was verified using the Shapiro-Wilk test. Student’s t-test was used to analyze the parametric data, including body mass, testes mass, gonadosomatic index, ductus deferens diameter, ductus deferens lumen diameter, nuclear volume of the principal cells, volume density of connective tissue and epithelium, and secretion activity (PAS and BB). For the non-parametric data (ductus deferens epithelium height, and volume density of the total tubule, total lumen, muscular tunica, lumen, lumen with spermatozoa, cytoplasmic vesicles, secretions in the lumen, and detached cells), we applied the Mann-Whitney test. The results were expressed as the mean ± standard deviation (SD). Statistical analysis was performed using GraphPad Prism 8 software (GraphPad Software, San Diego, USA), and significance was set at P < 0.05. 3 Results 3.1 Biometric data of the snakes A total of 28 males of the Neotropical rattlesnake ( C. durissus ) were collected throughout the year ( Table S1 ). The specimens’ mean SVL was 83.9 cm. Body mass ranged from 250 g to 1050 g for snakes with active testes and from 282 g to 1383 g for snakes with regressed testes, with no statistically significant difference between the two groups (t = 0.7; df = 25; P = 0.459, Table 1 ). In contrast, testes mass (t = 4.2; df = 25; P = 0.0003, Table 1 ) and GSI (t = 8.5; df = 23; P < 0.0001, Table 1 ) were significantly higher in animals with active testes than in those with testicular regression. 3.2 Histology The distal ductus deferens of C. durissus consists of a pseudostratified epithelium containing principal and basal cells ( Fig. 1 A). The principal cells are columnar in shape, extending from the basement membrane to the luminal edge, while the basal cells are confined to the basal compartment of the epithelium. The muscular tunica is formed by smooth muscle cells, and the outermost layer of the ductus consists of dense, well-organized connective tissue and blood vessels ( Fig. 1 B). Spermatozoa were observed occupying the lumen of the ductus deferens in almost all animals analyzed, regardless of the histologic classification of the testicular parenchyma. It is worth noting that in one of the specimens sampled in January, during mid-summer in Brazil, the distal ductus deferens contained no spermatozoa or only a few ( Fig. 1 C). 3.3 Morphometry Morphometric data showed no significant differences in either the distal ductus diameter (DDD) (t = 1.2; df = 15; P = 0.247, Fig. 2 A) or the lumen diameter (DDLD) (t = 1.0; df = 15; P = 0.322, Fig. 2 B) between snakes with different testicular conditions. The nuclear volume of principal cells (NVPC) remained constant across specimens with different testicular conditions (t = 1.23; df = 18; P = 0.232, Fig. 2 C).
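The group comparisons reported in this and the following sections follow the decision rule of section 2.6; a minimal sketch using SciPy in place of GraphPad Prism, with simulated stand-in values rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
active = rng.normal(10.3, 1.8, size=14)     # hypothetical DDEH values (um), active testes
regressed = rng.normal(18.3, 5.6, size=14)  # hypothetical DDEH values (um), regressed testes

# Shapiro-Wilk normality check, then Student's t-test or Mann-Whitney U (section 2.6)
if all(stats.shapiro(g).pvalue > 0.05 for g in (active, regressed)):
    res = stats.ttest_ind(active, regressed)
else:
    res = stats.mannwhitneyu(active, regressed)
print(res.statistic, res.pvalue)  # significant if P < 0.05
```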
However, the distal ductus deferens epithelium height (DDEH) was significantly higher in specimens with regressed testes (18.31 ± 5.57 µm) than in those with active testes (10.32 ± 1.77 µm) (t = 5.1; df = 26; P < 0.0001, Fig. 3 A–C). 3.4 Stereology When analyzing the distribution of components in the distal ductus deferens, we observed that the total tubule (epithelium + lumen + lumen with spermatozoa + cytoplasmic vesicles + secretions in the lumen + detached cells) (U = 2987; P = 0.0037, Table 2 ) and total lumen (lumen + lumen with spermatozoa + cytoplasmic vesicles + secretions in the lumen + detached cells) (U = 17; P = 0.0005, Table 2 , Fig. 4 A–B) percentages were higher in snakes with active testes than in those with regressed testes. The analysis of each component separately revealed significant differences in the percentages of the muscular tunica, epithelium, luminal secretions, and detached cells in the lumen among snakes with different testicular conditions ( Table 2 ). The epithelium (t = 2.3; P = 0.0258, Table 2 ) and muscular tunica (U = 45; P = 0.0141, Table 2 ) occupied a larger proportional space in the ductus deferens of snakes with testicular regression ( Fig. 4 C–D). The increase was approximately 75 % for the muscular tunica and 60 % for the epithelium ( Table 2 ). Conversely, significantly greater proportions of secretions (more than 100 times) and detached cells (5 times) were observed in the lumen of snakes with active testes ( Fig. 4 E–F; Table 2 ). Connective tissue, lumen, lumen with spermatozoa, and cytoplasmic vesicles in the lumen did not show significant variation among the groups analyzed ( Table 2 ). 3.5 Histochemistry Our histochemical analysis demonstrated that the distal ductus deferens of C. durissus reacts positively to PAS ( Fig. 5 A, D) and BB ( Fig. 5 B, E) in animals with both active and regressed testes. However, a negative reaction was observed with AB ( Fig. 5 C, F). Secretions were present in the apical region of the principal cells’ cytoplasm and at the epithelial border. Quantification of PAS- and BB-positive secretions revealed more intense secretory activity in the principal cells during testicular activity compared to the testicular regression period (PAS: t = 2.5; df = 8; P = 0.0364; BB: t = 2.7; df = 8; P = 0.0242, Fig. 6 ). 4 Discussion Our results indicate that, despite C. durissus exhibiting seasonal testicular activity [15,16], no variation was found in the proportion of spermatozoa in the distal ductus deferens. The presence of spermatozoa in the lumen of the distal ductus deferens of snakes with regressed testes, along with the lack of variation in the proportion of gametes between specimens with active and inactive gonads, provides strong evidence for the occurrence of sperm storage in C. durissus. Almeida-Santos et al. [21] demonstrated, through laparotomy and extraction of sperm from the ductus deferens, that spermatozoa are present throughout the year. However, during winter (the period of testicular regression), a lower quantity of spermatozoa was observed in the ductus. Notably, in their study, sperm extraction was performed along the entire length of the ductus deferens. Although mating among captive C. durissus has been recorded in autumn, the period when males exhibit regressed testes [21], in the wild, copulation has been observed in summer [22], when the testes are active.
Based on the recent observation of copulation in summer, the species' reproductive cycle could be classified as associated (or synchronous) [11]. Conversely, copulation in autumn would suggest a dissociated (or asynchronous) cycle [11,32]. In this context, the dissociation between gamete production and mating implies that sperm storage is an obligatory tactic for this species [33]. Other studies have found dissociated cycle patterns in Neotropical snakes, such as Bothrops erythromelas [34], B. cotiara [35], and Spilotes pullatus [36]. The presence of spermatozoa in the ductus deferens throughout the year has been reported in various snake species, such as Vipera berus [37], Xerotyphlops vermicularis [38], Seminatrix pygaea [1,5], C. durissus [21], and Austrelaps superbus, Hemiaspis signata, Notechis scutatus, and Suta gouldii [39]. Among Squamata, prolonged storage of gametes has been described only in snakes, and it may play a fundamental role in the reproductive success of species with dissociated cycles [10]. We observed secretory activity in the distal ductus deferens of C. durissus, with the presence of neutral glycoproteins and proteins not conjugated to carbohydrates. These findings are similar to those reported for Agkistrodon piscivorus [9], Dipsas mikanii [8], and B. erythromelas [36]. Consistent with the present study, Barros et al. [36] and Rojas et al. [8] found stronger PAS and BB reactions in the apical region of the ductus deferens epithelium, while AB did not produce a positive reaction. According to Rojas et al. [8], these secretions may be responsible for nourishing spermatozoa during storage. As we demonstrated, the proportion of spermatozoa in the distal ductus deferens did not vary during the main phases of the male reproductive cycle of C. durissus, despite the morpho-functional changes observed for this segment, which clearly followed the gonadal activity cycle. In this context, the percentages of total tubule and lumen increased in snakes with active testes, corresponding to a reduction in these same parameters in snakes with regressed testes. Similarly, Amer and Elshabka [40] described the same response of the ductus deferens in the colubrid snakes Psammophis sibilans and Spalerosophis diadema during the active breeding season. Studies that observed secretory activity in the ductus deferens of snakes did not assess variations in secretions during the reproductive cycle [8,9,36]. However, the quantification of secretions in our study indicates that the ductus deferens is more active during spermatogenic activity, when concentrations of plasma testosterone are higher [16]. During this period, we observed stronger histochemical reactions to PAS and BB, indicating that the principal cells of the epithelium exhibit significant secretory activity. No relationship was found between nuclear volume and the biosynthetic activity of the principal cells of the ductus deferens. Further electron microscopy analyses investigating the protein synthesis machinery may clarify the differences in secretory activity. Although the ductus deferens is more active during the spermatogenesis phase, the epithelium height is lower compared to the testicular regression period. Spermatogenesis is the period of maximum testicular activity; therefore, the distal region of the ductus deferens would be receiving spermatozoa, testicular fluid, and possibly secretions from the efferent ductus and the epididymis [4].
According to Amer and Elshabka [40], these alterations in epithelium height during spermatogenesis are due to an expansion of the ductus deferens for sperm storage, resulting in a thinner epithelium during this phase. We believe that the pressure exerted by these fluids on the epithelium may alter overall cell morphology and decrease the epithelium height, which, for some yet-undetermined reason, does not affect the tubular diameter of the ductus deferens. On the other hand, the higher percentage and height of the epithelium in C. durissus with regressed testes, as observed here, follow the findings obtained for snake species from temperate areas, such as the viperid Cerastes vipera [41] and the colubrids Psammophis sibilans and Spalerosophis diadema [40]. Additionally, Viana et al. [7] observed similar changes in the epithelium of the Neotropical turtle Kinosternon scorpioides. It is likely that the epithelium height is regulated by luminal pressure, which is low in individuals with regressed testes, as previously suggested by Aldridge et al. [10] for other snake species. A lower epithelium is not necessarily associated with reduced secretory activity by the principal epithelium cells; on the contrary, secretory activity in the ductus deferens is higher in specimens with active testes. Our results suggest that principal epithelium cells might be responsive to androgens, as has been observed in various vertebrates [42–45]. Therefore, seasonal variation in the morphology and function of the distal ductus deferens is likely to be affected by testosterone levels, which were higher during testicular activity [16]. Since the evidence suggests that epithelium enlargement is not associated with increased secretory activity, we hypothesize that, during testicular regression, the epithelial cells of the distal ductus deferens assume a role in water reabsorption. This mechanism might be one of the aspects that allow sperm storage in the ductus deferens: the glycoprotein-containing fluids produced in the organ itself and in the epididymis would become more concentrated [46,47]. Sever [6] found apical vesicles in the epithelial cells of the ductus deferens of a colubrid, indicating a possible role in fluid absorption. The muscular tunica is responsible for the contractile movements that push the spermatozoa toward the ampulla of the ductus deferens. Beyond this function, the muscular tunica also appears to play a role in maintaining the total diameter of the ductus throughout the reproductive cycle. As observed, the smooth muscle cells were stretched in snakes with active testes and contracted in those with regressed testes. This suggests a fine volumetric adjustment mechanism that allows the ductus deferens to adapt to the filling condition of its lumen without altering its total diameter. Similarly, in colubrid snakes, the muscular tunica exhibited the same response throughout the reproductive cycle [40]. The presence of few spermatozoa in the lumen of the distal ductus deferens in only one specimen collected during summer may indicate post-copulation, given that mating was recently observed in the wild during this season [22]. Additionally, Resende and Avelar [16] suggested the possibility that C. durissus could also copulate in summer, based on certain reproductive observations. Males collected from December to March (summer in Brazil) exhibited a hypertrophied sexual segment of the kidney, high spermatogenic activity, and elevated plasma testosterone levels [16].
Together, our findings support the hypothesis that males could also copulate in the summer. The mechanism of spermatozoa storage has evolved convergently among species belonging to different groups and constitutes a biological tactic that ensures males can copulate after a long period of hibernation. This is particularly important for species native to temperate areas. In this context, it is well known that the genus Crotalus originated in North America and later migrated to South America via Central America [48,49]. Rattlesnakes from North America hibernate during winter and become reproductively active in the spring [32,50–52]. A critical question remains: what stimuli trigger spermatozoa storage in the ductus deferens of the Neotropical rattlesnake C. durissus? While we propose that this phenomenon may be linked to the retention of an ancestral evolutionary trait, as suggested by Barros et al. [36] for B. erythromelas, this hypothesis raises numerous questions. Moreover, we must also consider the marked differences in reproductive cycles among C. durissus populations in Brazil [53]. Understanding the mechanisms behind this adaptation is crucial for advancing our knowledge of the reproductive biology of this highly adaptable Neotropical genus. In summary, the presence of spermatozoa in the distal ductus deferens of C. durissus across different phases of testicular activity indicates sperm storage, as suggested by the consistent proportion of spermatozoa among the groups analyzed. The intense secretory activity of the epithelial principal cells during the spermatogenic period, marked by PAS- and BB-positive secretions, appears to be associated with gamete maintenance. Furthermore, the reduction in luminal fluid in regressed rattlesnakes is suggestive of a resorptive function of the epithelial cells. We hypothesize that this mechanism may also ensure the viability of stored spermatozoa by increasing the concentration of glycoproteins. However, the duration of sperm viability within the ductus deferens of this species remains unknown [4]. To address this, we are currently evaluating rattlesnake sperm lifespan inside the lumen of the ductus deferens using the cell tracker bromodeoxyuridine. As final remarks, although sperm storage is considered an obligatory part of the male Neotropical rattlesnake reproductive cycle, given the dissociation between summer spermatogenesis and autumn mating, a recent study demonstrated summer copulation for C. durissus [22]. Our results indicate that the maintenance of spermatozoa inside the ductus deferens is a strong evolutionary trait that persists in this species, possibly due to the reduced likelihood of encountering a receptive female in the wild. Understanding the functional mechanisms that support long-term sperm storage may, therefore, unlock new biotechnological opportunities, such as improving assisted reproductive technologies, enhancing conservation efforts for threatened species, and developing novel approaches to sperm preservation and fertility management in both wild and captive populations. 5 Conclusion The present study used histomorphometric and histochemical techniques to investigate variations in the distal ductus deferens of C. durissus throughout the males’ reproductive cycle. Our approach contributes to a better understanding of the sperm storage tactic in males of a Neotropical snake species, highlighting the increased secretory activity of the principal cells during testicular activity.
Ongoing immunohistochemistry studies focusing on determining whether the epithelial cells are responsive to androgen stimulation and whether their absorptive function is mediated by aquaporins will further contribute to a more detailed understanding of the physiology of the distal ductus deferens. CRediT authorship contribution statement Flávia Cappuccio Resende: Writing – review & editing, Writing – original draft, Supervision, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Gleide Fernandes de Avelar: Writing – review & editing, Resources, Formal analysis, Conceptualization. Leonardo Carvalho: Writing – original draft, Methodology, Investigation, Formal analysis, Data curation. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments We would like to thank S. M. Almeida-Santos and T. O. Farias for their suggestions regarding data analysis and manuscript development. We are grateful to the Ezequiel Dias Foundation for financial and logistical support and to the Chico Mendes Institute for Biodiversity Conservation (ICMBio) for providing the license to capture the snake specimens. We also appreciate the technical support of M. L. Santos and J. A. G. Cabral. Special thanks go to Giselle Agostini Cotta and Heloísa Marques for their administrative assistance in the laboratory. We are deeply grateful to Anthony Wagner for the English language revision. Appendix A Supplementary material Supplementary data associated with this article can be found in the online version at doi:10.1016/j.therwi.2024.100114.
REFERENCES:
1. SEVER D (2002)
2. VITT L (2014)
3. GRIBBINS K (2011)
4. TRAUTH S (2011)
5. SEVER D (2010)
6. SEVER D (2004)
7. VIANA D (2014)
8. ROJAS C (2013)
9. SIEGEL D (2009)
10. ALDRIDGE R (2020)
11. WHITTIER J (1987)
12.
13. CAMPBELL J (1989)
14. NOGUEIRA C (2019)
15. SALOMAO M (2002)
16. RESENDE F (2021)
17. ALMEIDASANTOS S (1990)
18. SALOMAO M (1995)
19. SAWAYA R (2008)
20. TOZETTI A (2013)
21. ALMEIDASANTOS S (2004)
22. ALMEIDASANTOS S (2021)
23. ALMEIDASANTOS S (2002)
24. ALMEIDASANTOS S (1997)
25.
26. MATHIES T (2011)
27. LIANG G (2011)
28. DESAJUNIOR A (2012)
29. WARREN K (2014)
30. ABRAMOFF M (2004)
31. COSTA G (2017)
32. ALDRIDGE R (2002)
33. SEIGEL R (1987)
34. BARROS V (2017)
35. SILVA D (2012)
36. BARROS V (2014)
37. MARSHALL A (1957)
38. FOX W (1965)
39. SHINE R (1977)
40. AMER F (1978)
41. SIVAN J (2012)
42. GOYAL H (1997)
43. OLIVEIRA R (2012)
44. NISHIZAWA H (2002)
45. ENDO D (2003)
46. CLULOW J (1998)
47. DASILVA N (2006)
48. ECHEVERRIGARAY S (2001)
49. QUIJADAMASCARENAS J (2007)
50. SCHUETT G (2002)
51. SCHUETT G (2005)
52. WASTELL A (2016)
53. BARROS V (2012)
|
10.1016_j.vgie.2019.12.008.txt
|
TITLE: Colonoscopic resection of appendiceal endometriosis
AUTHORS:
- Nieto, Jose
- Deshmukh, Ameya
- Sharma, Bashar
- Dawod, Enad
ABSTRACT: No abstract available
BODY:
Endometriosis (EM) is the presence of endometrial tissue outside the uterine cavity. It commonly affects women of reproductive age and results in abdominal/pelvic pain and possible infertility. 1 Appendiceal EM is exceedingly rare; it constitutes approximately 3% of all GI EM and accounts for less than 1% of all EM cases. 1 The appendiceal tip and body are the most frequent locations of involvement. An estimated 66% of cases affect the muscular and seromuscular layers of the appendix. 2 Additionally, 33% of cases involve the appendiceal serosa. 2 It is most often found incidentally during appendectomies or colonoscopies, with colonoscopic detection contingent on inversion of the appendix at the appendiceal orifice. Appendiceal intussusception typically manifests as the result of abnormal appendicular peristalsis arising from local irritation. 2 The incidence is approximately 0.01% in patients who have undergone appendectomy, making it an extremely rare phenomenon. 3 A 66-year-old woman was seen with a polypoid lesion found on screening colonoscopy in the appendiceal orifice ( Fig. 1 ). A biopsy specimen could not be taken owing to the submucosal location of the lesion. On repeat colonoscopy, a partially inverted appendix was visualized. A possible carcinoid lesion was included in the differential diagnosis, as was a potential submucosal lesion. Using a double-channel therapeutic endoscope (Olympus GIF-2TH180; North Brooklyn Park, Minn, USA), we identified the appendiceal orifice and the partially inverted appendix. Ten mL of Orise lifting gel (Boston Scientific, Maple Grove, Minn, USA) was injected submucosally ( Fig. 2 ). A 2-snare technique was used to capture and resect the lesion. Initially, 1 snare was passed over the appendix ( Fig. 3 ). Then, the second snare was passed over the distal portion of the partially inverted appendix ( Fig. 4 ). Using traction, we then completely inverted the appendix into the lumen of the cecum. Once the appendix was correctly in position, the snare overlying the proximal base of the appendix was closed, and a standard polypectomy technique was used to resect the appendix. The appendix was then captured with the snare that had been used for traction through the other open channel. Clips were deployed to close the defect ( Fig. 5 ). The appendectomy was completed successfully ( Video 1 , available online at www.VideoGIE.org ). The gross pathologic appearance was that of an appendix infiltrated with endometrial tissue, consistent with appendiceal EM ( Fig. 6 ). Pathologic analysis confirmed negative margins ( Fig. 7 ). Follow-up CT did not reveal any evidence of perforation. The patient was discharged from the hospital within 24 hours. Appendectomies are overwhelmingly performed laparoscopically, and very few case reports have described endoscopic resection of an appendix. Some transcecal appendectomies have been performed successfully for appendicular pathologic conditions, including polyps, although this technique requires a circumferential endoscopic full-thickness incision around the appendiceal orifice because of inadequate inversion of the appendix. 4 , 5 Disclosure Dr Nieto is a consultant for Boston Scientific. All other authors disclosed no financial relationships relevant to this publication. Supplementary data Video 1 Colonoscopic resection of appendiceal endometriosis.
REFERENCES:
1. GUSTOFSON R (2006)
2. GUPTA R (2019)
3. TRAN C (2019)
4. LIU B (2019)
5. YUAN X (2019)
|
10.1016_j.matdes.2023.111683.txt
|
TITLE: Ultrafast response of self-powered humidity sensor of flexible graphene oxide film
AUTHORS:
- Zeng, Songwei
- Pan, Qiubo
- Huang, Zhijing
- Gu, Chenjie
- Wang, Tao
- Xu, Jinhui
- Yan, Zihan
- Zhao, Feiyu
- Li, Pei
- Tu, Yusong
- Fan, Yan
- Chen, Liang
ABSTRACT:
Next-generation humidity sensors, as smart devices for artificial skin or flexible electronics, will need to be self-powered, flexible, and capable of fast, accurate detection in the microenvironment. Although various flexible humidity sensors have already been demonstrated, most of them rely on large external power sources to operate and still suffer from slow response, poor selectivity under environmental stimuli, and expensive and complicated fabrication procedures. Here, we realize a self-powered humidity sensor with ultrafast response and recovery times of less than 0.3 s using a flexible GO film. Importantly, this is the fastest response among state-of-the-art humidity sensors. Moreover, the sensor has high sensitivity within a wide RH range from 33% to 98% and an excellent selective response to humidity under multiple environmental stimuli, owing to its moisture-induced generator. Theoretical computations reveal that protons, generated spontaneously from oxygen-containing groups and water molecules, exhibit fast Grotthuss-hopping diffusion. In particular, we build four real-time measurement systems capable of human respiration monitoring, smart leaf-surface humidity sensing, and energy harvesting. Our work provides a simple way to fabricate a next-generation self-powered sensor using GO film with outstanding humidity-sensing performance.
BODY:
1 Introduction As one of the smart devices in artificial skin and flexible electronics, flexible humidity sensors offer a means to achieve sensitive humidity measurement in the microenvironment [1] , such as human skin [2] , the respiratory tract [3] , surgical incisions [4] , plant leaves [5,6] , and surface reactions at gas–liquid or gas–solid interfaces [7] , which is critical for human biology research, physiological tracking for fitness and wellness applications, and agricultural and industrial production [1,8,9] . Despite these remarkable achievements and their promise, developing flexible humidity sensors with specific characteristics or functions still presents challenges. These include self-powered operation or low power consumption to simplify and miniaturize devices, faster response and recovery times, improved sensitivity, and a wider humidity detection range [9–12] . Recently, ultrathin solar cells [13] and near-field communication (NFC) antennas [14] were combined as ultraflexible power sources, potentially for self-powered sensor applications. However, the integrated solar and NFC technologies still power the sensors from external sources, which limits the potential use of humidity sensors in microenvironments with relatively low light and high humidity [8] . A slow response is another key challenge for humidity sensors [15] , owing to the slow diffusion of water molecules in sensing materials; previously reported response and recovery times were typically tens of seconds. Composite materials [16–18] are used in various fields because of their excellent properties, and graphene oxide (GO)-based composites are also emerging in sensors. One potential sensor material is graphene oxide film, which performs well as a moisture-enabled electricity generator [15,19] . Water can permeate unimpeded through micrometer-thick GO film due to its two-dimensional low-friction nanochannels and nanocapillary-like high pressure [20] . Both the physically adsorbed water and the oxygen-containing groups on GO flakes provide GO film with proton conductivity [21–23] , which is highly sensitive to water molecules [15,24,25] . A recently reported GO-based composite film is self-powered, with fast response and recovery times of 0.8 s and 2.4 s [26] . Despite these great advances, such sensors still suffer from slow response, poor selectivity under environmental stimuli, and complicated fabrication procedures. Here, we report the ultrafast humidity response of a self-powered GO film fabricated by a simple drop-casting method. The GO film exhibited a moisture-induced voltage of up to 0.4 V, ultrafast response and recovery times of less than 0.3 s, and excellent flexibility and stability when periodically touched with a finger at 98% RH. Remarkably, this response is about one to two orders of magnitude faster than that of traditional humidity sensors reported previously [27–29] . There was no mechanical or photo-induced response of the GO film, confirming an excellent selective response to humidity. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) results showed that the GO film was well stacked by GO flakes, and the interlayer spacing between GO flakes varied with relative humidity. Theoretical studies suggest that protons generated spontaneously from oxygen-containing groups and water molecules have fast diffusion behavior.
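The XRD-derived interlayer spacings referred to here and in the Results follow from Bragg's law, d = λ/(2 sin θ); a quick Python check, assuming Cu Kα radiation and illustrative peak positions (the measured 2θ values are not given in the text):

```python
import math

def interlayer_spacing(two_theta_deg: float, wavelength_A: float = 1.5406) -> float:
    """Bragg's law: d = lambda / (2 sin theta). Cu K-alpha wavelength assumed."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength_A / (2 * math.sin(theta))

# Illustrative GO (001) peak positions for a dry vs. a humidified film
print(round(interlayer_spacing(13.2), 1))   # -> 6.7 (Angstrom)
print(round(interlayer_spacing(7.36), 1))   # -> 12.0 (Angstrom)
```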
We further realized real-time respiration monitoring of humans, a smart leaf-surface humidity sensor, and energy harvesting, confirming a simple avenue for fabricating GO films with exceptional humidity-sensing performance. 2 Results GO films were prepared from a GO suspension via the drop-casting method (see Methods). As illustrated in Fig. 1 (a), the humidity sensor is flexible and has a simple sandwich structure. The output voltage of the prepared GO films was recorded with an electrochemical workstation (CHI760e). When we periodically touched the film surface (thickness of ∼ 10 μm) with a clean finger (the finger surface is at about 98% RH), the output voltage showed a periodic ultrafast response with a voltage of ∼ 0.3 V. When touched with a gloved finger or irradiated with light, there was no obvious output, as shown in Fig. 1 (b). By individually testing and enlarging the detail of the voltage response in Fig. 1 (c), we estimated the response time (tr) and recovery time (tf) to be about 0.28 s and 0.30 s, respectively. We note that this response is about one to two orders of magnitude faster than that of traditional humidity sensors [30–38] ; the fastest previously reported response and recovery times were 0.8 s and 2.4 s [26] , as shown in Fig. 1 (d). Thus, the GO film exhibited a good selective response to humidity under typical environmental stimuli, with moisture-induced self-powering and the fastest response. We further verified the humidity response of the GO film over a wide range of humidity. As shown in Fig. 1 (e), as the relative humidity increased from 33% to 98% at 25 °C, the moisture-induced voltage of the GO film increased from ∼ 10 mV to ∼ 0.33 V, showing a sensitive humidity response within a wide humidity range. The humidity sensitivity curve presents two linear regimes: 1.1 mV/%RH at low humidity (33% RH ∼ 70% RH) and 10.0 mV/%RH at high humidity (70% RH ∼ 98% RH), i.e., higher sensitivity at higher humidity. We prepared GO films with thicknesses of about 5 μm ∼ 25 μm, controlled by the amount of GO suspension loaded on the substrate. The GO film with a thickness of 15 μm had the largest moisture-induced voltage of ∼ 0.42 V at 98% RH; decreasing or increasing the thickness reduced the moisture-induced voltage. We note that a thicker GO film can hold more absorbed water [24] but has lower water permeance [39] . The former increases the proton concentration and thus enhances the moisture-induced voltage, while the latter expands the space charge region and prolongs the time needed for internal charge diffusion and drift to stabilize, leading to a slower response. The effect of moisture on proton conductivity in the GO film was further studied by electrochemical impedance spectroscopy. As shown in Fig. 2 (b), an increase in proton conductivity of two orders of magnitude was observed as the relative humidity increased from 33% to 85%. The conductivity of the GO film is mainly determined by proton conductivity, which varies with humidity from 6.2 × 10 −5 S cm −1 to 1.1 × 10 −3 S cm −1 , with a correspondingly small electron conductivity of 9.0 × 10 −7 S cm −1 to 1.2 × 10 −5 S cm −1 . The large increase in proton conductivity shows a high sensitivity to moisture. This was further confirmed by XRD spectra, which showed that the interlayer spacing increased from ∼ 6.7 Å to 12.0 Å with relative humidity ( Fig. 2 (c)).
An increase in the water content of the film raises the proton concentration and enhances the moisture-induced voltage. Unbiased ab initio molecular dynamics (AIMD) simulations further confirmed the spontaneous proton transfer on the surface of GO with adsorbed water molecules at room temperature. Fig. 3 shows AIMD snapshots of GO with or without adsorbed water molecules (see Methods). For the GO flake without adsorbed water, no proton transfer occurred on the surface from 0 ps to 40 ps ( Fig. 3 (a)). In contrast, with water adsorbed, three types of proton transfer occurred frequently during the 40 ps, defined as type I, type II, and type III ( Fig. 3 (b)). For type I, a proton transfers from a hydroxyl group to a neighboring water molecule at t = 10.26 ps, 21.92 ps, and 40.0 ps. Proton transfers of type II occur between a neighboring water molecule and a dangling C-O bond at t = 10.29 ps, 22.0 ps, and 39.97 ps. For type III, a proton transfers from a hydroxyl group to a neighboring dangling C-O bond at t = 13.48 ps. The AIMD results for the two systems indicate that water molecules can act as a mediator to facilitate spontaneous proton transfer along the GO interface, consistent with our previous DFT calculations [21–24] . In response to water molecules, proton transfer at the GO interface may cause a potential difference across the GO sheet. These theoretical studies suggest that protons generated from oxygen-containing groups and water molecules diffuse rapidly via Grotthuss hopping [40] . 3 Discussion To further demonstrate potential applications, we used the sensors for human respiration, leaf-surface humidity, and energy-harvesting tests. As shown in Fig. 4 (a), we placed GO humidity sensors at two points inside a pipe and used an oscilloscope to detect the voltage signals of the sensors. The humidity and velocity of the respiratory airflow could then be monitored in real time. Based on the ultrafast response time, we estimated the respiratory airflow velocity to be about 111.1 cm/s, consistent with results measured by existing laboratory airflow meters. The ultrafast response gives a high cutoff frequency, well above the respiratory frequency, thereby enabling accurate measurement of humidity and airflow. Compared with the traditional measurement of oral-nasal airflow using nasal pressure thermistors or thermocouples [41] , which provide a semiquantitative estimate of airflow, our GO humidity sensor is more accurate and convenient and provides richer physiological information. The surface humidity of gardenia leaves was monitored during long-term operation (60 h). The gardenia was placed in a laboratory environment (25 °C, 50% RH) near a window, and the GO humidity sensor was transferred to the leaf surface with water-soluble tape, as shown in Fig. 4 (c). The RH traces in Fig. 4 (d) showed that the humidity on the leaf surface rose slowly from ∼ 55% RH to 80% RH at night, while after sunrise it decreased. In addition, the leaf-surface humidity was consistently higher than the ambient humidity of ∼ 45% RH. These results show that external sunlight can affect the surface wetness of the gardenia leaf, revealing the effect of sunlight on the structure and composition of the leaf surface. Moreover, our GO humidity sensor can be used as a moisture-electric power generator.
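The two-point respiratory airflow measurement described above is effectively a time-of-flight estimate; a minimal sketch, with a hypothetical sensor spacing and delay chosen only to reproduce the ~111 cm/s figure:

```python
def airflow_velocity_cm_s(sensor_spacing_cm: float, arrival_delay_s: float) -> float:
    """Velocity from the arrival-time difference of the humidity front
    between two GO sensors mounted along a pipe."""
    return sensor_spacing_cm / arrival_delay_s

# Hypothetical: sensors 10 cm apart, front arrives 0.09 s later at the second sensor
print(round(airflow_velocity_cm_s(10.0, 0.09), 1))  # -> 111.1
```

Resolving a delay of this size is only possible because the sensor response (<0.3 s) is fast relative to the breath cycle.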
Several GO films of similar size (diameter of 10 mm) and thickness (∼15 μm) were connected in series as a self-powered device ( Fig. 4 (e)). A series of light-emitting diodes (LEDs) could be turned on using the device on the arm. Three devices in series produced a high output voltage of ∼ 1.3 V and a current density of 500 nA/cm 2 . An array of 26 individually self-powered humidity sensors on an ITO substrate with a radius of 50 mm was further fabricated to demonstrate application in large-area multiplexed sensing, as shown in Fig. 4 (g). The output voltage increased linearly with the number of sensors in series; for example, the output voltage of 10 sensors in series reached ∼ 4 V at 98% RH and 25 °C. For sensors in parallel, the output voltage remained stable at ∼ 0.4 V at 98% RH as the number of sensors increased. These results demonstrate that the humidity monitoring and energy harvesting of our GO sensors can be enhanced by series and parallel connection to meet the power requirements of most wearable electronic devices [42,43] , with potential for large-scale manufacturing. Overall, we successfully developed a GO humidity sensor with superior performance and multifunctionality, including self-powered operation, good humidity selectivity, ultrafast and accurate detection over a wide range of relative humidity, and flexibility and portability in the microenvironment. Importantly, this is the fastest response among state-of-the-art humidity sensors. Sensors in series can increase the output enough to meet the requirements of powering most wearable electronic devices. This advanced humidity sensor is based on our understanding of the generation and diffusion of protons in GO film, and its production is scalable. We note that our previous works showed that GO flakes of smaller size [44] and with more hydroxyl groups [45] have much higher water permeance, suggesting that the performance of the GO humidity sensor could be further improved. Our findings represent a step toward GO-based applications in miniature energy harvesting and storage devices, surface humidity sensors, and other smart flexible electronics. 4 Materials and Methods 4.1 Preparation of GO solution The GO suspension was prepared from graphite powder by the modified Hummers method, as previously reported [39] . Graphite powders were added to a concentrated H 2 SO 4 , K 2 S 2 O 8 , and P 2 O 5 solution with continuous stirring for several hours. The mixture was then washed with deionized (DI) water, centrifuged, and dried at 60 °C to obtain preoxidized graphite. This was further oxidized in concentrated H 2 SO 4 and KMnO 4 , diluted with DI water, followed by the addition of H 2 O 2 . The product was centrifuged and washed with 1:10 HCl aqueous solution and DI water sequentially to remove ionic species. Subsequently, the layered GO was collected, diluted with 1 L of DI water, and subjected to ultrasonic treatment. Finally, a GO suspension was obtained with a concentration of about 5 mg mL −1 . 4.2 Fabrication of the humidity sensor First, a rectangular polyimide (PI) film (25 mm × 10 mm, thickness of ∼ 25 μm) covered by a mask plate was treated with ozone, and a platinum electrode layer with a thickness of ∼ 50 nm was deposited by magnetron sputtering in a vacuum ion coater. Next, the GO suspension was drop-cast on the substrate and dried at 50 °C for 1.5 h, and another platinum electrode was then plated on the exposed surface of the GO film.
The thickness of the GO films was controlled by the amount of GO suspension loaded on the substrate. The output voltage of the prepared GO films at different humidities was then recorded with an electrochemical workstation (CHI760e). 4.3 Two methods of relative humidity control A constant temperature and humidity box was used to control temperature and humidity. The box selected in this experiment was the HMS-type constant temperature and humidity box from Shanghai Jinghong Experimental Equipment Co., Ltd., with a temperature control range of 10 ∼ 50 °C and a humidity control range of 50 ∼ 98% RH. Constant-humidity salt solutions were placed in a closed measuring container; different salt solutions produce measurement environments with different humidities. The salt solutions selected in this experiment included MgCl 2 , K 2 CO 3 , NaBr, NaCl, and KCl solutions, with corresponding relative humidities of 33%, 43%, 59%, 75%, and 85% at 25 °C, respectively. 4.4 Output signal measurement The computer-controlled electrochemical workstation CHI760e was used to record the output electrical signals of the sensor under different humidities. 4.5 Ab initio molecular dynamics (AIMD) simulations The two simulation systems consisted of GO sheets with or without 26 adsorbed water molecules. Both periodic boxes had xyz dimensions of 14.76 × 17.04 × 21.02 Å 3 . Following the Shi-Tu model [46] , the GO sheet was constructed with 96 carbon atoms, 10 epoxy groups, and 11 hydroxyl groups. The DFT calculations and AIMD simulations were carried out with the CP2K 9.1 package [47] . We first performed global geometry optimization of the systems, and the optimized systems were then used as the initial structures for the AIMD simulations. In the AIMD simulations, the hybrid Gaussian and plane-wave scheme [48] in DFT was employed using the Quickstep module. We used the revised Perdew–Burke–Ernzerhof (revPBE) [49] exchange–correlation functional with the DFT-D3 dispersion correction to obtain a reasonable description of the interactions between water and GO sheets [50] . The double-zeta valence polarized basis set was used for all atomic kinds, and the electronic cores were represented by Goedecker–Teter–Hutter (GTH) pseudopotentials [51] . The electronic density was expanded in plane waves with a cutoff of 460 Ry, and the self-consistent field convergence criterion was set to 10 −6 a.u. The simulation was performed in the canonical ensemble (NVT), and the temperature was maintained at 300 K by a Nosé–Hoover thermostat with a time step of 0.5 fs. The system was simulated for 40 ps without any constraints. 5 Conclusion In conclusion, we realized a self-powered humidity sensor with ultrafast response and recovery times of less than 0.3 s using a flexible GO film. Importantly, this is the fastest response among state-of-the-art humidity sensors. Moreover, the sensor has high sensitivity within a wide RH range from 33% to 98% and an excellent selective response to humidity under multiple environmental stimuli, owing to its moisture-induced generator. An array of self-powered humidity sensors was further fabricated to demonstrate its application in large-area multiplexed sensing. This work offers a method for a high-performance humidity sensor using GO film with large-scale manufacturing potential. Author contributions L.C. conceived the ideas. L.C., Y.F., S.Z., and C.G.
designed the experiments and co-wrote the manuscript. S.Z., Q.P., T.W., C.G., and F.Z. performed the experiments and prepared the data graphs. Y.T. and Z.H. designed and performed the simulations. All authors discussed the results and commented on the manuscript. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements This work was supported by the National Natural Science Foundation of China (12074341, 11905186, and 12075201) and the Scientific Research and Developed Funds of Ningbo University (No. ZX2022000015).
REFERENCES:
1. LIU R (2022)
2. HUA Q (2018)
3. WANG B (2020)
4. BRUCE N (2007)
5. OREN S (2017)
6. GIRALDO J (2019)
7. LONG Y (2021)
8. WANG X (2015)
9. WANG C (2019)
10. KUANG Q (2007)
11. SOMEYA T (2019)
12. ZHANG S (2018)
13.
14. SHAO Y (2022)
15. YU X (2017)
16. CAO D (2021)
17. CAO D (2021)
18. WANG X (2021)
19. LI X (2022)
20. JOSHI R (2014)
21. TU Y (2020)
22. ZHU G (2021)
23. YAN Z (2022)
24. BI H (2013)
25. LIANG Y (2018)
26. LEI D (2022)
27. LI T (2017)
28. CHO M (2020)
29. ZHANG D (2020)
30. WU J (2019)
31. TOMER V (2015)
32. YUAN M (2015)
33. WANG L (2011)
34. ZHANG Y (2015)
35.
36. IMRANAB Z (2013)
37. BAUSKAR D (2012)
38. REN K (2017)
39. CHEN L (2017)
40. AGMON N (1995)
41. FARRE R (1998)
42. LV J (2018)
43. ZHANG Q (2022)
44. ZHANG L (2020)
45. YI R (2021)
46. YANG J (2014)
47. HUTTER J (2014)
48. LIPPERT G (1997)
49. ZHANG Y (1998)
50. GRIMME S (2010)
51. GOEDECKER S (1996)
|
10.1016_j.gimo.2025.103116.txt
|
TITLE: P747: The collection and use of patient sex, gender, race, ethnicity, and ancestry data among US-based clinical genomics laboratories
AUTHORS:
- Golden-Grant, Katie
- Candadai, Sarah Clowes
- Sternen, Darci
- Kotzer, Katrina
ABSTRACT: No abstract available
BODY:
Introduction Clinical genetics organizations are responsible for continually assessing their practices and procedures using a lens of diversity, equity, and inclusion. However, there is a dearth of guidance pertaining to these principles from the regulatory and professional organizations that laboratories generally defer to for direction. The collection of patient demographics, including sex, gender, and race, ethnicity, and ancestry (REA), by genetic testing laboratories is not standardized and lacks consensus. Furthermore, evidence for the precise scientific value of this information in laboratory workflows is limited. Several studies have called for the development of consensus guidelines to aid both clinicians and researchers in collecting useful sex, gender, and REA data, and to reduce potential harms from the use of historic, confusing, or inaccurate language. In an effort to inform guideline development, we conducted a survey to assess the ways in which sex, gender, and REA data are collected and utilized among clinical genomics laboratories. Methods The survey was developed using SurveyMonkey and distributed by email to laboratory directors and genetic counselors at 30 different US-based laboratories offering clinical exome and/or genome sequencing. Responses were anonymized and extracted into Excel for analysis by the authors. Results The survey was completed by 11 individuals, each representing a different clinical lab (37% response rate). Four participants work at commercial laboratories, and the remainder work in hospital- or academic-based labs. Self-reported roles within their respective laboratories included genetic counselor, director, manager, administrator, and variant scientist. According to survey responses, the most common source of patient sex, gender, and REA data across laboratories is the test requisition form. Patient sex is collected by all 11 labs, whereas gender is collected by two. Ten participants reported that their lab relies on patient sex as a quality assurance metric. Other common uses of sex data include variant interpretation and customized report content. Among the 8 labs that report collecting REA data, the terms used and the sources of this information were markedly variable, with ethnicity being the most common term ascertained. Notably, 5 participants indicated that their laboratory does not routinely use patient REA data at any point in the testing pipeline. Among those that did report using REA data, the most common uses were variant interpretation, customized report content, and post-testing data analysis. Participants noted various drivers of their collection practices, including clinical necessity, software requirements, regulatory requirements, and historic practice. Seven participants indicated that their lab had made recent changes to its processes for the collection or use of patient sex and gender data, while 4 reported recent changes in the collection or use of patient REA data. Four participants reported the lack of published guidelines as a barrier to making changes to their test request form or other processes related to the collection and use of patient REA data. Conclusion Taken together, the results of this survey highlight a dependence on patient sex for quality assurance and minimal use of patient gender data. Additionally, methods for the collection of REA data are highly variable, and the utility of these data appears to be limited or absent in most laboratories. The results from this study can be used to inform the development of guidelines, which are needed to help standardize the collection of demographic data and to promote inclusive language practices in clinical genetics laboratories.
REFERENCES:
No references available
|
10.1016_j.nxener.2025.100317.txt
|
TITLE: Electronic interaction coupling of Fe-ZrO2 enables efficient and stable ORR electrocatalyst for long-cycling Zn-air battery
AUTHORS:
- Liu, Zheng
- Chen, Shuo
- Li, Shouzhu
- Yan, Jianhua
ABSTRACT:
Metallic Fe single atoms are efficient catalysts for rechargeable zinc-air batteries (ZABs) but face problems such as easy deactivation and instability in use. Here, we report a stable Fe-ZrO2 electrocatalyst for oxygen reduction reaction (ORR) catalysis. Atomically coupled Fe-O-Zr heterointerfaces are formed by embedding Fe nanodots (around 18 nm) into ZrO2 nanoparticles dispersed in nitrogen-doped bubble-like porous carbon nanofibers (PCNFs). In this structure, Fe can share electrons with ZrO2 to form an interfacial coupling Fe-O-Zr bond that serves as a bridge for charge transfer, in which ZrO2 acts as an electron promoter to facilitate electron transfer from Fe to the interface, thereby inhibiting the rapid deactivation of Fe and accelerating the activation and conversion of intermediate adsorbates. As a result, the electrocatalyst with a high Fe loading (7.96 wt%) achieves a high half-wave potential of 0.868 V, with 95.3% of activity retained after cycling for 39600 s. The ZABs show stable open-circuit voltages and high capacities of 823.9 mAh·g−1, and can run stably for 1560 cycles at 10 mA·cm−2 with a round-trip efficiency of 51%, exhibiting superior cycling stability.
BODY:
1 Introduction The ORR kinetics occurring at air cathodes plays a critical role in energy technologies such as fuel cells and rechargeable metal-air batteries (MABs) [1-5] . In the cathode reaction, slow ORR kinetics is generally caused by the high adsorption energy required for O 2 and intermediates. The most widely used noble metals, such as Pt and Pd, show high efficiency, but their high cost, poisoning effects, and poor stability hinder the commercial application of MABs [6-10] . Considering these issues, it is imperative to design and develop stable and sustainable non-precious-metal ORR catalysts. As alternatives, low-cost transition metals (TMs) with rich electronic structures and energy levels have garnered widespread attention and are considered promising candidates to replace noble metals [11,12] . In particular, processing TMs into nanoparticles (NPs) is an active research direction, as particle size, morphology, and dispersion greatly influence catalytic activity [13-16] . Moreover, every metal atom in TM catalysts can serve as an active site, allowing maximum exposure to the reaction environment and effectively enhancing reaction kinetics [17,18] . Nevertheless, metal NPs or single atoms are typically randomly dispersed on the substrate in TM catalysts, which leads to aggregation and loss of active sites and thus significantly reduces the activity and stability of the catalysts [19-21] . Using N-doped porous carbon materials to support and disperse TM NPs has been regarded as a promising strategy due to the synergistic effects between the porous carbon framework and metal centers [22-28] . For instance, a recently reported electrocatalyst of Fe nanodots in porous carbon showed ORR activity comparable to the commercial Pt/C catalyst, since the large specific surface area of the Fe nanodots provided more active sites [29] . Throughout the ORR, the triphase interface of electrocatalyst, electrolyte, and gas jointly controls the adsorption of intermediate reactants and the diffusion of electrons. Hierarchical porous carbon can effectively regulate the micro-environment of the reaction interface, promoting the adsorption and dissociation of oxygen and intermediate products and preventing the clustering of TM atoms [33,34] . The problems with such catalytic systems are that isolated individual metal NPs limit the activity of single-metal catalysts, while low TM NP loading and complex preparation limit industrial application. In addition, the electronic structures of TMs are easily affected by reactions, especially under strongly acidic or alkaline conditions, leading to degradation or deactivation as a large number of electrons are lost in a short time. Moreover, the weak interaction between TMs and the carbon support, as well as corrosion of the carbon framework by the electrolyte solution, causes catalyst detachment, affecting catalyst stability [35,36] . In light of this, catalysts with metal-support interactions have drawn attention, since the support can better stabilize TMs [30-32] . Here, we propose a strategy of using interfacial electronic coupling of a metal oxide to achieve efficient and stable Fe-based ORR electrocatalysts for zinc-air batteries (ZABs).
An electrospinning and annealing method was developed to fabricate a corrosion-resistant N-doped bubble-like porous carbon nanofiber electrocatalyst containing dispersed Fe nanodots and ZrO 2 NPs (ZrO 2 -Fe@N-PCNFs), in which Fe nanodots were embedded in ZrO 2 to form an electronically coupled Fe-O-Zr heterointerface. The interfacial chemical bonds acted as a bridge for charge transfer, and ZrO 2 acted as an electron acceptor to promote directional electron transfer from Fe to the oxygen molecule, which enhanced electron accumulation and reactivity at the interface, thus effectively reducing the O-O bond dissociation energy of O 2 and accelerating the rate-determining step (RDS) of the reaction kinetics. Density functional theory (DFT) calculations confirmed that the enriched electrons tended to accumulate around the bonding oxygen in Fe-O-Zr, thereby altering the interfacial charge density and preventing the rapid loss of electrons from Fe. As a result, the electrocatalyst exhibited efficient 4e - ORR catalysis and high ORR activity under alkaline conditions, with a half-wave potential (E 1/2 ) of 0.868 V, higher than that of the commercial Pt/C catalyst (0.85 V). Furthermore, the rechargeable ZABs demonstrated both a high capacity of 823.9 mAh·g −1 and stability over 1560 cycles at 10 mA·cm −2 , showing promising commercial prospects and outperforming most reported ZABs. 2 Results 2.1 Material synthesis and interfacial electron transfer mechanism of Fe-O-Zr in 4e - ORR The detailed process of sol-gel electrospinning followed by annealing for the fabrication of flexible ZrO 2 -Fe@N-PCNFs films is shown in Fig. S1 . Here, polyvinylpyrrolidone (PVP), zirconium acetate tetrahydrate (Zr(Ac) 4 ·4 H 2 O), ferrous acetate dihydrate (Fe(Ac) 2 ·2 H 2 O), and melamine (C 3 H 6 N 6 ) were used as precursors, polytetrafluoroethylene (PTFE) was used as a pore-forming agent, and deionized water was used as the solvent. Fig. S2 shows the physical characterization of the prepared sols, which presented a pink color. The electrospun precursor nanofibers showed a sorrel color and became brown after oxidation ( Figs. S3 a - b). During the subsequent annealing in a N 2 atmosphere, PTFE NPs gradually decomposed and left hierarchical bubble-like pores in the CNFs ( Fig. S3c ). The cross-linking interaction between Zr(Ac) 4 , Fe(Ac) 2 , and PVP ( Fig. S2d ) enabled uniform distribution of Fe and ZrO 2 NPs within the PCNFs. Interestingly, Fe(Ac) 2 was reduced to Fe nanodots during annealing, which were then uniformly dispersed and anchored on ZrO 2 NPs ( Fig. S4 ), effectively limiting the growth of the Fe nanodots [28] . Fig. 1 (a) summarizes the unique electrocatalytic mechanism of ZrO 2 -Fe@N-PCNFs. The lattice matching between Fe and ZrO 2 created lattice stress and formed shared-electron coupling at the Fe-O-Zr bonds. Considering the interfacial electronic states, electrons tended to transfer from Fe to ZrO 2 to balance the Fermi levels. This resulted in strong local reduction activity that accelerated the activation and conversion of O 2 at the interface, thus facilitating efficient 4e - transfer and the stability of the electrocatalyst. Specifically, the 4e - ORR process ( Fig. 1 b) involves three intermediates: *OOH, *O, and *OH. The enriched electrons at the Fe/ZrO 2 interface accelerated the adsorption and dissociation of O 2 molecules and intermediates. First, they lowered the O-O bond dissociation energy, making it easier for O 2 to dissociate into two separate *O radicals at the interface.
These radicals then reacted with H 2 O to form *OOH. The increased electron density at the Fe-O-Zr interface ensured that sufficient electrons were available for *OOH to be dissociated into *O, thus suppressing the 2e - transfer path (*OOH + H 2 O + e - → H 2 O 2 ) and improving the 4e - transfer efficiency. 2.2 Materials characterization Fig. 2 (a) shows that the as-synthesized ZrO 2 -Fe@N-PCNFs had a diameter of around 230 nm. From the scanning electron microscopy (SEM) images of different samples, it was found that the fibers with a Zr:Fe ratio of 1:1.5 exhibited more uniform diameters, more pores, and larger specific surface areas ( Fig. S5 ). Under similar synthesis conditions, 2 control samples, Fe@N-PCNFs (d∼350 nm) and ZrO 2 @N-PCNFs (d∼235 nm), were prepared. The Fe@N-PCNFs film was prone to breaking into fragments, with individual fibers tending to fracture, exhibiting rough surfaces and poor strength ( Figs. S6 a and b). In comparison, the ZrO 2 -Fe@N-PCNFs film showed better mechanical properties ( Figs. S6 c and d). The transmission electron microscopy (TEM) images reveal that Fe-ZrO 2 NPs with a diameter of around 20 nm were densely dispersed within the N-PCNFs ( Fig. 2 b). Energy-dispersive X-ray spectroscopy (EDS) further confirmed the uniform distribution of Zr and Fe in the fiber ( Fig. S7 ), substantiating the effectiveness of using ZrO 2 to adsorb and disperse the Fe nanodots. The lattice spacings of 0.291 nm and 0.208 nm were consistent with the characteristic planes of ZrO 2 (1 1 1) (PDF No.01-0750) and Fe (1 1 1) (PDF No.98-0258), respectively. Lattice fringe continuity between metallic Fe and ZrO 2 was observed in Fig. 2 (c). The close contact and alignment of the lattice fringes suggest that the Fe NPs are stably anchored onto the ZrO 2 support, facilitating interfacial electron transfer and enhancing both the structural stability and the electrocatalytic performance of the ZrO 2 -Fe system. Such interactions can enhance electronic transfer between the metal and the support, promote electron transfer during the ORR, and improve the anchoring stability of the Fe NPs during long-term operation. When the Fe nanodots existed independently, their lattice fringes were indistinct ( Fig. S8 ). However, the lattice fringes at the ZrO 2 /Fe interface were much clearer, indicating that the ZrO 2 NPs altered the electronic structure of Fe by enhancing its crystallinity. This provides strong evidence of the metal-support interaction between Fe and ZrO 2 , which effectively promoted electron transfer. Thermogravimetric (TG) analysis and inductively coupled plasma mass spectrometry (ICP-MS) tests confirmed that the content of Fe nanodots was 7.9 wt% in ZrO 2 -Fe@N-PCNFs ( Fig. S9 ). Fig. 2 (d) shows the crystal structures of the different materials as examined by X-ray diffraction (XRD): ZrO 2 -Fe@N-PCNFs clearly displayed diffraction peaks of monoclinic ZrO 2 (PDF No.01-0750) and metallic Fe (PDF No.98-0258). The peak centered at 43.4° primarily corresponded to the (1 1 1) plane of metallic Fe.
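As a quick consistency check (ours, not the authors'), Bragg's law connects the 43.4° XRD peak position to the 0.208 nm TEM fringe spacing reported above. The sketch assumes a Cu Kα source, which is typical for the Bruker D8 diffractometer named in the Experimental section but is not explicitly stated in the paper:

```python
import math

# Relate the XRD peak at 2-theta = 43.4 deg (Fig. 2d) to the 0.208 nm
# Fe (1 1 1) fringe spacing seen in TEM, via Bragg's law: lambda = 2 d sin(theta).
wavelength_nm = 0.15406          # Cu K-alpha wavelength (assumed source)
two_theta_deg = 43.4             # peak position from Fig. 2(d)

theta_rad = math.radians(two_theta_deg / 2)
d_nm = wavelength_nm / (2 * math.sin(theta_rad))
print(f"d = {d_nm:.3f} nm")      # ~0.208 nm, matching the reported TEM spacing
```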
X-ray photoelectron spectroscopy (XPS) was employed to check the functional groups and valence states in ZrO 2 -Fe@N-PCNFs, revealing a survey spectrum that included C 1s, N 1s, O 1s, Zr 3d, and Fe 2p ( Fig. S10 ). The Zr 3d spectrum contained 2 peaks at 182.4 eV and 184.8 eV, both attributed to ZrO 2 ( Fig. 2 e). The Fe 2p spectrum primarily comprised Fe 2p 1/2 and Fe 2p 3/2 , in which the peaks at 707.15 eV and 720.10 eV corresponded to metallic Fe (Fe 0 ) and the peaks at 710.76 eV and 723.82 eV corresponded to Fe 3+ ( Fig. 2 f). Additionally, there was a satellite peak of Fe 3+ at 712.28 eV. The C 1s spectrum indicated the presence of C-C sp 2 , C-C sp 3 , C-N, and C=O bonds, with the C-N peak suggesting that nitrogen was doped into the CNFs, forming carbon defects. In the O 1s spectrum, the 2 peaks at 530.3 eV and 532.3 eV corresponded to lattice oxygen in ZrO 2 and oxygen vacancies ( Fig. 2 g), respectively. This confirmed the presence of oxygen vacancies in the ZrO 2 NPs, representing Lewis acid sites on ZrO 2 . After confirming the presence of a heterojunction between ZrO 2 and Fe, our focus shifted to the catalyst content and specific surface area (SSA), as they directly influence the catalytic activity and stability. N 2 -adsorption isotherms were used to examine the pore types and SSA of the different samples. Fe@N-PCNFs, ZrO 2 @N-PCNFs, and ZrO 2 -Fe@N-PCNFs exhibited distinct type IV isotherms ( Fig. 2 h) with a significant rise at high relative pressures (P/P 0 > 0.9), indicating the presence of distinct mesopores and macropores ( Fig. S11 ). The SSAs of the 3 samples were 106, 224, and 297 m 2 ·g −1 , respectively; ZrO 2 -Fe@N-PCNFs had the largest SSA and pore volume. Raman spectroscopy was then used to assess the graphitization degree and defect density of the 3 samples. The intensity of the D band represents defective carbon, disordered carbon, and edge carbon atoms, while the intensity of the G band reflects the graphitization degree. The D (1350 cm −1 ) and G (1580 cm −1 ) bands correspond to the vibration intensities of sp 3 - and sp 2 -hybridized C-C bonds, respectively. The ratio I D /I G serves as an important indicator for evaluating the defect density and graphitization degree of carbon materials [37] . The I D /I G ratios of Fe@N-PCNFs, ZrO 2 @N-PCNFs, and ZrO 2 -Fe@N-PCNFs were 0.86, 0.92, and 0.95 ( Fig. 2 i), respectively, indicating that co-doping with ZrO 2 and Fe increased the defect density. Additionally, the slight peak rise after 2700 cm −1 (2D band) suggested the presence of a certain amount of single-layer or multi-layer graphene (rGO) within the PCNFs [38] . The effect of N-doping on defects was also examined. The I D /I G ratios for ZrO 2 -Fe@N-PCNFs and ZrO 2 -Fe@PCNFs were 0.95 and 0.94, respectively, indicating that N-doping introduced additional defects, which also affected the catalyst's ORR activity ( Fig. S12 ).
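Since the I D /I G ratio is used above as the defect-density indicator, the following minimal sketch shows one common way to estimate it, fitting two Lorentzian peaks near 1350 and 1580 cm −1 . This is a generic procedure run on synthetic data, not the authors' analysis pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(x, a_d, w_d, a_g, w_g, c):
    """Sum of two Lorentzians with fixed D/G band centers plus a flat background."""
    d = a_d * w_d**2 / ((x - 1350.0)**2 + w_d**2)   # D band near 1350 cm^-1
    g = a_g * w_g**2 / ((x - 1580.0)**2 + w_g**2)   # G band near 1580 cm^-1
    return d + g + c

# 'shift' and 'intensity' stand in for a measured Raman spectrum (synthetic here).
shift = np.linspace(1000.0, 2000.0, 500)
intensity = two_lorentzians(shift, 0.95, 60.0, 1.0, 40.0, 0.05)

popt, _ = curve_fit(two_lorentzians, shift, intensity,
                    p0=[1.0, 50.0, 1.0, 50.0, 0.0])
a_d, _, a_g, _, _ = popt
print(f"I_D/I_G ≈ {a_d / a_g:.2f}")   # peak-height ratio, ~0.95 for this demo
```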
2.3 Electrochemical ORR performance A 3-electrode testing system in an alkaline electrolyte (0.1 M KOH) was used to characterize the electrocatalytic activity. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV) tests were conducted using a rotating disk electrode (RDE, Fig. S13 ). Fig. 3 (a) presents the CV curves of the ZrO 2 -Fe@N-PCNFs catalyst at 50 mV·s −1 under an O 2 -saturated condition, with the Fe@N-PCNFs and ZrO 2 @N-PCNFs catalysts as control groups. The reduction potentials of ZrO 2 -Fe@N-PCNFs, Fe@N-PCNFs, and ZrO 2 @N-PCNFs were 0.62, 0.73, and 0.78 V, respectively. The catalysts were also tested under a N 2 -saturated condition for comparison ( Fig. S14 ). Fig. 3 (b) shows the LSV curves of the 3 catalysts at 1600 rpm. It can be observed that ZrO 2 -Fe@N-PCNFs had the highest half-wave potential (E 1/2 ) of 0.868 V, which was much higher than those of Fe@N-PCNFs (0.76 V), ZrO 2 @N-PCNFs (0.80 V), and even the commercial 20% Pt/C catalyst (0.85 V, Fig. S15 ). In addition, the catalyst without N-doping (ZrO 2 -Fe@PCNFs) showed a lower E 1/2 value of 0.846 V, indicating that N-doping improved the catalytic activity ( Fig. S16 ). The Tafel slopes were calculated from the LSV curves: ZrO 2 -Fe@N-PCNFs showed the smallest slope of 89 mV·dec −1 compared with the control groups of Fe@N-PCNFs (130 mV·dec −1 ) and ZrO 2 @N-PCNFs (112 mV·dec −1 ), indicating the smallest overpotential increase per decade of current ( Fig. 3 c). Both the E 1/2 and Tafel slope values confirmed that the ZrO 2 -Fe@N-PCNFs catalyst had the best ORR activity. Figs. 3 (d-e) show the LSV curves of ZrO 2 -Fe@N-PCNFs with different Fe/Zr ratios, among which the sample with a Fe/Zr ratio of 1.5:1 exhibited the highest catalytic activity. The Tafel slopes also confirmed that the ZrO 2 -Fe@N-PCNFs catalyst with a Fe/Zr ratio of 1.5:1 had the fastest ORR catalytic kinetics ( Fig. 3 f). In addition, analysis of the LSV curves of ZrO 2 -Fe@N-PCNFs synthesized at different annealing temperatures (from 600 to 900 °C) confirmed that the sample synthesized at 800 °C had the fastest ORR catalytic kinetics ( Fig. S17 ). Increasing the annealing temperature could enhance the electronic conductivity of the electrocatalyst, but excessively high temperatures might cause detachment of the Fe nanodots from the ZrO 2 support surface, weakening the interaction between Fe and ZrO 2 and leading to unstable Fe metal clusters, which reduced electron transfer and thereby degraded the ORR catalytic activity. RDE tests were conducted within a rotation speed range of 400–2500 rpm to determine the ORR pathways ( Fig. 3 g). Koutecky-Levich (K-L) plots based on the relationship between J −1 and ω −1/2 were constructed (the sub-figure in Fig. 3 g). The electron transfer number (n) for all catalysts was calculated to be close to 4 ( Fig. S18 ), indicating that the ORR process primarily follows a 4e⁻ pathway. After 1000 cycles, the ZrO 2 -Fe@N-PCNFs catalyst showed an E 1/2 shift of only 1 mV ( Fig. 3 h). In comparison, the potential shifts of Fe@N-PCNFs and ZrO 2 @N-PCNFs were 9 mV and 2 mV ( Fig. S19 ). Stability tests using i-t curves showed that the current retentions of ZrO 2 -Fe@N-PCNFs, Fe@N-PCNFs, and ZrO 2 @N-PCNFs after 39600 s were 83.3, 84.1, and 95.3%, respectively ( Fig. 3 i), confirming that ZrO 2 effectively stabilized the active Fe sites and significantly enhanced the catalyst durability. In short, the ZrO 2 -Fe@N-PCNFs catalyst exhibited higher activity and stability than the commercial Pt/C catalyst.
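The electron transfer number reported above comes from the standard K-L construction. The sketch below illustrates it with typical literature constants for O 2 -saturated 0.1 M KOH (the C 0 , D 0 , and ν values are assumptions, not numbers from this paper) and with synthetic currents generated at n = 4 so the example runs end-to-end:

```python
import numpy as np

# Hedged sketch of a Koutecky-Levich analysis as in Fig. 3(g) / Fig. S18.
F  = 96485.0    # Faraday constant, C mol^-1
C0 = 1.2e-6     # bulk O2 concentration, mol cm^-3 (assumed literature value)
D0 = 1.9e-5     # O2 diffusion coefficient, cm^2 s^-1 (assumed)
nu = 0.01       # electrolyte kinematic viscosity, cm^2 s^-1 (assumed)

rpm = np.array([400.0, 900.0, 1600.0, 2500.0])   # speeds as in Fig. 3(g)
omega = 2.0 * np.pi * rpm / 60.0                 # angular speed, rad s^-1

# Synthetic currents with n = 4; in practice J is read from the LSV curves.
B_true = 0.62 * 4 * F * C0 * D0**(2.0 / 3.0) * nu**(-1.0 / 6.0)
J_k = 0.015                                      # hypothetical kinetic current, A cm^-2
J = 1.0 / (1.0 / J_k + 1.0 / (B_true * np.sqrt(omega)))

# K-L fit: 1/J vs omega^(-1/2); slope = 1/B with B = 0.62 n F C0 D0^(2/3) nu^(-1/6)
slope, intercept = np.polyfit(omega**-0.5, 1.0 / J, 1)
n = (1.0 / slope) / (0.62 * F * C0 * D0**(2.0 / 3.0) * nu**(-1.0 / 6.0))
print(f"electron transfer number n ≈ {n:.2f}")   # ~4, a 4e- pathway
```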
2.4 Electrochemical performance of ZABs To assess the application potential of the electrocatalysts, ZABs were assembled in which nickel foam composited with carbon paper functioned as the cathode and a zinc foil acted as the anode ( Fig. S20 ). Fig. 4 (a) shows a schematic diagram of the as-designed ZAB, which had a 3-phase interface of the negative zinc electrode, the electrolyte, and the positive carbon electrode loaded with catalyst. The performance tests of the ZABs were conducted using an electrochemical workstation and a LAND battery testing system ( Fig. S21 ). The ZAB was able to power a light-emitting diode (LED) display panel ( Fig. S22 ). From the open-circuit voltage (OCV) diagram in Fig. 4 (b), the ZrO 2 -Fe@N-PCNFs based ZAB achieved an OCV as high as 1.550 V, higher than that of the commercial Pt/C catalyst based ZAB (1.376 V). The discharge and charge curves in Fig. 4 (c) show that, under different current densities, the ZrO 2 -Fe@N-PCNFs based ZAB had a smaller voltage gap than the ZAB based on the Pt/C catalyst. Additionally, Fig. 4 (d) shows that the peak power density of the ZAB was 79.4 mW·cm −2 , surpassing that of the commercial Pt/C based ZAB (57.6 mW·cm −2 ). As shown in Fig. 4 (e), under continuous discharge at 10 mA·cm −2 , the discharge capacity of the ZrO 2 -Fe@N-PCNFs based ZABs reached 823.9 mAh·g −1 , significantly higher than that of the Pt/C based ZABs (751.2 mAh·g −1 ). The rate performance of the ZABs was also evaluated by increasing the current density from 2 to 50 mA·cm −2 . The ZrO 2 -Fe@N-PCNFs based ZABs showed no significant voltage decay, whereas the Pt/C based ZABs exhibited noticeable decay at the high current densities of 30 mA·cm −2 and 50 mA·cm −2 . Specifically, the discharge potential of the ZrO 2 -Fe@N-PCNFs based ZABs decreased by only 32.0%, while that of the Pt/C based ZABs decreased by 38.6% ( Fig. 4 f), indicating superior rate capability. From a practical application perspective, the long-term stability of the ZABs was evaluated through extended galvanostatic cycling at 10 mA·cm −2 . As observed from the charge-discharge curves ( Fig. 4 g), the ZrO 2 -Fe@N-PCNFs based ZABs exhibited stable cycling over 1560 cycles at 10 mA·cm −2 , significantly outperforming the commercial Pt/C based ZABs, which lasted only 240 cycles. Under the same testing conditions, the Pt/C catalyst degraded and deactivated rapidly. In the charge-discharge curves ( Fig. 4 h), the round-trip efficiency of the ZrO 2 -Fe@N-PCNFs based ZABs was maintained at 58.7% after 300 cycles (with a charge voltage of 2.01 V and a discharge voltage of 1.18 V), remained at 52.1% after 900 cycles ( Fig. 4 i), and finally stabilized at 50.7% after 1560 cycles ( Fig. 4 j), confirming the superior stability and high capacity of the assembled ZABs. 2.5 DFT calculation and mechanistic analysis To investigate the electronic coupling at the Fe/ZrO 2 interface and understand the origin of the high ORR activity, DFT calculations were performed from the perspectives of geometric and electronic structure. The model was constructed based on the crystal structures of Fe and ZrO 2 , with the 2 crystal planes of Fe (1 1 1) and ZrO 2 (1 1 1) overlapped ( Fig. 5 a). In the model, the blue and yellow colors represent lost and gained electrons, respectively. It can be clearly observed that Fe and Zr at the interface lost electrons, and these electrons mainly gathered around the bonding O. This indicates that the d-band electrons in Fe and ZrO 2 were connected through the Fe-O-Zr bonds, with the bonding O acting as a bridge for electron transfer. In the distribution diagram of the electron localization function ( Fig. 5 b), the upper-limit value of 1 indicates complete localization of electrons, and a value of 0 indicates complete delocalization. The charge density gradually increased from blue to red, with a high charge density around O on the ZrO 2 side, indicating that electrons were transferred from the Fe side to the ZrO 2 side, leading to charge redistribution.
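The quantity Δρ(z) referenced in the next paragraph is conventionally defined as the planar average of the charge-density difference between the combined system and its isolated parts; the generic form (not an equation reproduced from the paper) is:

```latex
% Standard definition of the planar-averaged charge-density difference
\Delta\rho(z) \;=\; \iint \Big[ \rho_{\mathrm{Fe/ZrO_2}}(x,y,z)
  \;-\; \rho_{\mathrm{Fe}}(x,y,z) \;-\; \rho_{\mathrm{ZrO_2}}(x,y,z) \Big]\,
  \mathrm{d}x\,\mathrm{d}y ,
```

where positive values indicate electron accumulation and negative values indicate depletion along the surface normal z.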
Fig. 5 (c) shows the electronic structure calculation for the ZrO 2 -Fe@N-PCNFs system, in which Fe and ZrO 2 were connected through the Fe-O-Zr bond and oxygen was adsorbed by ZrO 2 -Fe@N-PCNFs at the Fe/ZrO 2 interface. Charge redistribution occurred, and the planar-integrated electron charge density difference along the surface normal direction showed significant electron accumulation (Δρ(z) > 0) at the interface. At the same time, the surface layer was accompanied by electron depletion (Δρ(z) < 0). The electron transfer from the ZrO 2 surface layer and the Fe atoms to the interface region promoted the adsorption of oxygen at the interface and reduced the Gibbs free energy of the reaction. Fig. 5 (d) depicts the Gibbs free energy (ΔG) profiles of the ORR process at the Fe/ZrO 2 interface for ZrO 2 -Fe@N-PCNFs, with the 4 elementary steps shown at U = 0 V and 1.23 V. The RDS of both catalysts was the generation of *OOH from O 2 . The corresponding free-energy changes were 0.78 eV for Fe@N-PCNFs and 0.55 eV for ZrO 2 -Fe@N-PCNFs, and the overall limiting potential energy of ZrO 2 -Fe@N-PCNFs was smaller than that of Fe@N-PCNFs, indicating that the interfacial electronic coupling between Fe and ZrO 2 improved the ORR activity of the catalyst by lowering the barrier of the RDS. 3 Discussion In this study, we have proposed a feasible strategy of using the electronic coupling of metal-metal oxide interactions to achieve efficient and stable ORR activity by designing a new ZrO 2 -Fe@N-PCNFs electrocatalyst, and we have verified its effectiveness in improving the long-term cycling stability and discharge capacity of ZABs. The proposed strategy addresses the common issues of deactivation and instability of Fe nanodots in the ORR process. According to the DFT calculations, in the 4e - reaction process, the transition from the first to the second step promoted the conversion of O 2 to *OOH, and the simulations indicated that the interfacial electron coupling between Fe and ZrO 2 significantly enhanced the 4e - transfer efficiency. Additionally, the in-situ synthesized ZrO 2 acted as an electron reservoir containing abundant oxygen vacancies, which not only improved the adsorption capacity for O 2 but also gradually released electrons from Fe to the interface. This slow electron release effectively prevented the deactivation of active sites on the Fe surface due to excessive electron loss. The strong metal-support interaction induced Fe-O-Zr bonding and formed interfacial electronic coupling, achieving a synergistic improvement in electrocatalyst activity, stability, and efficiency. Consequently, the Fe–ZrO 2 interfacial catalyst exhibits outstanding long-term discharge stability over 1560 cycles in ZABs, demonstrating remarkable operational durability and electrochemical robustness. These findings further underscore the strategic value of interfacial modulation in electrocatalytic systems. In summary, this work not only clarifies the underlying causes of Fe catalyst deactivation but also provides a synergistic route combining electronic regulation and structural optimization in metal–metal oxide hybrid systems. It offers a mechanistic perspective and a design blueprint for the development of efficient, durable, and sustainable metal-support electrocatalysts.
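For context on how diagrams like Fig. 5(d) are typically constructed, the computational hydrogen electrode (CHE) convention shifts each one-electron step by the applied potential; this is the generic textbook relation, not an equation quoted from this work:

```latex
% CHE convention for electrochemical reduction steps (generic form)
\Delta G_i(U) \;=\; \Delta G_i(U{=}0) \;+\; e\,U , \qquad i = 1,\dots,4 ,
```

where each of the 4 elementary ORR steps transfers one electron. The limiting potential is then the largest U at which every ΔG_i(U) remains non-positive, so a lower RDS barrier translates directly into a smaller overpotential, consistent with the comparison above.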
4 Conclusion In conclusion, we have reported a stable and efficient Fe-ZrO 2 electrocatalyst for ORR catalysis. With simulation calculations and experimental verification, we confirmed that Fe can share electrons with ZrO 2 to form an interfacially coupled Fe-O-Zr bond that serves as a bridge for charge transfer, in which ZrO 2 acts as an electron promoter to facilitate electron transfer from Fe to the interface, thereby inhibiting the rapid deactivation of Fe and accelerating the activation and conversion of adsorbed intermediates. With this novel catalytic strategy, the catalyst with a high loading of Fe achieved fast ORR kinetics and excellent cycling stability, and the ZABs exhibited superior long-term stability over 1560 cycles. This study reveals the dynamic interfacial changes of the metal-support catalyst system, providing a new perspective for designing metal-support catalysts for sustainable ZABs. 5 Experimental section Chemicals. Zr(Ac) 4 ·4 H 2 O, Fe(Ac) 2 ·2 H 2 O, PVP (Mw = 150,000), deionized water, PTFE, and melamine were all purchased from Aladdin. All chemicals were used without further purification. Synthesis of ZrO 2 -Fe@N-PCNFs Films. First, 5 mmol of Zr(Ac) 4 ·4 H 2 O was added to 6 g of a 15 wt% aqueous PVP solution and stirred for 2 h until homogeneous. Then, 1.5 mmol of Fe(Ac) 2 ·2 H 2 O was added to the above solution and stirred until a uniform solution formed. Finally, 3 g of a PTFE dispersion was added to the mixed solution and stirred for 8 h to form a stable brown sol. The stable sol was then loaded into multiple sterile syringes and electrospun under a voltage of 20 kV. Control sample films were prepared using the same method. The spun NF films were then heat treated in an air oven at 200 °C for 2 h, followed by annealing at 800 °C in N 2 for 2 h, with a heating rate of 5 °C·min −1 . Materials characterization. The structures of the catalysts were characterized using SEM (Hitachi SU5000) and TEM (JEM-2100F, Japan), and the overall elemental distribution was analyzed by EDS mapping. XRD (Bruker D8 ADVANCE) was utilized to check the crystal structures, and XPS (Thermo Scientific K-Alpha) was employed to analyze the surface elemental composition and chemical states. An infrared spectrometer (Nicolet 8700) was used to analyze the chemical composition, molecular structure, and functional groups. TG-DSC analysis (TA Instruments, Waters LLC, USA) was conducted at a heating rate of 10 °C·min −1 in both air and N 2 . The defect degrees of the samples were assessed using a Horiba-LabRAM HR Evolution Raman spectrometer, and the element contents were determined by ICP-OES-MS (Thermo Fisher iCAP PRO). Electrochemical tests. The electrochemical tests were conducted on an electrochemical workstation (CHI660e) using a 3-electrode system. The reference electrode was a saturated calomel electrode, the working electrode was a glassy carbon electrode (S = 0.196 cm², Pine Instrument Co, AFMSRCE 3699), and a platinum mesh electrode served as the counter electrode. The electrolyte solution was 0.1 M KOH, and the tests were carried out under N 2 or O 2 atmospheres. Around 5 mg of the catalyst and 5% Nafion solution were sonicated together to form the catalyst ink. A total of 30 µL of the catalyst ink was drop-cast onto the RDE in 3 steps of 10 µL each and allowed to dry naturally.
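Because a saturated calomel reference electrode (SCE) was used while potentials such as E 1/2 = 0.868 V are quoted on the RHE scale, a Nernst-type conversion of the following form is commonly applied. The offset and pH values below are standard assumptions, not calibration data from this work:

```python
# Hedged sketch of the standard SCE -> RHE potential conversion.
E_SCE_OFFSET = 0.241   # V, SCE potential vs. SHE at 25 C (assumed textbook value)
PH = 13.0              # approximate pH of 0.1 M KOH (assumed)

def sce_to_rhe(e_vs_sce: float) -> float:
    """Convert a potential measured vs. SCE to the RHE scale."""
    return e_vs_sce + E_SCE_OFFSET + 0.0591 * PH

# Example: a reading of about -0.14 V vs. SCE maps to roughly 0.87 V vs. RHE,
# the scale on which the E_1/2 values above are reported.
print(f"{sce_to_rhe(-0.14):.3f} V vs. RHE")
```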
DFT Calculations. The first-principles calculations were performed in the framework of density functional theory (DFT) with the Vienna ab initio simulation package (VASP) [39] . The exchange-correlation energy was described by the Perdew-Burke-Ernzerhof (PBE) functional within the generalized gradient approximation (GGA) [40,41] . The structure optimizations of the 2 systems, Fe and Fe/ZrO 2 , were carried out by allowing all atomic positions to vary while fixing the lattice parameters, until the energy difference between successive atomic configurations was less than 10 −6 eV. The force on each atom in the relaxed structures was less than 0.015 eV/Å. The cutoff energy for the plane-wave basis set was set to 400 eV. The k-point spacing was set to be smaller than 0.03 Å −1 over the Brillouin zone (BZ) [40] . Assembly and measurements of liquid ZABs. The anode of the battery was 0.05 mm (99.99%) zinc foil, and the electrolyte was an alkaline electrolyte composed of 6 M KOH and 0.2 M Zn(Ac) 2 . The cathode was nickel foam with an air diffusion layer. The air diffusion layer, with an area of 4 cm 2 (2 × 2 cm), allowed oxygen in the air to diffuse to the catalyst. The catalyst layer was made by dropping catalyst ink onto the nickel foam. The catalyst loading was 1 mg·cm −2 in all cases. Commercial 20% Pt/C was used as the control sample. All tests were conducted under an ambient atmosphere. The electrochemical workstation was used to measure the charge voltage, discharge voltage, and open-circuit voltage, and the LAND battery test system was used to evaluate the battery performance at 10 mA·cm −2 . Each cycle was set to 20 min (10 min for charge and 10 min for discharge). Author contributions J. Yan conceived the project. Z. Liu and S. Chen conducted the experiments and characterizations. J. Yan and Z. Liu wrote this paper, and all authors contributed to discussing and revising the paper. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work is supported by the National Natural Science Foundation of China (52473293) and the Tianshan Innovation Team Project of Xinjiang Uygur Autonomous Region (No. 2024D14005). Appendix A Supplementary material Supplementary data associated with this article can be found in the online version at doi:10.1016/j.nxener.2025.100317 .
REFERENCES:
1. LI L (2023)
2. QIAO Y (2023)
3. WANG X (2021)
4. ZHANG H (2024)
5. BAI Y (2024)
6. ZHANG Y (2021)
7. ZHANG C (2021)
8. ZHAO C (2021)
9. ZHENG H (2024)
10. YU Y (2025)
11. HUANG H (2022)
12. ZHAO C (2020)
13. LEI Y (2020)
14. LI X (2020)
15. GAN T (2024)
16. MIAO Z (2022)
17. HAN A (2023)
18. ZHANG H (2020)
19. LIU Y (2023)
20. TIAN J (2023)
21. ZHANG Q (2021)
22. PARK Y (2020)
23. CHEN G (2020)
24. YAN B (2024)
25. KE C (2022)
26. JIANG W (2016)
27. SUN J (2022)
28. CAO X (2021)
29. WANG Y (2015)
30. AHSAN M (2020)
31. WEI C (2020)
32. WANG T (2024)
33. LI Z (2023)
34. ZHONG J (2022)
35. WANG F (2020)
36. MA Q (2021)
37. YANG X (2022)
38. LUO L (2019)
39. PERDEW J (1996)
40. KRESSE G (1999)
41. MONKHORST H (1976)
|
10.1016_j.asjsur.2021.01.005.txt
|
TITLE: Lymph node metastasis between the sternocleidomastoid and sternohyoid muscle in papillary thyroid carcinoma patients: A prospective study at multiple centers
AUTHORS:
- Song, Linlin
- Zhou, Junyi
- Chen, Wenjie
- Li, Genpeng
- Wang, Zhaohui
- Xue, Gang
- Wu, Jian
- Yan, Hongli
- Lei, Jianyong
- Zhu, Jingqiang
ABSTRACT:
Background
The lymph nodes between the sternocleidomastoid and sternohyoid muscle (LNSS) are not explicitly mentioned in the 2015 American Thyroid Association (ATA) and 2008 American Head and Neck Society (AHNS) guidelines and are therefore easily overlooked in papillary thyroid carcinoma (PTC). We prospectively evaluated the clinical significance of the LNSS in PTC patients.
Method
At five medical centers, 234 PTC patients with lateral neck metastasis who underwent a total of 264 neck dissections were enrolled in this study. The LNSS were resected and examined as separate specimens to investigate the relationship of the LNSS with several clinicopathological parameters.
Result
Of the 264 lateral neck dissections, the average lymph node metastasis rate of the LNSS was 23.48%, second only to that of level III (p < 0.05). Univariate and multivariate analyses showed that patient age over 45 years (OR 2.155, 95% CI 1.191 to 3.898, p = 0.011), a tumor located in the inferior lobe of the thyroid (OR 1.517, 95% CI 1.113 to 2.068, p = 0.008), and LN metastasis at level IIb (OR 2.298, 95% CI 1.121 to 4.712, p = 0.020) and level III (OR 2.408, 95% CI 1.222 to 4.745, p = 0.011) were independent risk factors for LNSS lymphatic metastasis.
Conclusion
The LNSS has a high metastatic rate and is easily overlooked. Additional attention should be paid to the LNSS, especially in patients over 45 years old and in those with PTC located in the thyroid's inferior lobe.
BODY:
1 Background Papillary thyroid carcinoma (PTC) accounts for most thyroid cancers and for > 90% of cervical lymph node metastases. Neck nodal metastasis, especially lateral neck nodal metastasis, is related to a high risk of disease recurrence even when treated with an aggressive primary surgical modality and adjuvant therapy, directly resulting in a poor prognosis. 1,2 The complicated layers of muscles and several critical anatomical structures may increase the risk of disease omission during lymph node (LN) dissection. 3 The LNs between the sternocleidomastoid and sternohyoid muscle (LNSS), which belong to cervical level IV as indicated in most guidelines, are easily overlooked during lateral neck dissection (LND). Anatomically, the LNSS are hidden between two layers of muscle and are discontinuous with the main part of level IV. 4 Furthermore, most guidelines have provided no specific statement or illustration of this LN basin. 5-7 The LNSS have not been sufficiently investigated in the prevailing literature. 5 Furthermore, we designed and distributed a questionnaire at the National Summit Forum of Thyroid Diseases, held from April 19 to 21, 2019, in Chengdu, China, to investigate the intention and awareness of dissecting the LNSS among thyroid surgeons; only 53.64% (59/110) of surgeons paid particular attention to the LNSS 8 ( Supplementary data 1 ). In the current study, we performed a prospective, multicenter study to evaluate the metastatic rate of the LNSS, analyze risk factors, and predict LN metastasis in this area. 2 Methods 2.1 Study design and population From January 2015 to December 2018, this multicenter, consecutive, prospective study was carried out at 5 Chinese centers (West China Hospital of Sichuan University, Sichuan Cancer Hospital, The General Hospital of Western Theater Command, and The Chengdu First and Third People's Hospitals). The inclusion criteria were: patients over 18 years old with histologically confirmed PTC and lateral cervical LN metastasis confirmed preoperatively or intraoperatively (by fine-needle aspiration cytology or intraoperative frozen section). The exclusion criteria were: other kinds of thyroid carcinoma, including medullary, follicular, or anaplastic thyroid carcinoma or PTC combined with other types of thyroid cancer; other coexisting cervical cancers; and prior radiotherapy or chemotherapy before surgery. The data of the enrolled patients were recorded completely and in a timely manner. West China Hospital of Sichuan University submitted the ethics documents, and the four other centers participated in the review. The study was approved by the Ethics Committees of the five centers (No. 2019042, Supplementary data 2 ). All patients signed informed consent forms, and all procedures were carried out in accordance with the Declaration of Helsinki. In total, 234 cases (264 LNDs) of primary PTC with lateral neck LN metastasis were enrolled in our study. 2.2 Surgical procedure and LN level definition Surgeons from the five centers (Zhu JQ, Wang ZH, Wu J, Xue G, and Yan HL) reached a consensus about the boundary of each cervical compartment level at the start of this study. The surgical procedures and postoperative adjuvant therapies have been introduced in a previous study. In brief, for each patient's disease-related lateral neck, the standard surgical protocol comprised total thyroidectomy plus central neck dissection plus LND.
The central compartment LNs were divided into three parts: the ipsilateral and contralateral central compartments and the pre-laryngeal (Delphian)/pre-tracheal compartment. During the LND, the LNs at levels IIa, IIb, III, original level IV, and level V, together with the LNSS, were dissected and analyzed pathologically as separate specimens. In cases with poor surgical exposure of the LNSS, the sternocleidomastoid muscle was retracted medially and laterally to remove the LNSS completely. The numbers of dissected and metastatic lymph nodes were counted and analyzed for each neck level. All pathological specimens were reviewed by the same experienced pathologists at each center. We clearly defined the LNSS as bounded anteriorly by the sternocleidomastoid muscle, posteriorly by the sternohyoid muscle, superiorly by the intersection of the sternocleidomastoid and sternohyoid muscles, inferiorly by the suprasternal fossa and clavicle, laterally by the border of the sternohyoid muscle, and internally by the border of the sternocleidomastoid 9 ( Fig. 1 and Fig. 2 ). The paraffin sections were then reviewed and classified based on the 8th edition of the Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) pathologic tumor–node–metastasis classification. 3 Statistical analysis The statistical analyses were performed with SPSS for Mac (Version 1.0.0–3112, IBM Corp., Armonk, NY) and Prism 7 for Mac OS X (version 7.0a, April 2, 2016). T-tests were used to compare the average LN metastatic rate (LNMR) of each individual (LNMR = number of metastatic LNs/total number of dissected LNs in one individual; if the total dissected LN number was 0, the LNMR was defined as 0) between the LNSS and each level. To determine the relationship between LNSS metastasis and clinicopathologic factors, such as age, sex, primary tumor site, and lateral cervical lymph node metastasis, univariate analyses (t-test, Mann–Whitney U test, and Chi-square test) were performed according to the variable types and distributions. Variables identified by univariate analysis and debatable clinical factors were included in the multivariate binary logistic regression analyses. In all analyses, a p-value < 0.05 in a two-tailed test indicated a significant difference. 4 Result Patients from all five centers were pooled for analysis: 234 patients were consecutively enrolled, and 264 LNDs (30 bilateral and 204 ipsilateral) were performed. There was a preponderance of female patients (67.09%), with an average age of 41 years (range 18–76), diagnosed with PTC (pTxN1bM0). The demographic, clinical, and tumor-related parameters of the 234 cases were recorded and are summarized in Table 1 . Fig. 3 and Table 2 summarize the frequency of node involvement in our cohort. Among the total of 264 LNDs, the LNSS were visible in 200/264 (75.76%) lateral compartments. The median numbers of total resected lymph nodes at levels II, III, IV, V, and VI were 7 (range, 0–27), 6 (range, 0–18), 3 (range, 0–12), 5 (range, 0–13), and 11 (range, 0–35), respectively. The median number of LNSS was 2 (range, 0–7). The metastasis rate of the LNSS was 42% (84/200) in this cohort. The average LNMR of the LNSS was significantly higher than those at levels IIa, IIb, and V (17.78% vs 12.87% [p = 0.046], 4.96% [p < 0.001], and 3.82% [p < 0.001]). In other words, apart from LN metastasis in the central compartment, the LNSS had a relatively high metastatic rate compared with the other lateral neck levels, second only to those of levels III and IV.
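As a concrete illustration of the LNMR definition and the t-test comparison described above, the following minimal sketch uses hypothetical per-patient counts; it mimics, but does not reproduce, the SPSS analysis:

```python
from scipy import stats

# Per-individual LN metastatic rate as defined in the Statistical analysis
# section: metastatic / dissected, with LNMR = 0 when no nodes were dissected.
def lnmr(metastatic: int, dissected: int) -> float:
    return metastatic / dissected if dissected > 0 else 0.0

# Hypothetical (metastatic, dissected) counts per patient, for illustration only.
lnss_counts    = [(1, 2), (0, 3), (2, 4), (0, 0), (1, 3)]
level_v_counts = [(0, 5), (1, 6), (0, 4), (0, 5), (0, 7)]

lnss_rates    = [lnmr(m, d) for m, d in lnss_counts]
level_v_rates = [lnmr(m, d) for m, d in level_v_counts]

t, p = stats.ttest_ind(lnss_rates, level_v_rates)   # two-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```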
It is worth noting that a 30-year-old female who underwent thyroidectomy, bilateral central LN dissection, and functional left LND was pathologically diagnosed with PTC, Hashimoto thyroiditis, and lymph node metastasis only in the LNSS. Six cases were pathologically proven to have central compartment metastasis together with LNSS metastasis. The LNSS therefore deserves further attention during LN dissection. Among the 234 patients, the clinicopathologic characteristics of the metastatic LNSS group and the nonmetastatic LNSS group were compared and analyzed in Table 3 . The univariate analysis identified age over 45 years, a tumor located in the inferior thyroid lobe, and LN metastasis at the contralateral level VI, level IIb, and level III as metastatic risk factors for the LNSS. The multivariate analysis further showed that age over 45 years (OR 2.155, 95% CI 1.191 to 3.898, p = 0.011), a tumor located in the inferior lobe of the thyroid (OR 1.517, 95% CI 1.113 to 2.068, p = 0.008), and LN metastasis at level IIb (OR 2.298, 95% CI 1.121 to 4.712, p = 0.020) and level III (OR 2.408, 95% CI 1.222 to 4.745, p = 0.011) were all significant metastatic risk factors for the LNSS. 5 Discussion This is a prospective, multicenter study with the largest cohort analyzed on this topic to date. It investigated the LN metastatic conditions of the LNSS and identified significant risk factors for LN metastasis in this area. Our study demonstrated that a tumor located in the inferior lobe of the thyroid represents a significant risk factor for LNSS metastasis, providing valuable guidance for evaluating LNSS metastasis and dissecting the LNSS in clinical work. Our previous report in 2012 indicated that lymphatic and adipose tissue is present in the following region: medial and inferior to the internal jugular vein, posterior to the sternocleidomastoid muscle, and superior to the anterior cervical muscles, attached to the adipose tissue in the superior sternal fossa. The suprasternal space, also called Burns' space, consists of the superficial and deep layers of the investing layer of the deep cervical fascia above the manubrium of the sternum. 10 Guidelines, including the ATA guidelines and the 2008 AHNS consensus, have not specifically referred to this particular LN region. Under the current delineation of LN level borders, the LNSS cannot be classified into any complete cervical compartment. Furthermore, the practice of selective lateral and modified radical neck dissection has increased in recent years, and the omission of dissection across the sternocleidomastoid muscle has reduced the probability of LNSS exposure. According to the outcome of the 2019 questionnaire (data not published), the intention to dissect the LNSS was related to the surgeons' seniority, which could be explained as follows. First, the increasing incidence of thyroid cancer in recent years has resulted in the transfer of many surgeons, especially those with high seniority, from other surgical specialties to thyroid surgery. Without sufficient surgical experience, specific training, and systematic analysis, an insufficient understanding of this particular LN region persists among a segment of surgeons. Second, in contrast to relatively senior surgeons' empiricism, relatively junior surgeons tend to refer to guidelines, conferences, and reports, in which relevant knowledge may be quickly updated. In addition, the refinement of the lateral cervical LN subgroups in recent years has reminded the surgeons concerned to focus on more details. 11
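For readers unfamiliar with how odds ratios of the form reported above (e.g., OR 2.155, 95% CI 1.191 to 3.898) arise, the sketch below fits a multivariate binary logistic regression on hypothetical data and exponentiates the coefficients; the variable names and data are illustrative only, not the study dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
# Hypothetical binary indicators for 234 patients (illustration only).
df = pd.DataFrame({
    "age_over_45":   np.random.binomial(1, 0.4, 234),
    "inferior_lobe": np.random.binomial(1, 0.5, 234),
    "level_IIb_met": np.random.binomial(1, 0.3, 234),
    "level_III_met": np.random.binomial(1, 0.5, 234),
    "lnss_met":      np.random.binomial(1, 0.3, 234),  # outcome
})

X = sm.add_constant(df.drop(columns="lnss_met"))
fit = sm.Logit(df["lnss_met"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # OR = exp(beta)
ci = np.exp(fit.conf_int())        # 95% CI transformed to the OR scale
print(pd.concat([odds_ratios, ci], axis=1))
```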
With strict adherence to the agreed-upon LN dissection and inspection protocols, we found that the LNSS metastasis rate in the original LND was 31.81% (84/264), which is higher than previously reported (14.4–22.6%). 5,8 Furthermore, more detailed LN-related data were collected in our study. From the individual perspective, an average of 3.71 LNs were dissected from the LNSS, and the individual LNMR was, for the first time, reported to be lower only than those at levels III and IV ( Table 2 ). These findings support a high rate of metastasis once visible LNs are present in the LNSS. LN drainage in the neck is complicated by overlapping anatomical stratifications. To date, the role of nodal drainage in well-differentiated thyroid carcinoma is still not well defined. Controversies remain regarding the correlation between tumor location and neck metastasis in PTC patients. 3,12 In general, LN metastasis in PTC patients may start from the thyroid gland and spread in order from the central to the lateral compartments. 13 Zhang TL et al analyzed 126 patients and found no relationship between tumor location and LN metastasis in the LNSS. 14 After removing confounding factors by multivariate analysis, we concluded that tumors of the inferior pole of the thyroid had a higher LNSS metastasis rate than those in the rest of the gland, which coincides with the findings reported by Sun GH et al and is practical for clinical application. 8 The conventional anatomy of lymphatic drainage in the thyroid area indicates that the lymphatic drainage from the inferior thyroid gland flows along the veins into the lateral neck and the upper mediastinum (level VII). 5,15 As the LNSS metastatic rate is not low in either original or recurrent lateral neck metastasis, preoperative evaluation and awareness of the tumor's location are essential to potentially reduce the rate of residual metastasis. 16 Older age is an important prognostic factor of PTC and a significant tumor staging index. Although several studies have shown a correlation between young age and LN metastasis in PTC, 6 other studies have clarified that LN involvement in older patients is associated with a relatively poor prognosis. Sueyoshi M et al reported that an age of 45 years or older predicted more superior mediastinal LN metastases and a greater number of cervical LN metastases. 18–20 In contrast to previous study results, 17 our univariate and multivariate analyses showed that patient age over 45 years (OR 2.155, 95% CI 1.191 to 3.898, p = 0.011) was an independent risk factor for LNSS metastasis. From this perspective, whether LNSS metastasis is a prognostic factor of papillary thyroid carcinoma is a valuable question that deserves further survival analysis. 5 Our study had some limitations. Follow-up data were not included, so we could not assess the prognostic value of the LNSS. Furthermore, the pathological systems of our five centers do not report valuable information about metastatic lymph nodes, such as nodal diameter and vascular invasion, which limited more in-depth analysis. 6 Conclusion In conclusion, the LNSS presents a high metastatic rate and is easily overlooked. Attention should be paid to the LNSS, especially in patients over 45 years old, patients with lateral cervical LN metastasis reported by fine-needle aspiration, and patients with PTC located in the inferior lobe of the thyroid. Supportive foundations This study was supported by grants from the 1.3.5 Project for Disciplines of Excellence, West China Hospital, Sichuan University ( ZY2017309 ).
Declaration of competing interest No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article. However, work from the same cohort, entitled "Lymph node metastasis between the superior belly of the omohyoid and internal jugular veins in papillary thyroid carcinoma patients: A prospective and multicenter analysis", was presented as a poster at the recent 89th Annual Meeting of the American Thyroid Association. That article has not been published and is not under review. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1 figs1 Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.asjsur.2021.01.005 .
REFERENCES:
1. SIEGEL R (2016)
2. PODNOS Y (2005)
3. LOMBARDI D (2018)
4. JURGENLIEMKSCHULZ I (2019)
5. SUN G (2013)
6. HAUGEN B (2015)
7. ROBBINS K (2008)
8. ZHANGHLC J (2014)
9. LEI J (2017)
10. ZHU J (2012)
11. H W (1982)
12. WADA N (2003)
13. KWAK J (2009)
14. ITO Y (2004)
15. SAKORAFAS G (2010)
16. LIKHTEROV I (2017)
17. MORITANI S (2016)
18. WANG J (2018)
19. MICCOLI P (2008)
20. CAN N (2015)
|
10.1016_j.dibe.2025.100720.txt
|
TITLE: Fire scenario simulation method for residential buildings based on generative adversarial network
AUTHORS:
- Cheng, Qingle
- Wang, Xuyang
- Zhuang, Jin
- Liao, Wenjie
- Xie, Linlin
ABSTRACT:
Fire scenario simulation in residential buildings is crucial for fire safety design, risk assessment, and emergency management. Traditional CFD-based methods face challenges, including long computation times and reliance on expertise, limiting their use for real-time prediction and rapid design optimization. This study introduces a novel simulation method using Generative Adversarial Networks (GANs). A database of 50 residential layouts encompassing a wide variety of apartment configurations is constructed, with high-resolution spatiotemporal data on temperature and soot visibility generated via CFD. The GAN-based model uses layouts, ignition locations, and fire development times as inputs to predict temperature and soot fields. Experimental results show the model achieves an average Structural Similarity Index (SSIM) of 95.7 % compared to CFD and reduces prediction time to 2.56 s—an efficiency improvement of 80,000 times. This method provides an efficient tool for fire risk assessment, evacuation planning, and intelligent fire protection system design in residential buildings.
BODY:
1 Introduction Fire is one of the most significant global disasters, causing extensive property damage and loss of life ( Khan et al., 2022 ; Zhang et al., 2025 ). According to fire statistics compiled by the International Association of Fire Services ( CTIF, 2024 ), 2.5 to 4.5 million fire incidents are reported annually across 27 to 57 countries, resulting in 17,000 to 62,000 fatalities. In China alone, over 800,000 fire incidents were recorded in 2022, leading to more than 2000 deaths and direct economic losses exceeding 7.16 billion yuan ( National Fire and Rescue Administration, 2023 ). Importantly, survey data from 21 cities reveal that 80.3 % of fire-related fatalities and 74 % of injuries occurred in residential buildings ( CTIF, 2024 ). These statistics highlight the high-risk nature of residential building fires and their significant threat to public safety. Consequently, in-depth research into the dynamic characteristics of residential building fires and the development of effective prevention strategies is critically important for reducing fire-related losses and protecting lives and property. As a core approach to reducing fire losses, fire simulation enables the prediction of flame spread paths, temperature distribution, and smoke diffusion patterns, providing critical scientific support for building fire protection design, safety evacuation strategy development, and emergency planning optimization ( ISO 23932-1, 2018 ). Currently, fire simulation methods are classified into three main categories: empirical models ( Alpert, 1972 ; Heskestad and Delichatsios, 1979 ), zone models, and field models. Empirical models estimate fire parameters, such as ceiling jet temperatures, using simplified assumptions like the Alpert formula. These models offer high computational efficiency but are limited to idealized scenarios, such as open spaces ( Alpert, 1972 ). Zone models ( Quintiere and Wade, 2016 ) divide the fire space into two layers—a hot smoke layer and a cool air layer—assuming uniform parameter distribution within each layer. They simulate fire development and smoke diffusion through conservation equations. While zone models are computationally efficient and straightforward to implement, making them suitable for overall fire assessments in buildings, their reliance on idealized assumptions reduces accuracy and limits their ability to handle localized details or complex geometric structures ( Johansson, 2021 ). The field model, a fire simulation method based on Computational Fluid Dynamics (CFD), solves conservation equations for mass, momentum, and energy to accurately describe the three-dimensional fluid flow, heat transfer, and chemical reactions involved in fires. This approach offers high precision and broad applicability, enabling the analysis of complex fire behaviors. In recent years, the field model has been widely utilized in fire engineering. For example, Al-Waked et al., (2021) employed CFD to simulate smoke dispersion and temperature distribution in residential building atriums under natural ventilation and fire scenarios, highlighting its utility in designing ventilation strategies and conducting safety assessments. Similarly, CFD has been used to simulate smoke dispersion and passenger evacuation in subway train fires, fire conditions in small-scale tunnel and atrium experiments, and smoke control and structural impacts in parking garage fires ( Deckers et al., 2013 ; Roh et al., 2009 ; Tilley et al., 2011 ). 
These applications underscore CFD's versatility in analyzing complex fire scenarios and supporting performance-based designs. However, CFD methods are computationally intensive, with a single simulation often requiring several hours or even days to complete. Furthermore, they demand expert knowledge to configure boundary conditions and mesh parameters, posing challenges for real-time disaster prediction and rapid design iterations ( McGrattan et al., 2012 ; McGrattan and Miles, 2016 ; Novozhilov, 2001 ). To address these challenges, researchers have increasingly adopted machine learning methods. These approaches can uncover latent relationships within complex datasets, achieving high predictive accuracy while allowing computationally intensive training processes to be performed in advance, thereby enabling efficient responses in practical applications ( Cheng et al., 2024 , 2025a , 2025b ; Zheng et al., 2020a ; Zheng and Yuan, 2021 ). For instance, Cheung et al. (2025) introduced a dual-agent deep learning framework for predicting real-time fire hazards and burning fuel types in smart buildings, demonstrating high accuracy and resilience even under sensor failure conditions. Lu et al. (2025) developed a GAN-based model to assess required safe egress time in complex public buildings, enabling rapid evacuation assessments with high prediction accuracy. Rianto et al. (2025) employed generative AI models, including GANs and diffusion models, to predict smoke movement and temperature distribution in complex building layouts, achieving near real-time assessments with over 94 % accuracy. Additionally, Zeng et al. (2023) developed a GAN-based fire detection and analysis model to predict smoke flow and detector responses using hypothetical building floor plan data, although these layouts were not derived from actual residential buildings. Zeng et al. (2024) further extended this approach by applying GANs and diffusion models to efficiently simulate smoke flow in large spaces and buildings with complex geometries. Zhang and Geng (2024) proposed a dynamic prediction model for fire smoke layers using BP neural networks, which learned fire behavior patterns from data generated by the Fire Dynamics Simulator (FDS). Ji et al. (2024) introduced a machine learning framework combining LSTM and transfer learning for real-time identification of large-space fires and airflow temperature prediction. Similarly, Zhang et al. (2022a) applied deep learning models to predict fire progression and flashover phenomena in enclosed spaces. These studies underscore the promising potential of machine learning in fire simulation. However, most current research focuses on fire simulations in large spaces or hypothetical buildings, whereas residential buildings—characterized by smaller, more compartmentalized spaces—differ significantly in fire growth and propagation mechanisms. For instance, the segmented layouts of residential buildings can create irregular fire spread paths, greater smoke flow disturbances, and increased challenges in accurately predicting fire behaviors. Furthermore, existing methods have yet to demonstrate their effectiveness in modeling residential building fires, despite these being the most common and hazardous type of fire incidents. Therefore, research targeting fire scenarios in residential buildings is critically important, necessitating the development of efficient predictive models tailored to these complex environments.
To address this issue, this study proposes a fire scenario simulation method for residential buildings based on GANs. A fire simulation database was first constructed, encompassing 50 typical residential building layouts. The model uses building layouts, ignition locations, and fire development times as inputs, while the corresponding indoor temperature and soot visibility distributions serve as outputs. Based on this data, a GAN model was designed and implemented to efficiently capture the dynamic evolution of fire development, simulating the progression of indoor temperature and soot visibility distributions in residential buildings. After training and validation, the model achieved an average structural similarity index (SSIM) of up to 95.7 % compared to CFD simulation results. Furthermore, the computational efficiency was improved by approximately 80,000 times compared to traditional CFD methods. This approach not only significantly reduces the computational cost of fire scenario simulations but also enables the rapid generation of high-precision dynamic fire simulation results. It provides an efficient and practical tool for building fire protection design and disaster emergency response. The second section of this paper introduces the overall methodological framework. The third section details the construction of the residential fire database, the preprocessing of fire scenario data, and the design of the GAN. The fourth section presents the results of model training and evaluation, accompanied by an in-depth discussion of the model's performance. Finally, the fifth section summarizes the findings of this study and highlights potential directions for future research. 2 Framework of fire scenario simulation method for residential building based on GAN This study proposes a fire scenario simulation method for residential buildings based on GAN, designed to efficiently and accurately predict the spatiotemporal distribution of temperature and soot visibility during building fires. The methodological framework, illustrated in Fig. 1 , consists of the following four components. (1) Fire Database Construction To develop a high-quality fire simulation model, it is crucial to construct a comprehensive database encompassing diverse building layouts and fire scenarios. Given the high cost and practical limitations of obtaining real-world experimental data, current research predominantly relies on CFD methods to perform fire numerical simulations, which generate large-scale, high-precision training datasets ( Zeng et al., 2023 , 2024 ; Ji et al., 2024 ; Lattimer et al., 2020 ; Su et al., 2021 ). Following this widely adopted approach, this study employs CFD calculations to generate fire evolution data for a variety of building layouts and ignition locations, capturing the spatiotemporal variations of key physical variables, such as temperature fields and soot visibility fields. (2) Fire Scenario Data Preprocessing Feature encoding of input data is essential for enabling deep learning models to effectively learn patterns from the data. Therefore, during database construction, preprocessing is performed for both input data and CFD simulation results to meet the training requirements of the deep learning model. First, the input features (e.g., building layouts, ignition locations, and fire development times) and CFD simulation outputs (e.g., temperature and soot visibility distributions) are encoded into formats suitable for neural network processing. 
Second, the data dimensions and scales are standardized to ensure consistency across different scenarios. Additionally, data augmentation is applied using rotational transformations to increase the diversity of the training dataset. (3) Design of the Generative Adversarial Network Given the high-dimensional complexity of fire scenarios, this study develops a GAN based on the pix2pix architecture ( Isola et al., 2018 ). The generator takes building layouts, ignition locations, and fire development times as inputs to predict the spatiotemporal distributions of temperature and smoke within fire scenarios. Meanwhile, the discriminator evaluates the realism of the generated data, guiding the generator to produce outputs that closely resemble the physical characteristics of fire fields. (4) Model Training and Prediction The model training process integrates supervised learning with adversarial training, using CFD simulation data as supervision signals to iteratively optimize the adversarial interaction between the generator and the discriminator. The loss function quantifies the error between the generated results and the target data, while iterative optimization improves the model's generation accuracy. To ensure optimal performance, the study systematically compares the effects of different generator and discriminator architectures on prediction results and conducts hyperparameter analyses to identify the optimal model configuration. Once the training is complete, the model can be applied for rapid simulation and prediction in new building layouts. By inputting parameters such as building layout and ignition location, the model efficiently generates the spatiotemporal distributions of temperature fields and smoke diffusion throughout the fire evolution process. 3 Main methods This section provides a detailed description of the construction of the fire database for residential buildings, the preprocessing of fire scenario data, and the design of the GAN. 3.1 Construction of the fire database for residential buildings In this study, the fire database is constructed using the FDS, a CFD-based tool ( McGrattan, 2013 ). FDS is widely recognized in the field of fire simulation ( Ye and Hsu, 2022 ; Zhang et al., 2024 ), and its reliability has been validated by numerous studies ( Friday and Mowrer, 2001 ; Hietaniemi et al., 2004 ; Shen et al., 2008 ; Wen et al., 2007 ). A total of 50 architectural design drawings of typical residential buildings were collected and processed to extract wall layout information. An example of a typical building layout is shown in Fig. 2 , while the complete set of building layouts is provided in Appendix Figure A1 . Relevant architectural parameters, such as length, width, area, and room count, are summarized in Table 1 . The parameter ranges for these residential buildings are as follows: length (15.76–37.0 m), width (9.5–23.1 m), total area (168.27–532.8 m 2 ), and number of rooms (10–36). Based on these building layouts, corresponding FDS models were developed, with model parameters configured according to established literature. The primary parameters include fire source settings, ceiling height, and grid resolution, with specific configurations outlined as follows. (1) Fire Source Configuration: The fire source setup plays a crucial role in influencing the fire propagation process and smoke diffusion patterns. 
The configurations in this study were determined based on combustion characteristics and various fire locations: (a) Fire Location: Two typical fire scenarios were defined—one with the fire located at the center of the master bedroom and the other 0.2 m from the wall in the same room. The selection of these locations was inspired by Zeng et al. (2023) , which demonstrated the importance of capturing fire dynamics from both central and wall-adjacent ignition points. The central location represents a scenario with minimal obstruction, while the wall-adjacent location examines the impact of wall-induced flow disturbances on fire growth and smoke propagation. Specific distributions are provided in Figure A1 . Future studies will consider additional fire locations to further enhance the analysis of fire behaviors in complex residential layouts. (b) Transient Heat Release Rate (HRR): The widely used t 2 fire growth model was adopted, in which the HRR reaches a peak of 1000 kW within 150 s and remains constant until 300 s ( Zeng et al., 2023 ) (see the short sketch at the end of this subsection). (c) Combustible Material: The burning surface area was set to 1 × 1 m 2 , with ethanol used as the fuel. Ethanol has a heat of combustion of 25.6 MJ/kg and a radiation fraction of 0.26 ( Zeng et al., 2023 ). (2) Ceiling Height: The ceiling height, which significantly affects temperature development and smoke spread, was set to 3 m for all residential building simulations. (3) Grid Resolution: Grid size is a critical parameter for ensuring simulation accuracy. As shown in Appendix B , a comparison of grid sizes (0.05 m, 0.1 m, and 0.2 m) revealed that a 0.1 m grid achieves an optimal balance between computational efficiency and accuracy. Consequently, all models were configured with a grid size of 0.1 m. (4) Simulation Duration: The total simulation time was set to 300 s ( Zeng et al., 2023 ), covering the primary stages of fire evolution, from ignition to steady-state burning. This study established reasonable values for the key parameters based on fire dynamics principles, with the specific parameter values and their justifications summarized in Table 2 . Using these parameters, a total of 100 fire scenarios were constructed, encompassing 50 building layouts and 2 fire ignition locations. To the best of the authors' knowledge, this represents one of the largest residential building fire databases currently available. The grid count and computation time for each case are provided in Table 1 . Notably, the total computation time required to complete all 100 cases is approximately 274 days. It is important to note that these fire scenarios constitute a fire scene database under specific parameter settings. Although the range of parameter categories is currently limited, this database serves as a foundational framework and a valuable reference for future research. By following the methodology outlined in this study, the parameter range and the number of cases can be further expanded to improve the comprehensiveness and applicability of the fire scenario database.
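The t 2 growth model in item (b) above fully determines the HRR curve shared by all 100 scenarios. A minimal sketch, using only the 1000 kW peak, 150 s rise time, and 300 s duration stated in this section:

```python
import numpy as np

# t-squared fire growth: HRR = alpha * t^2 until the peak, then held constant.
peak_kw, t_peak_s, t_end_s = 1000.0, 150.0, 300.0
alpha = peak_kw / t_peak_s**2          # growth coefficient ≈ 0.044 kW/s^2

t = np.linspace(0.0, t_end_s, 301)     # one sample per second
hrr = np.minimum(alpha * t**2, peak_kw)
print(f"alpha = {alpha:.4f} kW/s^2, HRR at 100 s = {alpha * 100**2:.0f} kW")
```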
Because these two quantities govern occupant safety, this study selects the temperature field and soot visibility field as the primary simulation outputs and extracts slice data at key positions. (a) Temperature slices: Extracted at a height of 2.7 m, aligning with the mounting height of typical sprinkler systems, to facilitate fire detection and fire suppression system response analysis. (b) Soot visibility slices: Extracted at a height of 2 m, consistent with the design standards of typical building smoke exhaust systems ( Zeng et al., 2024 ). (2) Data format conversion To ensure compatibility with deep learning models, the extracted temperature and soot visibility slice data are converted into RGB image format, serving as the neural network's output. The image size is standardized at 256 × 256 pixels to maintain consistent spatial resolution. Additionally, to provide more comprehensive information about fire scenarios, the building layouts and fire ignition locations are included as input data. These features are plotted on the same image (as shown in Fig. 2 ) to enable the neural network to learn fire propagation patterns within various building layouts. (3) Temporal Data Sampling Fires exhibit significant temporal evolution characteristics, making time-series data critical for understanding fire progression. This study adopts a fixed-interval sampling method to extract time-series data from the simulation results. Specifically, one image is recorded every 10 s, forming a continuous dataset that captures the dynamic progression of the fire. This time-series data provides valuable information about the evolution of temperature and smoke diffusion, facilitating detailed fire behavior analysis. Furthermore, it serves as training data with temporal correlations for deep learning models, enabling the prediction of fire development trends. (4) Data Augmentation To expand the dataset and improve the generalization capability of the model, this study employs rotational transformations for data augmentation. Each fire scenario image is rotated by 90°, 180°, and 270°, adding three rotated copies per image and thus quadrupling the effective size of the dataset (a minimal sketch of this step is given at the end of this passage). This approach enhances the model's adaptability to different building orientations and fire locations, thereby improving its prediction accuracy across diverse fire scenarios. Moreover, it strengthens the model's robustness by enabling it to handle variations in fire dynamics under different spatial configurations. 3.3 Design of the GAN GANs have demonstrated remarkable capabilities in a variety of applications ( Wu et al., 2022 ), including image generation and repair ( Aggarwal et al., 2021 ), structural design ( Liao et al., 2021 , 2024 ), apartment floor design ( Zheng et al., 2020b ), indoor airflow prediction ( Kim et al., 2024 ), airflow distribution ( Hu et al., 2024 ), velocity fields around buildings ( Zhang et al., 2022b ), pollutant dispersion fields ( Alanis Ruiz et al., 2025 ), and ground motion generation ( Florez et al., 2022 ; Xu and Chen, 2024 ). Building on these strengths, this study develops a GAN-based deep learning model to predict temperature and soot visibility distributions in residential fire scenarios. The model is inspired by the pix2pix framework ( Isola et al., 2018 ), which was selected for its proven effectiveness in image-to-image translation tasks. Specifically, pix2pix excels in generating high-quality outputs from paired datasets and is well-suited for capturing spatial relationships in complex environments ( Rianto et al., 2025 ; Henry et al., 2021 ).
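As referenced in Section 3.2 (4) above, the sketch below assembles one paired training example and applies the fourfold rotational augmentation; the array shapes match the 256 × 256 RGB format described in the text, but the function and variable names are illustrative assumptions, not the authors' published code. Rotating the condition image and its CFD target together keeps each pair physically consistent.

```python
import numpy as np

def augment_pair(condition, target):
    """Return the original (condition, target) pair plus its 90/180/270-degree
    rotations; k = 0 is the unrotated original, so the dataset grows fourfold."""
    return [(np.rot90(condition, k).copy(), np.rot90(target, k).copy())
            for k in range(4)]

condition = np.zeros((256, 256, 3), dtype=np.uint8)  # layout + ignition + time-stamp image
target = np.zeros((256, 256, 3), dtype=np.uint8)     # temperature or visibility slice as RGB
pairs = augment_pair(condition, target)              # 4 aligned training pairs
```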
These properties of pix2pix align with the study's goal of accurately predicting temperature and soot visibility distributions in small, compartmentalized residential fire scenarios. The model's overall structure is illustrated in Fig. 3 . At the input stage, the model incorporates the building layout, fire ignition location, and fire development time as input data to capture the spatial and temporal characteristics of fire propagation. At the output stage, the model generates the temperature field and soot visibility field at the fire development time, enabling accurate predictions of fire scenarios. To optimize the model's performance, different generator and discriminator architectures are compared to analyze their effects on prediction accuracy. Additionally, hyperparameter tuning is performed during the experiments to determine the optimal network configuration and parameters. Detailed comparative experiments and analyses are presented in Section 4.1 . For hyperparameter settings, aside from the optimized network structure and parameters, all other hyperparameters follow the default configurations of the pix2pix model. The specific parameter settings are summarized in Table 3 . 4 Results and discussion 4.1 Model training Based on the constructed fire scenario database and GAN, this section analyzes and evaluates the network architecture and model hyperparameters. The dataset was divided into training, validation, and testing sets in a ratio of 8:1:1 to identify the optimal network architecture and training parameters. 4.1.1 Comparison of generator and discriminator architectures To evaluate the impact of different combinations of generators (netG) and discriminators (netD) on fire scenario prediction, the following network architectures were tested. a) Generator (netG) types: resnet_9blocks, resnet_6blocks, unet_256, unet_128 b) Discriminator (netD) types: basic (default), n_layers, pixel These generators and discriminators are the network structures recommended within the pix2pix framework ( Isola et al., 2018 ). For the generators, resnet_9blocks and resnet_6blocks consist of 9 and 6 residual blocks, respectively. These residual connections enhance feature extraction and representation, enabling the effective modeling of complex relationships. Similarly, unet_256 and unet_128 are based on the U-Net architecture, which employs skip connections to directly link the encoder and decoder. For the discriminators, the basic discriminator is the default PatchGAN discriminator, which uses a receptive field of 70 × 70 pixels to evaluate spatial authenticity. The n_layers discriminator allows for the customization of the number of convolutional layers, with the default configuration being used in this study. Meanwhile, the pixel discriminator evaluates authenticity strictly at the pixel level, emphasizing fine-grained prediction accuracy; a minimal sketch of this discriminator, together with the generator objective it serves, is given at the end of this passage. The model evaluation metrics used in this study include the Structural Similarity Index (SSIM), Normalized Root Mean Square Error (NRMSE), and Intersection over Union (IoU) ( Liao et al., 2021 ; Bakurov et al., 2022 ; Mentaschi et al., 2013 ; Setiadi, 2021 ). These metrics are employed to evaluate the similarity, error magnitude, and spatial overlap between the predicted results and the ground truth, respectively. Specifically, SSIM quantifies the structural, luminance, and contrast similarity between the predicted results and the ground truth. Its value ranges over [−1, 1], where a value closer to 1 indicates greater similarity.
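As flagged above, the following is a minimal PyTorch sketch of a pixel-level (1 × 1 PatchGAN) discriminator together with the pix2pix-style generator objective it feeds. It paraphrases the public pix2pix reference implementation rather than any code released by the authors; the channel counts are illustrative (ndf defaults to 64 here, whereas Section 4.1.2 later selects 128).

```python
import torch
import torch.nn as nn

class PixelDiscriminator(nn.Module):
    """1x1-convolution PatchGAN: scores real/fake independently at every pixel."""
    def __init__(self, in_channels=6, ndf=64):  # 6 = condition (3) + output (3) channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, ndf, kernel_size=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, kernel_size=1),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, kernel_size=1),  # per-pixel realism logits
        )

    def forward(self, condition, image):
        # pix2pix conditions the discriminator on the (input, output) image pair
        return self.net(torch.cat([condition, image], dim=1))

bce = nn.BCEWithLogitsLoss()  # adversarial loss on the per-pixel logits
l1 = nn.L1Loss()              # reconstruction term against the CFD target
lambda_l1 = 100.0             # pix2pix default; Section 4.1.2 later settles on 60

def generator_loss(d_fake_logits, fake_img, real_img):
    """Combined objective: fool the discriminator everywhere + stay L1-close to FDS."""
    gan_term = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    return gan_term + lambda_l1 * l1(fake_img, real_img)
```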
The calculation formula for SSIM is shown in Equation (1) . NRMSE, on the other hand, is a normalized metric used to measure the error between the predicted and actual values, with smaller values indicating better predictive performance. Its calculation formula is presented in Equation (2) . Finally, IoU assesses the spatial overlap between the predicted and actual regions, with values ranging over [0, 1]. A larger IoU value signifies a greater overlap between the prediction and the ground truth. The calculation formula for IoU is provided in Equation (3) .

(1) $\mathrm{SSIM}(I_{\mathrm{AI}}, I_{\mathrm{FDS}}) = \dfrac{\left(2\mu_{I_{\mathrm{AI}}}\mu_{I_{\mathrm{FDS}}} + C_1\right)\left(2\sigma_{I_{\mathrm{AI}} I_{\mathrm{FDS}}} + C_2\right)}{\left(\mu_{I_{\mathrm{AI}}}^{2} + \mu_{I_{\mathrm{FDS}}}^{2} + C_1\right)\left(\sigma_{I_{\mathrm{AI}}}^{2} + \sigma_{I_{\mathrm{FDS}}}^{2} + C_2\right)}$

(2) $\mathrm{NRMSE}(I_{\mathrm{AI}}, I_{\mathrm{FDS}}) = \dfrac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(I_{\mathrm{AI},i} - I_{\mathrm{FDS},i}\right)^{2}}}{I_{\mathrm{FDS},\max} - I_{\mathrm{FDS},\min}}$

(3) $\mathrm{IoU}(I_{\mathrm{AI}}, I_{\mathrm{FDS}}) = \dfrac{\left|R_{\mathrm{AI}} \cap R_{\mathrm{FDS}}\right|}{\left|R_{\mathrm{AI}} \cup R_{\mathrm{FDS}}\right|}$

where I_AI and I_FDS denote the AI-generated images (the results of the proposed GAN-based method) and the FDS-simulated temperature (or soot visibility) images, respectively; the constants C_1 and C_2 are introduced to avoid division by zero; μ, σ², and σ_{I_AI I_FDS} denote the mean, the variance, and the covariance, respectively; and n is the total number of pixels in the image. For IoU, R_AI and R_FDS denote the regions where the temperature (or soot visibility) exceeds a threshold (set to 128) in the AI-generated and FDS-simulated results, respectively, while |R_AI ∩ R_FDS| and |R_AI ∪ R_FDS| represent the areas of their intersection and union. The evaluation metrics are calculated as the average similarity metrics across all time steps for all cases in the test set, providing a comprehensive assessment of the model's prediction accuracy; a NumPy sketch of these three metrics is given at the end of this passage. Table 4 summarizes the impact of different network architecture combinations on the prediction results, including SSIM, NRMSE, IoU, and training time. Based on the results presented in Table 4 , several key observations can be made. First, the pixel discriminator achieves the highest values for IoU and SSIM, indicating its superior performance in preserving structural details in the predicted fire scenarios. Furthermore, the combination of unet_256 as the generator and the pixel discriminator as the discriminator achieves the highest IoU (0.9573) and the lowest NRMSE (0.0137). This demonstrates that this specific combination generates fire scenarios that are most similar to the ground truth data. As a result, this study identifies unet_256 as the optimal generator and the pixel discriminator as the optimal discriminator. The detailed network architecture for this combination is illustrated in Fig. 4 . 4.1.2 Network hyperparameter analysis After identifying the optimal network architecture (unet_256 + pixel), further optimization of key network hyperparameters was performed to enhance the model's prediction accuracy. The optimization focused on the following parameters. a) The learning rate (lr) and L1 loss weight (lambda_L1), where lambda_L1 controls the contribution of the L1 loss, which measures the absolute difference between the predicted and target images. This parameter was carefully tuned to balance the trade-off between image reconstruction accuracy and overfitting, ensuring the model effectively captures spatial and temporal features relevant to fire scenarios; b) The number of generator filters (ngf) and the number of discriminator filters (ndf). The experimental results, presented in Table A1 , highlight the impact of various hyperparameter combinations on the evaluation metrics, including SSIM, IoU, and NRMSE.
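As a concrete companion to Eqs. (1)–(3), here is a minimal NumPy sketch of the three metrics on single-channel arrays; the paper does not publish evaluation code, so this is our illustration. The single-window SSIM below omits the local sliding window used by standard implementations (e.g., skimage.metrics.structural_similarity), and the constants are the usual (0.01·255)² and (0.03·255)².

```python
import numpy as np

def ssim_global(ai, fds, c1=6.5025, c2=58.5225):
    """Single-window SSIM, Eq. (1)."""
    mu_a, mu_f = ai.mean(), fds.mean()
    cov = ((ai - mu_a) * (fds - mu_f)).mean()
    return ((2 * mu_a * mu_f + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_f**2 + c1) * (ai.var() + fds.var() + c2))

def nrmse(ai, fds):
    """Range-normalized root mean square error, Eq. (2)."""
    return np.sqrt(np.mean((ai - fds) ** 2)) / (fds.max() - fds.min())

def iou(ai, fds, threshold=128):
    """Overlap of super-threshold regions, Eq. (3); threshold 128 as in the text."""
    r_ai, r_fds = ai > threshold, fds > threshold
    return np.logical_and(r_ai, r_fds).sum() / np.logical_or(r_ai, r_fds).sum()

ai = np.random.randint(0, 256, (256, 256)).astype(float)   # stand-in predicted field
fds = np.random.randint(0, 256, (256, 256)).astype(float)  # stand-in ground truth
print(ssim_global(ai, fds), nrmse(ai, fds), iou(ai, fds))
```

In practice these scores would be averaged over every time step of every test case, as described above.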
Analyzing the trends in Table A1 , the optimal values for the learning rate, L1 loss weight, and network width parameters were determined to maximize the model's predictive performance. The final hyperparameter configuration is as follows: lr = 0.0002, lambda_L1 = 60, ngf = 256, ndf = 128. 4.2 Model evaluation 4.2.1 Overall evaluation on the test set To further validate the prediction accuracy and generalization capability of the model, 10 cases from the test set were analyzed. SSIM, NRMSE, and IoU were used as evaluation metrics. The experimental results are summarized in Tables 5 and 6 . Table 5 presents the evaluation results for temperature-related metrics, while Table 6 focuses on soot visibility. Additionally, comparisons of the model's predicted fire dynamics with the FDS simulation results are provided as animations (videos are available at https://github.com/QingleCheng/FireSimulation ). From the results in Table 5 , several conclusions can be drawn. First, the model achieved consistently high SSIM values, with all cases exceeding 0.93. This indicates a strong structural consistency between the generated images and the actual fire scenarios. Second, the NRMSE values were uniformly low, with all cases below 0.02, demonstrating minimal pixel-level prediction errors. Finally, the IoU scores were consistently high, with all cases exceeding 0.89. This reflects excellent spatial alignment between the predicted and actual temperature distributions, highlighting the model's ability to accurately predict the spatial dynamics of fire-related heat propagation. Regarding the soot visibility results summarized in Table 6 , similar trends were observed. Overall, the model exhibits strong predictive performance in capturing the dynamic evolution of fire scenarios. The predicted temperature and soot visibility distributions are highly consistent with the FDS simulation results, underscoring the model's reliability and accuracy. 4.2.2 Typical case analysis To further evaluate the model's prediction performance, three representative cases were randomly selected from the test set. The temperature fields and soot visibility distributions at key time steps (100 s, 200 s, and 300 s) were compared with the FDS simulation results. These comparisons are presented in Figs. 5–7 . From Figs. 5–7 , several observations can be made. First, the generated temperature fields demonstrate strong consistency with the FDS results, both in overall trends and in local details. Second, the generated soot visibility distributions closely align with the FDS results, accurately capturing the spatial distribution and concentration gradients of fire smoke. This indicates that the model effectively learns the characteristics of fire smoke flow dynamics. However, it is worth noting that the model exhibits some errors in certain detailed areas, which require further investigation in future research to improve prediction accuracy and robustness. To further quantitatively analyze the prediction accuracy of the proposed model, the temperature and soot visibility predictions at key monitoring locations for the three representative cases were compared. Monitoring points A and B were specifically chosen based on their positions relative to the fire source: point A is located close to the fire source to capture the immediate effects of the fire, while point B is positioned farther away to observe the propagation of temperature and soot over distance.
The corresponding variations in temperature and soot visibility over time were plotted as curves in Figs. 8–10 . From the curves in these figures, several conclusions can be drawn. First, the temperature variation trends predicted by the model are generally consistent with the results from the benchmark simulation tool (FDS). Specifically, the initial heating and final stabilization phases of the temperature curves align closely with the FDS results, exhibiting minimal prediction error. Second, the soot visibility variation trends also match the FDS results well, accurately capturing the rapid decrease in soot visibility during the early stages and the subsequent gradual reduction. These trends reflect the key dynamic characteristics of smoke behavior. 4.2.3 Computational efficiency analysis In addition to prediction accuracy, computational efficiency is a critical metric for evaluating the model. Traditional CFD-based FDS simulations require an average of 57.6 h per case. In contrast, the proposed GAN-based prediction model completes a fire scenario prediction in just 2.56 s. This represents a computational speed-up of approximately 80,000 times (57.6 h ≈ 2.07 × 10⁵ s; 2.07 × 10⁵ s / 2.56 s ≈ 8.1 × 10⁴), as tested on an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90 GHz and an NVIDIA GeForce RTX 4090. In summary, the proposed GAN-based prediction method not only achieves high prediction accuracy but also significantly enhances computational efficiency. This enables fire scenario predictions to be completed in an extremely short time, providing rapid assistance for building design, fire protection decision-making, and other practical applications. 4.3 Limitations Despite the promising results achieved in this study, the proposed method has certain limitations that should be addressed in future work. (1) The current fire simulation database primarily focuses on residential buildings with specific layout types and compartmentalized structures. While this allows for accurate predictions in these scenarios, the applicability of the model to larger, open spaces or complex multi-functional buildings (e.g., shopping malls or airports) is limited. Expanding the database to include a broader range of building layouts and fire scenarios is necessary to enhance the model's generalization capability. (2) The model's performance is highly dependent on the quality and representativeness of the training data. The input data distribution, which currently relies on simulated fire scenarios, may not fully capture the variability and unpredictability of real-world fires. Additionally, any biases in the training data could limit the accuracy and reliability of the model. Incorporating richer, multimodal data sources, such as fire detector response data and video surveillance data, could improve the model's robustness and accuracy. (3) While the model utilizes advanced deep learning techniques to predict fire dynamics, the physical principles governing fire behavior (such as heat conduction, convection, radiation, and smoke diffusion) are not explicitly integrated. This omission may reduce the scientific rigor and reliability of the predictions, particularly in scenarios involving complex fire and smoke propagation. Future work should aim to integrate these dynamic mechanisms to better align the model with established fire dynamics principles. (4) Although the proposed model demonstrates strong performance in simulated environments, its generalizability to real-world scenarios remains untested.
Real-world factors such as human behavior during evacuation, external ventilation effects, and unanticipated obstacles (e.g., furniture or debris) are not accounted for in the current approach. Future studies should validate the model using experimental or real-world fire data to assess its practicality and reliability. (5) The proposed method has yet to be integrated into real-world fire emergency systems. The current implementation is computationally efficient for offline analysis, but its application in real-time fire monitoring and decision-making systems has not been explored. Developing an intelligent fire simulation and decision-support platform will be critical for advancing the practicality and real-time applicability of this method. 5 Conclusions and future work This study proposes a fire scenario simulation method for residential buildings based on a GAN. Addressing the limitations of existing fire simulation methods in terms of computational efficiency and practical application, the proposed model is capable of rapidly generating high-quality fire temperature and soot visibility distributions. The research was conducted by constructing a fire simulation database containing 50 typical residential building layouts and developing a GAN model for fire scenario prediction using this dataset. The key conclusions are as follows. (1) A comprehensive fire simulation database was established, comprising 50 typical residential building layouts that represent common configurations. The database includes multidimensional data on temperature and soot visibility distributions during fire development, providing a reliable foundation for model training. (2) A fire scenario simulation method based on a GAN was developed. By using building layouts, fire ignition locations, and fire development times as inputs, the model predicts the corresponding temperature and soot visibility distributions. Experimental results indicate that the predicted results achieved an average SSIM of 95.7 % compared to CFD simulation results, with a computational efficiency improvement of approximately 80,000 times over traditional CFD methods. (3) The proposed method enables rapid and accurate simulation of fire scenarios in residential buildings, demonstrating its potential applications in building fire protection design and disaster emergency response. It provides efficient technical support for practical fire prevention and control strategies. CRediT authorship contribution statement Qingle Cheng: Writing – review & editing, Writing – original draft, Resources, Funding acquisition, Data curation, Conceptualization. Xuyang Wang: Visualization, Validation, Data curation. Jin Zhuang: Data curation. Wenjie Liao: Writing – review & editing. Linlin Xie: Writing – review & editing, Supervision. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement The authors are grateful for the financial support received from the National Key R&D Program of China (No. 2023YFC3805800). The authors would like to thank Shandy Rianto from Tsinghua University for his valuable suggestions. Appendix A Building layout and hyperparameter analysis Fig. A1 Building layouts and fire locations (a total of 50 building layouts, each with two fire location settings).
Table A1 Prediction similarity and training time for test sets under different hyperparameters.

Parameter  | Value     | SSIM   | NRMSE  | IoU    | Time (h)
lambda_L1  | 0         | 0.5628 | 0.0228 | 0.7368 | 3.80
lambda_L1  | 10        | 0.9548 | 0.0138 | 0.9585 | 3.67
lambda_L1  | 30        | 0.9564 | 0.0137 | 0.9594 | 3.78
lambda_L1  | 60        | 0.9576 | 0.0136 | 0.9601 | 3.58
lambda_L1  | 100       | 0.9572 | 0.0137 | 0.9573 | 3.83
lambda_L1  | 150       | 0.9559 | 0.0137 | 0.9564 | 3.80
lr         | 0.001     | 0.9564 | 0.0137 | 0.9521 | 2.57
lr         | 0.0001    | 0.9556 | 0.0140 | 0.9505 | 4.23
lr         | 0.00015   | 0.9563 | 0.0138 | 0.9563 | 3.88
lr         | 0.0002    | 0.9572 | 0.0137 | 0.9573 | 3.83
lr         | 0.0005    | 0.9574 | 0.0137 | 0.9571 | 3.93
lr         | 0.00001   | 0.9348 | 0.0172 | 0.9085 | 4.15
lr         | 0.00005   | 0.9517 | 0.0148 | 0.9432 | 1.93
ngf + ndf  | 64 + 64   | 0.9572 | 0.0137 | 0.9573 | 5.83
ngf + ndf  | 64 + 128  | 0.9570 | 0.0137 | 0.9579 | 4.43
ngf + ndf  | 128 + 64  | 0.9600 | 0.0134 | 0.9630 | 4.80
ngf + ndf  | 128 + 128 | 0.9597 | 0.0134 | 0.9612 | 5.00
ngf + ndf  | 256 + 64  | 0.9625 | 0.0130 | 0.9642 | 10.62
ngf + ndf  | 256 + 128 | 0.9627 | 0.0129 | 0.9664 | 10.55

∗ While one parameter is varied, the remaining parameters take the default values of the pix2pix model: lambda_L1 = 100, lr = 0.0002, ngf = 64, and ndf = 64.

Appendix B Sensitivity analysis of grid size in CFD simulation In CFD simulations, grid size is a critical parameter that significantly affects both computational efficiency and numerical accuracy. Selecting an appropriate grid size is essential to strike a balance between these two factors. In this study, a typical building layout (Case 77) was randomly chosen to analyze the sensitivity of simulation results to grid size. The grid sizes tested were 0.05 m, 0.1 m, and 0.2 m, which are commonly used in fire simulations. Simulations were conducted to evaluate temperature and soot visibility at different locations, and the results are presented in Figures B1 and B2 . Additionally, the computational times required for each grid size were recorded, showing that the simulation took 2.81 h for a grid size of 0.2 m, 26.22 h for 0.1 m, and 550 h for 0.05 m. The results reveal that the influence of grid size on temperature simulation is most evident in high-temperature regions. Larger grids, such as 0.2 m, showed slight deviations in capturing temperature peaks, whereas the medium grid (0.1 m) and the smaller grid (0.05 m) produced similar results, accurately capturing the temporal variations of temperature. For soot visibility, the trends simulated across different grid sizes were generally consistent. However, the larger grid size of 0.2 m exhibited some deficiencies in capturing the details of the smoke attenuation phase, while the medium grid (0.1 m) and the finer grid (0.05 m) yielded nearly identical results, effectively representing the dynamic variations in soot visibility. This analysis demonstrates that a grid size of 0.1 m provides an optimal balance between numerical accuracy and computational efficiency. It accurately captures the variations in both temperature and soot visibility while significantly reducing computational resource requirements compared to the finer grid size of 0.05 m. Consequently, a grid size of 0.1 m was selected for the simulations conducted in this study. Fig. B1 Effect of grid size on simulated temperature results at different locations. Fig. B2 Effect of grid size on simulated soot visibility results at different locations.
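As a rough plausibility check on the timings reported in Appendix B (our own back-of-the-envelope reasoning, not a claim from the paper), the cost of an explicit CFD solver under uniform mesh refinement grows roughly as dx⁻⁴: the cell count scales as dx⁻³ and the CFL-limited time step shrinks linearly with dx.

```python
# Observed Appendix B wall times: grid size (m) -> hours for Case 77.
observed = {0.2: 2.81, 0.1: 26.22, 0.05: 550.0}

# Halving dx => 8x cells and ~2x time steps => ~16x cost (dx^-4 scaling).
for coarse, fine in [(0.2, 0.1), (0.1, 0.05)]:
    predicted = observed[coarse] * (coarse / fine) ** 4
    print(f"{fine} m: predicted ~{predicted:.0f} h, observed {observed[fine]:.0f} h")
# -> 0.1 m: ~45 h vs 26 h; 0.05 m: ~420 h vs 550 h (right order of magnitude)
```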
REFERENCES:
1. AGGARWAL A (2021)
2. ALWAKED R (2021)
3. ALANIS RUIZ C (2025)
4. ALPERT R (1972)
5. BAKUROV I (2022)
6. CHENG Q (2024)
7. CHENG Q (2025)
8. CHENG Q (2025)
9. CHEUNG W (2025)
10. (2024)
11. DECKERS X (2013)
12. FLOREZ M (2022)
13. FRIDAY P (2001)
14. HENRY J (2021)
15. HESKESTAD G (1979)
16. HIETANIEMI J (2004)
17. HU C (2024)
18. (2018)
19. ISOLA P (2018)
20. JI W (2024)
21. JOHANSSON N (2021)
22. KHAN A (2022)
23. KIM Y (2024)
24. LATTIMER B (2020)
25. LIAO W (2021)
26. LIAO W (2024)
27. LU T (2025)
28. MCGRATTAN K (2013)
29. MCGRATTAN K (2016)
30. MCGRATTAN K (2012)
31. MENTASCHI L (2013)
32.
33. NOVOZHILOV V (2001)
34. QUINTIERE J (2016)
35. RIANTO S (2025)
36. ROH J (2009)
37. SETIADI D (2021)
38. SHEN T (2008)
39. SU L (2021)
40. TILLEY N (2011)
41. WEN J (2007)
42. WU A (2022)
43. XU Z (2024)
44. YE Z (2022)
45. ZENG Y (2023)
46. ZENG Y (2024)
47. ZHANG Y (2024)
48. ZHANG T (2022)
49. ZHANG B (2022)
50. ZHANG X (2024)
51. ZHANG Y (2025)
52. ZHENG H (2021)
53. ZHENG H (2020)
54. ZHENG H (2020)
|
10.1016_j.jciso.2024.100104.txt
|
TITLE: Thoughts on specific ion effects
AUTHORS:
- Leontidis, Epameinondas
ABSTRACT:
Graphical abstract
BODY:
Specific Ion Effects (SIE) have fascinated the community of Physical Chemistry ever since they were first highlighted by Franz Hofmeister [ 1 ]. Work on SIE has continued, on and off, for several decades. Following the most recent renaissance, which can be traced to the seminal paper of Ninham and Yaminsky [ 2 ], who proposed dispersion forces as the major mechanism behind SIE, the area is currently facing a scientific “slump” once more. We live, after all, in an era during which fundamental investigations are not strongly favored by funding bodies. The study of SIE, however, provides an important opportunity for Physical Chemistry, and Colloid and Interface Science, to become major forces in Biology and Physiology, as rather few realize nowadays [ 3 , 4 ]. In addition, SIE still remain a major puzzle of Physical Chemistry. The term “Specific Ion Effects” currently envelops all phenomena in which ions act in individual ways not dictated by their charge. This is a rather unfortunate definition, since broad generalizations are not always the best ways to approach hard problems. The following points will illustrate the difficulty: (a) The study of SIE focused for a long time on the Hofmeister or lyotropic series of anions and cations. Although in many instances the specificity of “lyotropic” anions tends to behave in an orderly and prescribed fashion, this is not so true for the cationic series [ 5 ]. It is time to realize that there are many ion types that will not find their place in simple lyotropic sequences. Transition metal cations, lanthanides and actinides, “hydrophobic ions”, superchaotropic ions, surfactants and ionic liquids are certainly hard to place on the same shelf where SIE are concerned [ 6 ]. (b) Should we expect that the same mechanisms govern SIE in a bulk solution and at surfaces and interfaces? Marcus [ 7 ] argued some time ago that this may not be the case. Why should we expect that activity coefficients and conductivities of electrolytes, or the perturbation of water structure by ions, may be understood on the same basis as electrolyte effects on the cmc of surfactants, the zeta potentials of colloid particles, or the forces between lipid bilayers? In bulk phenomena, there are only two actors that need be considered, the electrolyte and the solvent. Even then, the problem of mixed salts is almost impossible to deal with without resorting to empirical correlations. Surfaces and interfaces have their own individuality, as was demonstrated in important work by Schwierz et al., who showed that the hydrophobicity or hydrophilicity of a surface can invert the lyotropic series [ 8 ]. Similarly, the interaction of ions with a “patchy” protein surface is too complicated to be treated in a simplistic “binary” fashion [ 9 ]. (c) How similar are SIE in water and in other solvents? Very few researchers, mostly from the group of Craig in Australia, still address this fundamental issue [ 10 ]. The long effort to understand the lyotropic series is strongly reminiscent of the effort to model real gases based on ideal gases. Recall the approach of van der Waals, who postulated that a real gas may be described by only two individual parameters, one defining the molecular size and the other intermolecular interactions. In many SIE models, one clearly discerns the wish to find a single molecular parameter that would be responsible for all or most of the ion specificity.
In this respect, experimental measures of SIE were often correlated with various parameters, sometimes plausible or “obvious” (hydration free energy or entropy, ionic size, partial molar volume, excess polarizability), and sometimes indirect and even eccentric (viscosity B-coefficient, surface tension increment) [ 11 ]. This approach is based on the idea that there is only one specific ionic property responsible for the behavior of ions under all circumstances, either because only one acting “force” is dominant, or because there is a balance of “forces”, all depending on the same single parameter. This is the case for the dispersion parameter favored by Ninham and coworkers [ 12 ], the ion adsorption energy of the Bulgarian School [ 13 ], or the partitioning parameter based on the ionic size, assumed to affect two energy terms in different ways [ 14 ]. In most cases, it is clear that not all ions conform to the orderly behavior demanded by the universal parameter [ 14 ], with exceptions, inversions and other anomalies appearing. The same line of reasoning appears, albeit in the bulk electrolyte-solution context, in the proposition of a single isodesmic hydration parameter by Zavitsas [ 15 ], or of similar parameters proposed by Heyrovska and by Wexler [ 15 ], which – together with ion-pairing – appear to render the venerable activity coefficient theories redundant. The concept of ion-pairing (formation of contact ion pairs) is behind the law of “matching water affinities” of Collins [ 3 ], which was later generalized to treat ionic surfactants [ 16 ], but cannot be easily generalized to SIE on nonionic systems, or to ions other than those of the lyotropic series. However, this concept is still a major tool for understanding SIE in simple systems, and it recognizes the interplay of ion–ion, ion–solvent and, in part, ion–surface interactions. The quest for a proper SIE parameter continues today, as was demonstrated by the recent impressive introduction of the radial charge density of the ions by Craig and coworkers [ 17 ]. Following the progress in the last 20–30 years, most researchers would tend to agree that the basic electrostatic picture, taught in every introductory Physical Chemistry course, is inadequate, or downright wrong, and should be amended. Ionic size, specific ionic hydration, excess polarizability, and the potential hydrophobic character should all become a part of the larger picture [ 6 , 18 ]. In a series of important papers, Ninham and Duignan examined the interplay of ionic size, hydration (cavity), and dispersion/induction interactions, albeit in a continuum solvent picture, and showed how hard it is to (a) exclude some of these effects from consideration, and (b) properly account for everything from first principles [ 19 ]. The issues of surface hydrophobicity/hydrophilicity vs ionic chaotropicity have been stressed by Netz and coworkers [ 8 ]. The role of the surface and the differences between hard and soft surfaces have been discussed [ 6 , 20 ]. The fact that soft-matter interfaces, being “active” [ 21 ], respond to and are modified by “sticky” ions [ 22 ] is another serious consideration. It is equivalent to continuously changing the electrostatic boundary condition upon ion adsorption to a surface. Soft-matter structure may in fact be largely disrupted, modified or even destroyed in the presence of strongly chaotropic or hydrophobic ions [ 6 , 20 ], but also in the presence of multivalent ions [ 23 ].
It has also been demonstrated that SIE arising from heavy metal cations cannot be understood without considering not only surface complexation, but also bulk complexation [ 24 ]. There is no doubt that much has been accomplished in the past few decades and SIE have been partially demystified. There remain important challenges, however, and a continuing need for strong fundamental work. Several possible research avenues can be identified, both on the theoretical and on the experimental side: (a) Introducing SIE into workable double-layer models that break the continuum conundrum, treat all forces present on an equivalent basis [ 2 , 4 ], and account for specific hydration and for ion-pairing/complexation in the bulk. (b) Focusing on superchaotropes [ 25 ] and other types of structure-disrupting ions. Ionic liquids appear to be a new avenue that is strongly exploited in biophysical chemistry and biotechnology [ 16 , 26 ]. (c) Establishing accurate measures of ion adsorption at various interfaces, using all possible tools from sophisticated photoelectron spectroscopy, scattering, X-ray fluorescence, diffraction and reflectivity experiments [ 27 ] to ingenious classical thermodynamic methods [ 28 ]. This is critical information. After all, vibrational sum-frequency generation (VSFG) and second-harmonic generation (SHG) measurements at the air-water interface of electrolyte solutions [ 29 ] proved the presence of ions at the air-water interface, verifying simulation predictions [ 30 ] that contradicted older theory. (d) Examining the coupled effect of the presence of nanobubbles and SIE [ 4 , 31 ]. The potential presence of such “internal interfaces” in solutions and at surfaces will enrich SIE research with significant insights obtained for the surfaces of electrolyte solutions [ 32 , 33 ] and for bubble-bubble coalescence phenomena [ 34 ]. (e) Studying the solvent effect on SIE. It is not surprising that most work on SIE involves water. Very often, SIE are found to be much less pronounced in other solvents. However, any theoretical/mechanistic understanding that strives for completeness must cover solvent effects; theories and mechanisms should not be water-specific, after all. Or should they? CRediT authorship contribution statement Epameinondas Leontidis: Conceptualization, Writing – original draft, Writing – review & editing. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. KUNZ W (2004)
2. NINHAM B (1997)
3. COLLINS K (1997)
4. COLLINS K (2012)
5. NINHAM B (2017)
6. NINHAM B (2017)
7. CACACE M (1997)
8. LO NOSTRO P (2012)
9. LEONTIDIS E (2017)
10. MARCUS Y (2009)
11. SCHWIERZ N (2010)
12. SCHWIERZ N (2013)
13. PEGRAM L (2008)
14. OKUR H (2017)
15. GIBB B (2019)
16. MAZZINI V (2018)
17. GREGORY K (2019)
18. MIRANDA-QUINTANA R (2021)
19. MANET S (2010)
20. CHIALVO A (2021)
21. BOSTROM M (2003)
22. IVANOV I (2011)
23. LEONTIDIS E (2009)
24. ZAVITSAS A (2016)
25. ZAVITSAS A (2022)
26. HEYROVSKA R (2013)
27. WEXLER A (2019)
28. VLACHY N (2009)
29. GREGORY K (2021)
30. GREGORY K (2022)
31. LEONTIDIS E (2016)
32. DUIGNAN T (2013)
33. DUIGNAN T (2014)
34. LEONTIDIS E (2014)
35. ZEMB T (2004)
36. COLLINS K (2005)
37. NAYAK S (2022)
38. SAHA R (2022)
39. BERA M (2015)
40. SOFRONIOU C (2019)
41. STHOER A (2022)
42. UYSAL A (2023)
43. ASSAF K (2018)
44. HOHENSCHUTZ M (2023)
45. SHUKLA S (2020)
46. BEŠTER-ROGAČ M (2023)
47. HAN Q (2023)
48. OTTEN D (2012)
49. ROCK W (2018)
50. YOO S (2022)
51. PEYCHEV B (2023)
52. PETERSEN P (2005)
53. JUNGWIRTH P (2002)
54. TAN B (2020)
55. ZHANG H (2020)
56. LEVIN Y (2009)
57. JUBB A (2012)
58. SEKI T (2023)
59. HENRY C (2010)
60. DUIGNAN T (2021)
|
10.1016_j.breast.2022.03.003.txt
|
TITLE: Experiences of health professionals treating women diagnosed with cancer during pregnancy and proposals for service improvement
AUTHORS:
- Stafford, Lesley
- Sinclair, Michelle
- Gerber, Katrin
- Saunders, Christobel
- Ives, Angela
- Peate, Michelle
- Lippey, Jocelyn
- Umstad, Mark P.
- Little, Ruth
ABSTRACT:
Objective
To examine the experiences, needs, and perceptions of health professionals (HPs) treating women diagnosed with cancer during pregnancy (gestational cancer, GC).
Methods
Interviews were undertaken with Australian HPs who had treated women diagnosed with GC over the previous five years. HPs were recruited via social media, and professional and community networks. Questions focussed upon HPs’ confidence caring for these women, whether current guidelines/training met their needs, psychological impacts of care provision, and service gaps. Interview data were analysed thematically.
Results
Twenty-seven HPs were interviewed; most were oncology HPs (22/27) with experience caring for women with gestational breast cancer and 13 had a breast-specific clinical focus (e.g. breast surgeon). Many were currently treating women with GC (48%) or had done so in the last 6–12 months (29.6%). Four themes were identified: A clinically complex case, Managing multi-disciplinary care, Centralised resources for health professionals, and Liaison, information and shared experiences for women. HPs found this population personally challenging to treat. They reported initial uncertainty regarding treatment due to infrequent exposure to GC, limited resources/information, and the need to collaborate with services with which they did not usually engage. Solutions offered include centralised resources, clinical liaison/care coordinators, and connecting women with GC with peer support.
Conclusions
HPs perceived women with GC as a vulnerable, complex population and experienced challenges providing comprehensive care, particularly when treatment was delivered at geographically separated hospitals. Systemic changes are needed to optimise comprehensive care for these women. Their insights can guide the development of more integrated cancer and obstetric care, and better HP support.
BODY:
1 Introduction Comprehensive cancer care is complex and requires the coordination of services from multidisciplinary teams of health professionals (HPs), which may be located at different sites and use different models of care [ 1 ]. The quality of care coordination depends on factors like location, size and type (public/private) of treatment facility; service availability; type/stage of cancer; and access to dedicated cancer nurses/coordinators [ 1–4 ]. Providing comprehensive care is more complicated when cancer is diagnosed during pregnancy (gestational cancer, GC), a reality for 137.3 per 100,000 pregnancies, of which 20% are breast-cancer related [ 5 ]. Indeed, breast cancer is the most common form of malignancy in pregnant women [ 6 ]. In Australia, cancer care occurs within a mixed public-private health service model. All patients have access to a universal Medicare system (i.e., free or low-cost pharmacy, primary and hospital care that is taxpayer funded) [ 7 ]; individuals can also hold private health insurance that provides access to care in private or public hospitals; in the latter case, individuals are admitted as private patients. Approximately 58% of Australians have private health insurance [ 8 ]. Many patients access both private and public care simultaneously or move between the two systems for different components of their care. Approximately 300 healthcare organizations have a dedicated cancer service in either the private or public sector and these collaborate to improve coverage of cancer services across the country. The public sector, at state level, is responsible for coordinating cancer prevention, screening programs and providing comprehensive cancer care for all patients [ 1 ]. Private sector services are less likely to provide comprehensive cancer services and more likely to offer only one or two types of service (e.g., radiotherapy). Like oncology services, Australian maternity services are delivered through a mix of public and private services [ 9 ]. There are multiple models of maternity care provision that may involve combinations of private and/or public obstetricians, midwives, and general practitioners. In 2018, the majority (96%) of Australian women gave birth in hospital and of these 75% birthed in the public system [ 10 ]. Though there are overarching national strategic directions to support Australia's high-quality maternity [ 10 ] and oncological [ 7 ] care systems and enable improvements in line with contemporary practice, evidence and international developments, there are no national guidelines concerning oncological management during pregnancy. Larger metropolitan tertiary hospitals may offer both oncological and maternity care, but few specialist cancer centres have an adjacent or co-located obstetric facility. This can be a challenge in the provision of care for women with GC, where coordinated care is required from a larger-than-usual range of HPs, including obstetric and maternal-fetal medicine specialists, working together under sometimes challenging, time-sensitive and rapidly changing circumstances. They may face conflicting ethical obligations and difficulties ensuring shared decision-making and informed consent are achieved, particularly regarding termination of pregnancy or pregnancy continuation with treatment [ 11 ]. The best oncology treatment for the mother may compromise the obstetric outcome or the fetus, and vice versa [ 12 ].
Consequences of treatments such as mastectomy or chemotherapy (and its associated toxicity) may limit opportunities and/or the ability to breastfeed [ 13 ]. Any conflicting responsibilities require coordination across disciplines and systems that were not designed with collaboration in mind; and given the relatively small number of women with GC, opportunities for HPs to establish efficient collaboration processes are limited. The role of comprehensive cancer care coordination in optimizing patient outcomes is recognized in Australia [ 7 ], but there is little research within the context of GC. The limited information available suggests that meeting the challenges of the complexity of care inherent in this group is central to perceived quality of care and wellbeing [ 4 , 14 , 15 ]. No previous study has investigated the experiences of the HPs treating this population; perspectives which are needed to better understand the barriers to comprehensive GC care. This study therefore aimed to explore HPs' experiences; assess their personal and professional capacity to meet this population's needs; determine whether current guidelines and training meet HPs' professional needs; and identify areas for improvement. 2 Materials and methods This research utilised data collected for the ‘Experiences of Pregnant Women with Cancer: Exploring Parenting and Mental Health Needs’ (INTEGRATE) study, which examined the healthcare experiences and supportive care needs of women with GC, their partners and HPs treating this population. Ethics approval was received from The Royal Women's Hospital Research Ethics Committee (ID#18/25), with all participants providing informed consent. Findings on women's experiences and additional methodological details are published elsewhere [ 15 , 16 ]. Methodology details specific to collection of information from HPs were as follows. HPs were eligible if they had clinical experience with women with GC in the last five years within Australia. HPs could belong to any discipline in oncology, obstetrics, or mental health. Nationwide recruitment included advertisements and emails (see Appendix A and B ) disseminated by the study team, professional and community networks, and social media. Advertisements provided a weblink to a participant information webpage where eligibility was self-assessed. HPs were then contacted to confirm their eligibility, obtain consent, and arrange a suitable time for a data collection interview. No compensation was offered for study participation. Representatives from all relevant disciplines and sub-specialities were invited to participate and no a priori sample size was set. Audio-recorded, semi-structured telephone interviews were utilised, with relevant professional and demographic information collected prior to the audio-recording (see Table 1 ). The interview guide (see Appendix A and B ) was designed by the multidisciplinary research team to explore HPs' experiences treating this population, including whether current guidelines and training met their needs; the psychological impact of caring for this population; and any gaps in the provision of services. Interviews were conducted by psychologists with extensive qualitative research experience who ensured that the relevant content was covered in each interview. In this study, GC was defined as cancer diagnosed during pregnancy (not postpartum), excluding molar pregnancies or trophoblastic disease.
Following verbatim transcription, interview data were analysed thematically using Braun and Clarke's method [ 17 ] in NVivo 12 software. Trustworthiness was enhanced using Nowell and colleagues' principles [ 18 ]. The interviewers frequently shared field notes and, prior to formal coding, identified initial impressions and codes relating to service gaps. One fifth of the interviews were independently coded by two other study members to identify further, tentative codes. All codes were then discussed with the lead investigator until consensus on preliminary codes was reached. These codes were applied to the remaining transcripts and grouped into potential themes which were revisited against lower order codes and the original dataset. Data collection continued until saturation was achieved and no new themes emerged. Final themes and sub-themes were refined, and illustrative quotes were identified. 3 Results Twenty-seven HPs were interviewed from five states across Australia, from the disciplines of obstetrics, oncology and allied/mental health. Nearly half were currently treating women with GC. Of the 27, 22 (81%) had experience caring for women with gestational breast cancer (GBC) and 13 had a breast-specific clinical focus (e.g., breast surgeon, breast care nurses). All except one HP practised in a major city. On average, interviews lasted 44.3 min (range = 24.9–64.0, SD = 10.6). For details of the sample description, see Table 1 . 4 Themes Four themes were identified and are detailed below, with illustrative quotes edited for clarity. The two inductive themes included: ‘A clinically complex case’ and ‘Managing multi-disciplinary care’. The two deductive themes included: ‘Centralised resources for HPs’ and ‘Liaison, information and shared experiences for women’. 4.1 Inductive themes 4.1.1 A clinically complex case All HPs emphasised that women with GC are a particularly vulnerable population with complex needs that are harder to meet compared with most other patients. Clinicians highlighted that each woman's management had unique challenges and described the need to be flexible and adaptive while balancing the constant risk to the mother with risk to the fetus, with compromise often necessary. Many HPs described inexperience and uncertainty balancing this risk, especially when it was their first time treating a woman with GC. “… it's not a situation that you encounter all that often that you'd know all the facts and figures. So often you have to go and really read things and look and again if the treatment you want to give is going to cause an excessive risk … it's about where you prioritize mother's cancer outcome versus the mother's pregnancy outcome and fetus …” (C9, medical oncologist) HPs urgently consulted with more experienced colleagues and tertiary hospitals and reviewed available protocols and evidence on best practice. However, few more experienced colleagues existed, and protocols/evidence were not always readily available. Some HPs stated that only doctors experienced in treating GC should lead treatment decisions. “[They should be treated by the] tertiary hospitals … If they're not with a clinician who has seen that before, they could be at risk of missing out on the best treatment, on the latest knowledge.” (C6, breast care nurse) “Sometimes … accurate information doesn't exist … you're in a situation you haven't dealt with before … there is uncertainty.
You look at the literature and see what other people have published, you talk to your colleagues … and you're trying to advise a person what to do from a place of limited information.“ (C9, medical oncologist) HPs described striving to meet the complex needs of these women, which required coordinated, holistic, priority care. This included ongoing communication with other HPs; open discussions about termination, fertility and family planning implications, evidence for safety of cancer treatments, and breast feeding; facilitating convenient scheduling arrangements; prompt allied health and mental health referrals for the woman and her partner; consideration of the family (including existing dependants, partners and support networks); and postnatal follow-up. “I think we all go the extra mile … I've made an arrangement to see [the patient] on a day when she's at [the hospital] for pregnancy care even though it's not one of our standard days for pregnancy care. So, I've used another clinic … for the sake of fitting in with her program and her availability” (C13, obstetrician) “Whoever she ended up seeing was great because when they went in to do the termination they'd already discussed egg harvesting” (C15, cancer nurse) Many HPs observed that treating this population is more psychologically intense than treating patients who are pregnant or have cancer, but not both simultaneously. They noted the situation is often more emotionally charged, more time is spent with patients, and treatment planning is more involved. For some HPs, treating these patients was distressing and more memorable. Others found treating these women more rewarding, or similar to treating other complex presentations. Most HPs observed heightened uncertainty, anxiety, and distress in colleagues treating women with GC. Self-care, peer support and debriefing were aids to coping; however, some HPs would welcome more formal support (e.g., one-on-one supervision or psychological support). “I do a lot of debriefs with the staff … it's so out of their realms of normal healthy pregnancy and childbirth, I think the midwives struggle … I think the medical staff struggle too, I think the obstetricians … find it really difficult” (C2, breast care nurse) “When the baby was born … I raced into the labour ward and then to the ward to see the child and count its fingers and toes and make sure it was all right because I'd given it chemotherapy as well as the mother … those things are very, very difficult.“ (C10, medical oncologist) 4.1.2 Managing multi-disciplinary care Continuity of multidisciplinary care was difficult to maintain when care was across public and private health systems, or treatment sites were not geographically co-located (e.g., obstetric care provided in a maternity/women's hospital and cancer care in a general hospital). Consistency of clinicians and identifying the most appropriate point of contact were also issues, particularly in public obstetric services. “When they come in and you say, ‘Who is your obstetrician?’ And the answer will be, ‘Oh I go to Antenatal Clinic B on a Wednesday afternoon … I saw the Registrar’ … They will often not know who is looking after them and this particular diagnosis really means that they need to have more focused care … it's achievable but it's harder ”(C18, breast surgeon) Yet, large public tertiary hospitals were generally viewed as better equipped to support this population, with a perceived greater sense of shared clinical responsibility across multidisciplinary teams. 
“ They might say to their … oncologist … ‘How I should have my baby? … The oncologist would say, ‘I don't know. You should ask the obstetrician’ … Their questions aren't answered, because the person they are seeing might not have the skillset. Whereas if you have that multi-disciplinary approach, then you have covered all of those important questions … I can't imagine how you would do that anywhere other than a tertiary centre.“ (C19, obstetrician) Other obstacles to optimal multidisciplinary care were reluctance to collaborate and delayed communication. Multidisciplinary care worked well when a team approach was led by a senior HP from each discipline; clinical responsibility was clearly allocated; regular multidisciplinary meetings occurred; and communication was frequent, responsive and consistent. “It is about getting the right people in the room to discuss what the best management is … surgical, medical, oncology … allied health … it was about getting all those people saying, ‘What are we going to do? What is the plan?’“ (C3, obstetrician) Central to the management of multidisciplinary care was how HPs defined their role and clinical responsibility. Regardless of discipline, most HPs reported their clinical responsibility went beyond providing expertise and included care coordination (e.g., proactive interdisciplinary communication and collaboration) and providing consistent information tailored to their patients’ unique needs. “We all got in a room together, there was the obstetrician, the oncologist, myself, the anaesthetist, and the neonatologist … [We] prioritise the team approach so that there is really a united and consistent message. And the woman knows that everyone is on the same page, providing the same information … So, the woman becomes the centre of care, rather than the team being the centre.“ (C19, obstetrician) 4.2 Deductive themes The deductive themes were found in response to questions regarding: ‘What is missing from current care?’ and ‘What could be put in place to better support HPs and women with GC?’ 4.2.1 Centralised resources for HPs HPs consistently identified a need for centralised and coordinated resources, including up-to-date, accurate, evidence-based, clinical information and best practice guidelines. Ideally, discipline-specific information would be shared across teams to bridge knowledge gaps and minimise conflicting information being provided to patients. Information about psychosocial aspects of GC was sought across both disciplines. “Some kind of protocol for cancer clinicians to consider the perinatal aspects … for the obstetricians … they're obviously missing the cancer bits … [a protocol]that could pull both sides together.“ (C1, clinical psychologist) “It would be better if there was an integrated state-wide national service that was coordinated … a database telling you how many of this and what has been done and all that …, that'll be very useful.“ (C10, medical oncologist) HPs were also interested in opportunities to consult with more experienced HPs, a registry of patient outcomes, and a list of interdisciplinary resources and supports. “It would be really great to have someone that's dedicated to these issues … someone that we could just pick up the phone and they can give us advice” (C15, cancer nurse) “Having adequate access to the various supportive [professionals] … breast care nursing, psychology, social work, financial support, where needed. Having those available and well known is important. 
Having … links with obstetricians who are also comfortable in this … confidence comes from access to modern knowledge.“ (C7, breast surgeon) Some noted that HPs would benefit from additional training on managing complex cases, multidisciplinary care, leading team meetings, and communication training (e.g., addressing difficult conversations and active listening). “That stuff about having difficult conversations with patients. I think we could all benefit. And, for some people, it doesn't come naturally. So, very specific training could be helpful.“ (C3, obstetrician) 4.2.2 Liaison, information and shared experiences for women HPs identified the need for a designated clinical liaison or case coordinator to connect obstetric and oncology care. This liaison would ideally be a clinician who is involved in team meetings, facilitates team communication, coordinates appointments and referrals, and is accessible to patients. Where available, a cancer nurse performed this role. “Someone who's thought about the … implications of being pregnant with a cancer diagnosis … issues around the question of termination … what's going to happen to my child if I die … how do I manage drugs and breastfeeding and feeling sick from cancer and feeling sick from pregnancy … Having a resource person who has experience and thoughtfulness around those issues ”(C12, obstetrician) “Knowing that there's one identified person who's the coordinator of all of their care … tends to decrease anxiety” (C6, breast care nurse) HPs reported that women with GC also had unmet information needs. A centralised information hub (e.g., a single organisation/resource) accessible to women with GC was suggested. “I think information is key about the risks of these treatments on the pregnancy … [women need] reassurance that they're doing okay by the baby or that the baby wouldn't be unduly affected. And … getting all this information in a timely way” (C21, clinical psychologist) HPs noted that women wanted to connect with others with GC. HPs commented that where they had previously treated women with GC, they may facilitate this via their own networks. However, a centralised facilitator was recommended. “It would be nice for women to be able to speak to other women who have been very specifically in that scenario …“ (C18, breast surgeon) “Speaking to someone who's actually … been through this situation … being pregnant, having cancer is really invaluable … I've got a couple of women that I've used a few times … patient peer support type people” (C24, haematologist) 5 Discussion In Australia, the provision of well-coordinated cancer care with open communication has been consistently identified as challenging but integral to positive patient experiences [ 19 ]. This may be more pronounced in women with GC who have multifaceted needs and receive care across several disciplines. This is the first study exploring the experiences of HPs treating these women and the challenges they face to providing comprehensive care. In this study, HPs considered women with GC to be vulnerable, complex patients. Barriers to comprehensive care included treatment delivery at multiple, often geographically separated hospitals and interdisciplinary communication hampered by lack of staffing continuity. 
Solutions offered included dedicated team leaders, centralised resources, clinical liaison or cancer care coordinators (a role that could potentially be filled by upskilling a breast care nurse), interdisciplinary educational resources, prioritising interdisciplinary meetings, and developing ways to connect women with GC with peer support. HPs highlighted that usual care is insufficient to meet these women's needs, and that holistic care, including psychosocial and antenatal needs, must be prioritised. This is encouraging as recent patient accounts similarly emphasised that prioritised, tailored and holistic care that goes beyond medical treatment is supportive [ 4 , 15 ]. These findings are consistent with large-scale Australian research identifying serious service gaps in cancer supportive and survivorship care across service providers and the need for more integrated, holistic, multidisciplinary care [ 1 ]. However, this study highlighted the dissonance between what patients want and what HPs are trying or able to provide on the one hand, and the availability of enabling services and structures on the other. As a result, HPs in this study perceived the need to bridge this service gap by going the 'extra mile' for these patients. Tertiary centres and public hospitals, where teams are co-located and interdisciplinary meetings occur routinely, were perceived as better equipped to care for women with GC. However, care across multiple hospitals remains the norm and is challenging, especially without shared Electronic Health Records (EHR). In these settings, HPs should prioritise communication and regular interdisciplinary meetings and allocate dedicated and consistently available team leaders, or, when possible, refer patients to centres with co-located care. Some HPs reported a greater emotional impact from caring for these women compared with other patient groups and spent more time on their care. Additional support for HPs treating women with GC is required (e.g., communication training, which has been associated with reduced HP occupational stress [ 20 , 21 ] and was mentioned by HPs). Supports suggested by HPs included cancer care coordinators, increased access to more experienced clinicians/mentors and centralised resources. Care coordinators may reduce continuity-of-care challenges, a common problem in complex conditions that can adversely affect patient outcomes [ 22 ]. The value of dedicated cancer care coordinators has been recognised by professional bodies [ 23 ] and by women with GBC [ 4 ]. Where available, cancer nurses currently serve in this role; however, their availability is inconsistent, and their obstetric expertise varies. Care coordinators may also help to centralise resources, something noted as lacking by HPs in this study. Currently, no centralised service is available; the closest equivalents are international charities [ 16 , 24 ]. Formalised ways to connect women with GC that do not rely solely on individual HPs are also needed and have been suggested by women with GC [ 16 ]. Technological advances may facilitate these connections and improve collaboration via shared EHR, telehealth integration and virtual multidisciplinary meetings for patients treated across teams and locations. Notably, HPs in this study reported that face-to-face meetings were important; emerging literature on the impact of virtual medicine during COVID-19 may provide insights into the successes and pitfalls of this practice and how it could be applied to HP teams treating GC.
In sum, recommendations arising from this study include that:
• Systemic changes such as co-location of services and integration of supportive care are needed if comprehensive cancer and obstetric care is to be provided for women with GC.
• HPs would benefit from more interdisciplinary education and opportunities for mentorship.
• Clinical liaisons or cancer care coordinators would greatly enhance care for these women.
• The collation of evidence, available resources, training opportunities and shared information would support HPs treating this population.
• Formalised ways to connect women with GC with peer support are needed.
Study limitations include that most participants came from metropolitan areas, which was unsurprising given women with GC are commonly referred to tertiary centres. Consequently, the study results cannot be generalised to HPs in regional/rural areas, where the challenges highlighted may be further compounded. This study was not designed to focus on discipline- or subspeciality-specific challenges and it is acknowledged that such a focus may have given rise to different results. All HPs were Australian, and an international sample may have yielded different perspectives. Though representatives of all relevant disciplines were invited and eligible to participate, not all HPs involved in GC care (e.g., neonatology, radiotherapy) were represented in the sample, and almost all oncology HPs worked predominantly in breast cancer. This reflects the high rates of breast cancer in the GC population. A more diverse sample might have yielded additional data. Despite these shortcomings, this study has many strengths. It is the first study exploring the experiences of HPs treating women with GC and the challenges these clinicians face when providing comprehensive care. As such, these data are novel. The study has methodological rigour: the interview schedule was developed by a multidisciplinary team of clinicians, and data were collected and analysed by experienced researchers following a formalised methodology. Data saturation was comfortably achieved and, despite the multiple disciplines represented in the sample, the data were consistent in identifying universal themes. Taken together, the findings provide an excellent starting point from which to consider supporting HPs to provide optimal care for women with GC. 6 Conclusion This is the first study exploring the experiences of HPs treating women with GC and the challenges and barriers to providing comprehensive care that they encounter. Most HPs had experience with GBC and many had a breast-specific clinical focus. The findings and solutions offered add unique insights, which can be utilised to develop more integrated cancer and obstetric care for women with GC, and better supports for HPs providing this care. Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request, and subject to approval from the hospital's research ethics committee. Ethics approval Ethics approval was received from The Royal Women's Hospital Research Ethics Committee (ID#18/25). Declaration of interest None. Acknowledgments The authors would like to acknowledge the role of Western and Central Melbourne Integrated Cancer Service (WCMICS) and The Royal Women's Hospital Foundation, which provided the funding for this research. These funding bodies had no role in the study design, execution, analysis, interpretation of the data, or the decision to submit results.
The funding bodies have had no role in the writing of this manuscript. The contributions of the authors include: LS conceived and designed the original study on which this manuscript is based, secured funding for the project and supervised the study. LS, KG and MS completed the data acquisition, analysis and interpretation. RL, LS and MS were involved in drafting and critically revising this manuscript. All other authors contributed to the refinement of the study protocol of the original study and approved the final version of the manuscript. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1. Supplementary data to this article can be found online at https://doi.org/10.1016/j.breast.2022.03.003 .
REFERENCES:
1. HUNTER J (2019)
2. HAYNES K (2018)
3. PANOZZO S (2019)
4. HAMMARBERG K (2018)
5. LEE Y (2012)
6. SMITH LH (2003)
7. (2015)
8. STATISTICS A (2022)
9. WELFARE A (2018)
10.
11. ZANETTI-DALLENBACH R (2006)
12. EEDARAPALLI P (2006)
13. JOHNSON H (2020)
14. LEUNG V (2020)
15. STAFFORD L (2021)
16. STAFFORD L (2021)
17. BRAUN V (2006)
18. NOWELL L (2017)
19. WALSH J (2010)
20. RUSSO A (2014)
21. EELENS B (2014)
22. (2018)
23. (2015)
24.
|
10.1016_j.breast.2022.05.002.txt
|
TITLE: Understanding the barriers to, and facilitators of, ovarian toxicity assessment in breast cancer clinical trials
AUTHORS:
- Cui, Wanyuan
- Phillips, Kelly-Anne
- Francis, Prudence A.
- Anderson, Richard A.
- Partridge, Ann H.
- Loi, Sherene
- Loibl, Sibylle
- Keogh, Louise
ABSTRACT:
Background
Detailed toxicity data are routinely collected in breast cancer (BC) clinical trials. However, ovarian toxicity is infrequently assessed, despite the adverse impacts on fertility and long-term health from treatment-induced ovarian insufficiency.
Objectives
To determine the barriers to and facilitators of ovarian toxicity assessment in BC trials of anti-cancer drugs.
Methods
Semi-structured interviews were conducted with purposively selected stakeholders from multiple countries involved in BC clinical trials (clinicians, consumers, pharmaceutical company representatives, members of drug-regulatory agencies). Participants were asked to describe the perceived benefits and barriers to evaluating ovarian toxicity. Interviews were transcribed verbatim, coded in NVivo software and analysed using inductive thematic analysis.
Results
Saturation of the main themes was reached and the final sample size included 25 participants from 14 countries (9 clinicians, 7 consumers, 5 members of regulatory agencies, 4 pharmaceutical company representatives); half were female. The main reported barrier to ovarian toxicity assessment was that the issue was rarely considered. Reasons included that these data are less important than survival data and are not required for regulatory approval. Overall, most participants believed evaluating the impact of BC treatments on ovarian function is valuable. Suggested strategies to increase ovarian toxicity assessment were to include it in clinical trial design guidelines and stakeholder advocacy.
Conclusion
Lack of consideration about measuring ovarian toxicity in BC clinical trials that include premenopausal women suggest that guidelines and stronger advocacy from stakeholders, including regulators, would facilitate its more frequent inclusion in future trials, allowing women to make better informed treatment decisions.
BODY:
Abbreviations: BC, Breast cancer; COVID-19, Severe Acute Respiratory Syndrome Coronavirus-2; CRF, Case Report Form; EMA, European Medicines Agency; FDA, Food and Drug Administration; GnRHa, Gonadotrophin releasing hormone analogue; QOL, Quality of life; OS, Overall survival; PARP, Poly adenosine diphosphate-ribose polymerase. 1 Introduction Ovarian toxicity is a potentially irreversible adverse effect of anti-cancer treatment for premenopausal women with breast cancer (BC). It can result in infertility, and can have profound impacts on longer-term bone density, cardiovascular health and cognitive function [ 1–3 ]. As the incidence of BC in premenopausal women increases [ 4 ], and survival rates improve [ 5 ], minimising long-term treatment-related adverse effects in young women is paramount. The impact of cancer treatments on fertility and ovarian function is a significant concern for premenopausal women with BC [ 6 , 7 ] and is an important consideration when making treatment decisions [ 7 ]. International guidelines recommend that potential gonadotoxicity is addressed with all patients of reproductive age [ 8–14 ]. However, little is known about the ovarian effects of newer classes of anti-cancer therapies. A recent systematic evaluation demonstrated that ovarian toxicity is infrequently assessed in phase 3 (neo)adjuvant early BC clinical trials which enrolled premenopausal women [ 15 ], so the relevant information needed for informed decision-making for premenopausal women is usually not available when these therapies are first used in practice. Many stakeholders are involved in the design of clinical trials, including clinicians, drug-regulatory agencies, patient advocates (consumers) and pharmaceutical companies [ 16 , 17 ]. Phase 3 BC trials are often global multicentre studies, and stakeholders from different countries are involved in their design. It is not clear why ovarian toxicity endpoints are rarely assessed in trials enrolling premenopausal women. Existing qualitative research primarily focuses on the barriers to ovarian preservation [ 18 ]. Qualitative research has been important for the incorporation of patient reported outcomes in clinical trials [ 19 ], and for examining trial design uncertainties [ 20 ]. Therefore, this international qualitative study was designed to determine the barriers to and facilitators of ovarian toxicity assessment in BC clinical trials from the perspective of key decision-makers. 2 Methods 2.1 Design Through semi-structured interviews with key decision-makers in BC clinical trials, this qualitative study explored the barriers to and facilitators of ovarian toxicity assessment in curative-intent pharmacological BC trials which enrol premenopausal women. Approval was obtained from the Peter MacCallum Cancer Centre Human Research and Ethics Committee (LNR/61921/PMCC). 2.2 Participants Key decision-makers regarding clinical endpoints in BC trials are trial investigators (clinicians), consumers, pharmaceutical companies which produce BC anti-neoplastic agents [ 21 ], and drug-regulatory agencies. We purposively sampled for individuals from each of these groups, who had been personally involved in the design and conduct of BC trials or in drug regulation for BC anti-neoplastic agents in the last 10 years, and who were able to speak English and participate in an interview. We sought to include eligible participants from a range of countries.
Clinicians were eligible for the study if they were lead authors (first, second or last) of published phase 3 (neo)adjuvant BC trial manuscripts, or listed as the responsible party on clinicaltrials.gov or EudraCT databases, or were scientific advisory committee members of collaborative trial groups. Consumers were eligible if they were consumer advisors for BC collaborative trial groups. Pharmaceutical company representatives (pharmaceutical company employees involved in trial design: medical advisors, chief scientific officers, chief research and development managers) and advisory committee members of drug-regulatory agencies (Food and Drug Administration (FDA), European Medicines Agency (EMA) or Pharmaceutical Benefits Advisory Committee) were eligible for inclusion and were identified through public internet search. To identify pharmaceutical company representatives, the website and LinkedIn page of each pharmaceutical company was searched. To identify regulatory agency members, the regulatory agency website and drug advisory committee meeting materials were searched. In order to extend the sample and identify other important stakeholders, snowball sampling was also conducted by asking participants to forward the email invitation to colleagues they believed would meet the inclusion criteria. Eligibility was checked by the researchers prior to interview for participants recruited through this process. Eligible participants were sent an email or social-media invitation and a participant information sheet. If no response was received a reminder letter and a final email or social-media contact were sent. Consent was implied if the participant responded and provided contact details and a time for the interview. 2.3 Data collection and analysis The study investigators included BC clinical trialists (clinicians), a reproductive specialist and a health sociologist. An interview guide was developed exploring the following questions: i) who contributes to trial endpoint selection, ii) what factors are considered during decision-making, iii) are ovarian toxicity endpoints included, iv) barriers to and/or benefits of ovarian toxicity assessment in BC trials, and v) strategies to improve ovarian toxicity evaluation. Only data related to themes iii), iv) and v) are reported in this paper, themes i) and ii) will be reported elsewhere. Interview questions were refined following piloting with colleagues initially, then one BC clinician who met the eligibility criteria; the pilot interview with the clinician who met the eligibility criteria was included in the overall analysis. Semi-structured interviews were conducted by phone or video-conference by one author (WC, medical oncologist). The interviewer obtained verbal consent before each interview. Participants were initially asked closed-ended questions to collect demographic data. The themes covered in the interview were: the participant views on whether ovarian function endpoints were considered during trial design, whether these endpoints were important to include (why/why not), the barriers to their inclusion and the facilitators/strategies to improve their inclusion. All interviews were audio-recorded and professionally transcribed verbatim. Transcripts were de-identified after the transcription process. Transcripts were not returned to participants for comment after the interview. Grammatical changes were made to quotes for readability. 
Inductive thematic analysis was performed by reading the transcripts in order to develop a coding framework, framework analysis was then undertaken to structure themes and to further analyse the emerging themes in light of the research question facilitated using NVivo software [ 22 ]. After five interviews, a summary of the emerging themes with supporting quotes was collated by one author (WC). Emerging themes were reviewed by co-authors LK (health sociologist) and KAP (medical oncologist) for feedback regarding the key themes and any refinements required for the interviews. After 15 interviews, a coding framework capturing the full range of comments was developed by WC and reviewed by co-authors LK and KAP. After interview 15, new interviews were reviewed in light of the coding framework to determine whether saturation of the main themes had been achieved. Two authors (WC and LK) independently coded several interviews and the coding framework was further refined. WC coded the remaining transcripts according to the coding framework. The final coding framework was discussed with all study investigators (listed authors). 3 Results 3.1 Participants Between June 2020 and April 2021, 260 stakeholders were invited to participate. 18 participants responded to the initial email invitation. Reminders were purposively sent to stakeholders who were from North American and Asian regions, members of drug-regulatory agencies and pharmaceutical company representatives, to broaden the range of participant demographics. Saturation was reached when no new themes were identified after interview 21. Another four interviews were conducted and no new themes were identified, therefore no further interviews were conducted after interview 25. The final sample included: 9 clinicians, 7 consumers, 5 members of drug-regulatory agencies, 4 pharmaceutical company representatives; half were female ( Table 1 ). Interviews ranged between 24 and 46 min in duration. 3.2 Why are ovarian toxicity data infrequently included in breast cancer clinical trials? Almost all participants reported that ovarian toxicity is rarely assessed in BC clinical trials. Four main barriers were reported. The main reported barrier was that this issue was rarely prioritised (barrier 1). Other important barriers included limited trial resources (barrier 2), lack of knowledge regarding how to assess treatment-related ovarian toxicity (barrier 3) and settings where these data were considered less relevant (barrier 4). Table 2 details the proportion of participants who reported each barrier, Appendix Table 1 describes the quotes to support each of these barriers. 3.2.1 Not prioritised Almost all participants reported that ovarian toxicity was not prioritised and infrequently discussed during trial design. Ovarian toxicity was often deemed not the primary question of clinical trials and more than half of the participants reported that ovarian toxicity was considered less important than survival endpoints. Indeed, one consumer stated “There's still that pervasive idea that if we keep you alive the rest doesn't matter.” Furthermore, collection of ovarian toxicity data is not required for regulatory approval of cancer drugs; this was a key barrier identified by one pharmaceutical company advisor and almost all members of drug-regulatory agencies. 
3.2.2 Considered too resource intensive Many participants reported that the pressure on trial resources required to assess ovarian toxicity, such as the cost and the additional burden on investigators and patients, was perceived as prohibitive. Concerns regarding the duration of follow-up required to capture fertility events and loss of ovarian function, and the difficulty in collecting good quality data during the follow-up period, were also distinct barriers. 3.2.3 Lack of knowledge Another barrier, particularly reported by clinicians, was the need for guidance regarding which ovarian markers are most useful to assess. Participants also reported that stakeholders designing clinical trials may not know the potential ovarian side-effects of the anti-cancer drugs they are studying. 3.2.4 Assessing ovarian toxicity may be less relevant in certain settings Assessing ovarian toxicity was considered less relevant in certain settings, such as trials where low numbers of premenopausal women are included. One quarter of participants reported that ovarian toxicity assessment may be difficult in trials where patients also receive concurrent gonadotoxic chemotherapy. Additionally, in trials investigating hormone-receptor positive BC, where drugs are specifically administered to induce ovarian suppression [ 23 ], additional assessment of ovarian toxicity might be more challenging. 3.3 What are the perceived benefits of including ovarian toxicity endpoints? Despite the perceived barriers, overall, participants felt that evaluating the impact of BC treatments on ovarian function was valuable. Indeed, one clinician stated that lack of ovarian toxicity assessment was "a failure of the entire research world". The two main benefits of including ovarian endpoints are that these data are important to i) patients and ii) clinicians. Supporting quotes are provided in Appendix Table 2 . 3.3.1 Data are important to patients The importance of such data to patients was recognised by almost all clinicians and consumers, and half of the pharmaceutical company representatives and members of regulatory agencies. The impact of gonadotoxicity on a woman's quality of life (QOL), and the importance of these data in informing cancer-treatment and family-planning decisions, were leading reasons why these data should be assessed. Furthermore, information regarding the potentially irreversible impact of cancer treatments on ovarian function was seen as important to avoid unknown long-term side-effects. 3.3.2 Data are important to clinicians Participants reported that ovarian toxicity assessment may help clinicians better understand the investigational agent and improve counselling of their patients regarding treatment options. Indeed, one pharmaceutical company representative reported that "If you had two molecules that behaved in the same way, but one of them was able to result in better preservation of ovarian function than the other, then that would be a benefit that you could then make a claim in." Moreover, there is increasing interest in the impact of ovarian suppression on disease outcomes, and these data may enrich interpretation of trial results. These data were also identified as best collected prospectively, as gonadotoxicity can be difficult to assess retrospectively. 3.4 What are strategies that might help to increase the inclusion of ovarian toxicity endpoints?
Participants identified two main strategies as important facilitators for the collection of these data in future trials: i) increased awareness and ii) increased stakeholder advocacy (see Table 3 for further detail and Appendix Table 3 for supporting quotes). 3.4.1 Increased stakeholder awareness Primarily, increased awareness through trial design guidelines was a central strategy reported by half of participants. One member of a drug-regulatory agency stated "Whenever we come up with a conundrum, that's when we look at [National Comprehensive Cancer Network] guidelines or St Gallen's […] Having consensus with guidelines would be a great start." Other strategies to increase ovarian toxicity assessment included improving familiarity with ovarian function markers among trial design decision-makers and increased discussion regarding gonadotoxicity within the scientific community. 3.4.2 Increased stakeholder advocacy A stronger consumer voice and clinician promotion were regarded as especially important by most participants. In addition, some participants felt that drug-regulatory agencies could also play an influential role in guiding sponsors and/or clinical trialists regarding the importance of ovarian toxicity assessment in trials. Other stakeholders identified as key advocates for these data to be assessed included reproductive specialists and cooperative trial groups. Only two participants reported increased interest from pharmaceutical companies as a strategy to increase the inclusion of ovarian toxicity assessment in BC trials. 4 Discussion To our knowledge, this is the first study exploring the barriers to, and potential facilitators of, ovarian toxicity assessment in BC clinical trials. Despite the importance of avoiding unnecessary treatment-induced menopause, we found that ovarian toxicity assessment was not prioritised and was rarely even considered during trial design. Overall, key stakeholders of BC trials felt that assessing ovarian toxicity was important, and that these data were necessary to inform treatment decisions. We have identified a disconnect between what stakeholders desire to know and what is currently assessed in BC trials. Many participants in this study recognised that preservation of ovarian function was fundamental to a woman's QOL. Yet, the majority felt these data were currently under-prioritised during trial design. QOL is now considered an endpoint for clinical benefit by the FDA and leading oncology organisations [ 24 , 25 ], which has led to increased incorporation of QOL assessment in cancer clinical trials [ 26 , 27 ]. Similar to the experience of incorporating QOL assessment, increased awareness regarding the importance of ovarian toxicity data among trial design decision-makers was highlighted as a core strategy to increase assessment of ovarian toxicity. We found that different stakeholders prioritised ovarian toxicity assessment differently. Although almost all clinicians and consumers reported that ovarian toxicity data were important, only half of the pharmaceutical company representatives and members of drug-regulatory agencies shared this view. Regulatory requirements may be an important strategy to ensure incorporation of ovarian toxicity assessment given their influence on the pharmaceutical industry. Further expansion of the FDA guidance for industry documents [ 28 ] and similar guidance from the EMA may lead to improved prioritisation of ovarian toxicity in trials enrolling premenopausal women.
A need for increased guidance regarding how to assess gonadotoxicity was considered another barrier to routine inclusion of these data. Indeed, in BC trials which do assess ovarian toxicity, including gonadotrophin-releasing-hormone analogue (GnRHa) trials where treatment-induced ovarian insufficiency was the primary endpoint, the ovarian measures used are variable, and arguably inadequate [ 29 ]. Furthermore, menstruation is often used to assess ovarian function [ 30 ]. Yet, reduced ovarian function can occur in women who are still menstruating and anti-cancer drugs may deplete the ovarian reserve without stopping menses [ 31 ]. Some participants considered that the gonadotoxic effects of the non-investigational chemotherapy clouds the interpretation of the ovarian toxic effects of novel cancer treatments. However, many novel agents may be given in a prolonged maintenance phase after chemotherapy has finished [ 32–35 ], at a time when “ovarian protection” measures may be discontinued. Therefore, ovarian toxicity assessment of these novel agents is still pertinent regardless of whether they are initially combined with chemotherapy or not. On the contrary, participants felt that assessing ovarian function may help assess the impact of these changes on cancer outcomes and enhance interpretation of trial results. This is particularly relevant for hormone-receptor positive BC where ovarian suppression has been shown to impact cancer outcomes [ 23 ]. Oestrogen also modifies the function of immune-cell populations [ 36 ]. This is relevant in triple-negative BC, where immunotherapy is now licenced [ 33 , 37 ]; trials assessing immunotherapy for other BC subtypes are also underway. Another perceived barrier to ovarian toxicity assessment is the desire not to add excessive burden on the finite trial resources. There is a notable paradox between the detailed requirements in trial protocols to mandate often multiple contraceptive methods and pregnancy tests in women of potential child-bearing capacity throughout trial drug administration, but lack of attention to ovarian toxicity during and after treatment. Improved agreement regarding the most informative markers of ovarian toxicity could improve interpretation of such data and also minimise collection of unnecessary data and reduce resource intensity. The duration of follow-up required was another perceived barrier. However, longer-term monitoring for late cancer-treatment toxicities is increasingly practiced [ 38 , 39 ]. Prospective cardiac surveillance is now often incorporated into trial protocols of cardiotoxic BC treatments, sometimes up to 10 years after treatment completion [ 40 ], demonstrating the feasibility to understand the existence of late treatment toxicities. Guidelines are an important tool to guide best clinical practice, and were a key strategy suggested by participants to improve implementation of ovarian toxicity assessment. Current tools used for trial endpoint decision-making [ 25 , 41 , 42 ] do not address ovarian toxicity assessment. As reported by many participants in this study, development of clinical trial design guidelines which make recommendations regarding which ovarian measures to collect and when, may overcome many of the barriers we identified [ 43 ]. There were several limitations to this study. All interviews were conducted by phone or video-conference to allow international stakeholders to participate. 
Only one participant from Asia was included, and our findings may not be generalisable for this region; a barrier to participation may be that the interviews were conducted in English and this study was conducted during the COVID-19 pandemic. Half of stakeholders included were involved in industry-sponsored trials, given these are generally more prevalent; but three-quarters were also involved in cooperative group coordinated trials and therefore our findings are relevant to both trial types. Although we sampled stakeholders from different backgrounds, there may be participation bias as participants may be more likely to have known more about or been more interested in the question of measuring ovarian toxicity than non-participants. Moreover, other stakeholders involved in trial design such as statisticians were not interviewed. Lastly, this study was exploratory in nature and represents the opinions of participants. Therefore, the data presented in this study is subjective, a caveat of our study design. 5 Conclusion Despite the potential profound impact of treatment-related ovarian toxicity for premenopausal women, this qualitative study found that ovarian toxicity assessment in BC trials is currently not prioritised. Yet, stakeholders, particularly consumers, believe that assessing ovarian toxicity is important and these data are vital to inform treatment decisions. Stronger advocacy is needed to change practice. Clinical trial design guidelines may break down many of the existing barriers to ovarian toxicity assessment, raise awareness of this important knowledge gap and provide guidance on how to collect informative data while minimising the burden on trial resources. Inclusion of ovarian toxicity assessment in future trials will provide invaluable information regarding a potentially serious adverse effect of cancer treatment, to ultimately empower women to make fully-informed treatment decisions that will impact their family-planning choices and long-term health. Funding This work was supported by a grant from Breast Cancer Trials Australia New Zealand (BCT-ANZ). The funder was not involved in the study design, data collection, interpretation or reporting. KAP is an NHMRC Leadership Fellow. WC was supported by an Australian Government Research Scholarship for submitted work. Declaration of competing interest W Cui: Dr Cui reports honoraria from AstraZeneca, Pfizer, Janssen and Merck. KA Phillips: Professor Phillips reports two unpaid Advisory Boards for AstraZeneca in 2021. Breast Cancer trials Scientific Advisory Committee member. Breast Cancer Network Australia Strategic Advisory Committee member. PA Francis: Professor Francis is the Breast Cancer Trials Australia & New Zealand Chair Scientific Advisory Committee. She reports travel to lecture overseas: Novartis, Ipsen. RA Anderson: Professor Anderson reports personal fees and non-financial support from Roche Diagnostics, outside the submitted work. AH Partridge: Professor Partridge reports travel to lecture overseas: Novartis. S Loi: Professor Loi receives research funding to her institution from Novartis, Bristol Meyers Squibb, Merck, Puma Biotechnology, Eli Lilly, Nektar Therapeutics Astra Zeneca, Roche-Genentech and Seattle Genetics. She has acted as consultant (not compensated) to Seattle Genetics, Novartis, Bristol Meyers Squibb, Merck, AstraZeneca and Roche-Genentech. 
She has acted as consultant (paid to her institution) to Aduro Biotech, Novartis, GlaxoSmithKline, Roche-Genentech, Astra Zeneca, Silverback Therapeutics, G1 Therapeutics, PUMA Biotechnologies, Seattle Genetics and Bristol Meyers Squibb. Professor Loi is a Scientific Advisory Board Member of Akamara Therapeutics. She is supported by the National Breast Cancer Foundation of Australia Endowed Chair and the Breast Cancer Research Foundation, New York. S Loibl: Professor Loibl reports grants and other from Abbvie, other from Amgen, grants and other from AstraZeneca, other from Bayer, other from BMS, grants and other from Celgene, grants, non-financial support and other from Daiichi-Sankyo, other from Eirgenix, other from GSK, grants, non-financial support and other from Immunomedics/Gilead, other from Lilly, other from Merck, grants, non-financial support and other from Novartis, grants, non-financial support and other from Pfizer, other from Pierre Fabre, other from Prime/Medscape, non-financial support and other from Puma, grants, non-financial support and other from Roche, other from Samsung, non-financial support and other from Seagen, outside the submitted work; In addition, Professor Loibl has a patent EP14153692.0 pending, a patent EP21152186.9 pending, a patent EP15702464.7 issued, a patent EP19808852.8 pending, and a patent Digital Ki67 Evaluator with royalties paid. LA Keogh: No conflict of interest. Acknowledgements The authors wish to acknowledge the Breast Cancer Trials Australia New Zealand (BCT-ANZ) Consumer Advisory Panel for their assistance in the grant application for this study, and all the clinicians, consumers, pharmaceutical company representatives and members of regulatory agencies for their participation in this study. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1 Multimedia component 1 Multimedia component 2 Multimedia component 2 Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.breast.2022.05.002 .
REFERENCES:
1. MARFAN H (2010)
2. SHAPIRO C (2001)
3. WEBBER L (2016)
4. THOMAS A (2019)
5.
6. CHRISTIAN N (2019)
7. RUDDY K (2014)
8. LAMBERTINI M (2020)
9. VAN DORP W (2016)
10. PALUCH-SHIMON S (2020)
11. COCCIA P (2018)
12. JACKISCH C (2015)
13. OKTAY K (2018)
14. ANDERSON R (2020)
15. CUI W (2021)
16. BLAGDEN S (2020)
17. MARSDEN J (2004)
18. PANAGIOTOPOULOU N (2018)
19. LASCH K (2010)
20. O'CATHAIN A (2015)
21.
22. GREEN J (2014)
23. FRANCIS P (2018)
24. SEIDMAN A (2018)
25. CHERNY N (2017)
26. (2018)
27. VERMA S (2011)
28. (2021)
29. LAMBERTINI M (2018)
30. SILVA C (2016)
31. OKTAY K (2006)
32. TUTT A (2021)
33. SCHMID P (2020)
34. JOHNSTON S (2020)
35. MAYER E (2021)
36. SEGOVIA-MENDOZA M (2019)
37.
38. CURIGLIANO G (2020)
39. ARMENIAN S (2017)
40. VON MINCKWITZ G (2017)
41. HUDIS C (2007)
42. TOLANEY S (2021)
43. ANDERSON R (2021)
|
10.1016_j.brs.2025.05.103.txt
|
TITLE: A comparative analysis of technical data: At-home vs. in-clinic application of transcranial direct current stimulation in depression
AUTHORS:
- Vogelmann, Ulrike
- Stadler, Matthias
- Soldini, Aldo
- Chang, Kai-Yen
- Chen, Miaoxi
- Bulubas, Lucia
- Dechantsreiter, Esther
- Plewnia, Christian
- Fallgatter, Andreas
- Langguth, Berthold
- Normann, Claus
- Frase, Lukas
- Zwanzger, Peter
- Kammer, Thomas
- Schönfeldt-Lecuona, Carlos
- Kamp, Daniel
- Bajbouj, Malek
- Hunold, Alexander
- Schramm, Severin
- Priller, Josef
- Palm, Ulrich
- Charvet, Leigh
- Keeser, Daniel
- Burkhardt, Gerrit
- Padberg, Frank
ABSTRACT:
Objective
The application of transcranial direct current stimulation (tDCS) at home for the treatment of depression and other neuropsychiatric disorders presents both significant opportunities and inherent challenges. Ensuring safety and maintaining high-quality stimulation are paramount for the efficacy and safety of at-home tDCS. This study investigates tDCS quality based on its technical parameters as well as safety of at-home and in-clinic tDCS applications comparing the data from two randomized controlled trials in patients with major depressive disorder.
Methods
We analyzed 229 active stimulation sessions from the HomeDC study (at-home tDCS) and 835 sessions from the DepressionDC study (in-clinic tDCS). Notably, five adverse events (skin lesions) were reported exclusively in the at-home cohort, highlighting the critical need for enhanced safety protocols in unsupervised environments.
Results
The analysis revealed a significant difference in the average variability of impedances between at-home and in-clinic applications (F1,46 = 4.96, p = .031, η2 = .097). The at-home tDCS sessions exhibited higher impedance variability (M = 837, SD = 328) compared to in-clinic sessions (M = 579, SD = 309). Furthermore, at-home tDCS sessions resulting in adverse events (AEs) were associated with significantly higher average impedances than sessions without such issues.
Conclusion
The study demonstrates that monitoring the technical parameters of at-home tDCS used in this study is essential. However, it may be not sufficient for ensuring safety and promptly detecting or preventing adverse events. Quality control protocols including digital training and monitoring techniques should be systematically developed and tested for a reliable and safe application of at-home tDCS therapies.
BODY:
1 Introduction Non-invasive brain stimulation (NIBS) techniques allow well-tolerated stimulation of the human brain with few side effects for treating and diagnosing neuropsychiatric syndromes [ 1–4 ]. In the psychiatric field, NIBS techniques have shown positive effects, especially in the treatment of major depressive disorder (MDD) [ 5–7 ]. Therefore, the use of NIBS techniques in clinical trials on the treatment of MDD has increased steadily in recent years. However, the application of NIBS also faces many challenges: high staffing costs, limited capacity, and the requirement of multiple treatment sessions for MDD. Coming to the clinic for daily treatments often places a high burden on patients due to time- and cost-intensive commutes, on-site employment, childcare responsibilities, physical disabilities, or a combination thereof. Therefore, home-based treatment is a reasonable option. Compared to repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS) and other transcranial electrical stimulation (tES) modalities can be administered using a small, portable device, making application at home possible. Although rTMS has a broader evidence base for MDD treatment, the advantage of at-home administration has led to increased research on tDCS in recent years. In line with the current demands of modern psychiatry, which emphasize treating patients in their home environment whenever possible [ 8 ] to prevent hospitalization, numerous studies have been published on the feasibility and safety of at-home tDCS treatment [ 9–12 ], most prominently in the neurological field [ 13–16 ] and recently also for psychiatric disorders [ 17–20 ]. The advantages of at-home tES application for enhancing effectiveness include the easier delivery of multiple sessions, which can address the potential issue of underdosing. In terms of trial methodology, the nonspecific effect of clinical care, which may equally add to the efficacy of active and control arms in randomized controlled clinical trials and could potentially mask a possible difference between active tES and control conditions, is reduced. Finally, at home (e.g., in a quiet room), tES could be more easily and effectively combined with an ideally synergistic behavioural task or activity [ 21 , 22 ], both to control and harmonize the brain state and to maximize the tDCS effect. Two large randomized controlled trials (RCTs) [ 23 , 24 ] investigating at-home tDCS in major depressive disorder (MDD) have demonstrated sound feasibility of this approach. Whereas one RCT did not find a significant difference between active and sham tDCS arms [ 24 ], the other trial showed superior efficacy of active tDCS [ 23 ]. Regarding safety, all studies reported good tolerability with only a few adverse events (AEs). A detailed examination of the AE reports frequently describes skin lesions following tDCS. In the RCT by Woodham et al. [ 23 ], 7 AEs related to skin irritations were reported in 6 subjects (only in the active group with n = 87 subjects), with one AE classified as severe. Skin irritations (such as redness, heat, and burning sensations) were also reported in the RCT by Borrione et al. [ 24 ], occurring more frequently in the double active and tDCS only groups compared to the double sham group. AEs were classified as mild. Local redness was reported in 55 patients in the double active and tDCS only groups (40 %), and in 11 patients in the double sham group (15 %).
Heat or burning sensations were reported by 25 patients in the double active and tDCS only groups (18 %), and by 11 patients in the double sham group (15 %). In addition, one study discontinuation due to a skin lesion under the anode occurred in the double active group. In the HomeDC study, which also investigated the feasibility and safety of at-home tDCS [ 25 ], a significant accumulation of AEs in the form of skin lesions led to the premature termination of the study [ 26 ]. This accumulation was not observed in the DepressionDC study, which used a similar protocol but applied tDCS in an in-clinic setting with a different electrode fixation technique [ 27 ]. To investigate tDCS quality which is relevant for both safety and efficacy of at-home tDCS treatment, we compared the technical data from the stimulation sessions of the Munich cohort in the DepressionDC study (in-clinic application) [ 27 ] with those from the HomeDC study (at-home application) [ 25 ]. Since no established procedures have been described so far for controlling technical parameters in home-based treatment, we utilized impedance variability and exact current measures as potential proxies for stimulation quality, which may also indicate a higher risk for the development of skin lesions, building on our previous work [ 28 ]. 2 Methods 2.1 Data inclusion We compared technical parameters (impedance and current) from two independent tDCS clinical trials using the same protocol but differing in terms of application: In the HomeDC trial (Trial Registration: NCT05172505), patients self-administered tDCS at home, and in the DepressionDC trial (Trial Registration: NCT0253016), trained study personnel performed tDCS in the clinic. The HomeDC trial investigated the feasibility, effectiveness, and safety of prefrontal tDCS as a treatment at home for MDD in a placebo-controlled, double-blinded, randomized design. Patients with the primary diagnosis of MDD applied prefrontal tDCS daily in a home treatment setting. Only the very first session was conducted in the clinic for training reasons, but also by the patients themselves with explanation, assistance and supervision by study staff. The DepressionDC study is a multicentre, double-blinded, randomized, placebo-controlled trial that investigated the efficacy and tolerability of prefrontal in-clinic tDCS as treatment for MDD [ 27 , 29 ]. Technical data of a blind selection of active stimulations from different centers in the DepressionDC trial has been previously reported [ 28 ]. Here, we used only the technical data of 835 active tDCS sessions from the Munich study site within the DepressionDC trial, to allow comparability with the 229 active tDCS sessions from the HomeDC trial which was conducted as monocentric study in Munich. Technical data from sham stimulation sessions were not included. Importantly, while the clinical analysis presented in Kumpf et al. [ 26 ] was based on the five HomeDC patients who completed active stimulation according to the trial protocol, the present technical analysis includes active stimulation data from a broader set of nine patients of the HomeDC trial. This was done to increase the number of analyzable sessions and also comprises datasets from pilot participants who completed full active at-home tDCS as well as from patients who were offered active at-home tDCS in a second treatment phase following nonresponse to the initial blinded phase. 
2.2 Stimulation procedure and transfer of technical data In the HomeDC trial, tDCS was conducted with stimulation parameters identical to those in the DepressionDC trial [ 29 ], with the exception that the total number of stimulations was increased from 24 to 30 sessions to achieve longer-lasting effects, resulting in the application of five tDCS sessions per week (Monday to Friday) for six weeks. Electrode montage was bifrontal with the anode over F3 and the cathode over F4 (international 10–20 EEG system). Stimulation was at 2 mA in the active condition for 30 min each, plus ramp-in (15 s) and ramp-out (30 s). The control group received sham treatment with identical parameters. However, direct current was only active for 15 s during ramp-in and for 30 s during ramp-out periods. The HomeDC study used the same equipment as the DepressionDC trial, except for the caps, which were specially designed for home-based treatment [ 25 ]. To ease handling of electrode positioning, a custom-made stimulation cap (neuroConn GmbH, Ilmenau, Germany) was used with the electrodes already integrated [ 30 ]. The use of the mobile equipment, as well as all necessary application steps, with particular attention to moistening the electrodes with saline solution, was explained during an initial training session and practiced together with the patient. The patient was instructed to apply approximately 20 ml of saline solution per side using a provided syringe. Instructions for moistening the electrodes, together with safety recommendations and caveats against frequently occurring mistakes (e.g. moistening very dry electrode sponges too quickly), were discussed and provided on an information sheet (please see supplementary material). In both trials, the same portable, CE-certified stimulators (DC-Stimulator mobile, neuroConn GmbH, Ilmenau, Germany) were used, with an implemented stimulation code system to ensure blinding of operators and participants. 2.3 Monitoring of technical data Technical parameters were stored during stimulations on a storage device, which could be connected to a laptop after treatment sessions to export the data to the purpose-built "DCStimulator mobile" software (neuroConn GmbH, Ilmenau, Germany). Within this software, stimulation parameters (sham or active) and technical data were uploaded to a cloud-based database after the investigator had inserted a stimulation-ID code for the respective patient. Technical data (impedance, voltage, current) were measured and stored every second during stimulation. Data were transferred to the cloud every time the saving tool was connected to the study laptop. After each stimulation session, patients of the DepressionDC trial and of the HomeDC trial filled in the Comfort Rating Questionnaire (CRQ) to assess potential side effects of the treatment [ 31 ]. 2.4 Statistical analysis The stimulator records the root mean square values of stimulation current and electrode voltage averaged over 1 s, resulting in one sample per second. The reported impedance values are calculated from the stored values of stimulation current and electrode voltage. For the analyses, all measured data of the performed active stimulations were used, excluding the values from the ramp-in (15 s) and ramp-out (30 s) phases, since these are naturally characterized by a high variability of the technical parameters due to the increase and decrease of the current. The data (impedance and current) were evaluated using the open-source software "R 4.2" [ 32 ].
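As a rough illustration of the preprocessing described above, per-second impedance can be derived from the logged current and voltage, and the ramp phases trimmed before any summary statistics are computed. The following is a minimal sketch in R; the column names, the synthetic values and the assumption of a simple one-row-per-second table are illustrative only and do not reflect the actual neuroConn export format:

# Assumed per-second log for one active session: RMS current (microamperes)
# and voltage (millivolts); column names and values are hypothetical.
session <- data.frame(
  t_s        = 1:1845,   # 15 s ramp-in + 1800 s stimulation + 30 s ramp-out
  current_uA = c(seq(0, 2000, length.out = 15),
                 rep(2000, 1800),
                 seq(2000, 0, length.out = 30)),
  voltage_mV = rnorm(1845, mean = 10000, sd = 500)
)

# Drop ramp-in (first 15 s) and ramp-out (last 30 s) before further analysis.
plateau <- session[16:(nrow(session) - 30), ]

# Impedance from Ohm's law, R = U / I; with these units, mV / uA = kOhm.
plateau$impedance_kOhm <- plateau$voltage_mV / plateau$current_uA

mean(plateau$impedance_kOhm)
sd(plateau$impedance_kOhm)

Using millivolts and microamperes means the ratio U/I is directly in kiloohms, which keeps such a sketch on the same scale as the 55 kΩ safety threshold discussed below.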
In order to analyse the variability across measurements, we estimated the similarity between all sequences of measures within each participant using dynamic time warping (DTW) as implemented in the "dtw" package [ 33 ]. This algorithm estimates the similarity of two sequences by calculating the optimal match between them based on certain restrictions and rules (for more information on DTW, see Ref. [ 33 ]). The resulting scores indicated the variability across trials, clustered within participants, days, and study centers. Scores had a minimum of zero, indicating no variability across trials, whereas higher scores indicated higher variability. After an increased incidence of adverse events, specifically skin lesions, occurred during the HomeDC study, the corresponding sessions that led to such skin lesions were subjected to detailed analysis. In terms of impedance, mean values were compared between the "AE sessions" and the sessions that proceeded without adverse events. In addition, we investigated how long it took to reach a steady state of the technical parameters. For this purpose, we used a moving window function to estimate the standard deviation for each consecutive set of three observations. A steady state was assumed for the point after which the standard deviation for a certain window did not differ significantly from the standard deviation of the previous window. 3 Results The HomeDC trial was terminated prematurely due to five AEs in four patients. Therefore, regarding the technical data, only 229 stimulation sessions from the HomeDC study could be evaluated and compared with the technical data of 835 stimulation sessions of the DepressionDC trial. All five AEs were skin lesions ( Fig. 4 ). 3.1 Comparison of impedance variability between in-clinic and home-based tDCS applications As a first step, impedance patterns of the stimulations were analyzed independently of the AEs, and the variability of the impedances was compared between the at-home tDCS application and the in-clinic application. Fig. 1 shows the estimated variability for all stimulation sessions and all participants. In line with our hypothesis, there was a significant (F1,46 = 4.96, p = .031, η2 = .097) difference in the average variability of impedances between the in-clinic application (mean [M] = 579, standard deviation [SD] = 309) and the home-treatment application (M = 837, SD = 328). The stimulation device featured a safety mechanism that automatically stops the session if impedance exceeds 55 kΩ. Throughout the entire study, no stimulation was interrupted due to this mechanism, indicating that absolute impedance levels remained below the threshold in all sessions. 3.2 Comparison of current across settings Current during active stimulations (without ramp-in and ramp-out phases) varied between 1996 μA and 2012 μA depending on impedance and voltage. The average current of the conducted tDCS sessions (without ramp-in and ramp-out) was compared between participants and the two settings (at-home tDCS vs. in-clinic tDCS, Fig. 2 ). We found no significant difference (F1,46 = 0.34, p = .561, η2 = .007) in average currents between the DepressionDC (M = 2000, SD = 2.95) and HomeDC (M = 1999, SD = 2.72) samples. 3.3 Steady states of impedance and current A steady state was assumed for the point after which the standard deviation for a certain window did not differ significantly from the standard deviation of the previous window.
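The two session-level quantities described in section 2.4, the DTW-based variability score and the moving-window steady-state criterion, could be computed along the following lines. This is a hedged sketch only: the impedance traces are synthetic, the "dtw" package is the R implementation cited above, and the significance test on neighbouring window standard deviations used in the study is simplified here to a plain relative-change threshold:

library(dtw)  # R implementation of dynamic time warping, cited as Ref. [33]

# Two hypothetical per-second impedance traces (kOhm), ramp phases removed;
# trace A settles quickly, trace B drifts more strongly.
set.seed(1)
session_a <- 5 + 3 * exp(-(1:1800) / 20) + rnorm(1800, sd = 0.02)
session_b <- 5 + 3 * exp(-(1:1800) / 60) + rnorm(1800, sd = 0.10)

# Pairwise DTW distance as a variability score between two sessions:
# zero means identical time courses, larger values mean more dissimilar ones.
alignment <- dtw(session_a, session_b)
alignment$distance

# Moving-window standard deviation over consecutive sets of three observations.
rolling_sd <- sapply(seq_len(length(session_a) - 2),
                     function(i) sd(session_a[i:(i + 2)]))

# Simplified steady-state rule: first window whose SD changes by less than 10 %
# relative to the previous window (the study used a significance test instead).
rel_change <- abs(diff(rolling_sd)) / head(rolling_sd, -1)
steady_at  <- which(rel_change < 0.10)[1] + 1
steady_at  # seconds after ramp-in at which a steady state would be assumed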
For the impedance, this point was reached after 11 to 25 observations (i.e., seconds); for the current, after 11 to 45 observations (after ramp-in) ( Table 1 ). Based on these analyses, we recommend using the most conservative estimates, the 95 % confidence interval (i.e., 30 and 55 s), for all analyses. 3.4 Adverse events and impedance We also compared the impedances of stimulation sessions in the HomeDC trial in which skin lesions occurred to those of sessions without such events. Average impedance values were significantly (F1,90 = 3.98, p = .047, η2 = .017) higher for sessions in which AEs occurred (M = 2.69, SD = 0.31) than in sessions without any AE (M = 2.12, SD = 0.63), as shown in Fig. 3 . Skin lesions occurred only underneath the cathode, in each case after at least 8 stimulations (patient 1: after 8 tDCS sessions and, after a break, after another 12 tDCS sessions; patient 2: after 17 tDCS sessions; patient 3: after 15 tDCS sessions; and patient 4: after 23 tDCS sessions). Patient 4 did not adhere to the trial protocol and applied 36 tDCS sessions. This was possible because the saving tools were loaded after each study visit with more stimulation sessions (usually 14) than needed (usually 10 sessions within the 2 weeks to the next study visit), to prevent patients from being unable to restart after interrupting a session due to technical issues. No other AEs were observed based on the CRQ. None of the four patients reported clearly elevated values in the CRQ in association with the occurrence of the skin lesion. Only one patient reported 4/10 for pain and 5/10 for burning after a stimulation that had caused a skin lesion; these values were somewhat elevated in the interindividual comparison, but the patient had already reported values between 4 and 5 on the burning and pain scales during previous stimulations that had not resulted in a skin lesion. None of the patients affected by a skin lesion manually stopped the respective stimulation, although all patients had been informed of this possibility during the instruction phase and stated afterwards that they were aware of it; however, they reported that the stimulation which had led to the skin lesion had not been particularly painful. This is in line with available reports on the occurrence of skin lesions [ 34 ]. Retrospectively, three of the four patients reported a somewhat increased burning sensation during the corresponding stimulation. 4 Discussion This study compared the technical data of tDCS sessions in relation to relevant information on safety from two trial cohorts, namely an in-clinic tDCS cohort from the DepressionDC trial [ 27 ] and an at-home tDCS cohort from the HomeDC trial. The objective of the analyses was to investigate differences in stimulation quality between self-administered at-home tDCS and in-clinic tDCS administered by trained technicians. In the HomeDC trial, the occurrence of skin lesions led to a premature termination of the study [ 26 ], but provided an opportunity to examine technical parameters in relation to the lesion occurrence. As expected, we observed greater variability in impedance during at-home applications; however, this variability did not exceed the preset safety threshold (i.e. 55 kOhm) and was not associated with premature session termination. Clinically, impedance is also relevant: at-home tDCS sessions that resulted in AEs were associated with significantly higher average impedances compared to sessions without such AEs.
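The group comparison referred to in the preceding sentence (reported in section 3.4 as F1,90 = 3.98 with an eta squared of .017) corresponds to a one-way ANOVA on session-level impedance summaries. A sketch of how such a comparison and its effect size could be computed in R is given below; the data frame, group sizes and values are invented for illustration and are not the study data:

# Hypothetical session-level summary for the at-home arm: one row per active
# session with its mean plateau impedance and whether a skin lesion (AE)
# followed that session; all numbers are made up for illustration.
set.seed(2)
sessions <- data.frame(
  mean_impedance = c(rnorm(87, mean = 2.1, sd = 0.6),   # sessions without AE
                     rnorm(5,  mean = 2.7, sd = 0.3)),  # sessions followed by an AE
  ae = factor(c(rep("no_AE", 87), rep("AE", 5)))
)

# One-way ANOVA comparing mean impedance between AE and non-AE sessions.
fit <- aov(mean_impedance ~ ae, data = sessions)
summary(fit)   # F value, degrees of freedom and p value

# Eta squared from the ANOVA table: SS_effect / SS_total.
ss <- summary(fit)[[1]][["Sum Sq"]]
ss[1] / sum(ss)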
This raises the question of whether the preset impedance threshold might have been too high. Given that the same impedance threshold was used in both studies, but skin lesions only occurred in the home-based setting, it appears more likely that factors such as humidification and chemical reactions under the cap contributed to lesion development, rather than absolute impedance values alone. Moreover, our approach provided an online recording of technical parameters, but no real-time transfer of the data or immediate feedback. This means that both adherence and critical technical information on events during tDCS sessions were only noticed when the data were read out at the next study visit. Technical data were extracted via the storage module that patients returned during visits and were subsequently transferred directly to the cloud. The technical data were then manually analyzed for outliers and irregularities, sometimes a few days later. In case of such irregularities, feedback was given to the study team. For early detection and potential prevention of AEs, continuous online monitoring with real-time transfer of parameters and immediate feedback would be necessary. For example, brief automated warning messages could be generated when significant impedance fluctuations or elevated voltage are detected. Furthermore, increased impedance variability or abrupt impedance shifts are only one of several potential causes [ 35 ] of skin lesions. Although impedance variability was higher in the at-home application ( HomeDC ), this did not result in significant group differences in average current. This was expected, as the stimulation device operated in a current-controlled mode, adjusting voltage dynamically to maintain the target current. Moreover, impedance variability, as used in our analysis, does not directly reflect average or absolute impedance levels and therefore would not be expected to correlate with current intensity measures. Understanding electrochemical processes during tDCS is essential for identifying the underlying causes of AEs. During the current ramp-up phase, electro-osmosis has not yet begun. Initially, the current flows preferentially through paths with lower resistance, notably the sweat glands, potentially resulting in localized heating and lesion formation near the anode [ 36 ]. As electro-osmosis stabilizes, temperature changes primarily in response to changes in impedance or current. Temperature elevation at the skin-electrode interface correlates with impedance and the square of the current [ 35 , 36 ]. Insufficient skin-electrode contact may contribute to skin lesions by reducing the effective contact area and increasing impedance, which in turn increases heat generation [ 36 ]. Additionally, the heat is then confined to a smaller area, reducing the ability to dissipate it. Factors such as an inadequate contact medium or the presence of hair and skin irregularities can cause localized disruption of skin-electrode contact [ 31 , 35 ]. A fast ramp-in may also contribute to the development of skin lesions by initiating electro-osmosis and transferring heat from the stimulation site to surrounding tissues; ramp durations of 10–20 s may therefore be preferable. Both the HomeDC and DepressionDC trials implemented a 15-s ramp-in period. Furthermore, the rectangular sponge electrodes used in these studies may lead to uneven current distribution, with peak current concentration at the corners compared to round electrodes [ 37 ].
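The heating relationship reported in [ 35 , 36 ] can be made concrete with a simple ohmic model (our illustration, not a derivation from those sources). Treating the skin-electrode interface as a resistance R carrying the controlled current I, Joule's law gives the dissipated power; if the contact resistance scales inversely with the effective contact area A, the power deposited per unit area grows with the square of the current density J = I/A:

$$P = I^{2}R, \qquad R \propto \frac{1}{A} \;\Rightarrow\; \frac{P}{A} \propto \frac{I^{2}}{A^{2}} = J^{2}.$$

Illustratively, at the target current I = 2 mA and a hypothetical interface resistance R = 10 kΩ, P = (2 × 10⁻³ A)² × 10⁴ Ω = 40 mW; halving the effective contact area then roughly doubles the dissipated power while concentrating it on half the area, i.e., a roughly fourfold increase in local heating.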
In the HomeDC study, the fact that all skin lesions occurred under the cathode led our consulting dermatologists to hypothesize a thermochemical reaction. This suggests that, beyond thermal effects, the electrode current may induce a chemical reaction that shifts the skin pH to alkaline levels. Starting from physiological saline (pH 5–7), the pH value under the anode stabilizes or decreases, which does not represent a major change for the skin's physiologically acidic environment. Conversely, at the cathode, a higher pH value is established within the alkaline range, resulting in a discernible pH gradient relative to the skin. This pH gradient has been demonstrated to contribute to electrochemical skin reactions, predominantly occurring under the cathode [ 38 , 39 ]. According to previous studies, tDCS-induced temperature changes in the skin play a relatively minor role [ 40 , 41 ]. Although the electrode corners were beveled at 45°, the edge effect of rectangular electrodes was not completely eliminated in the applications of this study, potentially leading to current-density hotspots near the electrode edges. Consequently, increased current density at the electrode edges likely contributed to the skin lesions [ 42 ]. In addition, microscopic analysis of used electrodes performed at the manufacturer's facilities revealed that the silver coating had been partially dissolved, leading to inhomogeneity in the electrical contact distribution. It was also observed that the knitted silver filaments of the sewn-in electrodes darkened over time, and the zinc-coated snap fasteners used to attach the cables to the sponges showed similar discolorations. These findings suggest that galvanic processes may have degraded the silver and zinc components, leading to uneven conductivity within the electrode material. This uneven conductivity may have caused localized impedance peaks that are not reflected in the average impedance value provided by the device, as it reports only a single combined impedance value for both electrodes. As a solution to this issue, the use of a sentinel electrode has been suggested, which would allow for separate monitoring of each electrode's resistance [ 43 ]. Table 2 shows a summary of published reports of skin lesions after tDCS [ 31 , 34 , 44–49 ]. An important direction for future research will be to identify participant-specific factors that may influence impedance variability and the risk of adverse events. Although the present analysis focused on overall technical performance, we did not collect standardized information on individual characteristics such as skin type (e.g., Fitzpatrick classification) or dermatologically relevant concurrent medication use. Regarding medication, it should be noted that in the DepressionDC trial, patients were treated with a stable dose of a selective serotonin reuptake inhibitor (SSRI) according to the ATHF (Antidepressant Treatment History Form) criteria, with stability required for at least four weeks prior to study initiation and maintained throughout the study period, as detailed in the published study protocol [ 29 ]. In contrast, in the HomeDC study, inclusion criteria were less strict: combination pharmacotherapies were allowed, and medication stability was required for only two weeks before study entry and during the trial. Therefore, a case-by-case analysis of participants who developed skin lesions was conducted.
However, due to the small sample size resulting from early study termination, no meaningful statistical analysis could be performed. Supplementary Table 1 presents medication profiles for participants with skin lesions. Among participants with skin lesions, two out of four had received bupropion, two had received venlafaxine, and one had received lithium. Analyses examining a potential association between age and impedance variability revealed no significant correlation (r = .03; p = .642). Variables such as medication and skin type may plausibly affect the skin-electrode interface and should be considered in future studies to support the development of more individualized safety protocols. These safety considerations should also be contextualized within the broader literature. Large-scale at-home tDCS programs at NYU and the University of Florida have reported more than 14,000 sessions in over 750 patients without any serious adverse events or sustained skin lesions [ 50 , 51 ]. These studies used standardized sponge-based electrodes, stable headset systems, structured training procedures, and daily remote check-ins with study staff, allowing for early detection and management of technical issues. In contrast, our findings suggest that increased impedance variability, although remaining within the device's safety limits, may reflect unstable stimulation conditions and could contribute to adverse skin reactions. Material-related factors, including possible degradation of embedded electrode filaments or uneven current distribution, may also have played a role. These observations underscore the importance of not only monitoring impedance, but also ensuring robust electrode design, consistent application procedures, and appropriate follow-up mechanisms. Strengthening patient training, optimizing ramp-in durations, and implementing brief daily remote contact may be essential for improving the safety of future at-home tDCS applications. To explore whether impedance variability changed with repeated use of the tDCS setup, we analyzed the standard deviation of impedance values across sessions for each participant. To improve comparability between the two cohorts, we limited the analysis to the first 24 sessions, as the DepressionDC protocol included a maximum of 24 sessions (of which, for example, only 20 were mandatory according to the protocol). We observed that impedance variability, as measured by session-wise standard deviation, decreased over time. In the HomeDC cohort, this may reflect growing familiarity with the procedure and more stable behavior during stimulation (e.g., reduced movement, less tension on the cables). Interestingly, this finding appears to contradict the observation that skin lesions occurred predominantly after multiple sessions rather than at the beginning of treatment. This may suggest that other factors beyond impedance variability contribute to lesion development, or that the delayed occurrence reflects cumulative effects. However, the observed decrease in impedance variability over time was small and likely not clinically meaningful. A graphical overview of the trend in impedance variability is provided in Supplementary Fig. 1 . While preliminary, the results showed a statistically significant decline in variability when including only the first 20 sessions (b = −7.15, p = .010) and a stronger effect when all available sessions were included (b = −12.88, p < .001).
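A minimal Python sketch of this trend analysis is given below. It is an illustration under assumptions: the per-session standard deviations are regressed on session number with a simple ordinary-least-squares fit via scipy, the trial's actual model specification (e.g., any clustering by participant) is not reproduced, and the input data are hypothetical.

```python
import numpy as np
from scipy import stats

def impedance_variability_trend(sessions, max_sessions=24):
    """Slope of session-wise impedance SD over session number.

    sessions: list of 1-D arrays of impedance samples, one per session,
    in chronological order. Returns (slope, p-value), analogous in
    spirit to the reported b = -7.15, p = .010.
    """
    per_session_sd = np.array([np.std(s) for s in sessions[:max_sessions]])
    session_no = np.arange(1, len(per_session_sd) + 1)
    fit = stats.linregress(session_no, per_session_sd)
    return fit.slope, fit.pvalue

# Hypothetical participant whose variability shrinks with practice.
rng = np.random.default_rng(2)
sessions = [rng.normal(800, 300 - 10 * k, 1200) for k in range(24)]
print(impedance_variability_trend(sessions))  # negative slope expected
```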
Further study is needed to evaluate the potential for adaptation over time, which may have implications for the design of user training and real-time feedback in home-based tDCS. Our study has several limitations. The sample sizes are not large, partially due to the premature termination of the HomeDC trial. Measurement precision is a further constraint: the analyzed stimulation current has a permissible measurement deviation of approximately 5% (due to offset, noise, RMS value formation, and 12-bit quantization), and the calculated impedance values carry an uncertainty of up to 10% (due to error propagation). Furthermore, the comparability of the two studies was limited by marginally different equipment (i.e., electrode material), although the same protocol was used. Thus, differences in technical parameters and safety outcomes cannot be attributed to the study setting alone, as was initially assumed. Moreover, impedance and current variability are not the only measures of stimulation quality. Other important factors (e.g., intra- and interindividual variation in tDCS electrode positioning, the time of day of stimulation, and concurrent activity) are general challenges in tDCS, particularly in at-home application, and need to be addressed to obtain correctly performed tDCS sessions at home, including maintaining a quiet environment and regulating activity before, during, and after stimulation to control for brain-state effects. The results of the impedance variability analyses suggest that conventional impedance thresholds may not be sufficient to prevent potential AEs, while artificial intelligence (AI)-based algorithms could facilitate faster and more effective real-time analysis of technical parameters, enabling quicker detection of issues. Despite these insights, the mechanisms underlying skin lesion development during tDCS remain incompletely understood. While impedance variability is one possible contributor, it is likely that multiple interacting factors, including cumulative effects over time, microstructural skin properties, or subtle electrode displacement, play a role. Given the limited number of adverse events and participants, our analysis cannot establish causality. Therefore, future studies with larger samples, systematic dermatological assessments, and real-time monitoring should further explore these questions to improve the safety of home-based tDCS protocols. Our study provides a methodological foundation to guide such efforts. 5 Conclusion In sum, this study compared two sets of technical tDCS data from independent RCTs, i.e., a trial with tDCS self-administered at home and a trial where tDCS was applied by trained operators in the clinic. Continuous monitoring of tDCS technical parameters (i.e., current and impedance) using a cloud-based approach was valuable in both studies, but could be further improved by real-time recording and feedback to tDCS users in future applications. In general, safety may be more critical in at-home applications. Our findings show that, in addition to impedances and their intra-individual variability during tDCS, other factors may be associated with local electrochemical skin reactions underneath the electrodes. In the HomeDC study, these reactions were related to inferior electrode material and its degradation with frequent use, but insufficient humidification may also have contributed to these phenomena.
Thus, technical monitoring may be a useful approach for controlling tDCS quality and adherence, but electrode material should be thoroughly tested under diverse conditions, and instructions for application, training, and supervision are mandatory for a safe application of at-home tDCS. Future research should investigate and establish suitable equipment as well as general safety precautions for tDCS, including long-term and frequent use (e.g., as in ‘spaced’ or ‘accelerated’ tDCS protocols [ 52 ]). CRediT authorship contribution statement Ulrike Vogelmann: Writing – review & editing, Writing – original draft, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Matthias Stadler: Visualization, Validation, Software, Methodology, Formal analysis, Data curation, Conceptualization. Aldo Soldini: Writing – review & editing, Investigation. Kai-Yen Chang: Writing – review & editing. Miaoxi Chen: Writing – review & editing, Investigation. Lucia Bulubas: Writing – review & editing. Esther Dechantsreiter: Writing – review & editing, Investigation. Christian Plewnia: Writing – review & editing, Investigation. Andreas Fallgatter: Writing – review & editing. Berthold Langguth: Writing – review & editing. Claus Normann: Writing – review & editing. Lukas Frase: Writing – review & editing. Peter Zwanzger: Writing – review & editing. Thomas Kammer: Writing – review & editing. Carlos Schönfeldt-Lecuona: Writing – review & editing. Daniel Kamp: Writing – review & editing. Malek Bajbouj: Writing – review & editing. Alexander Hunold: Writing – review & editing, Software, Methodology, Data curation. Severin Schramm: Writing – review & editing. Josef Priller: Writing – review & editing. Ulrich Palm: Writing – review & editing. Leigh Charvet: Writing – review & editing. Daniel Keeser: Writing – review & editing. Gerrit Burkhardt: Writing – review & editing, Supervision, Investigation, Data curation, Conceptualization. Frank Padberg: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Data curation, Conceptualization. Data availability statement The de-identified individual patient data in this paper will be made accessible after its publication for non-commercial academic projects that have a legitimate research topic and a clearly stated hypothesis. If an application is accepted, researchers will be asked to obtain approval for the study from their institution's ethics board. The authors will subsequently provide the de-identified data sets via a secure data transfer system. Funding The DepressionDC trial was a project of the German Center for Brain Stimulation (GCBS) research consortium, funded by the German Federal Ministry of Education and Research (BMBF; grant number FKZ 01EE1403G). The project was also supported within the initial phase of the German Center for Mental Health (Deutsches Zentrum für Psychische Gesundheit [DZPG], grant 01EE2303A). The HomeDC study was funded by the FöFoLe grant of the Ludwig-Maximilians-University (U. Vogelmann, née Kumpf, grant number: 1106). Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Frank Padberg reports financial support was provided by the German Federal Ministry of Education and Research (BMBF).
Ulrike Vogelmann reports financial support was provided by LMU Munich Faculty of Medicine. Frank Padberg reports a relationship with Brainsway Inc. (Jerusalem, Israel) that includes: board membership, consulting or advisory, and speaking and lecture fees. Frank Padberg reports a relationship with Sooma medical (Helsinki, Finland) that includes: board membership and consulting or advisory. Frank Padberg reports a relationship with Mag & More GmbH (Munich, Germany) that includes: speaking and lecture fees. Frank Padberg reports a relationship with neuroCare Group GmbH that includes: speaking and lecture fees. Leigh Charvet reports a relationship with Soterix Medical Inc that includes: speaking and lecture fees. Leigh Charvet reports a relationship with Neuroelectrics Corp that includes: consulting or advisory. Peter Zwanzger reports a relationship with MagVenture Inc that includes: consulting or advisory and speaking and lecture fees. Alexander Hunold reports a relationship with neuroConn GmbH that includes: employment. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia components 1–3. Supplementary data to this article can be found online at https://doi.org/10.1016/j.brs.2025.05.103 .
REFERENCES:
1. LEFAUCHEUR J (2020)
2. LEFAUCHEUR J (2017)
3. FREGNI F (2021)
4. BIKSON M (2016)
5. BRUNONI A (2013)
6. BRUNONI A (2017)
7. RAZZA L (2020)
8. KLOCKE L (2022)
9. SANDRAN N (2019)
10. PALM U (2018)
11. CHARVET L (2020)
12. BIKSON M (2020)
13. SHAW M (2020)
14. RIGGS A (2018)
15. PILLONI G (2019)
16. DOBBS B (2018)
17. ALONZO A (2019)
18. OH J (2022)
19. BORRIONE L (2022)
20. CHARVET L (2023)
21. BAJBOUJ M (2014)
22. SATHAPPAN A (2019)
23. WOODHAM R (2024)
24. BORRIONE L (2024)
25. KUMPF U (2023)
26. KUMPF U (2023)
27. BURKHARDT G (2023)
28. KUMPF U (2021)
29. PADBERG F (2017)
30. HUNOLD A (2020)
31. PALM U (2014)
32. (2023)
33. GIORGINO T (2009)
34. FRANK E (2010)
35. LAGOPOULOS J (2008)
36. SWARTZ C (1989)
37. MINHAS P (2011)
38. MINHAS P (2010)
39. MERRILL D (2005)
40. KHADKA N (2018)
41. DATTA A (2009)
42. AMBRUS G (2011)
43. KHADKA N (2015)
44. PALM U (2008)
45. RODRIGUEZ N (2014)
46. WANG J (2015)
47. KORTTEENNIEMI A (2019)
48. LU H (2019)
49. PALM U (2020)
50. PILLONI G (2022)
51. PILLONI G (2021)
52. BRUNONI A (2025)
|
10.1016_j.rio.2025.100884.txt
|
TITLE: Three-dimensional profilometry based on two-step generalized phase shift for phase measurement of grating projections
AUTHORS:
- Qi, Jiacheng
- Qin, Hongjin
- Sun, Ang
- Lai, Youpan
- Li, Jie
ABSTRACT:
An algorithm is proposed to extract the phase from deformed grating images using a two-step generalized phase-shifting technique for fringe projection profilometry (FPP). The method is based on the principle of the least-squares iterative method: it directly extracts the unknown phase shift from two deformed grating images that have been modulated by an object and then reconstructs the object wave. Compared to traditional phase-shifting techniques, it does not require pre-setting or precise control of the phase shifter, significantly reducing the dependence on precise phase-shifter calibration, which is particularly beneficial in environments affected by factors such as air turbulence and mechanical vibrations. Moreover, it reduces the number of projected grating images and increases the measurement speed. Computer simulations and optical experiments demonstrate the effectiveness and correctness of this method.
BODY:
1 Introduction Fringe projection profilometry (FPP) is characterized by fast measurement speed, high precision, a large field of view, portability, non-contact operation, and the ability to obtain a wealth of data. It is widely used in fields such as automated production, reverse engineering, machine vision, virtual reality, and medical imaging diagnostics ( Geng, 2011; Zuo et al., 2012; Zhang, 2009; Huang and Zhang, 2006; Gorthi and Rastogi, 2009 ). Researchers have continuously improved FPP measurement strategies. For example, Zhang et al. ( Zhang et al., 2024 ) proposed a multi-view FPP system that addressed issues such as shadows, occlusion, and local reflections when measuring complex structures and high-dynamic-range surfaces by employing telecentric projection and tilted-shift camera techniques. Wu et al. ( Wu et al., 2021 ) introduced a DIC-assisted FPP method, which combined gray-code patterns and three-wavelength phase unwrapping (Tri-PU) to achieve high-speed 3D reconstruction of discontinuous surfaces at 542 frames per second. Huang ( Huang et al., 2023 ) proposed a deep learning-based pixel-wise phase unwrapping method (DL-PWPU), which utilized a single wrapped phase map and two auxiliary fringe patterns to achieve high-precision measurements of complex free-form surfaces. Fourier transform profilometry (FTP) ( Su and Chen, 2001; Iwata et al., 2008 ) and phase measurement profilometry (PMP) ( Zhang, 2009; Huang and Zhang, 2006; Fu and Luo, 2011 ) are usually used for phase extraction in FPP. FTP requires a single grating image, making it fast and suitable for dynamic measurement. However, it necessitates filtering operations, which limits the measurement range and results in relatively lower measurement precision ( Guo, 2013; Cong et al., 2015 ). Traditional PMP, which utilizes multiple images for phase extraction, is characterized by its non-contact nature, full-field measurement capability, high resolution, and high precision. However, traditional PMP uses fixed or equal-step phase-shifting techniques ( Huang and Zhang, 2006; Wu et al., 2022; Tao et al., 2018 ) to extract the phase, which demand high precision from the phase shifter and are particularly susceptible to environmental factors such as mechanical vibration and air turbulence. To improve measurement accuracy and reduce measurement time, researchers have proposed two-step phase-shifting algorithms based on specific phase shifts. For instance, Mao and Yin ( Mao et al., 2018; Yin et al., 2021 ) introduced a phase shift of 3π/2 to obtain two deformed patterns, simplifying the relationship between intensity and phase into a line-circle model and thereby streamlining the phase calculation process. Yang ( Yang and He, 2007 ) introduced a phase shift of π/2 to obtain two deformed patterns, using intensity-difference algorithms to eliminate the slowly varying background intensity of the stripe pattern before extracting the phase. In the work of Quan ( Quan et al., 2003 ), a phase shift of π was introduced to obtain two deformed fringe patterns, enabling the calculation of background intensity and fringe contrast; the continuous phase value was then directly determined through the application of the inverse cosine function. Additionally, several phase demodulation methods for two-step phase-shifting images applicable with arbitrary phase shifts have been proposed.
Zhang ( Zhang et al., 2019 ) proposed an interference extremum (EVI) algorithm based on a Hilbert-Huang filter, which determines the phase shift between the interference patterns by searching for the maximum and minimum values of the normalized interference pattern, and then obtains the measured phase through the inverse tangent function. Yin ( Yin et al., 2021 ) proposed and established a phase iteration technique for arbitrary phase shifts, which formulates a target function based on the intensity distribution of two deformed fringe patterns, transforming the phase recovery problem into a nonlinear optimization problem. Most of the existing two-step phase-shifting algorithms are based on specific phase shifts. However, due to air disturbances, mechanical vibrations, etc., it is challenging to obtain accurate phase shifts, leading to inaccuracies in phase extraction. To overcome these shortcomings, we propose an algorithm that utilizes two deformed grating images to extract an arbitrary unknown phase shift. The phase shift and modulation coefficient are estimated before the calculation, and the least-squares method is used to iteratively optimize the phase-shift value and modulation coefficient to match the intensity distributions of the two interferograms. The iteration ends when the difference between two adjacent output values is less than the preset error. Simulations and experiments have demonstrated the validity and precision of this method.

2 Basic Principles Since the process of extracting the phase principal value is the same for both the reference plane and the measured surface, we describe only the process for the measured surface below. The fringe image of the grating projected onto the surface of the object, captured by the camera, can be expressed as

(1) $I(x,y) = a(x,y) + b(x,y)\cos[2\pi f_0 x + \varphi(x,y)]$

where $I$ represents the recorded intensity distribution of the deformed grating, $a(x,y)$ is the background light intensity, $b(x,y)$ is the modulation coefficient, and $f_0$ is the spatial frequency of the grating pattern projected onto the object.

Step 1: Calculate the background intensity $a(x,y)$. The two phase-shifted images are

(2) $I_1(x,y) = a(x,y) + b(x,y)\cos[2\pi f_0 x + \varphi(x,y)]$

(3) $I_2(x,y) = a(x,y) + b(x,y)\cos[2\pi f_0 x + \varphi(x,y) - \alpha]$

Their difference and sum are

(4) $I_2 - I_1 = b\{\cos[2\pi f_0 x + \varphi - \alpha] - \cos[2\pi f_0 x + \varphi]\} = 2b\sin\!\big(2\pi f_0 x + \varphi - \tfrac{\alpha}{2}\big)\sin\tfrac{\alpha}{2}$

(5) $I_2 + I_1 = 2a + b\{\cos[2\pi f_0 x + \varphi - \alpha] + \cos[2\pi f_0 x + \varphi]\} = 2a + 2b\cos\!\big(2\pi f_0 x + \varphi - \tfrac{\alpha}{2}\big)\cos\tfrac{\alpha}{2}$

Using $\sin^2\!\big(2\pi f_0 x + \varphi - \tfrac{\alpha}{2}\big) + \cos^2\!\big(2\pi f_0 x + \varphi - \tfrac{\alpha}{2}\big) = 1$, the following relations between intensity and phase can be derived from Equations (4) and (5):

(6) $(I_2 - I_1)^2/\tan^2\tfrac{\alpha}{2} + (I_2 + I_1 - 2a)^2 = 4b^2\cos^2\tfrac{\alpha}{2}$

(7) $a = \big\{(I_2 + I_1) \mp \big[4b^2\cos^2(\alpha/2) - (I_2 - I_1)^2/\tan^2(\alpha/2)\big]^{1/2}\big\}/2$

Step 2: Calculate the phase principal value $\varphi$. From Equation (2) it can be derived that

(8) $\cos(2\pi f_0 x + \varphi) = (I_1 - a)/b$

From Equation (3) it can be derived that

(9) $I_2 = a + b[\cos(2\pi f_0 x + \varphi)\cos\alpha + \sin(2\pi f_0 x + \varphi)\sin\alpha]$

Substituting Equation (8) into Equation (9), it can be derived that

(10) $\sin(2\pi f_0 x + \varphi) = [(I_2 - a) - (I_1 - a)\cos\alpha]/(b\sin\alpha)$

The following expression can be derived from Equations (8) and (10):

(11) $2\pi f_0 x + \varphi = \tan^{-1}\!\big[\sin(2\pi f_0 x + \varphi)/\cos(2\pi f_0 x + \varphi)\big] = \tan^{-1}\dfrac{[(I_2 - a) - (I_1 - a)\cos\alpha]/(b\sin\alpha)}{(I_1 - a)/b}$

Step 3: Calculate the modulation coefficient $b(x,y)$ and the phase shift $\alpha$. Rewrite Equation (3) as

(12) $I_2(x,y) = a + c\cos(2\pi f_0 x + \varphi) + d\sin(2\pi f_0 x + \varphi)$

where

(13) $c = b\cos\alpha, \quad d = b\sin\alpha$

Using the least-squares method, and writing $\theta = 2\pi f_0 x + \varphi$, we can determine

(14) $\begin{bmatrix} a \\ c \\ d \end{bmatrix} = \begin{bmatrix} M \times N & \sum\cos\theta & \sum\sin\theta \\ \sum\cos\theta & \sum\cos^2\theta & \sum\cos\theta\sin\theta \\ \sum\sin\theta & \sum\cos\theta\sin\theta & \sum\sin^2\theta \end{bmatrix}^{-1} \begin{bmatrix} \sum I_2 \\ \sum I_2\cos\theta \\ \sum I_2\sin\theta \end{bmatrix}$

where the summation symbol $\sum$ represents the sum over all $M \times N$ pixels in the image. Solving Equation (14), the parameters $c$ and $d$ can be computed. Subsequently, the phase shift $\alpha$ can be obtained:

(15) $\alpha = \tan^{-1}(d/c)$

Using $\cos^2\alpha + \sin^2\alpha = 1$, it can be derived that

(16) $b^2 = c^2 + d^2$

Given that $b(x,y)$ represents the modulation coefficient, it must be a positive value, that is,

(17) $b = \sqrt{c^2 + d^2}$

Among the aforementioned variables, only the intensities (grayscale values) of the two deformed grating images can be recorded by the camera. To determine the four variables in the fringe-image representation, some variables must be estimated before the iteration starts. We first choose π/2 as the initial estimate of the phase shift (due to device errors and adjustment errors in the projector, the estimated initial value will deviate from the actual phase shift within a small range, but setting the initial estimate close to the actual phase shift helps reduce the number of iterations) and use the root mean square of the grayscale values of the first deformed grating image as the initial estimate of the modulation coefficient $b(x,y)$. Then, through Equations (6) to (8), the background intensity $a(x,y)$ and the phase principal value $\varphi$ can be calculated. After that, the optimized modulation coefficient $b(x,y)$ and phase shift $\alpha$ can be determined through Equations (14) to (17). This constitutes one complete iteration cycle. The iteration continues until the change in the extracted phase shift is less than the preset error function $\varepsilon$, at which point we consider the phase shift to be accurate. The error function is

(18) $\varepsilon = \sum\big\{\big|I_1 - (A_0^2 + A_r^2 + 2A_0A_r\cos\varphi)\big| + \big|I_2 - [A_0^2 + A_r^2 + 2A_0A_r\cos(\varphi - \alpha)]\big|\big\}$

In a similar way, the phase principal value $\varphi_0$ of the reference plane can be obtained. Utilizing a phase unwrapping algorithm based on the discrete cosine transform (DCT) ( Yongjian et al., 2007 ), the phase principal values of both the object and the reference surface are unwrapped, yielding their true phase. The height of the object is then derived from the phase difference between the reference surface and the object ( Fig. 1 ).
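A compact numerical sketch of this iteration is given below (Python with NumPy; the paper's own simulations in the next section were run in MATLAB). It is a minimal illustration under simplifying assumptions: the background $a$ is taken from Equation (7) with the minus root and the radicand clipped at zero for numerical safety, the global least-squares fit of Equation (14) is solved with a library routine, the convergence test uses the change in $\alpha$ rather than the error function of Equation (18), and the function name and synthetic test data are our own.

```python
import numpy as np

def two_step_phase_shift(I1, I2, alpha0=np.pi / 2, tol=1e-10, max_iter=200):
    """Estimate the unknown phase shift alpha and the total phase
    theta = 2*pi*f0*x + phi from two deformed grating images."""
    alpha = alpha0
    b = np.sqrt(np.mean(I1.astype(float) ** 2))   # RMS of I1 as initial b
    theta = np.zeros_like(I1, dtype=float)
    for _ in range(max_iter):
        # Eq. (7): background intensity (minus root; clip guards roundoff)
        rad = 4 * b**2 * np.cos(alpha / 2) ** 2 \
              - (I2 - I1) ** 2 / np.tan(alpha / 2) ** 2
        a = ((I1 + I2) - np.sqrt(np.clip(rad, 0.0, None))) / 2
        # Eqs. (8), (10), (11): cosine and sine of the total phase
        cos_t = (I1 - a) / b
        sin_t = ((I2 - a) - (I1 - a) * np.cos(alpha)) / (b * np.sin(alpha))
        theta = np.arctan2(sin_t, cos_t)
        # Eq. (14): least-squares fit I2 = a + c*cos(theta) + d*sin(theta)
        G = np.column_stack([np.ones(theta.size),
                             np.cos(theta).ravel(), np.sin(theta).ravel()])
        (_, c, d), *_ = np.linalg.lstsq(G, I2.ravel(), rcond=None)
        alpha_new = np.arctan2(d, c)              # Eq. (15)
        b = np.hypot(c, d)                        # Eq. (17)
        if abs(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha, theta

# Synthetic test: recover a 0.51 rad shift from two noiseless fringes.
x = np.linspace(0, 1, 512)[None, :] * np.ones((256, 1))
phi = 0.8 * np.exp(-((x - 0.5) ** 2) / 0.02)      # toy object phase
I1 = 100 + 50 * np.cos(2 * np.pi * 16 * x + phi)
I2 = 100 + 50 * np.cos(2 * np.pi * 16 * x + phi - 0.51)
alpha, _ = two_step_phase_shift(I1, I2)
print(round(float(alpha), 4))                     # expected to approach 0.51
```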
3 Computer Simulation To verify the proposed algorithm, computer simulations are conducted with MATLAB. First, an irregular object surface is generated with the peaks function and shown in Fig. 2 . The sinusoidal grating is then projected onto the reference plane and the object. Fig. 3 (a) and (b) show the two recorded grating images of the reference plane, with a phase shift of 0.5100 rad between the two images. Fig. 3 (c) and (d) show the two deformed gratings modulated by the object, with a phase shift of 0.5100 rad. The initial phase shift is chosen as 1.5 rad. From Table 1 we can see that after 28 iterations, the phase shift between the two deformed grating images can be accurately determined. This phase shift is then substituted into Equations (1)–(11) to obtain the corresponding phase principal value. Fig. 4 (a) and (b) show the two-dimensional distributions of the phase principal values of the reference surface and the object surface, respectively, extracted using our method. Fig. 5 (a) and (b) present the 3D phase contour distributions of the reference plane and the object, respectively, obtained by applying a spatial phase unwrapping algorithm to Fig. 4 . Fig. 6 shows the reconstructed object. The relative error between the reconstructed 3D contour and the actual object is shown in Fig. 7 . The error magnitude is around 10⁻⁵. Further analysis reveals that the maximum relative error is 1.0625 × 10⁻⁵ and the average relative error is 6.5480 × 10⁻¹¹. To further verify and analyze the accuracy of the phase-principal-value extraction algorithm we designed, the phase principal value of the object is directly extracted without considering the reference plane and compared with the true value of the object phase principal value. Fig. 8 (a) and (b) show the preset and reconstructed wrapped phase distributions of the object, respectively. Fig. 9 (a) and (b) show the absolute-error and relative-error distributions between the preset and reconstructed wrapped phase of the object, respectively. The maximum absolute error between the preset and reconstructed phases is 1.7679 × 10⁻¹¹ rad and the average value is 2.7390 × 10⁻¹¹ rad. The maximum relative error between the preset and reconstructed wrapped phases is 8.2360 × 10⁻⁵ and the average relative error is 7.7065 × 10⁻¹⁰. The data presented above show that the phase-principal-value extraction algorithm we have developed can accurately reconstruct the 3D morphology of objects. 4 Optical Experiment Based on the proposed measurement method and technology, a single-camera grating-projection phase-measuring 3D contour system was constructed and assembled. To verify the feasibility of the system, measurements were conducted on two objects. Utilizing a digital light projector (DLP), two phase-shifted grating patterns are sequentially projected onto the object under test. The deformed grating patterns modulated by the object are captured and stored in the computer. The first object is an inclined plane. Fig. 10 is the three-dimensional (3D) representation of the object. As can be seen from the figure, the final 3D model of the object is consistent with the original object. This validates the feasibility of the system. Following the sequential extraction of the principal phase values and phase unwrapping for the second object, the resulting 3D image of the object is depicted in Fig. 11 . It can be observed from the figure that the reconstructed 3D shape matches the actual shape of the object, thereby validating the feasibility of the 3D measurement system we have constructed. 5 Conclusion We present a two-step generalized phase-shifting algorithm for extracting arbitrary unknown phase shifts from two deformed grating patterns.
By providing initial estimates of the phase shift and modulation coefficient, and based on the intensity distributions of the two recorded deformed grating images, the algorithm employs a least-squares iterative method to directly extract any unknown phase shift from the grating images modulated by the object, thereby obtaining precise principal phase values and ultimately reconstructing the object wave. Compared with two-step phase-shifting algorithms based on specific phase shifts, this method does not require pre-setting or precise control of the phase shifter, significantly reducing the dependence on precise phase-shifter calibration and mitigating the impact of atmospheric turbulence and mechanical vibrations on measurement outcomes. Through computer simulations and optical experiments, it was found that the maximum relative error between the preset and reconstructed wrapped phase is 8.2360 × 10⁻⁵, and the average relative error is 7.7065 × 10⁻¹⁰. The experimental results demonstrate that the method is accurate, enhancing the applicability of the two-step phase-shifting method. CRediT authorship contribution statement Jiacheng Qi: Writing – review & editing, Writing – original draft, Validation, Software, Methodology, Conceptualization. Hongjin Qin: Writing – review & editing, Writing – original draft, Visualization, Software. Ang Sun: Resources, Formal analysis, Data curation. Youpan Lai: Investigation, Data curation. Jie Li: Supervision, Resources, Project administration, Methodology, Funding acquisition, Conceptualization. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Jie Li reports financial support was provided by the Department of Science and Technology of Shandong Province. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. GENG J (2011)
2. ZUO C (2012)
3. ZHANG S (2009)
4. HUANG P (2006)
5. GORTHI S (2009)
6. ZHANG G (2024)
7. WU Z (2021)
8. HUANG W (2023)
9. SU X (2001)
10. IWATA K (2008)
11. FU Y (2011)
12. GUO (2013)
13. CONG P (2015)
14. WU G (2022)
15. TAO T (2018)
16. MAO (2018)
17. YIN Y (2021)
18. YANG F (2007)
19. QUAN C (2003)
20. ZHANG (2019)
21. YIN Z (2021)
22. YONGJIAN Z (2007)
|
10.1016_j.psj.2023.102841.txt
|
TITLE: Hyperimmune egg yolk antibodies developed against Clostridium perfringens antigens protect against necrotic enteritis
AUTHORS:
- Goo, D.
- Gadde, U.D.
- Kim, W.K.
- Gay, C.G.
- Porta, E.W.
- Jones, S.W.
- Walker, S.
- Lillehoj, H.S.
ABSTRACT:
Necrotic enteritis (NE) is a widespread infectious disease caused by Clostridium perfringens that inflicts major economic losses on the global poultry industry. Due to regulations on antibiotic use in poultry production, there is an urgent need for alternative strategies to mitigate the negative effects of NE. This paper presents a passive immunization technology that utilizes hyperimmune egg yolk immunoglobulin Y (IgY) specific to the major immunodominant antigens of C. perfringens. Egg yolk IgYs were generated by immunizing hens with 4 different recombinant C. perfringens antigens, and their protective effects against NE were evaluated in commercial broilers. Six different spray-dried egg powders were produced: five using recombinant C. perfringens antigens, namely α-toxin, NE B-like toxin (NetB; EB), elongation factor-Tu (ET), pyruvate:ferredoxin oxidoreductase, and a mixture of 4 antigens (EM-1), plus a nonimmunized control (EC). The challenged groups were either provided with the different egg powders at a 1% level or with no egg powder (EN). The NE challenge model based on Eimeria maxima and C. perfringens dual infection was used. In Experiments 1 and 2, the EB and ET groups exhibited increased body weight gain (BWG; P < 0.01), decreased NE lesion scores (P < 0.001), and reduced serum NetB levels (P < 0.01) compared to the EN and EC groups. IgY against NetB significantly reduced Leghorn male hepatocellular (LMH) cell cytotoxicity in an in vitro test (P < 0.01). In Experiment 3, the protective effect of a mixture of IgYs (EM-2) against C. perfringens antigens (NetB and EFTu) and Eimeria antigens (elongation factor-1-alpha: EF1α; and Eimeria profilin: 3-1E) was tested. The EM-2 group showed similar body weight, BWG, and feed intake from d 7 to 22 compared to the NC group (P < 0.05). On d 20, the EM-2 group showed intestinal permeability, NE lesion scores, and jejunal NetB and collagen adhesin protein levels comparable to those of the NC group (P < 0.05). In conclusion, a dietary mixture containing antibodies to NetB and EFTu provides protection against experimental NE in chickens through passive immunization.
BODY:
INTRODUCTION Necrotic enteritis ( NE ), caused by Clostridium perfringens , is a widespread infectious disease inflicting economic losses of more than $6 billion on the global poultry industry ( Van der Sluis, 2000 ; Wade and Keyburn, 2015 ). NE usually occurs in broiler chickens at 2 to 6 wk of age and may present as an acute clinical disease or a subclinical infection. Acute infection is characterized by a sudden onset of mortality with few clinical signs, while subclinical NE decreases growth performance by about 12% compared to healthy chickens, accounting for a major portion of the economic loss caused by NE ( Skinner et al., 2010 ). In the past few decades, prophylactic supplementation of in-feed antibiotics has been used as a major strategy to mitigate the impact of NE. However, with the ban on the use of antibiotics for growth promotion in the European Union and the increasing regulatory restrictions on the use of antibiotics in the United States, the incidence and severity of NE outbreaks have increased in recent years ( Casewell et al., 2003 ; Gaucher et al., 2015 ). Therefore, there is a timely need to develop antibiotic-alternative strategies to mitigate NE ( Seal et al., 2013 ). One potential alternative strategy for the prevention of NE is passive immunization using antigen-specific hyperimmune egg yolk antibodies, also known as immunoglobulin Y ( IgY ). IgY from egg yolks collected after repeated immunization of laying hens with specific antigens has previously been shown to be effective in the prevention and treatment of intestinal infectious diseases ( Gadde et al., 2015 ). One of the advantages of using IgY as an antibiotic alternative in the control of NE is the high stability of egg yolk IgY ( Gadde et al., 2015 ). Spray-dried egg yolk IgY can be stored at room temperature for approximately 6 mo and for considerably longer periods when stored under refrigeration or freezing conditions ( Fu et al., 2006 ; Nilsson et al., 2012 ). Importantly, IgY is also known to be stable when processed under high heat and pressure as a feed additive ( Shimizu et al., 1992 , 1994 ). The mechanism of action of IgY is mainly an antigen-antibody reaction, with antigen-specific immunoglobulins binding to the pathogen to induce various antibacterial effects ( Rahman et al., 2013 ). For example, IgY binding to bacterial structures such as flagella and pili inhibits bacterial adhesion to the intestinal wall, thereby reducing bacterial growth and colonization in the intestine ( Jin et al., 1998 ). In addition, IgY can interfere with bacterial growth and toxin production in a variety of ways, including bacterial aggregation, toxin neutralization, inhibition of enzyme activity, and reduction of bacterial signaling cascades ( Wang et al., 2011 ; Xu et al., 2011 ; Rahman et al., 2013 ). Another important characteristic of IgY-mediated passive immunization is its immediate effect compared to active immunization, which can take several days or longer to induce an antigen-specific immune response ( Rahman et al., 2014 ). Egg yolk IgY antibodies that target C. perfringens could therefore defend against enteric bacterial diseases through passive immunization. However, some studies have shown no significant effects of dietary IgY antibodies against C. perfringens . Wilkie et al. (2006) reported that egg yolk IgY did not affect the level of colonization of C. perfringens whereas Tamilzarasan et al.
(2009) reported that the mortality rate of chickens infected with C. perfringens was reduced by egg yolk IgY. These varying observations could be due to many factors, including the specificity and dose of IgY antibodies as well as the type of NE infection model used. For example, pathogenic and toxin-producing C. perfringens strains can induce NE; however, in most field NE cases, coccidiosis has been shown to be an important predisposing factor for NE infection. This is because intracellular development of Eimeria parasites in the gut damages the intestinal epithelium and facilitates the colonization and proliferation of C. perfringens ( Van Immerseel et al., 2009 ). Epithelial cells physically damaged by Eimeria can leak plasma proteins, promoting C. perfringens growth ( Van Immerseel et al., 2004 ). In addition, damaged epithelial cells expose certain types of collagens within the extracellular matrix ( ECM ) to the lumen. As a result, C. perfringens expressing collagen adhesin protein ( CNA ) efficiently binds to collagen, promoting colonization ( Lepp et al., 2021 ; Goo et al., 2023 ). Previous studies have reported that Eimeria -specific IgYs are likely to mitigate the effects of coccidiosis ( Lee et al., 2009a , b ). Therefore, a combination of Eimeria -specific and C. perfringens -specific IgY antibodies could synergize effectively to mitigate NE infection. The objective of the current study was to develop egg yolk IgY antibodies against the major immunodominant antigens of C. perfringens and Eimeria and to investigate their combined protective effect against experimental NE through passive immunization. MATERIALS AND METHODS Cloning, Expression, and Purification of Recombinant C. Perfringens and Eimeria Proteins The method for the production of recombinant proteins for immunization of hens was previously described ( Lee et al., 2010 , 2011 ; Jang et al., 2012 ; Lin et al., 2017 ). Briefly, full-length coding sequences of C. perfringens α-toxin, NE B-like toxin ( NetB ), C. perfringens elongation factor Tu ( EFTu ), and a partial sequence of pyruvate:ferredoxin oxidoreductase ( PFO ), as well as full-length coding sequences of Eimeria elongation factor 1 alpha ( EF1α ) and 3-1E ( Eimeria recombinant profilin protein), were cloned into the pET32a (+) vector with an NH₂-terminal polyhistidine tag and transformed into Escherichia coli . Transformed E. coli DH5α bacteria were cultured for 16 h at 37°C and induced with 1.0 mM isopropyl-β-d-thiogalactopyranoside (Amresco, Cleveland, OH) for 5 h at 37°C. The bacteria were then harvested by centrifugation and disrupted by sonication on ice (Misonix, Farmingdale, NY). The supernatant was incubated with Ni-NTA agarose (Qiagen, Valencia, CA) for 1 h at room temperature, and the resin was washed with phosphate-buffered saline ( PBS ). Purified proteins were eluted, and their purity was confirmed on Coomassie blue-stained SDS-acrylamide gels. Production of C. Perfringens and Eimeria -Specific Egg Yolk IgY Laying hens (25–30 wk of age, Brown Leghorn, Slonaker Farms, Harrisonburg, VA) were immunized by intramuscular injection into the breast muscle with 50 to 100 µg of the purified recombinant C. perfringens or Eimeria antigens: 1) AgA (α-toxin antigen); 2) AgB (NetB antigen); 3) AgT (EFTu antigen); 4) AgP (PFO antigen); 5) AgM-1 (a mixture of AgA, AgB, AgT, and AgP); 6) AgM-2 (a mixture of AgB, AgT, EF1α antigen, and 3-1E antigen).
Freund's complete adjuvant ( FCA ) was used for the first injection, and Freund's incomplete adjuvant ( FIA ) was used for the boost injections. For the primary immunization, 0.5 mL was injected into each breast muscle (a total of 1.0 mL injected), and for each boost immunization, 0.5 mL was injected into one breast muscle (a total of 0.5 mL injected). The second immunization was administered 4 wk after the first immunization, with subsequent boosts given every 4 wk. Egg collection began 1 wk after the first boost, and the antibody titers were monitored by enzyme-linked immunosorbent assay ( ELISA ) at regular intervals. When the egg yolk antibody titers reached their peak, the eggs were collected, homogenized, and then spray-dried. The resultant egg powders were used as a source of protective antibodies, and control egg powder was obtained from the nonimmunized hens. The different egg powders produced were 1) EA (antibody against AgA); 2) EB (antibody against AgB); 3) ET (antibody against AgT); 4) EP (antibody against AgP); 5) EM-1 (antibody against AgM-1); 6) EM-2 (antibody against AgM-2); and 7) EC (from nonimmunized control hens). Experiment 1 Determination of IgY Levels in Egg Yolk and Egg Powder Egg samples collected from immunized and nonimmunized hens at regular intervals were used to monitor the specific antibody levels. Total IgY was extracted from egg yolks using the Pierce Chicken IgY Purification Kit (Thermo Fisher Scientific, Waltham, MA). Briefly, 2 mL of egg yolk contents was mixed with 10 mL of delipidation reagent, and IgY was purified following the manufacturer's instructions. Spray-dried egg powder samples were reconstituted in sterile PBS at a concentration of 1 mg/mL and filtered through a 0.22 µm membrane filter. Specific IgY levels in the egg yolk or egg powder samples were measured by indirect ELISA. Flat-bottom, 96-well microplates (Corning Costar, Corning, NY) were coated with 10 µg/mL purified recombinant proteins in carbonate buffer (BupH Carbonate-Bicarbonate buffer packs, Thermo Scientific, Rockford, IL) and incubated overnight at 4°C. The plates were washed twice with PBS containing 0.05% Tween 20 ( PBS-T ) (Sigma-Aldrich, St. Louis, MO) and blocked with 100 µL of PBS containing 1% bovine serum albumin ( BSA ) for 1 h at room temperature. One hundred µL of egg yolk or egg powder IgY samples diluted in PBS with 0.1% BSA were then added to the plates in triplicate and incubated for 2 h at room temperature with constant shaking. PBS with 0.1% BSA was used as a blank control. The plates were then washed with PBS-T, treated with peroxidase-conjugated rabbit antichicken IgY (IgG) (1:500; Sigma-Aldrich, St. Louis, MO), and incubated for 30 min, followed by color development for 10 min with 0.01% tetramethylbenzidine ( TMB ) substrate (Sigma-Aldrich, St. Louis, MO) in 0.05 M pH 5.0 phosphate-citrate buffer. Bound antibodies were detected by measuring the optical density at 450 nm ( OD450 ) using a microplate reader (Bio-Rad, Richmond, CA). Chickens and Experiment Design Experiment 1 was approved by the Beltsville Agriculture Research Center Small Animal Care and Use Committee, and the husbandry followed guidelines for the care and use of animals in agriculture research ( FASS, 1999 ). A total of 120 one-day-old broiler chickens (Ross 708, Longenecker's Hatchery, Elizabethtown, PA) were obtained and housed in brooder units in an Eimeria -free facility for 2 wk.
The chickens were then transferred to finisher cages, where they were infected and kept until the end of the experimental period. Feed and water were provided ad libitum. At 17 d of age, the 120 chickens were randomly assigned to 1 of 8 treatments ( n = 15). Chickens in the control ( NC ) group were noninfected and given a nonsupplemented basal diet. Chickens in the other treatment groups were experimentally coinfected with Eimeria maxima + C. perfringens to induce NE. The treatments consisted of a diet without egg powder ( EN ), a diet supplemented with EC, and diets supplemented with the 5 different immunized egg powders (EA, EB, ET, EP, and EM-1), all at a 1% level. The experimental model used for NE induction included oral inoculation of chickens at 17 d of age with E. maxima strain 41A (1 × 10⁴ oocysts/chicken) followed by oral administration of C. perfringens strain Del-1 (1 × 10⁹ colony-forming units ( cfu )/chicken) 4 d after E. maxima infection (d 21) ( Park et al., 2008 ; Jang et al., 2013 ; Lee et al., 2013 ). To facilitate the development of NE, all the chickens were given an antibiotic-free starter diet containing a low level (18%) of crude protein from d 1 to 20 and then switched to a standard grower diet with a high crude protein level (24%) from d 21 to 28 ( Table 1 ). All chickens were weighed individually on d 17 (the day of E. maxima inoculation) and on d 28 (7 d postinoculation ( dpi ) with C. perfringens and 11 dpi with E. maxima ) to calculate body weight gain ( BWG ). Jejunal Necrotic Enteritis Lesion Scores On d 23 (2 dpi with C. perfringens ), 3 chickens per treatment group were randomly selected and euthanized, and approximately 20-cm intestinal segments extending 10 cm anterior and posterior from Meckel's diverticulum were obtained. Intestinal sections were scored for NE lesions on a scale of 0 (none) to 4 (high) by 3 independent observers ( Shojadoost et al., 2012 ). Experiment 2 Chickens and Experiment Design Experiment 2 was approved by the Beltsville Agriculture Research Center Small Animal Care and Use Committee, and the husbandry followed guidelines for the care and use of animals in agriculture research ( FASS, 1999 ). A total of 50 broiler chickens were randomly assigned to 1 of 5 treatments ( n = 10) on d 17. The treatments consisted of NC, EN, EC, EB, and ET. The procedures for the induction of NE and the experimental diets were the same as those described for Experiment 1. All chickens were weighed individually on d 17 (the day of E. maxima inoculation) and d 28 (7 dpi with C. perfringens and 11 dpi with E. maxima ) to calculate BWG. Sandwich ELISAs for Determination of Serum α-Toxin and NetB Levels On d 21, 3 blood samples per treatment were collected from the wing vein 6 h after C. perfringens inoculation. The sera were separated by centrifugation at 1,000 × g for 20 min to determine the levels of α-toxin and NetB by sandwich ELISA as previously described ( Lee et al., 2013 ). Briefly, α-toxin and NetB monoclonal antibodies ( mAbs ) were coated onto 96-well microplates at a concentration of 5 µg/mL using carbonate buffer (BupH Carbonate-Bicarbonate buffer packs, Thermo Scientific, Rockford, IL) and incubated overnight at 4°C. The plates were washed and blocked as described previously. Serum samples (100 µL) were added to the microplates, and the plates were incubated overnight at 4°C.
Following incubation, the plates were washed, treated with 2 µg/mL unconjugated rabbit polyclonal antibody to α-toxin and NetB, and incubated at room temperature for 30 min. After washing the plates 5 times with PBS-T, 1 mL of a 1:10,000 dilution of antirabbit IgG horseradish peroxidase ( HRP )-conjugated second detection antibody was added and incubated for 30 min. After incubation, the plates were washed and developed with 100 µL of TMB substrate (Sigma-Aldrich, St. Louis, MO) for 10 min, followed by the addition of 2 N H₂SO₄ stop solution. The plates were read at OD450 using a microplate reader (Bio-Rad, Richmond, CA). IgY-NetB Neutralization Assay The Leghorn male hepatocellular ( LMH ) cell cytotoxicity assay, as outlined by Keyburn et al. (2008) , was used to assess the neutralizing activity of anti-NetB IgY against recombinant NetB protein. LMH cells (LMH, CRL-2117, ATCC, Manassas, VA) were added onto 96-well tissue culture plates (Corning) at a density of 5 × 10³ cells in Waymouth's medium. The cells were preincubated for 24 h at 37°C and 5% CO₂. IgY extracted from the egg yolks of control nonimmunized hens (AgC) and IgY from hens hyperimmunized with AgB were incubated with recombinant NetB protein at a ratio of NetB:IgY = 1:20 for 1 h at room temperature. The preincubated IgY-NetB mixtures and NetB alone (390 pg) were added to the LMH cells in triplicate wells and incubated for 4 h at 37°C. The dehydrogenase activity in the viable cells was measured using the Cell Counting Kit-8 (Dojindo Molecular Technologies, Rockville, MD) and used to calculate LMH cell cytotoxicity. C. Perfringens Growth Inhibition Assay The efficacy of IgY from hens hyperimmunized with AgT in inhibiting the growth of C. perfringens in culture was investigated, and the results were compared to those of the AgC group. The C. perfringens Del-1 strain was cultured anaerobically in brain heart infusion ( BHI , Becton Dickinson, NJ) broth overnight at 37°C. Specific and nonspecific egg yolk IgY solutions were sterilized by filtering through a 0.22 µm membrane filter. Five milliliters of each IgY solution were then added to an equal volume of C. perfringens culture (2.4 × 10⁷ cfu/mL) and incubated under anaerobic conditions at 37°C. The final concentration of the IgY tested was 1 mg/mL. Samples (1 mL) were collected at 0, 2, 4, 6, and 24 h, and serial dilutions were plated on Perfringens agar plates (Thermo Scientific, Lenexa, KS) in triplicate. The inoculated plates were incubated at 37°C for 24 h, and the colonies were counted to determine the cfu. Experiment 3 Chickens and Experiment Design Experiment 3 was conducted at the Poultry Research Center, University of Georgia, following a protocol approved by the Institutional Animal Care and Use Committee (A2020 01-018). The animal husbandry followed the Cobb 2018 nutritional and management guidelines ( Cobb-Vantress, 2018 ). A total of 200 0-day-old Cobb 500 broiler chickens were obtained and raised in battery cages, with feed and water provided ad libitum. On d 7, the chickens were randomly assigned to 4 treatments with 5 replicates, each replicate consisting of 10 chickens. The 4 treatments included NC, EN, EC, and EM-2, with EC and EM-2 provided at a 1% level of the diet. The experimental NE infection model used oral inoculation of chickens on d 14 with E. maxima strain 41A (7.5 × 10³ oocysts/chicken) followed by oral administration of C. perfringens strain Del-1 (1 × 10⁹ cfu/chicken) on d 18 (4 dpi).
To facilitate the development of NE, all the chickens were fed a starter diet containing 21% crude protein from d 0 to 17 and then switched to a high (24%) crude protein diet from d 18 to 22 ( Table 2 ). Individual body weight ( BW ), BWG, feed intake ( FI ), and feed conversion ratio ( FCR ) were recorded for all chickens on d 7 and 22. Intestinal Permeability Intestinal permeability was assessed on d 20 (6 dpi) using fluorescein isothiocyanate-dextran (FITC-d ; molecular weight 4,000; Sigma-Aldrich, Canada) following a modified version of previous experiments ( Teng et al., 2020 ; Choi et al., 2022 ). In brief, a solution of FITC-d at a concentration of 2.2 mg/mL was prepared in PBS under dark conditions. One chicken per cage was orally administered the FITC-d solution. Two hours after administration, the chickens were euthanized by CO2 asphyxiation, and blood samples were collected. The collected blood samples were stored in a completely dark room for 2 h and then centrifuged at 2,000 × g for 12 min to obtain serum. To determine the FITC-d level, a standard curve was generated by serial dilution in serum samples collected from 5 nonexperimental chickens. Subsequently, 100 µL of the serum samples were transferred to 96-well dark plates, and fluorescence was measured at 485/525 nm (excitation/emission) using a SpectraMax 5 microplate reader (Molecular Devices, Sunnyvale, CA). Jejunal Necrotic Enteritis Lesion Scores On d 20 (6 dpi), 3 chickens per cage were randomly selected and euthanized to collect approximately 30-cm intestinal segments extending 15 cm anterior and posterior to Meckel's diverticulum. The intestinal segments were then examined for NE lesions by 2 independent observers. The severity of the lesions was assessed on a scale ranging from 0 (no lesions) to 4 (severe lesions) as described previously ( Shojadoost et al., 2012 ). Fecal Oocyst Counting To perform E. maxima oocyst counting, clean trays were placed under the cages on d 19 (1 d before sample collection). On d 20, approximately 100 g of fresh fecal samples were collected, homogenized, and stored at 4°C for further analysis. The oocyst counting was carried out as previously described ( Choi et al., 2022 ) with slight modifications. In brief, 5 g of feces was mixed with 30 mL of tap water and vigorously vortexed. After vortexing, 1 mL of the fecal suspension was mixed with 10 mL of a saturated salt solution and vortexed again. Then, 650 µL of the feces-salt mixture was added to a McMaster chamber (Vetlab Supply, Palmetto Bay, FL). The oocysts were counted by 3 different individuals. The total number of E. maxima oocysts per gram of feces was expressed as log10. Sandwich ELISAs for Determination of NetB and CNA in Jejunal Digesta On d 20, 2 jejunal digesta samples per treatment were collected and diluted with sterile PBS at a ratio of 1:10. The diluted digesta samples were then centrifuged at 2,000 × g for 10 min, and the supernatants were collected to determine the levels of NetB and CNA using sandwich ELISA. The sandwich ELISAs were performed following a method previously described by Goo et al. (2023) with slight modifications. In summary, NetB and CNA capture mAbs were coated onto 96-well microplates at a concentration of 5 µg/mL using carbonate buffer (BupH Carbonate-Bicarbonate buffer packs, Thermo Scientific, Rockford, IL) and incubated overnight at 4°C.
The plates were washed twice with PBS-T and then blocked with blocking buffer (SuperBlock Blocking Buffer, Thermo Scientific, Rockford, IL). Next, diluted digesta samples (100 µL) were added to the microplates and incubated for 2 h. After the incubation, the plates were washed 6 times with PBS-T, and HRP-conjugated NetB and CNA detection mAbs, at a concentration of 0.33 µg/mL, were added and incubated at room temperature for 1 h. After washing the plates again 6 times with PBS-T, 100 µL of TMB substrate (Sigma-Aldrich, St. Louis, MO) was added to each well and incubated at room temperature for 5 min. The color development reaction was stopped by adding 50 µL of 2 M H2SO4 stop solution. Absorbance was then measured at OD450 using a microplate reader (Bio-Rad, Richmond, CA). Statistical Analysis The statistical analysis was conducted using SAS software (version 9.4, SAS Institute Inc., Cary, NC; SAS Institute Inc., 2013 ). The data were expressed as mean ± standard error of the mean ( SEM ) for each treatment. All experiments involving ELISA, cell neutralization, and C. perfringens growth inhibition assays were performed in triplicate. For data analysis, a 1-way analysis of variance ( ANOVA ) was applied, and if the P value was less than 0.05 ( P < 0.05), indicating a significant difference, Tukey's honestly significant difference ( HSD ) test was used to determine the differences among treatments (a minimal sketch of this workflow is given below). RESULTS Experiment 1 Antibody Levels of Egg Yolk Antibodies and Spray-Dried Egg Powder From Hyperimmunized Hens The average antibody levels in the egg yolks of hyperimmunized hens are shown in Figure 1 . The egg yolks exhibited significantly higher antibody levels to the respective immunizing antigens compared to those from the nonimmunized hens. The specific antibody levels of the spray-dried egg powders, as determined by indirect ELISA, are shown in Figure 2 . All the tested egg powders, including EA, EB, ET, and EP, showed significantly higher antibody levels compared to that of the EC. Body Weight Gain The effect of dietary egg powder supplementation on BWG from d 17 to 28 is shown in Figure 3 . The EN, EC, EA, and EP groups showed significantly decreased BWG compared to the NC group ( P < 0.001). Dietary supplementation with EB, ET, and EM-1 significantly increased BWG compared to the EN and EC groups. The BWG of the EB, ET, and EM-1 groups did not show any statistical difference compared to the NC group. Jejunal Necrotic Enteritis Lesion Scores The effect of dietary egg powder supplementation on NE lesion scores on d 23 (6 dpi of E. maxima inoculation) is presented in Figure 4 . All groups showed significantly increased NE lesion scores compared to the NC group ( P < 0.001). The EB and ET groups showed significantly decreased NE lesion scores compared to the EN group. No significant differences were observed in the NE lesion scores of the EC, EA, EP, and EM-1 groups compared to the EN group. Experiment 2 Body Weight Gain The BWG results for Experiment 2 are shown in Figure 5 . The BWG of chickens in the EB and ET groups was significantly higher compared to that of the EN and EC groups ( P < 0.01). There were no significant differences in BWG between the EN and EC groups. Both the EN and EC groups showed significantly decreased BWG compared to the NC group. Serum α-Toxin and NetB Levels The results of serum α-toxin and NetB levels are shown in Figure 6 . No significant levels of α-toxin and NetB were detected in the serum of the NC group.
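The group comparisons reported throughout these experiments follow the one-way ANOVA/Tukey HSD workflow described under Statistical Analysis. A minimal Python sketch with invented BWG values is given here purely for illustration (the study itself used SAS 9.4):

```python
# Hedged sketch of one-way ANOVA followed by Tukey's HSD (invented BWG data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {"NC": 900, "EN": 650, "EB": 850, "ET": 840}  # hypothetical mean BWG (g)
data, labels = [], []
for name, mu in groups.items():
    vals = rng.normal(mu, 60, size=10)  # 10 birds per group, invented SD
    data.append(vals)
    labels += [name] * len(vals)

f_stat, p_val = stats.f_oneway(*data)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_val:.4g}")
if p_val < 0.05:  # only proceed to post hoc comparisons if the omnibus test is significant
    print(pairwise_tukeyhsd(np.concatenate(data), labels, alpha=0.05))
```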
The levels of both α-toxin and NetB in the serum of the EB and ET groups were significantly lower compared to those of the EN group ( P < 0.01). However, α-toxin and NetB levels were also significantly decreased in the EC group compared to the EN group. In Vitro NetB Neutralization and C. Perfringens Inhibition Assay The result of the in vitro NetB neutralization assay of egg yolk IgY against AgB is shown in Figure 7 . NetB-specific hyperimmune IgY significantly neutralized the cytotoxic effect of NetB on LMH cells, reducing it from 66% (control group without IgY) to 12% ( P < 0.01). The NC group did not exhibit any neutralizing effect on NetB. The result of the in vitro C. perfringens growth inhibition assay is shown in Figure 8 . Neither the NC group nor the egg yolk IgY against AgT showed any inhibitory effect on the growth of C. perfringens . Experiment 3 Growth Performance The growth performance results from d 7 to 22 are shown in Figure 9 . The chickens in the EN and EC groups showed significantly decreased BW, BWG, and FI compared to the NC group ( P < 0.05). The BW, BWG, and FI of chickens in the EM-2 group did not differ from those of the NC group ( P > 0.05). No statistical differences were observed in FCR throughout the experimental period. Intestinal Permeability The result of intestinal permeability testing on d 20 (6 dpi) is shown in Figure 10 . The chickens in the EN and EC groups showed significantly increased intestinal permeability compared to the NC group ( P < 0.05). No significant differences in intestinal permeability were observed between the EN and EC groups. The chickens in the EM-2 group did not exhibit differences in intestinal permeability compared to the NC group ( P > 0.05). Jejunal Necrotic Enteritis Lesion Scores The result of the NE lesion score on d 20 (6 dpi) is presented in Figure 11 . The EN and EC groups exhibited a significant increase in NE lesion score compared to the NC group ( P < 0.01). There were no significant differences in the NE lesion score between the EN and EC groups. The chickens in the EM-2 group showed an NE lesion score similar to that of the NC group ( P > 0.05). Fecal E. maxima Oocyst Counting The result of E. maxima oocyst counting on d 20 (6 dpi) is presented in Figure 12 . No E. maxima oocysts were detected in the NC group, while all the NE-infected groups (EN, EC, and EM-2) showed a significant increase in E. maxima counts compared to the NC group ( P < 0.001). There were no significant differences observed among the NE-infected groups. NetB and CNA Levels in Jejunal Digesta The results for the levels of NetB and CNA in jejunal digesta are presented in Figure 13 . NetB and CNA were detected in very small amounts in all samples from the NC group. The levels of NetB in jejunal digesta on d 20 and 22 (6 and 8 dpi) were significantly higher in the EN group compared to the NC group ( P < 0.001), while no differences were found between the EM-2 and NC groups on d 20 and 22. The EN and EC groups showed significantly increased CNA levels in jejunal digesta on d 20 compared to the NC group ( P < 0.05). The EM-2 group showed CNA levels similar to those of the NC group ( P > 0.05). No significant difference in CNA levels was found in jejunal digesta on d 22. DISCUSSION Six different egg powders containing specific IgY antibodies detecting immunodominant C. perfringens and Eimeria antigens were produced by hyperimmunizing layers with immunodominant antigens of pathogenic C. perfringens and/or Eimeria (AgA, AgB, AgT, AgP, AgM-1, and AgM-2).
C. perfringens strains can be grouped into 7 toxin types (A–G) based on the toxins they produce (α-toxin, β-toxin, ε-toxin, ι-toxin, enterotoxin, and NetB) ( Lee and Lillehoj, 2022 ). α-Toxin, a zinc metalloenzyme with phospholipase C and sphingomyelinase activities, has been considered a major virulence factor in the pathogenesis of NE in chickens for more than 20 yr ( Van Immerseel et al., 2009 ) and plays a role in host cell membrane damage. NetB, a pore-forming toxin, is a 33-kDa beta-barrel toxin that forms small or large pores in the cell membrane ( Lee and Lillehoj, 2022 ). Keyburn et al. (2006) showed that α-toxin is not essential for NE pathogenesis and later provided strong evidence that the novel pore-forming NetB protein is the major cause of NE pathogenesis ( Keyburn et al., 2010 ). EFTu is a component of the prokaryotic mRNA translation apparatus that functions in the elongation cycle of protein synthesis ( Schirmer et al., 2002 ). PFO is a metabolic enzyme that catalyzes the conversion of pyruvate to acetyl-CoA and is found in most anaerobic bacteria, including C. perfringens ( Kulkarni et al., 2007 ; Lee et al., 2011 ). The selection of these 4 C. perfringens proteins (α-toxin, NetB, EFTu, and PFO) as immunizing antigens for hyperimmunization in this study was based on our previous finding of strong immunogenicity of these antigens in experimentally induced NE infection ( Lee et al., 2011 ). Indeed, hyperimmunized IgY serum and spray-dried egg powders from hens injected with selected C. perfringens antigens showed high antibody levels in indirect ELISA. Therefore, we conducted a series of experiments to investigate the protective effects of passive immunization with these hyperimmunized IgY antibodies ( Lee et al., 2011 ) against an experimental NE model in commercial broiler chickens. In Experiment 1, dietary supplementation of young chickens with EB, ET, and EM-1 significantly increased BWG compared to the control groups (EN and EC). However, supplementation with EA and EP showed no significant differences compared to the EN and EC groups. NE lesion scores were significantly reduced in both the EB and ET groups compared to the EN group, whereas the EM-1 group showed no difference in NE lesion scores. Several studies have been published on the effectiveness of vaccination with native or recombinant α-toxin in protecting chickens from NE challenge ( Kulkarni et al., 2007 ; Zekarias et al., 2008 ; Valipouri et al., 2022 ). In our study, however, egg powder containing α-toxin-specific IgY (EA) failed to protect chickens from NE challenge. Effective protection against NE following immunization with the NetB protein has been well documented previously. Keyburn et al. (2013a) reported that vaccination with a subcutaneously injected recombinant NetB vaccine partially protected broiler chickens from a mild challenge with a virulent C. perfringens isolate. Similar results were reported by Fernandes da Costa et al. (2013) , who showed that immunization with NetB toxoid increased serum antibody levels and provided partial protection against NE. Jang et al. (2012) showed that chickens vaccinated with recombinant NetB emulsified in ISA 71 VG adjuvant developed a significant level of protection against NE challenge, as demonstrated by increased BWG and reduced gut lesion scores. Furthermore, maternal immunization with a NetB toxoid vaccine induced a strong serum IgY response and protected the progeny from subclinical NE ( Keyburn et al., 2013b ).
Our results are consistent with these previously published studies demonstrating NetB-induced protective immunity against NE. Our studies clearly showed that dietary treatment of young chickens with egg yolk IgYs detecting immunodominant antigens of C. perfringens protects them from NE. The protective effect of EFTu and PFO immunization against NE was shown in our previous work ( Jang et al., 2012 ), which demonstrated effective protection following intramuscular vaccination with recombinant EFTu or PFO in ISA 71 VG adjuvant. Both EFTu and PFO vaccination reduced NE lesion scores in chickens following NE infection, but only PFO resulted in increased BWG. In the current study, however, dietary supplementation with EFTu IgY (ET group) increased BWG and decreased NE lesion scores compared to the EN group, whereas PFO IgY (EP group) showed no difference in either BWG or NE lesion scores compared to the control groups (EN and EC). In Experiment 2, we confirmed the protective effects of the EB and ET IgY antibodies. The experimental results showed that dietary supplementation with EB and ET IgYs significantly increased BWG compared to the EN and EC groups following NE challenge. Reduced serum levels of both α-toxin and NetB were found in the NE-challenged chickens following dietary treatment with EB and ET IgY antibodies. Since EFTu is expressed intracellularly and also appears on the bacterial cell surface, treatment with IgY against EFTu may reduce the adhesion of bacteria to intestinal epithelial cells ( Severin et al., 2007 ; Lee et al., 2011 ). To understand the mechanism of action of C. perfringens -specific IgYs in protection against NE, we performed an in vitro toxin neutralization assay using anti-NetB IgY and a C. perfringens growth inhibition assay using anti-EFTu IgY. As shown by the results of the toxin neutralization assay, anti-NetB IgY antibodies exerted a strong toxin-neutralizing effect in the LMH assay. In the current study, we used the Del-1 strain, which expresses both α-toxin and NetB ( Gu et al., 2019 ), and we speculate that anti-NetB IgY neutralized the biological activity of NetB ( Gadde et al., 2015 ). This may also explain the reduced NetB level in the serum of EB-treated chickens. The reason for the concurrent reduction of serum α-toxin with NetB following EB IgY treatment is not clear, but neutralization of the NetB antigen by EB IgY may have reduced the overall activity of C. perfringens . An in vitro bacterial growth inhibition assay was also performed to investigate whether anti-EFTu IgY reduces C. perfringens growth; however, in the current experiment, the anti- C. perfringens activity of EFTu IgY was not demonstrable. In Experiment 3, C. perfringens -specific NetB and EFTu IgYs and Eimeria -specific EF1α and 3-1E IgYs were combined and tested. EF1α, an evolutionarily conserved protein commonly found in eukaryotic cells ( Sasikumar et al., 2012 ), plays a key role in protein synthesis by mediating aminoacyl-tRNA loading into the A site of the 80S ribosome ( Lin et al., 2017 ). In addition, EF1α is an essential component of parasitic invasion, as it is associated with the cytoskeleton of the apical region ( Matsubayashi et al., 2013 ) and regulates the assembly and cross-linking of, and binding to, actin filaments ( Doyle et al., 2011 ).
Another immunodominant Eimeria antigen, 3-1E (an Eimeria profilin expressed in the posterior cytoplasm of merozoites and sporozoites), has previously been used to induce protective immunity against coccidiosis through vaccination ( Lillehoj et al., 2005 ; Lee et al., 2007 ). Therefore, the combination of these C. perfringens and Eimeria antigens was expected to engender a strong protective IgY antibody response. As a result, both control groups (EN and EC) exhibited a decrease in BWG and FI. However, the EM-2 group, which received treatment with a mixture of NetB, EFTu, EF1α, and 3-1E IgYs, did not show any reduction in BWG or FI compared to the NC group. This result was consistent throughout Experiments 1, 2, and 3 and indicates that chickens treated with egg powder containing NetB and EFTu IgY were not statistically different from the NC group. Additionally, both intestinal permeability and NE lesion scores showed that the EM-2 group was statistically similar to the NC group. The recurring reduction in NE lesion scores in groups receiving EB is similar to the findings of several previous studies showing that recombinant NetB immunization reduced NE lesion scores in chickens infected with NE ( Jang et al., 2012 ; Keyburn et al., 2013a , b ; Shamshirgaran et al., 2022 ). To date, there are no reports showing that dietary IgY antibodies affect intestinal permeability. Several C. perfringens toxins are known to increase intestinal permeability, especially α-toxin and enterotoxin, which damage the intestinal barrier and reduce the expression of claudin or occludin ( Awad et al., 2017 ). This is the first report to show that dietary treatment with NetB IgY antibody reduced NetB toxin levels and decreased intestinal permeability. Interestingly, no significant reduction in Eimeria oocyst production was seen in the EM-2 group, which was treated with anti- Eimeria antibodies. Similar to Experiment 2, the levels of NetB and CNA in jejunal digesta were decreased in the EM-2 group in Experiment 3. CNA is a bacterial cell wall-anchored protein with a key role in attachment to host cells ( Arora et al., 2021 ). Collagen is an essential component of the extracellular matrix, and for most pathogenic gram-positive bacteria, attachment to host tissue via specific bacterial adhesins is the key step in colonization ( Krogfelt, 1991 ; Klemm et al., 2007 ; Martin and Smyth, 2010 ). Recently, CNA has been reported in some C. perfringens strains implicated in NE in chickens ( Wade et al., 2015 ). In addition, it has been reported that a CNA-deleted C. perfringens strain does not cause NE lesions ( Wade et al., 2016 ). In the current study, there were no significant differences in CNA or NetB levels, or in NE lesion scores, between the EM-2 and NC groups. These results support the protective effect of the C. perfringens -specific IgY in the EM-2 group, which binds to the NetB and/or EFTu antigens of C. perfringens in the chicken intestine and decreases the CNA level, consistent with the close correlation between CNA and NetB levels that we reported previously ( Goo et al., 2023 ). Unlike active immunity achieved by vaccination or exposure to pathogens, passive immunization relies on the transfer of preformed antibodies and is short-lived ( Baxter, 2007 ).
Maternally derived antibodies (transferred from hen to chick through the embryonic circulation) protect chickens in the early stages of life, but their levels decrease within 1 to 2 wk after hatching ( Szabó, 2012 ). In contrast, high levels of protective antibodies can be maintained in the intestine through passive immunization by continuous feeding of hyperimmune egg yolk IgY in the diet ( Lee et al., 2009b ; Gadde et al., 2015 ). The main functions of egg yolk IgY, including inhibition of bacterial enzymes, blocking of the attachment of pathogenic microorganisms, and toxin neutralization ( Müller et al., 2015 ), can all be performed effectively in the intestinal environment. To enhance the stability of egg yolk IgYs in the intestine ( Rahman et al., 2013 ; Mitragotri et al., 2014 ), encapsulation methods can be used, which can increase the activity of IgY in the intestinal tract and further enhance passive immunization ( Xia et al., 2022 ). Pathogen-specific egg yolk IgYs have been successfully employed in the prevention and treatment of various enteric infections in swine ( Marquardt et al., 1999 ; Kweon et al., 2000 ; Zuo et al., 2009 ) and cattle ( Ikemori et al., 1997 ; Vega et al., 2011 ). However, studies on the development and use of hyperimmune egg yolk IgY to prevent NE in chickens are still insufficient. In conclusion, passive immunization of newly hatched chickens with hyperimmune egg yolk antibodies specific for protective antigens of C. perfringens reduced gut lesions, protected the gut from toxin-mediated damage, and mitigated the growth retardation caused by NE, and thus represents an effective antibiotic-independent strategy to mitigate the negative effects of NE in commercial broiler chickens. Further studies are necessary to enhance the effectiveness of oral delivery strategies that maintain the stability of egg yolk IgY antibodies in commercial applications. ACKNOWLEDGMENTS This research was partially supported by USDA/NIFA SAS grant 2020-69012-31823 and partly by ARS in-house project #8042-32000-115-00D. DISCLOSURES There is no conflict of interest.
REFERENCES:
1. Arora S (2021)
2. Awad W (2017)
3. Baxter D (2007)
4. Casewell M (2003)
5. Choi J (2022)
6. Cobb-Vantress (2018)
7. Doyle A (2011)
8. FASS (1999)
9. Fernandes da Costa S (2013)
10. Fu C (2006)
11. Gadde U (2015)
12. Gaucher M (2015)
13. Goo D (2023)
14. Gu C (2019)
15. Ikemori Y (1997)
16. Jang S (2012)
17. Jang S (2013)
18. Jin L (1998)
19. Keyburn A (2010)
20. Keyburn A (2008)
21. Keyburn A (2013)
22. Keyburn A (2013)
23. Keyburn A (2006)
24. Klemm P (2007)
25. Krogfelt K (1991)
26. Kulkarni R (2007)
27. Kweon C (2000)
28. Lee K (2022)
29. Lee K (2010)
30. Lee S (2013)
31. Lee K (2011)
32. Lee S (2007)
33. Lee S (2009)
34. Lee S (2009)
35. Lepp D (2021)
36. Lillehoj H (2005)
37. Lin R (2017)
38. Marquardt R (1999)
39. Martin T (2010)
40. Matsubayashi M (2013)
41. Mitragotri S (2014)
42. Müller S (2015)
43. Nilsson E (2012)
44. Park S (2008)
45. Rahman S (2014)
46. Rahman S (2013)
47. Sasikumar A (2012)
48. SAS Institute Inc. (2013)
49. Schirmer J (2002)
50. Seal B (2013)
51. Severin A (2007)
52. Shamshirgaran M (2022)
53. Shimizu M (1994)
54. Shimizu M (1992)
55. Shojadoost B (2012)
56. Skinner J (2010)
57. Szabó C (2012)
58. Tamilzarasan K (2009)
59. Teng P (2020)
60. Valipouri A (2022)
61. Van der Sluis W (2000)
62. Van Immerseel F (2004)
63. Van Immerseel F (2009)
64. Vega C (2011)
65. Wade B (2015)
66. Wade B (2016)
67. Wade B (2015)
68. Wang L (2011)
69. Wilkie D (2006)
70. Xia M (2022)
71. Xu Y (2011)
72. Zekarias B (2008)
73. Zuo Y (2009)
10.1016_j.apsb.2024.07.023.txt
TITLE:
l-[5-11C]Glutamine PET imaging noninvasively tracks dynamic responses of glutaminolysis in non-alcoholic steatohepatitis
AUTHORS:
- Zhang, Yiding
- Xie, Lin
- Fujinaga, Masayuki
- Kurihara, Yusuke
- Ogawa, Masanao
- Kumata, Katsushi
- Mori, Wakana
- Kokufuta, Tomomi
- Nengaki, Nobuki
- Wakizaka, Hidekatsu
- Luo, Rui
- Wang, Feng
- Hu, Kuan
- Zhang, Ming-Rong
ABSTRACT:
Inhibiting glutamine metabolism has been proposed as a potential treatment strategy for improving non-alcoholic steatohepatitis (NASH). However, effective methods for assessing dynamic metabolic responses during interventions targeting glutaminolysis have not yet emerged. Here, we developed a positron emission tomography (PET) imaging platform using l-[5-11C]glutamine ([11C]Gln) and evaluated its efficacy in NASH mice undergoing metabolic therapy with bis-2-(5-phenylacetamido-1,3,4-thiadiazol-2-yl)ethyl sulfide (BPTES), a glutaminase 1 (GLS1) inhibitor that intervenes in the first and rate-limiting step of glutaminolysis. PET imaging with [11C]Gln effectively delineated the pharmacokinetics of l-glutamine, capturing its temporal-spatial pattern of action within the body. Furthermore, [11C]Gln PET imaging revealed a significant increase in hepatic uptake in methionine and choline deficient (MCD)-fed NASH mice, whereas systemic therapeutic interventions with BPTES reduced the hepatic avidity of [11C]Gln in MCD-fed mice. This reduction in [11C]Gln uptake correlated with a decrease in GLS1 burden and improvements in liver damage, indicating the efficacy of BPTES in mitigating NASH-related metabolic abnormalities. These results suggest that [11C]Gln PET imaging can serve as a noninvasive diagnostic platform for whole-body, real-time tracking of responses of glutaminolysis to GLS1 manipulation in NASH, and it may be a valuable tool for the clinical management of patients with NASH undergoing glutaminolysis-based metabolic therapy.
BODY:
1 Introduction l-Glutamine, a nonessential/conditionally essential amino acid, plays crucial roles in various metabolic pathways, including energy production, amino acid synthesis, and the regulation of oxidative stress 1 . Dysfunctional glutamine metabolism is recognized as a hallmark of non-alcoholic steatohepatitis (NASH), a significant global health concern and a risk factor for cirrhosis, liver cancer, and cardiovascular diseases, with no currently approved therapies. This suggests that interventions targeting glutaminolysis could hold promise as anti-NASH therapies 2 , 3 . Furthermore, recent observations have highlighted the protective effects of glutaminolysis-based metabolic therapy in liver disease 4 , cardiovascular disease 5 , 6 , and cancer 7 . Glutaminase (GLS), a rate-limiting enzyme responsible for converting l-glutamine to glutamate and ammonia 8–10 , exhibits two distinct isoforms in mammalian tissues, namely GLS1 and GLS2, originating from separate but structurally related genes. In healthy individuals, GLS1 is prevalent across various extra-hepatic tissues, while GLS2 is notably abundant in the normal adult liver. Notably, a metabolic switch from GLS2 to GLS1 occurs in liver fibrosis 11 , cirrhosis and liver cancer 5 , as evidenced by studies involving hepatic biopsies from NASH patients and murine NASH models 12–14 . Furthermore, inhibiting GLS1, the first and rate-limiting step of glutaminolysis, reportedly prevents the activation of hepatic stellate cells and halts fibrosis progression in preclinical murine models induced with carbon tetrachloride (CCl4) or a methionine and choline deficient (MCD) diet 4 . Additionally, scavenging ammonia, a toxic byproduct of glutaminolysis, reportedly plays a crucial role against the development or progression of liver fibrosis 5 . Therefore, targeting glutamine metabolic processes presents an attractive and feasible strategy for the treatment of NASH. The development of small-molecule inhibitors, such as bis-2-(5-phenylacetamido-1,3,4-thiadiazol-2-yl)ethyl sulfide (BPTES) 15 , CB-839 16 , and compound 968 17 , has paved new avenues for metabolism-targeted therapies that modulate GLS1 activity to regulate glutaminolysis. However, it is important to acknowledge the potential concerns associated with the use of these metabolic therapies in the management of NASH, because of a lack of understanding of metabolic adaptation and reprogramming in living bodies during interventions targeting glutaminolysis. Functional positron emission tomography (PET) imaging is a potentially ideal tool for capturing real-time variations of glutaminolysis in NASH, especially following GLS1 intervention in a therapeutic setting. It offers non-invasive, highly sensitive, repetitive, and quantitative imaging using positron-emitting probes that specifically target the metabolic processes of interest. A notable example of metabolic PET imaging is the assessment of glucose uptake using 2-[18F]fluoro-2-deoxy-d-glucose ([18F]FDG), an 18F-labelled glucose analogue 18 . PET imaging with [18F]FDG is routinely used in clinical settings worldwide to evaluate glucose uptake as a surrogate for the Warburg effect in cancer diagnosis, staging, and monitoring 19 . The use of glutamine-based PET imaging to study the relationship between glutaminolysis dynamics and metabolic therapy in NASH has never been reported, despite several glutamine analogues being radiolabelled for use in pharmacokinetic studies and cancer imaging 20–24 .
Given the crucial role of glutaminolysis in various processes of NASH, we envisioned that l-glutamine-based PET has significant potential as an effective imaging platform for gaining valuable insights into whole-body glutamine metabolism. This could guide the development of clinically relevant therapeutic interventions using potent metabolic modulators for the treatment of NASH. To test this hypothesis, we utilised an 11C-labelled l-glutamine (l-[5-11C]glutamine, [11C]Gln, Fig. 1 A) PET probe 22 to investigate the global pharmacokinetics of l-glutamine and its relationship with glutaminolysis and therapeutic response under both healthy and disease conditions, as well as following single and systemic administration of an uncompetitive GLS1 inhibitor, BPTES 16 . In advance, we developed a simple and rapid method for synthesizing [11C]Gln, which can be easily automated and translated for clinical use 25 . Specifically, we focused on the construction of an imaging platform with [11C]Gln PET and verified its use in NASH, particularly in the context of treatment with BPTES. This study aimed to demonstrate that [11C]Gln PET can serve as a valuable imaging platform for assessing glutamine metabolism in whole-body and real-time settings. By doing so, we sought to provide comprehensive insights into the reprogrammed metabolic responses associated with NASH and glutaminolysis-based metabolic intervention. 2 Materials and methods 2.1 Radiosynthesis of [11C]Gln [11C]Gln was synthesized using hydrogen [11C]cyanide ([11C]HCN) as a labelling agent, with a fully automated synthesis system developed in-house 26 . [11C]CO2 was generated using a cyclotron (CYPRIS HM-18; Sumitomo Heavy Industries, Tokyo, Japan) and was then passed through a nickel wire tube to obtain a mixture of [11C]methane ([11C]CH4) in carrier gas. The resulting [11C]CH4 was mixed with NH3 gas and passed through a heated platinum furnace at 950 °C to produce [11C]HCN. During the production of [11C]HCN, a mixture of 18-crown-6 (8 mg in 900 μL of CH3CN) and Cs2CO3 (3 mg in 150 μL of water) was azeotropically dried and then taken up in CH3CN (300 μL). The automatically produced [11C]HCN was trapped in the CH3CN solution to yield [11C]CsCN. Precursor 1 (3.5 mg) in CH3CN (300 μL) was added to the [11C]CsCN solution, and the reaction mixture was heated at 90 °C for 8 min, followed by complete removal of the reaction solvent. After the [11C]cyanation, a mixture of CF3COOH (TFA) and H2SO4 (4:1, 500 μL) was added to the residue, and this reaction mixture was heated at 80 °C for 5 min. After the reaction, the mixture was diluted with diethyl ether (1.5 mL) and passed through a silica plus Sep-Pak cartridge to trap [11C]Gln. The cartridge was washed with diethyl ether (10 mL) and then eluted with phosphate buffer (5 mL). The [11C]Gln injection solution was obtained by removal of diethyl ether from the eluent, followed by addition of phosphate buffer (4 mL). 2.2 Studies in mice, mouse models, ethics statements and drug treatment Animal experiments were conducted on 6−8-week-old male C57BL/6 mice (body weight, 23.1 ± 0.24 g) (Japan SLC, Shizuoka, Japan). All the animal procedures were approved by the Animal Ethics Committee of the National Institutes for Quantum Science and Technology (QST; approval numbers 16-1005 and 16-1006). The mice were housed and handled under specific pathogen-free conditions, including a 12-h light/dark cycle, 50% relative humidity, and temperatures between 25 and 27 °C.
They had free access to tap water and were fed either a chow diet (as a control) or an MCD diet (Cat# 518810, Dyets, Bethlehem, PA, USA) for 8 weeks. During the latter half of a 4-week period, the MCD diet-fed mice were randomized to receive intraperitoneal injections of BPTES (12.5 mg/kg, Cat#SML0601, Sigma–Aldrich, St. Louis, MO, USA) three times a week 3 , forming the BPTES-treated group. A separate group of MCD diet-fed mice served as the untreated NASH group and received injections without BPTES. A single-administration study was conducted on both normal mice and NASH mice by injecting excess BPTES (62.5 mg/kg) 1 h before [11C]Gln injection, creating the BPTES blocking group. This administration schedule adhered to the recommendations of the National Institutes of Health and the institutional guidelines of the QST. All experiments were carried out as unblinded studies. Mice were fasted overnight with free access to water, and radioactivity uptake experiments were conducted at least 48 h after the last treatment administration. All in vitro experiments were conducted in triplicate (technical replicates) and repeated independently at least three times unless otherwise noted. The end point of the in vivo experiments was set at 8 weeks after implementing the specific diets. 2.3 Dynamic PET/computed tomography (CT) and PET data analysis PET scans were performed using a small-animal Inveon PET scanner (Siemens, Knoxville, TN, USA) after intravenous injection of [11C]Gln (9.22–10.26 MBq/0.3 mL). The scanner provided 159 transaxial slices with 0.796 mm (centre-to-centre) spacing, a 10 cm transaxial field of view (FOV), and a 12.7 cm axial FOV. Emission scans were acquired in the three-dimensional list mode with an energy window of 350–650 keV under isoflurane anesthesia. Scans were performed from 0 to 90 min in normal mice and from 0 to 30 min in MCD diet-fed mice after [11C]Gln injection. All list-mode acquisition data were sorted into three-dimensional sinograms, which underwent Fourier rebinning into two-dimensional sinograms. Corrections were applied for scanner dead time, randoms, and decay of the injected radioprobe. The dynamic images were reconstructed using filtered back-projection with a Hanning filter and a Nyquist cutoff of 0.5 cycles/pixel. Immediately after the PET scans, contrast-enhanced CT scans were performed in the normal mice by injecting 0.4 mL of non-ionic contrast medium (Iopamiron 370, Bayer, Osaka, Japan) over 34 s, with the purpose of assisting in the segmentation of organs. Non-enhanced CT scans using the breath-hold mode were performed for 4 min in the MCD diet-fed mice. The scan conditions included radiation parameters of 200 μA, 90 kV, and a FOV of 60 mm using a small-animal CT system (R_mCT2; Rigaku, Tokyo, Japan). Averaged CT attenuation and dynamic PET images were reconstructed and fused using Siemens Inveon Research Workplace (IRW) software (version 4.0). Regions of interest (ROIs) in major organs and tissues were manually outlined in the PET/CT fusion images using IRW 4.0. The average radioactivity concentration was calculated using the mean pixel values in the ROI. The regional uptake of radioactivity was decay-corrected to the injection time, normalised to body weight, and expressed as a percentage of the injected dose per gram of body weight (%ID/g BW). Time–activity curves (TACs) of [11C]Gln in individual organs and tissues were determined. The hepatic radioactivity values in each group of mice were compared at 25–30 min post-injection.
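As a concrete reading of the uptake quantification just described, the sketch below (our illustration; all numbers invented, 11C half-life taken as 20.4 min) decay-corrects an ROI measurement back to injection time and expresses it as %ID/g:

```python
# Hedged sketch of decay correction and %ID/g, as commonly computed for TACs.
import math

T_HALF_C11_MIN = 20.4  # physical half-life of carbon-11

def decay_correct(activity_bq: float, minutes_post_injection: float) -> float:
    """Correct a measured activity back to the time of injection."""
    return activity_bq * math.exp(math.log(2.0) * minutes_post_injection / T_HALF_C11_MIN)

def percent_id_per_g(roi_bq: float, roi_mass_g: float, t_min: float,
                     injected_dose_bq: float) -> float:
    """%ID/g: decay-corrected ROI concentration as a percentage of injected dose."""
    corrected_conc = decay_correct(roi_bq, t_min) / roi_mass_g
    return 100.0 * corrected_conc / injected_dose_bq

# e.g., 50 kBq measured in a 1.2 g liver ROI at 27.5 min after a 9.6 MBq injection
print(f"{percent_id_per_g(5.0e4, 1.2, 27.5, 9.6e6):.2f} %ID/g")  # ~1.10 %ID/g
```

The study's %ID/g BW variant normalises the injected dose to body weight rather than using the ROI mass, but the decay-correction step is the same.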
2.4 Ex vivo biodistribution studies The mice were sacrificed by cervical dislocation at specific time points after [11C]Gln (1.70–1.85 MBq/0.1 mL, corresponding to 6.70–7.29 ng of l-Gln) injection. The major organs and tissues, including blood, heart, lungs, liver, pancreas, spleen, kidneys, intestines, muscle, and brain, were promptly removed, collected, and weighed. The radioactivity in each tissue sample was measured using a 2480 Wizard auto-γ scintillation counter (PerkinElmer, Waltham, MA, USA) and expressed as a percentage of the injected dose per gram of wet tissue weight (%ID/g tissue weight). All radioactivity measurements were corrected for decay. 2.5 Histopathology After completion of the PET and CT scans, the mice were euthanised via cervical dislocation. The livers were harvested and weighed on a microscale, fixed in 10% formalin, embedded in paraffin, and sectioned into 5 μm slices. The liver sections were stained with hematoxylin and eosin (H&E; Muto Pure Chemical, Tokyo, Japan) and Masson's trichrome (MT; Sigma–Aldrich), following the manufacturer's instructions. The histopathological assessment was scored in a blinded manner by three pathologists using the non-alcoholic fatty liver disease activity score (NAS) system ( Supporting Information Table S1 ) 27 . Additionally, the liver tissue sections were deparaffinised in xylene and gradually rehydrated using graded alcohol. The tissue sections were then incubated overnight at 4 °C with a rabbit anti-GLS1 primary antibody (1:100; Cat#12855-1-AP, Proteintech, Rosemont, IL, USA). The slides were then incubated with the secondary antibody, Alexa Fluor® 488 goat anti-rabbit IgG (1:500; Cat#A27034, Invitrogen, Carlsbad, CA, USA), for 60 min at room temperature. Finally, the slides were mounted using a mounting medium containing DAPI (Cat#H-1200, VectorLabs, Newark, CA, USA). Images were captured using a Keyence BZ-X710 microscope (Keyence, Osaka, Japan). The frequency of positively stained areas was quantified automatically using specialized Hybrid Cell Count software (Keyence). The results are presented as the percentage of the total tissue area showing positive staining. Negative control slides were processed without the primary antibody, without the secondary antibody, or with an isotype control IgG to ensure specificity. 2.6 RNA preparation and quantitative real-time polymerase chain reaction (qRT-PCR) analysis The liver samples were flash-frozen in liquid nitrogen immediately after surgical removal and macrodissected prior to RNA extraction. Total RNA was extracted using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA), following the manufacturer's instructions. The quality of the total RNA was assessed using the 260/280 nm ratio on a NanoDrop 2000 (Thermo Scientific, Wilmington, DE, USA). For qRT-PCR, a TaqMan system was employed on an Applied Biosystems StepOne™ machine (Carlsbad, CA, USA), according to the manufacturer's instructions. Target-specific primers and probes for mouse GLS1 (Mm01257297_m1), tumor necrosis factor-alpha ( TNF-α , Mm00443258_m1), interleukin-17 ( IL-17 , Mm00439618_m1), GLS2 (Mm01164862_m1), and 18S ribosomal RNA (18S rRNA, Hs99999901_s1) were purchased from Applied Biosystems. The normalised cycle threshold (Ct) value for each gene was determined by subtracting the Ct value obtained for 18S rRNA. The fold change in the mRNA levels of each gene compared to the corresponding control levels was calculated.
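The normalisation and fold-change step described above is consistent with the standard 2^-ΔΔCt (Livak) method; a minimal Python sketch with invented Ct values (our illustration, assuming near-100% amplification efficiency) is:

```python
# Hedged sketch of the fold-change calculation implied above (2^-ddCt method).
# Ct values are invented for demonstration.

def fold_change(ct_gene_sample: float, ct_ref_sample: float,
                ct_gene_control: float, ct_ref_control: float) -> float:
    d_ct_sample = ct_gene_sample - ct_ref_sample      # normalise to 18S rRNA
    d_ct_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# e.g., GLS1 in an MCD-fed liver vs. a chow-fed control (hypothetical Ct values)
print(fold_change(24.1, 9.8, 26.3, 9.9))  # ~4.3-fold up-regulation
```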
2.7 Quantification and statistical analysis Quantitative data are reported as mean ± standard error of the mean (SEM). Intergroup comparisons were conducted using either an unpaired two-tailed Student's t-test or a one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. Prism version 8.3 software (GraphPad Software, La Jolla, CA, USA) was used for statistical analyses. Pearson's correlation analysis was used to estimate the relationship between hepatic radioactivity and GLS1 expression or NAS. The threshold for statistical significance was set at P < 0.05. 3 Results 3.1 Radiosynthesis of [11C]Gln [11C]Gln was synthesized by [11C]cyanation of an iodine precursor ( 1 ) with [11C]CsCN, followed by hydrolysis and deprotection of the radioactive intermediate [11C] 2 with TFA and H2SO4, using an automated multi-purpose radiosynthesis apparatus ( Scheme 1 ). In previous procedures, the intermediate [11C] 2 was purified by solid-phase extraction or semi-preparative HPLC purification 22 , 28 , 29 . In our study, [11C] 2 was not separated; the reaction mixture from the [11C]cyanation was directly treated with TFA and H2SO4 to undergo hydrolysis and deprotection. Solid-phase extraction of the final reaction mixture produced the [11C]Gln injection. Compared with the previous procedures, our present protocol shortened the production time and simplified the procedures for [11C]Gln. Starting from 37 GBq of [11C]CO2, 2.3–4.4 GBq ( n = 20) of [11C]Gln was produced at the end of synthesis (EOS). The average synthesis time was 33 min from the end of bombardment (EOB). The molar activity and radiochemical purity of [11C]Gln in the final product solution were 70–120 GBq/μmol and >90%, respectively. Moreover, the enantiomeric purity of [11C]Gln exceeded 95% enantiomeric excess at EOS. These analytical results were in compliance with our in-house quality control/assurance specifications. 3.2 PET imaging with [11C]Gln visualizes and quantifies the whole-body behaviours of l-glutamine under healthy conditions and subsequent to GLS1 intervention To investigate the whole-body pharmacokinetics of l-glutamine and its response following GLS1 interventions under healthy conditions, we conducted dynamic PET/CT scans in normal mice from 0 to 90 min after radiotracer injection to establish the baseline. Furthermore, PET scans were obtained within 1 h after a single administration of BPTES (62.5 mg/kg) preceding [11C]Gln injection ( Fig. 1 A and B). This dosage was 5 times higher than the treatment dose employed in previous experiments with NASH mouse models 3 . Fig. 1 C and Supporting Information Movie 1 present typical global pharmacokinetic images of [11C]Gln. A rapid concentration of [11C]Gln was seen in the kidneys during 0–16 min post-injection, followed by concentration in the renal medulla and urine under baseline conditions ( Fig. 1 C, upper). The liver showed moderate radioactivity during the first 10 min post-injection, followed by rapid washout. Administration of a single dose of the GLS1 inhibitor BPTES slowed the washout rate in the kidneys and liver compared to baseline ( Fig. 1 C, lower), leading to a higher signal of [11C]Gln under acute blocking conditions in normal mice. The bladder showed the highest signal, consistent with the excretion of the radiotracer under both conditions. Supporting video related to this article can be found at https://doi.org/10.1016/j.apsb.2024.07.023 .
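A back-of-the-envelope check on the Section 3.1 figures (our arithmetic, not stated in the paper): decay-correcting the EOS activity over the 33-min synthesis with the 20.4-min 11C half-life implies a decay-corrected radiochemical yield of roughly 19%–37% from [11C]CO2.

```python
# Our worked check (not reported in the paper): decay-corrected radiochemical
# yield from the Section 3.1 numbers, assuming a 20.4-min 11C half-life.
T_HALF_C11_MIN = 20.4

def decay_corrected_rcy_pct(a_eos_gbq: float, a_start_gbq: float, synth_min: float) -> float:
    """EOS activity corrected back to EOB, as a percentage of the starting activity."""
    return 100.0 * a_eos_gbq * 2.0 ** (synth_min / T_HALF_C11_MIN) / a_start_gbq

for a_eos in (2.3, 4.4):
    print(f"{a_eos} GBq at EOS -> "
          f"{decay_corrected_rcy_pct(a_eos, 37.0, 33.0):.0f}% decay-corrected RCY")
```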
The temporal distribution of l-glutamine in individual organs and tissues was quantified using dynamic [11C]Gln PET scans. Fig. 1 D depicts the TACs of [11C]Gln both at baseline and following GLS1 blocking with BPTES. The quantified results showed that administration of the GLS1 inhibitor BPTES induced a significantly increased accumulation of [11C]Gln compared to baseline in the main organs of normal mice, including the heart, lungs, liver, pancreas, and kidneys, within 90 min after [11C]Gln injection. This increase was presumably due to acute inhibition of GLS1 activity by BPTES, which stabilizes an inactive tetrameric form of the enzyme, as described previously 30 , 31 , consequently inhibiting glutaminolysis. In contrast, BPTES exposure yielded similar TACs of [11C]Gln in the brain under both conditions, remaining stable and at a low level throughout the dynamic scan. These results suggest that PET imaging with [11C]Gln enables the visualization and quantification of the in vivo behaviors of l-glutamine, as well as the metabolic responses of glutaminolysis to GLS1 intervention under healthy conditions, in a temporal-spatial pattern. 3.3 [11C]Gln provides direct evidence of accumulation in each organ and tissue To further explore the behavior of l-glutamine, we measured the distribution of [11C]Gln in the major organs and tissues at specific time points after injection ( Fig. 2 A). Fig. 2 B depicts the biodistribution of l-glutamine in vivo. Under normal physiology, high radioactivity of [11C]Gln was verified in the main organs, including the blood, heart, lung, pancreas, and kidney, during the initial 1 min after [11C]Gln injection. Over time, blood levels of [11C]Gln decreased rapidly, resulting in low blood activity at 60 min after injection. The liver exhibited peak activity at 5 min, followed by washout, whereas uptake in the pancreas increased up to 5 min and then reached a high plateau, as expected for a radiolabeled amino acid 23 . Rapid uptake was observed in the kidneys, with quick excretion through the bladder. Brain uptake remained at a relatively low level, with slow washout throughout the 60-min experiment. The ex vivo biodistribution analysis confirmed the findings of the pharmacokinetic PET/CT images and provided direct evidence of [11C]Gln accumulation in each organ and tissue. 3.4 GLS1 intervention metabolic therapy ameliorates NASH A radiolabelled l-glutamine probe could noninvasively and quantitatively track dynamic glutaminolysis, providing insights into the metabolic responses to GLS1-blockade therapy. To test this hypothesis, we first established a metabolic therapeutic model in NASH mice using intraperitoneal injections of the GLS1 inhibitor BPTES (12.5 mg/kg) three times per week for the latter half of a 4-week period ( Fig. 3 A). The selection of the MCD diet model, the dose preparation and administration, and the treatment duration were determined based on previous experiments 3 , 32–34 . These studies had identified abnormal glutamine catabolism and an increased GLS1 presence in hepatic biopsies of NASH patients and in MCD diet-fed preclinical murine models of NASH 3 , 4 . Our MCD-fed mice exhibited increased hepatic GLS1 levels compared to those fed a chow diet ( Fig. 3 B–D). Treatment with BPTES resulted in lower GLS1 expression at both the protein ( Fig. 3 B and C) and mRNA levels ( Fig. 3 D).
Under these conditions, GLS2, typically distributed around the hepatic periportal compartment, was found to be decreased in the livers of untreated and BPTES-treated NASH mice ( Supporting Information Fig. S1 ). These findings are consistent with the results reported in previous studies 3 , 4 . The mice subjected to the MCD diet for 8 weeks showed significant weight loss, increased liver weight, and a higher liver-to-body-weight ratio than those fed the chow diet ( Fig. 3 E–G), consistent with the findings of previous reports 32 , 33 . Notably, BPTES treatment did not cause any differences in body weight ( Fig. 3 E), liver weight ( Fig. 3 F), or the liver-to-body-weight ratio ( Fig. 3 G) in the MCD diet-fed mice. However, BPTES treatment did lead to significantly decreased mRNA levels of hepatic pro-inflammatory cytokines, such as TNF-α ( Fig. 3 H) and IL-17 ( Fig. 3 I). TNF-α is an adipokine known to promote inflammation, insulin resistance, hepatocyte injury, and fibrosis 35 , while IL-17 is implicated in the progression of nonalcoholic fatty liver disease 36 . To assess liver injury and fibrosis, H&E and MT staining were performed and evaluated using the NAS system ( Table S1 ) 27 . Using the NAS template, we observed that MCD diet-fed mice displayed several NASH-associated pathologies, while pharmacological inhibition of GLS1 using BPTES effectively reduced the severity of these pathological findings ( Fig. 3 J and Table 1 ). Untreated MCD diet-fed mice had a NAS of 9.25 ± 0.16, while BPTES-treated MCD diet-fed mice exhibited a significantly lower NAS of 2.41 ± 0.37. These findings highlight the improvement in liver damage achieved through GLS1-blockade metabolic therapy with BPTES, thereby validating the use of BPTES-treated MCD diet-fed mice as an appropriate preclinical mouse model for investigating glutamine metabolism and assessing therapeutic efficacy after GLS1 intervention in the context of NASH. 3.5 [11C]Gln PET provides an imaging platform for tracking glutaminolysis in NASH and following GLS1 intervention therapy To test our hypothesis, we performed whole-body dynamic PET/CT imaging in mice subjected to the MCD diet to simulate NASH conditions and investigated the imaging alterations resulting from systemic therapeutic interventions with BPTES, from 0 to 30 min after injection of [11C]Gln ( Fig. 4 A). Furthermore, to assess the direct impact of GLS1 intervention on hepatic [11C]Gln imaging, we performed a single-administration study by pre-injecting an excess of BPTES (62.5 mg/kg) 1 h beforehand in untreated NASH mice. Fig. 4 B shows representative co-registered [11C]Gln PET/CT images captured between 25 and 30 min after injection. This imaging timeframe was selected based on the pharmacokinetic images of [11C]Gln in normal mice, indicating that uptake of [11C]Gln in the liver remained at relatively stable levels after 20 min post-injection ( Fig. 1 C and D). In contrast to the pharmacokinetics of [11C]Gln under healthy conditions, PET/CT imaging with [11C]Gln revealed a significantly higher hepatic uptake in mice fed the MCD diet compared to those fed the chow diet. Interestingly, the heightened uptake was minimally affected by a single GLS1 intervention with BPTES ( Fig. 4 B), suggesting glutamine metabolic adaptation and reprogramming in the context of NASH 3 . Notably, systemic metabolic therapy with BPTES reduced the hepatic avidity of [11C]Gln in the MCD diet-fed mice ( Fig. 4 B),
aligning with the lower GLS1 expression observed in the treated NASH mice ( Fig. 3 B–D). We also quantified the TACs of [11C]Gln in the livers ( Fig. 4 C) based on dynamic PET and corresponding CT images taken from 0 to 30 min after the injection. The tracer uptake in the livers of MCD diet-fed untreated mice, quantified as hepatic radioactivity at 25–30 min, was significantly higher than that of the control group and the BPTES-treated MCD diet-fed group ( Fig. 4 D). Similar to the tumor-cell trapping of [18F]FDG through upregulation of membrane-bound glucose transporter 1 and cytosolic hexokinase 37 , we propose that the heightened expression of GLS1 in the NASH liver could lead to increased [11C]Gln uptake. The results also support the notion that glutamine metabolism is reprogrammed under NASH conditions, suggesting a role for BPTES in the NASH treatment process beyond a simple inhibitor-response relationship 3 , 4 . The correlation analysis revealed that there was indeed a strong correlation between hepatic radioactivity and GLS1 protein expression ( Fig. 4 E; Pearson's r = 0.8985, P < 0.0001), as well as between hepatic radioactivity and NAS ( Fig. 4 F; Pearson's r = 0.9599, P < 0.0001). These findings demonstrate that [11C]Gln PET imaging can capture reprogrammed changes in glutamine metabolism under NASH conditions and during metabolic therapy targeting GLS1, potentially identifying the therapeutic efficacy of manipulating glutaminolysis with a GLS1 inhibitor in a living body. 4 Discussion Targeting glutamine metabolism has shown promising results in the treatment of liver-related diseases. Recent clinical studies have aimed to identify effective methods for understanding metabolic responses, determining metabolic status, and predicting therapeutic efficacy in a living body during interventions targeting glutaminolysis 38–40 . Quantitative PET imaging is an ideal tool for these purposes. In this study, we utilised [11C]Gln to monitor the whole-body pharmacokinetics of l-glutamine and its alterations following GLS1 interventions under healthy conditions. [11C]Gln PET captured the temporal-spatial pattern of the distribution and action of l-glutamine, as well as the dynamic responses after a single administration of BPTES, a GLS1 inhibitor, within the body. Furthermore, we tracked the metabolic response of glutaminolysis in mice with NASH and during therapy with BPTES. [11C]Gln PET imaging revealed a significant increase in hepatic uptake in NASH mice fed the MCD diet, which was minimally disrupted by a single BPTES administration under the specific disease conditions. However, systemic metabolic therapy with BPTES reduced the hepatic avidity of [11C]Gln in MCD diet-fed mice. This reduction in [11C]Gln uptake correlated with a decrease in the GLS1 burden and improvements in liver damage, suggesting the therapeutic efficacy of BPTES in mitigating NASH-related metabolic abnormalities. Our study highlights the potential of [11C]Gln PET imaging as an unprecedented imaging platform for noninvasively tracking reprogrammed metabolic responses in NASH and during therapeutic interventions targeting glutaminolysis, as well as for evaluating the therapeutic efficacy of GLS1-targeting metabolic manipulation in a living body. In-depth investigations regarding the role of glutaminolysis in liver-related diseases may be crucial for understanding the metabolic plasticity of NASH and developing metabolic therapeutic strategies.
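The correlation analysis reported above is a plain Pearson test; the minimal Python sketch below mirrors it with invented data (the paper reports r = 0.8985 and r = 0.9599, both P < 0.0001, computed in Prism):

```python
# Hedged sketch of the Pearson correlation between hepatic uptake and GLS1
# expression described above. All values are invented for demonstration.
import numpy as np
from scipy import stats

hepatic_uptake = np.array([1.2, 1.4, 2.8, 3.1, 3.3, 1.6, 1.5, 2.9])  # %ID/g, invented
gls1_expr      = np.array([0.8, 1.0, 2.5, 2.9, 3.2, 1.1, 0.9, 2.7])  # a.u., invented

r, p = stats.pearsonr(hepatic_uptake, gls1_expr)
print(f"Pearson's r = {r:.4f}, P = {p:.4g}")
```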
Using various radiotracers, PET imaging has become a crucial tool for assessing in vivo metabolism and has extensive clinical implications 19 , 23 , 41 . Glutamine analogues labelled with radionuclides such as 18F or 11C, including (2S,4R)-4-[18F]fluoroglutamine, [18F](2S,4S)-4-(3-fluoropropyl)glutamine, and [11C]Gln, have been utilised in preclinical and clinical studies to investigate various cancers 22 , 42–44 . These radiotracers specifically provide tumour metabolic imaging information, particularly regarding the l-glutamine uptake relevant to tumour pathology. Recently, we developed a simple and rapid method for synthesising [11C]Gln 25 . In this study, we demonstrated its utility as a metabolic imaging agent in NASH. These advancements may facilitate the future use of [11C]Gln and provide critical tools for understanding in vivo dysfunctional glutamine metabolism and its relationship with liver diseases. GLS1 expression is up-regulated in various cell types in response to liver diseases, and depleting GLS1 improves liver function 3 , 4 , 6 . Our findings agree with those of the aforementioned studies, as we observed elevated expression of GLS1 in the livers of NASH mice, accompanied by increased levels of the pro-inflammatory cytokines TNF-α and IL-17. Inhibition of GLS1 with BPTES effectively mitigated NASH-associated pathologies, including hepatic steatosis, inflammation, and fibrosis, consistent with previous reports 4 , 5 . We also used [11C]Gln PET imaging to track the dynamic response of glutaminolysis in NASH mice treated with BPTES. We found a significant positive correlation between the hepatic uptake of [11C]Gln and GLS1 expression, as well as the NAS. Our proposal is that heightened GLS1 expression enhances the conversion of l-glutamine to glutamate, leading to a compensatory increase in [11C]Gln uptake to meet elevated metabolic demands. This mechanism is analogous to what is observed in tumor cells with [18F]FDG uptake, facilitated by the upregulation of membrane-bound glucose transporter 1 and cytosolic hexokinase. Similarly, the increased expression of GLS1 in the NASH liver may enhance l-glutamine uptake, leading to amplified accumulation of the labeled l-glutamine probe [11C]Gln within liver cells, enabling its visualization through PET imaging. These results highlight the potential of [11C]Gln PET imaging as a platform for identifying metabolic responses and therapeutic efficacy during interventions targeting GLS1 in NASH and liver diseases. GLS1 inhibitors, including UPGL00004 45 , compound 968 17 , BPTES 15 , CB-839 16 , and IACS-6274 46 , are promising metabolic drugs for treating various cancers. Notably, IACS-6274 is currently undergoing phase I clinical trials ( NCT05039801 ) 47 , and CB-839 has completed phase II clinical trials (NCT03163667) 48 . However, it is important to carefully consider the potential effects of strategies that affect whole-body glutamine metabolism. The [11C]Gln PET imaging platform, which enables the visualization and quantification of glutaminolysis dynamics in a living body, can provide valuable insights into how glutamine metabolism changes with NASH and its contribution to the disease process, facilitating the development of GLS1 inhibitors and novel metabolic therapeutic strategies targeting NASH-related conditions, both in clinical and research settings. This study has some limitations.
First, we utilized the [11C]Gln PET imaging platform in animal models of NASH induced by an MCD diet, in which NASH developed rapidly within 8 weeks of dietary intervention. It would be essential to validate these findings in other models of NASH, particularly chronic preclinical models that closely resemble the progressive nature of human pathology, such as the novel Amylin liver NASH model. Additionally, studying the efficacy of various GLS1 inhibitors in these models and in other age-related diseases would provide a more comprehensive understanding of the clinical value of this imaging approach. Following this proof-of-concept study, further investigations are underway to explore the potential clinical applications of [11C]Gln PET imaging.
5 Conclusions In summary, our study successfully constructed a [11C]Gln imaging platform and demonstrated its utility in monitoring the whole-body pharmacokinetics of l-glutamine under both healthy and NASH disease conditions. Furthermore, we tracked the dynamic responses of glutaminolysis in NASH mice during metabolic therapy with BPTES using [11C]Gln PET imaging. Our findings highlight the potential of [11C]Gln PET as a valuable tool for identifying and studying metabolic responses and therapeutic efficacy during interventions targeting glutaminolysis.
Author contributions Yiding Zhang: Writing – review & editing, Writing – original draft, Visualization, Validation, Project administration, Methodology, Formal analysis, Conceptualization. Lin Xie: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Masayuki Fujinaga: Writing – review & editing, Validation, Project administration, Methodology, Formal analysis. Yusuke Kurihara: Writing – review & editing, Visualization, Project administration, Methodology. Masanao Ogawa: Writing – review & editing, Visualization, Project administration, Data curation. Katsushi Kumata: Writing – review & editing, Validation, Project administration, Methodology. Wakana Mori: Writing – review & editing, Project administration, Methodology. Tomomi Kokufuta: Writing – review & editing, Visualization, Project administration, Methodology. Nobuki Nengaki: Writing – review & editing, Visualization, Project administration, Methodology. Hidekatsu Wakizaka: Writing – review & editing, Project administration, Methodology. Rui Luo: Writing – review & editing, Project administration, Methodology, Investigation. Feng Wang: Writing – review & editing, Project administration, Investigation. Kuan Hu: Writing – review & editing, Project administration, Investigation. Ming-Rong Zhang: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Project administration, Methodology, Investigation, Funding acquisition, Data curation, Conceptualization.
Conflicts of interest The authors declare no conflicts of interest.
Acknowledgments This work was supported in part by the Moonshot Research and Development Program (Grant No. 21zf0127003h001, Japan), the JSPS A3 Foresight Program (Grant No. JPJSA3F20230001, Japan), and JSPS KAKENHI (Grant Nos. 23H02867, 23H05487, and 21K07659, Japan). We would like to thank the staff of the National Institute for Quantum Science and Technology for their technical support in the radiosynthesis and animal experiments.
Appendix A Supporting information Supporting information to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.apsb.2024.07.023 .
REFERENCES:
1. PAVLOVA N (2022)
2. YOO H (2020)
3. JOHMURA Y (2021)
4. SIMON J (2020)
5. DU K (2018)
6. GIBB A (2022)
7. RITTERHOFF J (2023)
8. BRADLEY C (2019)
9. LI Y (2023)
10. WETTERSTEN H (2017)
11. DING L (2021)
12. YUNEVA M (2012)
13. YU D (2015)
14. LI B (2019)
15. CANBAY A (2019)
16. SAPPINGTON D (2016)
17. GROSS M (2014)
18. YUAN L (2016)
19. KELLOFF G (2005)
20. LODGE M (2017)
21. QU W (2011)
22. QU W (2012)
23. ZHU L (2017)
24. COHEN A (2022)
25. FUJINAGA M (2022)
26. KAWAMURA K (2022)
27. KLEINER D (2005)
28. ROSENBERG A (2018)
29. PADAKANTI P (2019)
30. HARTWICK E (2012)
31. SHUKLA K (2012)
32. HEBBARD L (2011)
33. XIE L (2012)
34. IBRAHIM S (2016)
35. SETHI J (2021)
36. HARLEY I (2014)
37. GILLIES R (2008)
38. WILSON M (2014)
39. GOSCHZIK T (2018)
40. VENNETI S (2015)
41. XIE L (2021)
42. WU Z (2014)
43. LIEBERMAN B (2011)
44. PLOESSL K (2012)
45. HUANG Q (2018)
46. SOTH M (2020)
47. YAP T (2021)
48. LEE C (2022)
|
10.1016_j.tbench.2023.100084.txt
|
TITLE: CpsMark+: A scenario-oriented benchmark system for office desktop performance evaluation in centralized procurement via simulating user experience
AUTHORS:
- Zhang, Yue
- Wu, Tong
ABSTRACT:
The rapid business expansion of various companies has placed growing demand on office desktops in recent decades. However, improper evaluation of system performance and inexplicit awareness of practical use conditions often hamper efforts to make a consummate selection among multiple alternatives. From the perspective of end users, to optimize the evaluation of desktop performance in centralized procurement, we present CpsMark+, a coherent benchmark system that evaluates office desktop performance based on simulated user experience. Specifically, CpsMark+ includes scenario-oriented workloads portraying representative user behaviors modeled from the cooperative workflow of modern office routines, together with flexibly adapted metrics that properly reflect end-user experience according to different task types. Comparative experiments against state-of-the-art benchmarks demonstrate the high sensitivity of CpsMark+ to various hardware components, e.g., the CPU, and its high repeatability, with a Coefficient of Variation of less than 3%. In a practical case study, we also demonstrate the effectiveness of CpsMark+ in simulating the user experience of tested computer systems under modern office-oriented scenarios, improving the quality of office desktop performance evaluation in centralized procurement.
BODY:
1 Introduction Computer performance used to be easily indicated by hardware configurations. As computer architecture grows more sophisticated, however, using specifications as a metric gives an incomplete picture of overall computer performance in many practical scenarios [1]. Such an evaluation method is biased and cannot keep up with the rapid improvement in computer performance brought by thriving design philosophies. In addition, the rapid expansion of computer markets makes it more difficult to identify system performance. These obstacles give rise to the use of various computer benchmarks. However, most existing benchmarks are unable to meet the performance evaluation requirements of centralized procurement of office computers. Micro and kernel benchmarks are constructed by repeating monotonous operations or running pivotal algorithms from synthetic workloads. These benchmarks merely reflect the partial performance of a certain component in a specific system and are primarily utilized by researchers or manufacturers pursuing innovative computer design. Some newer benchmarks, e.g., the Business Applications Performance Corporation's SYSmark and Futuremark's PCMark, mainly consist of common business application workloads and are more representative of commercial use, but they fail to offer an overall, scenario-oriented evaluation of general end-user experience [2]. Furthermore, they are not open-source benchmarks, and the opacity of their scoring methodology and workload operations impairs their fairness and transparency, which are essential for centralized procurement. To address the limitations of SYSmark and PCMark, CpsMark 1.0 [3], an open-source benchmark for microcomputers, was developed. However, the design philosophy of CpsMark 1.0 is not user-oriented but emphasizes workload capacity. As a result, in practice, users complain that its workload characterization is biased and its metric measurements are inflexible. In addition, its benchmark methodology was not designed with adequate consideration for office scenarios. Moreover, it is difficult to precisely grasp the specific needs of end users, let alone individual preferences, especially in centralized procurement. Such inaccessibility makes it hard to formulate the performance evaluation process and limits the rational utilization of existing computer benchmarks. This paper aims to solve the above problems and to systematically optimize the process of utilizing benchmarks to evaluate office desktop performance in centralized procurement. Specifically, we have redeveloped CpsMark into CpsMark+, a novel and coherent benchmark system that builds a bridge between system performance and simulated user experience in the intended usage scenario, i.e., the daily working scenario of the modern office. Extensive experiments on multiple real-world tested systems demonstrate the high sensitivity and repeatability of CpsMark+ results. We then used CpsMark+ as a substitute for hardware specifications in quantitatively evaluating the overall computer performance of responsive bids in a real case of centralized procurement. Experimental results show that the user experience ratings of desktops selected by benchmark score are significantly higher than those of desktops selected by the original bid evaluation method, which indicates the effectiveness of CpsMark+ in simulating user experience under modern office-oriented scenarios for office desktop performance evaluation in centralized procurement.
The rest of this paper is organized as follows: Section 2 reviews related work and provides our motivation for developing CpsMark+. Section 3 summarizes the challenges in evaluating office computer performance in modern office scenarios for centralized procurement. Section 4 describes our methodology and process in developing CpsMark+, as well as extensive experiments for evaluating and comparing CpsMark+ with related work. Section 5 presents a case study of centralized procurement in which we demonstrate the effectiveness of using CpsMark+ as a computer benchmark to simulate user experience in daily office scenarios for desktop performance evaluation. Section 6 concludes our work and outlines possible directions for future research.
2 Background 2.1 Existing benchmarks and metrics We have reviewed related work on computer performance evaluation; most of it has limitations in benchmarking office desktops under modern office scenarios for centralized procurement, or was not designed for commercial use at all. SYSmark 2018 [4] adopts real-world third-party software as workloads to evaluate overall computer performance and is widely applied in commercial markets. Its usage scenarios are modeled as subjectively grouped job natures, like productivity and creativity, which cannot describe cooperation across tasks in a common workflow. As for the workloads, most are designed to be CPU-intensive and place little pressure on the GPU and storage system, making the evaluation insensitive to the graphics and I/O performance that end users may care about in daily use. Further, system responsiveness and program start-up are isolated and measured by specific applications, weakening the realistic reference value of the benchmarking results. PCMark 10 [5] reports an overall score calculated as the geometric mean of the tested metrics of the workloads included in each test group. The geometric mean returns a normalized score that treats the performance of each workload equally. This scoring methodology outputs a balanced evaluation result, which neglects the differing importance of different workloads and is unable to describe real user experience in a specific scenario. The Phoronix Test Suite [6] is an open-source and extensible benchmark system that evaluates the comprehensive performance of multiple platforms. It includes hundreds of test programs covering a wide range of applications to evaluate various metrics. Nevertheless, the contributors provide little information about the benchmarks' logic and internals, especially on how each test is tagged and applied for specific components [7]. Moreover, the system has numerous functionally overlapping programs for identical system parts and requires complicated dependencies, which makes it too generic and inefficient to be used in centralized procurement. There are other benchmarks targeting specific application domains. 3DMark [8] mainly describes the real-time gaming performance of graphics cards; its dependence on frame rate as the only metric limits further use in other fields [2]. SPEC CPU 2017 [9] contains a series of floating-point and integer algorithms extracted from the kernels of compute-intensive applications to evaluate the computing performance of CPUs. The workloads are synthetic and biased, making them more suitable for simulative experiments in academic research and the industrial development of processors.
The Stanford SPLASH benchmark suite [10] evaluates parallel algorithms for shared-memory multiprocessors with real scientific workloads, which is of little use for office routines. Micro benchmarks such as STREAM [11] and lmbench [12] test only a single metric, such as the memory bandwidth or latency of an individual hardware component, through monotonous program operations, and therefore disregard the resource allocation and coordination of mixed workload manipulations within the entire computer system [13].
2.2 Our motivation for upgrading CpsMark+ To address the aforementioned limitations of SYSmark and PCMark, we released the microcomputer benchmark CpsMark 1.0 in 2014, which evaluates processor performance based on a series of CPU-intensive workloads abstracted from typical computing scenarios [3]. However, the design of CpsMark 1.0 mainly focuses on workload capacity instead of reflecting end-user experience. Its workload operations are designed to be CPU-intensive and isolated from each other, so it cannot reflect overall performance and user experience in real scenarios, which, by contrast, require workloads to be coherent and interactive. Its scoring methodology treats each workload equally and neglects their differing importance in practical tasks. In addition, the third-party software used as workloads and the operating system (Windows 7) supported by the benchmark are obsolete in today's burgeoning computer markets. Generally, these drawbacks make CpsMark 1.0 merely a simple technical reference for an individual customer, powerless to help make purchase decisions according to actual requirements in the centralized procurement of office computers. Over the last few years, the role of benchmarks in purchasing computers has been in the spotlight. Some organizations, like Bitkom, Germany's digital association, have proposed the use of benchmarks in computer tendering [14]. Intel has also recommended some existing benchmarks as criteria for screening the shortlist of bidders [1]. Inspired by such evolving roles of benchmarks, we redesigned CpsMark 1.0 into a coherent benchmark system that utilizes simulated end-user experience under office-oriented working scenarios for better performance evaluation in centralized procurement, and finally developed CpsMark+ in 2019.
3 Challenges in evaluating the system performance of office desktops 3.1 Evolution of computer architecture and usage Researchers and consumers used to compare the performance of diverse computer systems by merely inspecting their hardware specifications. Latency and throughput used to be typical metrics that served well in computer performance evaluation, since only the size and content of the input data could affect the processing speed of applications at that time [2]. For the sake of performance evaluation, better hardware always led to higher throughput and lower latency, so computer architecture was merely an inorganic combination of individual components. As computer architecture and usage grow more sophisticated, simple configuration information can hardly predict program performance in disparate scenarios [15]. Such a transformation has gradually given rise to numerous benchmarks: systems of objective test programs that return a normalized test score relative to a baseline platform by running a series of identical applications or other computer operations.
These benchmarks are generally designed to mimic a particular type of workload on a given computer system, with which people can compare the performance of alternative computers under that specific working circumstance. Nevertheless, modern computer applications increasingly interact with humans, the physical world, and each other, often simultaneously. Some new types of computing tasks, like heterogeneous computing [16], can classify different subtasks based on the embedded code segments and automatically assign them to the most suitable computing resources for efficient execution, so that the total time consumption of the entire task is minimized. Many tasks operate in parallel and compete for resources internally, which can be a stochastic process and lead to dynamic results. Complicated interactions among tasks, hardware, and humans make it difficult to describe the entire performance of a given system according to a single task, or even multiple tasks executed in isolation [2]. Generally, the overall performance of modern computer systems is not solely a function of individual hardware and executed applications, but an intricate integration of hardware architecture, the pattern of software execution and resource allocation, and how humans interact with computer systems [17].
3.2 Obstacles to capturing usage requirements in centralized procurement An effective evaluation process for computer performance must be built upon an explicit awareness of the intended usage scenario of the tested systems; such awareness, however, is especially difficult to obtain for office desktops. Evaluation of office desktop performance is often massively required in centralized procurement, a long and strenuous process where only the opinion of authorities dominates purchase decision-making. Hence, the decision-making process is usually distant from the real stakeholders [18], e.g., the internal customers, or the external clients in the case of outsourced work. The principles of procurement and the bidding documents are formulated almost entirely by management and hardly reflect how the procured items are intended to be used in practice. Even in the case of individual purchases, and in contrast to traditional electronic products, information on the potential usage of modern desktops is still not easy to reference directly in the process of performance evaluation, owing to the all-round functions and flexible use of modern computers. A game enthusiast who is keen on 3D games, for instance, might also pay attention to the computational performance required by a software engineer. Hence, it is hard to capture explicit usage requirements for modern office desktops, which impedes effective evaluation of system performance and highlights the importance of computer benchmarks that precisely reflect end-user experience in specific scenarios.
3.3 Difficulties in reflecting real user experience In various business domains, a questionnaire is one of the most direct ways to obtain user experience and satisfaction; however, like many similar surveys, it can only be conducted after prolonged use by real end users in practice, which makes it too slow to help vendors improve their products before release or to serve as a reference when customers are selecting new products. In the field of computers, the rise of various benchmarks solves part of the above problems; however, huge challenges lie in how to precisely reflect user experience without manual intervention.
For a specific computer product, the usage of different potential customer groups can diverge, which requires an accurate match between benchmark workloads and actual user behaviors. Also, each user may apply a different standard when evaluating computer performance, depending on usage habits or product dependency. This undoubtedly influences perceived user experience, and thus requires more considered design of metrics and scoring methodology. Finally, no individual benchmark can consummately reflect the user experience of computer products, because an over-specific design will cause the benchmark to overfit and make it less applicable for wider use. Therefore, the trade-off between the pertinence and the universality of benchmarks is also pivotal.
4 The CpsMark+ benchmark tool In this section, we describe the criteria, methodology, and process for developing CpsMark+ in detail. We also carry out analytical and comparative experiments with respect to typical characteristics of computer benchmarks.
4.1 Criteria and design features Researchers have theoretically explored the art of building a consummate benchmark [19,20]. Kistowski et al. [20] assert that all standardized benchmarks are subject to a group of universal criteria, e.g., relevance, repeatability, fairness, and verifiability, which have been shown to be necessary. In each domain, however, the criteria are expected to include additional features specific to the individual benchmark, depending on its goal, intended usage scenario, or other considerations. The essence of benchmarking office computer performance under daily working scenarios for centralized procurement is to properly evaluate computer systems from the perspective of user experience and to describe system performance according to specific purchase demands. In this paper, we propose the following benchmark criteria, which guide the design of CpsMark+'s features:
• Applications and software manipulations should be scenario-oriented to reflect real user behaviors. Particularly in centralized procurement, end users can hardly exert significant influence on the purchase decisions made by authorities; hence the workloads should be closely correlated with the behaviors or intended usage that are of interest to end customers in many aspects, e.g., the workload characterization and the input data set.
• Cooperation and differing importance across tasks should be described. End users usually do not have equal performance requirements for all tasks, or even for all applications involved in an individual task. In practice, if several applications operate towards a common task or purpose, their sequence and coherence will impact overall working efficiency, since accelerating some applications might be more beneficial than accelerating others.
• The design of metrics should be flexible and account for nonlinearity, meaning that composite metrics should not weigh all applications equally. Considering the complicated usage of modern desktops, the desired metrics of different workloads may vary. In terms of human interaction, for example, a human cannot perceive faster response times beneath some threshold, while for some other tasks, the diversity of execution time across systems can be ignored.
• The benchmark should be open-source and vendor-neutral. The development of closed-source benchmarks is liable to be manipulated by certain vendors through biased workload design, leading to suspicion [21] and loss of credibility.
An open-source benchmark enables public supervision and guarantees the fairness of benchmark results, which is especially crucial in centralized procurement.
4.2 Entire development process and benchmark framework Unlike most computer benchmarks, CpsMark+ is designed to be used in centralized procurement, where a single benchmarking result can affect the purchase and use of a specific product by crowds of employees. Hence, during the development process, it is more beneficial to follow an iterative and incremental strategy instead of top-down principles that formulate schemes at an early stage to keep subsequent design on track. We divide the entire development process into phases, each associated with relevant checkpoints to guarantee its completion. Within each phase, requirements are elicited from various end users through market research or consultation, and representatives are then selected to give feedback on the outcomes of decision-making and implementation. We improve our work based on the feedback and repeat this procedure for each phase. Based on the criteria proposed in Section 4.1, the main software components of CpsMark+ and its overall benchmark framework are depicted in Fig. 1. The CpsMark+ benchmark tool contains three components:
• The automatic setup program, which installs third-party applications and the Master Control Program (MCP) in batches. The MCP is responsible for benchmark execution, including test initialization, resource extraction, data integrity checks, workload execution, log recording, metric measurement and calculation, and report generation.
• The resource package, which includes the input files of the workload operations.
• The third-party application package, which contains the setups of all third-party applications.
The source code of the MCP is maintained online at https://github.com/wanghong3116/CpsMarkPLUS , is still under further improvement, and is subject to change. The resource and third-party application packages have been uploaded to the website of the National Metrology Data Center of China and can be accessed online through https://jc.nmdc.ac.cn/view-40-609748.html . Note that CpsMark+ only supports Microsoft Windows 10. We have not integrated the input files, workload applications, and the MCP into a unitary package, as most commercial benchmarks do; this makes our work transparent and easy to maintain. On the first use of CpsMark+, the trial version of each third-party application is automatically installed on the tested computer system and configured by the execution of the automatic setup program. Likewise, each workload runs independently in the form of complete software; the corresponding application is not merged into the MCP and only receives instructions synchronously from the background of the tested computer system. Such a design reduces the influence of the MCP on system performance and enables a clear view of workload conditions through the logs. The MCP is devised as a serial layout and contains two separate test modules. Users can initialize the number of iterations to run, to eliminate fluctuations in benchmark results. Composed of a sequence of orderly executed workloads, each module independently generates a synthetic score that reflects the performance of its workloads. There is an automatic reboot of the tested computer system between the two modules to eliminate the impact of varying system status (e.g., cache) on module independence.
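To make this serial execution flow concrete, the following is a minimal sketch of an MCP-style controller under stated assumptions: all names here (Workload, run_module, md5_of) are illustrative, not the real implementation, which lives in the repository linked above.

```python
# Minimal sketch of an MCP-style serial test loop (illustrative names only;
# the actual Master Control Program is in the CpsMarkPLUS repository).
import hashlib
import time
from dataclasses import dataclass
from typing import Callable, List

def md5_of(path: str) -> str:
    """Compute the MD5 digest used for input/output integrity checks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

@dataclass
class Workload:
    name: str
    input_file: str
    expected_md5: str
    run: Callable[[str], None]  # executes the workload operations via the app's API

def run_module(workloads: List[Workload], iterations: int = 1) -> List[dict]:
    """Execute the workloads serially, recording the tested metric per workload."""
    results = []
    for it in range(iterations):
        for w in workloads:
            if md5_of(w.input_file) != w.expected_md5:
                raise RuntimeError(f"integrity check failed: {w.name}")
            start = time.perf_counter()
            w.run(w.input_file)          # application launch, loading, operating units
            elapsed = time.perf_counter() - start
            results.append({"iteration": it, "workload": w.name,
                            "seconds": elapsed})
    return results
```

In this sketch the per-workload timing stands in for the metric measurement described above; the automatic reboot between the two modules is an operating-system action and is therefore omitted.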
4.3 Workloads CpsMark+ has two independent modules for simulating the user experience perceived in modern office scenarios, i.e., Comprehensive Application (CA) and Comprehensive Calculation (CC), which can be optionally selected and run independently during the test. Each module has a series of workloads executed in a specific order. In this section, we introduce the design and characterization of the workloads within each module in detail.
4.3.1 User profile abstraction of office computers Chen et al. [22] point out that benchmarks are expected to be associated with real application domains and to mirror practical demands. Although a large employer may have numerous user segments, appropriate classification can minimize complexity and shed more light on the performance requirements of specific user segments. For the daily usage of desktop computers in modern office scenarios, we abstract the profiles of end users from the perspective of occupation and profession, as described in Table 1. Since CpsMark+ has been designed for the commercial evaluation of desktop computers used in modern office scenarios, the user profiles summarized in Table 1 exclude those working in laboratories, R&D centers, factories, or telecommuting. In this paper, we mainly focus on most knowledge workers and some power users.
4.3.2 Usage scenario modeling and application selection Employees in a specific department of a company are likely to engage in fixed routine work; thus, the performance requirements of a specific task in a homogeneous work section should be more emphasized in the centralized procurement of office computers. To closely correlate the design of workloads with the intended usage scenario of the tested computers, we focus on exploring the usage models of intended end users working in daily office scenarios. According to the abstracted user profiles of office computers, we cluster the usage models into four groups of common office scenarios based on their overall functions within a specific workflow, i.e., document manipulation, Internet service, graphic design, and multimedia processing, described as follows:
• The document manipulation scenario contains multiple manipulations of documents in common formats, which are involved in most cases of modern business.
• The Internet service scenario mainly includes web browsing and email creation, which are usually auxiliary means of resource acquisition and information communication.
• The graphic design scenario refers to the visual expression of ideas and information through the combination of symbols, pictures, and text, which is crucial for product presentation tasks like poster production.
• The multimedia processing scenario relates to utilizing computers to digitize and integrate graphics, sound, video, and other media information in a specific interactive interface, which is widely applied in consulting, marketing, and management.
As for the workload applications, we select desktop-level office applications based on popularity. According to the investigation report on office software markets in China by Chinaiern [23], our software market experts selected popular and typical applications for each usage scenario in the modern office, which are summarized in Table 2. Since sufficient time is required for workloads to be developed and validated, the versions of some applications were not the latest when CpsMark+ was released.
In addition, the intended applications of CpsMark+ are the most widely used versions rather than the latest ones, while some applications, like WinRAR, are up to date because end users can update them instantly.
4.3.3 Test module construction While careful selection of usage scenarios ensures the high representativeness of the benchmark, grouping applications with similar performance dependencies from various usage scenarios provides an all-sided picture of the integral performance required by end customers and enhances the usability of the benchmark. Hence, we merge the usage scenarios into two separately run and scored modules, as follows:
• The Comprehensive Application (CA) module includes the document manipulation and Internet service scenarios, which reflect light and middleweight use by task or knowledge workers in most business workplaces, where end users might pay more attention to overall performance, response, and smoothness throughout regular use.
• The Comprehensive Calculation (CC) module includes the graphic design and multimedia processing scenarios, which reflect heavyweight use by power users skilled in professional fields, where end users tend to focus on the execution efficiency of CPU-intensive or GPU-intensive computing tasks.
Within each module, in addition to sharing similar performance dependencies, the usage scenarios are highly correlated and tend to appear in a common workflow under daily office scenarios. Further, each usage scenario is given a different weight based on the sum of the metrics measured from its workloads. This approach ensures a direct and close connection between benchmark results and the computer performance required by end users.
4.3.4 Workload components and design details To reflect the user experience of office computers in the modern office, workloads should be not only scenario-oriented but also capable of simulating user behaviors. Therefore, a CpsMark+ workload is more than mere application automation: it is a logical integration of three elements, namely the input data set extracted from the resource package, the workload operations performed on the input data set through the applications executed by the MCP, and the generated output. For each workload, the input data set is chosen to functionally reproduce the resources or materials that might be used by end users in modern office scenarios. Specifically, we select raw digital content or semi-finished project files that are mainly non-structured data, such as texts, images, videos, webpages, and other application-specific files, e.g., 3ds Max scene files. We then explore the basic operating units that frequently appear in the routine use of applications and integrate them into a series of workload operations that accomplish a common task. We guarantee the completeness of workloads by designing diversified operations that independently generate finished files as output for each application. Moreover, there is no random process in the MCP, so the generated output is uniquely determined by the input data set and the workload operations. The workload operations of the CA module are briefly described in execution order as follows:
• Google Chrome. Simulate users browsing webpages and switching between tabs. Webpages are accessed through locally configured network services. The webpages contain text, pictures, JS (JavaScript) scripts, and Flash.
• Microsoft PowerPoint. Set the new template style and create slides.
Input text and adjust character formats, alignment, and font size. Add pictures, captions, and typesetting. Insert tables and charts with filled data. Browse the slides.
• Microsoft Word. Input characters, modify titles and character formats, split paragraphs, set the table of contents, insert pictures, create tables and charts, and input data.
• Microsoft Excel. Generate and organize data with fixed formulas. Classify and enter data under a specific rule. Calculate and sort common statistics. Draw line charts by category, set titles and styles, and adjust size and position. Define and execute macros.
• Adobe Acrobat. Convert the PowerPoint, Word, and Excel documents made in the previous workloads to PDF files, and browse these PDF files page by page.
• WinRAR. Compress and decompress mixed files in multiple formats, including images, videos, documents, databases, and log files.
• Microsoft Outlook. Simulate users receiving and browsing email contents and attachments offline, including Word, Excel, and PowerPoint files. Upload new attachments, edit the body of the email, and reply.
The workload operations of the CC module are briefly described in execution order as follows:
• Adobe Photoshop. Use the PSD (Photoshop Document) file to make a vertical poster. Separate the target area from the source material and design the layout of layers. In new layers, set titles and captions, add a logo, and adjust its size, coordinates, and transparency. Combine all layers, blur the background, and merge them into a large picture.
• Autodesk AutoCAD. Use the DWG file to draw distributed structure diagrams of buildings. In the main framework, draw the structure and vector identification of each area, add coordinates, and mark dimensions. Change the colors of layers and use different line styles. Design wiring, and draw pipeline distribution and flow direction.
• Autodesk 3ds Max. Design a 3D model of a whale. Develop the 3D framework, color the texture, add lighting effects, and create reflection and shadow effects by calculating the light source position, incidence angle, and reflection angle. Produce motion trajectories and movements of the whale model, and render segmental frames of the action sequences.
• Adobe Premiere. Clip and splice the source video materials, add lens transitions and subtitles, synthesize sound effects, render, and preview the output video.
• Adobe After Effects. Add particle explosion effects, and render a firework explosion animation sequence of 1800 frames at 30 FPS.
• HandBrake. Convert the H.264-encoded source video at 4K resolution to an H.265-encoded target video at 2K resolution; the container format is MP4. Hardware acceleration is leveraged if enabled.
Within each module, the workloads are executed in the order specified above. The format, or even the content, of the generated output of some applications is identical to that of the input data set of subsequent applications. Such a design enables the test modules to describe cooperation across tasks throughout a common workflow. For example, the workloads of the CA module simulate the following coherent user behaviors: resource preparation via the Internet, content creation, document processing, and email delivery.
4.4 Metric design and test implementation Although work efficiency is a pervasive metric in most benchmarks that evaluate computer performance [24] and is widely referenced in helping customers make decisions, a unitary metric design may not tell the true story of user experience, for the following reasons.
First, people do not have equal performance requirements for all tasks, or even for the same portion of an individual task, so user experience is diversified and varying. For instance, professional designers in an advertising agency might pay more attention to the time consumption of multimedia processing, while the user experience of office secretaries is closely related to the response speed and fluency of frequent document operations. Second, the perception of user experience is nonlinear and difficult to quantify. In terms of human interaction, humans cannot perceive faster response times beneath a certain threshold, so further acceleration of a task will not bring a better user experience. For example, a frame rate that exceeds what the monitor supports will no longer improve the user experience of a graphics task, even though program execution could be accelerated by a better GPU. As a result, in the context of CpsMark+, we define work efficiency as the time consumption for systems under test to complete all operations related to user experience within a specific workload, i.e., application launching, input file loading, and the basic operating units outlined in Section 4.3.4. We then take the defined work efficiency as the metric of CpsMark+ and focus on how it can be measured to properly describe the user experience of tested desktops in modern office scenarios.
4.4.1 Method of sampling To guarantee the pertinence of the metric, CpsMark+ adopts multiple methods to sample the work efficiency of tested computer systems, depending on the workload. Such a flexible approach differentiates user experience by matching the usage of applications with their performance requirements. More specifically, we predefine runtime as the time spent by each basic operating unit that actively uses system resources, while response time is the time interval between task activation and task completion. The sampling methods are illustrated in Fig. 2. For the workloads in the document manipulation (WinRAR excluded) and Internet service scenarios, basic operating units are numerous and densely distributed, with lightweight resource consumption. Some intervals between them consist of events irrelevant to the evaluation of user experience, e.g., temporary retention of the screen display or timer interference, which would adversely affect the effectiveness of the workloads if included in the metric. However, too many samplings of basic operating units would accumulate sampling error and cause frequent switches between the transient and steady states of the program process, which might interfere with system performance. Hence, we sample the start and end timestamps of the entire task and calculate its response time, i.e., t7 − t0 in Method 1, then we sample the time intervals of irrelevant events and subtract them from the response time to obtain the metric of these workloads. For the other workloads of CpsMark+, the basic operating units are relatively sparse and have a high concentration of resource consumption. These basic operating units are time-consuming and contribute most of the entire task. In this case, user experience is more susceptible to the execution speed of a single operation. To accurately measure the runtime, we artificially add extra short waits between the heavyweight operating units to reset the resource consumption. Finally, we sum the sampled runtime of each basic operating unit, e.g., t2 − t1 in Method 2, as the metric of these workloads.
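The two sampling methods can be sketched roughly as follows; the function names are hypothetical, and the timestamp labels merely echo the t-notation above.

```python
# Illustrative sketch of the two sampling methods described in Section 4.4.1
# (timestamp labels follow the t-notation of Fig. 2; all names hypothetical).
import time

def method1_response_time(task, irrelevant_intervals):
    """Method 1: response time of the whole task (t7 - t0), minus the
    sampled intervals of events irrelevant to user experience."""
    t0 = time.perf_counter()
    task()                      # densely packed, lightweight operating units
    t7 = time.perf_counter()
    return (t7 - t0) - sum(irrelevant_intervals)

def method2_runtime_sum(units, wait=1.0):
    """Method 2: sum of per-unit runtimes (e.g., t2 - t1), with short waits
    between heavyweight units to reset resource consumption."""
    total = 0.0
    for unit in units:
        t1 = time.perf_counter()
        unit()                  # a sparse, resource-intensive operating unit
        t2 = time.perf_counter()
        total += t2 - t1
        time.sleep(wait)        # artificial wait, excluded from the metric
    return total
```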
4.4.2 UI-level vs. API-level automation Benchmark implementation has a great impact on the test results of the designed metric. There are two primary approaches to automating the execution of workloads, i.e., UI-level and API-level [25,26]. Some benchmarks leverage automated scripts, such as AutoIt, to initiate and navigate applications by simulating mouse clicks or keystrokes [25]. The duration of each task is measured when the completion of the task is detected by application-specific methods. Such an approach mimics practical human interaction at the UI level; nevertheless, it impedes the accurate reflection of user experience for performance evaluation. Although the estimation of user experience is somewhat subjective, it should be highly relevant to how well computer systems react to or execute the instructions of real end users, which might be distorted by a contradictory combination of simulated user behaviors and computer-based metrics. We instead choose independent APIs, or invoke them through application communication standards, e.g., the Component Object Model, to automatically control the execution of each workload. In this case, the launching of applications, the loading of input files, and the basic operating units are implemented through a set of functions, methods, and procedures contained in the selected APIs or standards (a minimal sketch of this approach follows the list below). Compared to UI-level implementation, our decision to choose API-level implementation provides some tangible benefits, as follows:
• Reduction of irrelevant time measured in the metric. On the one hand, UI-level implementations spend a large amount of time detecting the completion of tasks from returned signals. For instance, automated scripts may wait for the application to show a pop-up window, or may wait for a dialog box to disappear, which requires accurate technical identification. Such a script-based judgment process is quite time-consuming and significantly lags behind the completion of tasks as perceived by end users. On the other hand, some workload operations themselves take much time for automated scripts to perform. For example, text input might be simulated by continuous keystrokes at a fixed speed, which has identical time consumption on all tested computers. This prolonged simulation accounts for a large proportion of the designed metric and dilutes the measurement with what end users do not value.
• Less resource consumption and higher test efficiency. Although some UI-level automation frameworks claim to be lightweight and to have little influence on performance, they still consume more computing and memory resources than API-level automation [27]. In addition, API-level automation requires less code and does not need to deal with interface elements. This makes performance evaluation a faster and more compact test process and further reduces overall resource consumption.
• Greater stability in testing and maintenance. UI-level automation sometimes gets stuck or enters endless loops due to UI complexities. For instance, a mouse cursor might miss a button due to a change of resolution, or an unexpected window may lead to wrong recognition. Some applications are event-driven and can easily enter idle states if no user interacts with them [2]. By contrast, API-level automation guarantees the exact execution of each workload operation and helps ease the maintenance difficulties brought by external factors [28], e.g., frequent updates of application versions.
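As a rough sketch of this API-level approach, the snippet below drives Microsoft Word through its COM interface via the pywin32 bindings; the file path and the specific operations are placeholders, not CpsMark+'s actual workload script.

```python
# Minimal sketch of API-level workload automation via the Component Object
# Model (COM), here through the pywin32 bindings; path and operations are
# placeholders, not CpsMark+'s real workload definition.
import time
import win32com.client  # pip install pywin32 (Windows only)

t_start = time.perf_counter()
word = win32com.client.Dispatch("Word.Application")  # launch via COM
word.Visible = False

doc = word.Documents.Add()                 # basic operating unit: new document
doc.Content.Text = "CpsMark+ sample text"  # basic operating unit: text input
doc.SaveAs(r"C:\cpsmark\output\sample.docx")
doc.Close()
word.Quit()

runtime = time.perf_counter() - t_start    # measured without UI polling
print(f"workload runtime: {runtime:.3f} s")
```

Because the operations are issued as direct method calls, no time is spent waiting for windows or dialog boxes to appear, which is precisely the first benefit listed above.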
4.4.3 Pipeline of metric testing In CpsMark+, the test of the designed metric for a specific workload is performed by the MCP and follows a similar pipeline across all workloads, as shown in Fig. 3. More concretely, for the N-th workload, the MCP first decompresses the resource package and extracts the exclusive input files to a specified location; an MD5 [29] check is then performed on them to ensure data integrity. If the MD5 check fails, the test aborts and returns to the initialization phase; otherwise, the MCP moves on to the application execution phase, depicted as the dashed rectangle in Fig. 3, where the designed metric T_N is tested. When all the workload operations are finished, an MD5 check is performed on the generated output. Finally, after a five-second countdown, if there is no user input to interrupt the test, i.e., a mouse click on the pause button, the MCP proceeds to the next workload until the entire benchmark is completed. It is worth noting that for the workloads in the document manipulation scenario and Google Chrome, the applications are launched by directly opening the input files, while for the workloads in the graphic design and multimedia processing scenarios and Microsoft Outlook, the input files are loaded after a separate launch of the applications. As a crucial factor affecting user experience, the speed of application launching is a good indicator of memory and storage performance.
4.5 Scoring methodology The scoring methodology of a benchmark integrates the test results of the designed metric and generates quantified scores that evaluate the overall performance of computer systems. For a commercial benchmark used in centralized procurement, the scoring methodology should provide an accurate estimation of the user experience of tested computers to help authorities choose better products from the alternatives. For CpsMark+, the design of the scoring methodology meets the following criteria:
• The resulting score does not fluctuate significantly and remains steady given a constant computer system.
• The resulting score can sufficiently differentiate the user experience of tested computers with diverse performance.
• The pair-wise relationship between the resulting scores of different computer systems is neutral to the calibration method and the specification of the baseline platform.
Concretely, for each module, we sum the tested metrics of the included workloads executed on the tested computer system and compare this sum with the sum of the workload metrics tested on the baseline platform. We calculate the ratio of these two sums, scaled by 1000, and round it to the nearest integer; a higher score indicates better performance. To be more specific, given the i-th module with N_i included workloads, let T_j and t_j be the tested metrics of the j-th workload executed on the tested computer system and the baseline platform, respectively. The resulting score for module i is calculated as follows: S_i = 1000 · (∑_{j=1}^{N_i} t_j) / (∑_{j=1}^{N_i} T_j). Note that we do not take the geometric mean of the module scores as an overall rating, which would place equal weight on each module [30]. Instead, we keep the scores separate so that end users can flexibly customize the weight of each module when referring to benchmark results according to their diversified requirements. Within each module, the sum of the tested metrics reflects cooperation across workloads and their different performance dependencies.
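A minimal sketch of this scoring rule, together with the workload-wise median calibration described in Section 4.6 below, might look as follows; all function names and numbers are illustrative.

```python
# Minimal sketch of the module scoring rule S_i = 1000 * (sum of baseline
# metrics t_j) / (sum of tested-system metrics T_j); names are illustrative.
from statistics import median
from typing import List

def calibrate(baseline_runs: List[List[float]]) -> List[float]:
    """Workload-wise calibration t_j: median metric over the baseline runs
    for each workload j (see Section 4.6)."""
    return [median(per_workload) for per_workload in zip(*baseline_runs)]

def module_score(tested: List[float], calibration: List[float]) -> int:
    """Module score: summed baseline metrics over summed tested metrics,
    scaled by 1000 and rounded; metrics are time consumptions, so a faster
    tested system yields a smaller denominator and a higher score."""
    return round(1000 * sum(calibration) / sum(tested))

# Example: five baseline iterations of a three-workload module (seconds)
baseline_runs = [[30.1, 45.0, 60.2],
                 [29.8, 44.7, 59.9],
                 [30.0, 45.3, 60.1],
                 [30.2, 44.9, 60.0],
                 [29.9, 45.1, 60.3]]
t_j = calibrate(baseline_runs)
T_j = [24.0, 36.1, 48.2]       # metrics measured on the tested system
print(module_score(T_j, t_j))  # > 1000 means faster than the baseline
```

Because the score is a ratio of sums, workloads with larger time consumption naturally carry more weight, which matches the emphasis above on the differing importance of workloads.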
4.6 Baseline platform and calibration As the datum point of the evaluation framework, a baseline platform is a prerequisite for most benchmarks. A judicious choice of baseline platform is of great significance for the resulting score. For instance, an exorbitant configuration of the baseline platform will lead to low sensitivity and weak differentiation, while an inferior one may cause poor repeatability. Hence, at the time of development of CpsMark+, we studied the mainstream configurations of office computers purchased in centralized procurement and determined the following configuration for the baseline platform based on the performance requirements of the workloads in CpsMark+:
• CPU Model: Intel® Core™ i3-9100 (4 cores, 3.60 GHz, 6 MB L3 cache)
• Graphics: Intel® UHD Graphics 630
• RAM: Kingston® ValueRAM™ 8 GB DDR4 2400 MHz
• Storage: Western Digital® WD Blue™ 1 TB SATA III HDD (6 Gb/s, 7200 RPM)
• Chipset: Intel® Z390
• Display Resolution: 1920 × 1080
• OS: Microsoft® Windows® 10
Specifically, to calibrate t_j, we built the baseline platform with brand-new parts according to the above hardware configuration and performed a clean installation of the selected operating system. We then ran both modules of CpsMark+ on the baseline platform for 5 independent iterations; the workload-wise calibration t_j is calculated as the median value of the tested metrics of the j-th workload over the five runs. Note that since the baseline platform is not a finished product of a computer manufacturer, it is illogical to integrate the tested metrics of all workloads within each module as a module-wise calibration of the baseline platform.
4.7 Benchmark characterization In this section, we analyze basic characteristics of CpsMark+ from the perspectives of sensitivity and repeatability, two widely used criteria for typical computer benchmarks. Specifically, we performed extensive test experiments with CpsMark+ on multiple assembled computer systems. We then analyze the sensitivity of the tested module performance to varying hardware characteristics. We also explore the repeatability of workload performance under a constant computer system and a stable test environment.
4.7.1 Experimental setup We alter five different hardware characteristics of a predefined datum point to build the tested computer systems: the number of CPU cores, CPU frequency, graphics card, storage device, and system memory, which are crucial factors in determining user experience. For each hardware characteristic, we select four configurations with significant pairwise performance differences, denoted Config 1 to Config 4 in ascending order of performance. The detailed configurations of each hardware characteristic are listed in Table 3. For the configurations of the CPU characteristic, instead of using different processor models, we stick to the CPU model of the datum point and enable different CPU frequencies or numbers of CPU cores by changing BIOS settings. For the configurations of the graphics card, we use the same brand of discrete graphics cards to ensure consistency of graphics drivers and available physical memory.
For the configurations of system memory, we always adopt single-channel mode and only change the memory size of the datum point. The configuration of the datum point is listed as follows:
• CPU Model: Intel® Core™ i7-9700K (8 cores, 3.60 GHz, 12 MB L3 cache)
• Graphics: Nvidia® GeForce® GTX 750
• RAM: Kingston® ValueRAM™ 4 GB DDR4 2666 MHz
• Storage: Seagate® Barracuda® 1 TB SATA III HDD (6 Gb/s, 5400 RPM)
• Chipset: Intel® Z390
• Display Resolution: 1920 × 1080
• OS: Microsoft® Windows® 10
Notably, for all the experiments in this section, we disable common auxiliary optimization technologies, e.g., Turbo Boost, Hyper-Threading, and hardware acceleration, to better highlight the influence of the different configurations under various hardware characteristics on benchmark performance from a static perspective. These auxiliary optimization technologies can be enabled in the practical use of CpsMark+.
4.7.2 Sensitivity analysis By evaluating the module performance and the workload performance on tested systems with different levels of configurations, we can explore the sensitivity of CpsMark+ scores to various hardware characteristics. To obtain strict test results, except for the hardware characteristic under test, the other components of a given configuration remain identical to those of the datum point. Specifically, we run CpsMark+ on each configuration for 20 independent iterations, with a system reboot and a 15-min interval between runs. In each iteration, we sum the tested metrics of the included workloads for each module, and the average of these sums is adopted as the module performance on a given configuration. Finally, for each hardware characteristic, we calculate the inverse ratio of the module performance tested on each of the other three configurations to the module performance tested on the first configuration, i.e., the base configuration. The sensitivity of the module performance and the workload performance of CpsMark+ to various hardware characteristics is shown in Fig. 4 and Table 4, respectively. Based on the module performance evaluation depicted in Fig. 4, we notice that both modules have a high sensitivity to CPU cores and CPU frequency; the module performance steadily increases as the configurations improve, indicating that both modules can make full use of CPU resources and are significantly affected by more CPU cores and higher CPU frequency. The CC module has a significantly higher sensitivity to the graphics card (the best configuration performs 1.77 times better than the base configuration), while there is no significant difference in the performance of the CA module, indicating that better graphics cards do not lead to significant performance improvements for the CA module. The rotation speed and storage media of hard disks also have a great influence on the performance of both modules, since the workloads involve application launching and many I/O operations, while the drive interface and protocol contribute less to module performance. Both modules are relatively less sensitive to system memory than to the other hardware characteristics, indicating that a larger system memory brings the least significant improvement in module performance compared to better configurations under the other hardware characteristics. We also notice that the sensitivity of the CC module to most hardware characteristics is higher than that of the CA module, since the workloads in the CC module are heavier and consume more resources.
In addition, as the configurations improve, the growth rate of the module performance slows down, especially for the best configurations, because once a configuration exceeds the requirement bottleneck of the entire workload set, further improvement of a single hardware characteristic cannot yield much performance growth. As for the sensitivity of the workload performance of CpsMark+ to each hardware characteristic, based on the workload performance evaluation depicted in Table 4, we observe the same trend as for the module performance. Generally, the performance of all the workloads is highly sensitive to the number of CPU cores, the CPU frequency, and the storage device. The performance of workloads that require massive GPU-intensive computing, e.g., AutoCAD and Premiere, is more sensitive to graphics cards than that of the relatively lightweight workloads, e.g., Microsoft Office. However, the performance of some workloads in the CA module, e.g., Excel and WinRAR, is more sensitive to CPU frequency and storage devices, which might result from the frequent floating-point calculations in RAM and the massive document I/O operations on disk triggered by these workloads. We also find that the performance improvement of most workloads is not significant once the system memory reaches 8 GB, which is likely the requirement threshold for the workload software to run smoothly.
4.7.3 Repeatability analysis The repeatability of CpsMark+ is evaluated according to the fluctuation of the module performance and the workload performance tested on identical computer systems. We leverage the Coefficient of Variation (CV), the ratio of the standard deviation to the mean, to indicate the degree of performance fluctuation [31]. To be more specific, for all the experiments under each hardware characteristic, we calculate the CV of the module performance and of the workload performance evaluated on the same configuration over 20 independent iterations. Finally, we aggregate the CV of the module performance under each hardware characteristic and calculate the average CV of the workload performance evaluated on each level of configuration. The results are shown in Fig. 5. As we can see from the results depicted in Fig. 5, the CV of the module performance under all the hardware characteristics is less than 3%, while the CV of the workload performance under each of the four configuration levels is less than 2.5%, which indicates that the overall benchmark results of CpsMark+ are stable and highly consistent on identical tested computer systems and environments. Furthermore, for each hardware characteristic, we mark the CV of the module performance under Config 1 and Config 4, respectively. It turns out that, except for the results of the CC module under the CPU frequency characteristic, the best configuration yields the lowest CV of the module performance, while the worst configuration leads to the highest CV. Combining these with the CVs of the workload performance, we can conclude that the stability of benchmark results improves when the performance of the tested configuration exceeds the performance requirements of the workloads. As a result, the CV of the module performance under the CPU cores and CPU frequency characteristics is relatively high, since a CPU frequency of only 2.0 GHz or only 2 CPU cores might significantly encumber the performance of the tested computer systems.
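For concreteness, the repeatability statistic can be computed as in the following sketch; the performance samples are hypothetical, not measured values.

```python
# Illustrative computation of the repeatability statistic: the Coefficient
# of Variation (CV = standard deviation / mean) of module performance over
# repeated iterations on one configuration (numbers are hypothetical).
import statistics

def coefficient_of_variation(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

# Module performance (summed workload metrics, seconds) from 20 iterations
module_perf = [108.3, 109.1, 107.8, 108.6, 108.9, 109.4, 108.1, 107.9,
               108.7, 109.0, 108.4, 108.2, 109.2, 108.8, 108.5, 107.7,
               108.0, 109.3, 108.9, 108.6]
cv = coefficient_of_variation(module_perf)
print(f"CV = {cv:.2%}")  # well below the 3% bound reported for CpsMark+
```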
We also notice that the CV of the heavyweight workload performance is generally higher than that of the lightweight workload performance, consistent with the previous conclusion; a likely reason is that heavyweight workloads occupy substantial system resources and introduce unexpected disturbance through resource competition between complex program instructions. Google Chrome is an exception: the value of its tested metric is relatively small, so it is more susceptible to run-to-run fluctuation across repeated experiments. Another finding is that the module performance and the workload performance tested on better configurations become less volatile, as the performance of high-level configurations may greatly exceed the requirements of the workload software. Overall, our benchmark methodology ensures that CpsMark is highly sensitive while providing stable and reliable evaluation results. 4.8 Comparative evaluation against competing benchmarks In this section, we focus on quantitative and qualitative comparisons between CpsMark and two computer benchmarks commonly used in the commercial field, i.e., SYSmark 2018 and PCMark 10. We explain the experimental and analytical results in detail, which further highlight the strengths and design philosophy of CpsMark. 4.8.1 Quantitative comparison For the quantitative comparison, we compare CpsMark with SYSmark 2018 and PCMark 10 with respect to the sensitivity and the repeatability of the module performance under various hardware characteristics. We do not select other metrics, e.g., test duration and power consumption, since SYSmark 2018 and PCMark 10 are not open-source benchmarks and have no built-in functions to precisely measure these metrics; for the same reason, their sensitivity and repeatability cannot be compared at a finer granularity, e.g., at the level of workload performance. In addition, sensitivity and repeatability are universal metrics for comparing different benchmarks, even when the benchmarks follow diverse construction methodologies and usages. Specifically, we follow the same experimental setup as described in Section 4.7. The modules of SYSmark 2018 include Productivity, Creativity, and Responsiveness, while the modules of PCMark 10 include Essentials, Productivity, and Digital Content Creation. Detailed information about SYSmark 2018 and PCMark 10 is available on their respective official websites. Note that in this section, among the three benchmarks, we only compare the sensitivity and the repeatability of the modules that evaluate system performance in similar usage scenarios. The average sensitivity and repeatability (CV in percentage) of the module performance for the three compared benchmarks are summarized in Table 5. In terms of the sensitivity results depicted in Table 5, among the three modules that evaluate system performance related to document editing and Internet surfing, i.e., the CA module of CpsMark, the Productivity module of SYSmark 2018, and the Productivity module of PCMark 10, the Productivity module of SYSmark 2018 has the highest sensitivity to all the configurations under each hardware characteristic, since it includes some workloads with relatively high system resource consumption, e.g., AutoIT and Shotcut; the CA module of CpsMark has the second highest sensitivity, close to that of the Productivity module of SYSmark 2018.
Among the three modules that evaluate system performance related to multimedia processing and graphics design, i.e., the CC module of CpsMark, the Creativity module of SYSmark 2018, and the Digital Content Creation module of PCMark 10, the CC module of CpsMark is the most sensitive to all the hardware characteristics, especially graphics cards, which indicates that CpsMark can sensitively reflect the performance improvement of better GPUs in digital and multimedia processing tasks. In terms of the repeatability results depicted in Table 5, among the three modules that evaluate system performance related to document editing and Internet surfing, i.e., the CA module of CpsMark, the Productivity module of SYSmark 2018, and the Productivity module of PCMark 10, the CA module has the highest repeatability, i.e., the lowest CV, across all configurations under each hardware characteristic, while the Productivity module of SYSmark 2018 has the lowest repeatability, i.e., the highest CV. This result is attributed to the UI-level automation of SYSmark 2018, which introduces many unstable and delayed interactions, e.g., clicking dialog windows. Among the three modules that evaluate system performance related to multimedia processing and graphics design, i.e., the CC module of CpsMark, the Creativity module of SYSmark 2018, and the Digital Content Creation module of PCMark 10, the CC module likewise has the highest repeatability, i.e., the lowest CV, across all configurations under each hardware characteristic, which is attributed to the relatively lightweight workloads and the smooth API-level automation of CpsMark. Generally, in terms of the modules that evaluate system performance in similar usage scenarios, CpsMark exhibits the highest repeatability among the compared state-of-the-art commercial benchmarks, i.e., SYSmark 2018 and PCMark 10, while also possessing the second highest sensitivity to the hardware characteristics tested in this experiment, close to the sensitivity of SYSmark 2018. 4.8.2 Qualitative comparison In this section, we conduct a qualitative comparison of the three benchmarks from the perspectives of workload characterization and scoring methodology. Firstly, the Responsiveness module of SYSmark 2018 and the Essentials module of PCMark 10 contain a large number of workload operations irrelevant to the user experience perceived in practical usage scenarios of tested computer systems; this is inconsistent with the primary design goal of CpsMark and is the reason we excluded these modules from the quantitative comparison above. To be more specific, the Responsiveness module of SYSmark 2018 solely measures the response time of program initialization: its workloads consist of a series of sequential application launches and shutdowns, which cannot reflect practical use in daily office routines and overamplify the influence of storage devices on an overall, user-experience-based performance evaluation. By contrast, each workload of CpsMark reflects a common workflow frequently adopted in modern office scenarios, and the workloads collectively form typical tasks that are fluid in nature, in line with our benchmark principle of simulating user experience.
Moreover, the Essentials module of PCMark 10 includes the playback of a video of fixed duration, so a large fixed time cost enters the calculation of the test metrics; this dilutes the contribution of better hardware to the measured performance of the module and further reduces the benchmark's sensitivity. Secondly, for each module of PCMark 10, the scoring methodology takes the geometric mean over the test metrics of the included workloads, which returns a normalized score that treats the performance of each workload equally and neglects the differing importance of various workload operations in daily office scenarios. By contrast, as described in Section 4.5, for each module of CpsMark, the scoring methodology takes the weighted sum over the test metrics of the included workloads, which emphasizes the influence of heavy or long-running workload performance on the simulated user experience and downweights trivial workload operations that are less involved in the routines of end users. 5 Case study: performance evaluation of office desktops using CpsMark in a vendor-neutral tendering In this section, we aim to demonstrate the effectiveness of CpsMark in simulating user experience under office-oriented working scenarios for better office desktop performance evaluation in practical centralized procurement. Specifically, in a vendor-neutral tendering of desktop computers for a Chinese company, the tendering was divided into two separate batches with different bid evaluation methods. For the second batch, we combined the original bid evaluation method prepared for the first batch with benchmark scores from CpsMark to formulate a new bid evaluation method. The original and the new bid evaluation methods were then adopted independently in the two tendering batches. After one year of use of the winning desktops selected by the two bid evaluation methods, we independently surveyed the user experience of end users from each tendering batch and collected their ratings. The results show that the desktops purchased in the second batch received significantly higher user experience ratings, which indicates that the workloads of CpsMark can precisely simulate the user experience perceived by end users working in modern office-oriented scenarios and enable more targeted performance evaluation for desktops with such usages. 5.1 Brief introduction of the tendering At the beginning of 2020, a large digital marketing agency in China initiated a centralized procurement to purchase desktop computers for the employees of a functional department and a business department, denoted A and B, respectively. To innovate on the traditional tendering policy and validate the effectiveness of CpsMark, within each department the procurement was arranged as two separate batches of vendor-neutral tendering with different bid evaluation methods, denoted 1 and 2. The basic information of the four tendering batches is listed in Table 6. During the following year, the employees of each department were divided into two groups, each using the desktop computers purchased in one of the two tendering batches.
Note that in addition to the bid evaluation methods used for final decision-making among shortlisted alternatives, we also specified minimum technical requirements to preliminarily screen candidates from all bidders, based on the standard and high-performance configurations in Bitkom's guideline for IT procurement [14], i.e., Vendor-neutral Tendering of Desktop Computers. 5.2 Improvement of bid evaluation methods The main difference between the two tendering batches lay in the bid evaluation methods, which were adopted by the bid evaluation committee to determine the best bidding product. In this case study, to help authorities purchase desktop computers with better end-user experience at a given cost and to further validate the effectiveness of CpsMark, we decided to partly replace the straightforward hardware-based scoring rules in the original bid evaluation method with benchmark scores from CpsMark, forming the new bid evaluation method. 5.2.1 The original bid evaluation method The original bid evaluation method consists of 3 sections with a total of 100 points, i.e., the commercial section, the technical section, and the price section. The final score for a certain bid is the sum of the scores for the three sections. Specifically, the score for the commercial section is the direct sum of the scores for the included items (0–1 point per item). The score for the technical section is the weighted sum of the scores for the included items, where each item score is a rank-weighted average of the ratings for its metrics (0–1 point per metric), with metrics ranked by importance. Detailed information on the items within each section is listed as follows: 1. Commercial section (3/3 points for 1A/1B) • Quality of bid response documents. • Efficiency of logistics and query systems. • Quality of after-sales service. 2. Technical section (67/77 points for 1A/1B) • CPU. Metrics: craftsmanship, number of cores, base frequency, size of L3 cache, Thermal Design Power. • Motherboard. Metrics: chipset, expansion slots, structure, BIOS, power supply. • Monitor. Metrics: screen size, resolution, brightness, panel type, ports. • Memory. Metrics: DDR generation, capacity, operating frequency, CAS latency. • Storage. Metrics: HDD/SSD, capacity, rotation speed (for HDD), interface, disk buffer. • Graphics. Metrics: integrated/discrete, craftsmanship, architecture, GPU frequency (for discrete graphics). The score for the j-th item is calculated as follows: \( \mathrm{Score}_j = w_j \cdot \frac{\sum_{i=1}^{n_j} r_j(i)\,\left(1 - \frac{i-1}{n_j}\right)}{\sum_{i=1}^{n_j} \left(1 - \frac{i-1}{n_j}\right)} \), where \( n_j \) is the number of metrics for the j-th item, \( r_j(i) \) is the rating (0–1 point) for the i-th metric of the j-th item, and \( w_j \) is the weight of the j-th item, predefined by domain experts and listed in Table 7. 3. Price section (30/20 points for 1A/1B) The lowest quotation among all the bids that meet the minimum technical requirements is defined as the Negotiated Base Price (NBP); the price section score for a certain bid is then the product of the price coefficient and the ratio of the NBP to its quotation. The price coefficients for the tendering batches 1A and 1B are 30 and 20, respectively. 5.2.2 The new bid evaluation method In the new bid evaluation method for the tendering batches 2A and 2B, we introduce benchmark scores from CpsMark to replace all items related to system performance in the original technical section, i.e., all items except the Monitor item.
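Before turning to the details of the new method, here is a minimal Python sketch of the original item-scoring rule of Section 5.2.1. The ratings and the item weight are hypothetical; actual weights come from Table 7, which is not reproduced here.

```python
def item_score(ratings, weight):
    """Original technical-section item score: ratings are ordered by metric
    importance (most important first) and combined with linearly decaying
    rank weights 1 - (i - 1)/n (1-based i), then scaled by the item weight."""
    n = len(ratings)
    rank_w = [1 - i / n for i in range(n)]  # 0-based i gives 1 - (i-1)/n for 1-based i
    weighted = sum(r * w for r, w in zip(ratings, rank_w))
    return weight * weighted / sum(rank_w)

# Hypothetical CPU item: five metric ratings (0-1 each); the item weight
# stands in for a Table 7 value, which is not reproduced here
print(round(item_score([0.9, 0.8, 1.0, 0.6, 0.7], weight=20), 2))  # 16.8
```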
The weight of the Monitor item and the weights of the commercial and price sections remain unchanged. To maintain a total score of 100 points, the benchmark score weights for the tendering batches 2A and 2B are set to 63 and 70, respectively. To calculate the absolute benchmark score from CpsMark, unlike the item weights within each section of the original bid evaluation method, the weights of the CA/CC modules are not predefined by domain experts from the bid evaluation committee; instead, they are assigned as the average of the survey results from the real end users of both departments. The weights of the CA/CC modules for the tendering batches 2A and 2B turn out to be 0.71/0.29 and 0.12/0.88, respectively. The absolute benchmark score from CpsMark for each tendering batch is then defined as follows: \( \mathrm{Score}_{cps} = \prod_{i=1}^{2} s_i^{\,w_i / \sum_{i=1}^{2} w_i} \), i.e., a weighted geometric mean, where \( w_i \) is the weight of the i-th module and \( s_i \) is the median score of the i-th module over 5 independent tests on a certain bidding product. To scale the absolute benchmark score from CpsMark so that it better reflects relative performance among the bidding products, we adopted a strategy similar to that of the price section. Specifically, the best absolute benchmark score among all the bids that meet the minimum technical requirements is defined as the Negotiated Maximum Performance (NMP); the final benchmark score for a certain bid is then the product of the benchmark score weight and the ratio of its absolute benchmark score to the NMP. Finally, the score for the new technical section is the direct sum of the final benchmark score and the score for the Monitor item. 5.3 Effects of introducing benchmark scores from CpsMark To evaluate the effects of introducing benchmark scores from CpsMark into the new bid evaluation method, we performed a comparative analysis of the one-year user experience, rated by the respective end users, of the winning bids purchased in the tendering batches 1A/1B and 2A/2B. 5.3.1 Evaluation protocols of user experience We first formulated explicit evaluation protocols for rating the user experience of office desktops in modern office scenarios. The ISO 9241 standard [32] of human–computer interaction defines usability as “the extent to which a product can be used by specific users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use”. We defined the user experience of the winning bids analogously to the usability defined in the ISO 9241 standard. Since, for all the bids that meet the minimum technical requirements, the effectiveness of the products in fulfilling the tasks specified by the tenders is guaranteed, we focused on the following two metrics: (1) Efficiency, i.e., the user-perceived time consumption for software and applications to achieve specified goals. The rating is scaled as “very efficient” (5 points), “somewhat efficient” (4 points), “neutral” (3 points), “somewhat inefficient” (2 points), or “very inefficient” (1 point). (2) Smoothness, i.e., the user-perceived overall smoothness in daily use of software or applications, including jank, launch speed, delay, and response to instructions. The rating is scaled as “very smooth” (5 points), “somewhat smooth” (4 points), “neutral” (3 points), “somewhat unsmooth” (2 points), or “very unsmooth” (1 point).
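Returning briefly to the scoring rule of Section 5.2.2, the sketch below computes the absolute benchmark score (a weighted geometric mean of the module medians) and the NMP-scaled final score. The module scores and the NMP value are invented for illustration; the module weights and batch weight follow the values reported above.

```python
def absolute_benchmark_score(module_scores, module_weights):
    """Weighted geometric mean of the CA/CC module scores (each the median
    over 5 independent tests), per the Score_cps definition in Section 5.2.2."""
    total_w = sum(module_weights)
    score = 1.0
    for s, w in zip(module_scores, module_weights):
        score *= s ** (w / total_w)
    return score

def final_benchmark_score(abs_score, nmp, score_weight):
    """Scale against the Negotiated Maximum Performance (NMP), then apply the
    batch's benchmark score weight (63 for batch 2A, 70 for batch 2B)."""
    return score_weight * abs_score / nmp

# Hypothetical bid in batch 2B (CA/CC module weights 0.12/0.88); the module
# scores and the NMP are invented for illustration
abs_s = absolute_benchmark_score([950.0, 1200.0], [0.12, 0.88])
print(round(final_benchmark_score(abs_s, nmp=1250.0, score_weight=70), 2))
```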
In terms of the rating items, we surveyed each department to identify the software and applications frequently used by most end users within one year after the procurement. We then assigned the items different weights according to the average hours of use over the entire department, as listed in Table 8. For each tendering batch, i.e., 1A, 2A, 1B, and 2B, we randomly invited 20 end users from the corresponding group of their department to independently rate the user experience of the desktop computers purchased in that batch. The questionnaires adopted for rating the user experience are similar to CSAT [34]. For each desktop computer, the total score for each user experience metric is the weighted sum of the metric ratings over all the items. 5.3.2 Evaluation results The distributions of user experience ratings for the winning bids from the four tendering batches are shown in Fig. 6. As the results show, for both user experience metrics, the ratings from all surveyed end users lie between 2.5 and 5 points. Specifically, the ratings for both user experience metrics of the winning bids from the tendering batches 1A/1B are mostly between 2.5 and 4 points, while the ratings from the tendering batches 2A/2B are mostly between 3 and 4.5 points, indicating that the user experience of the desktop computers selected by the new bid evaluation method improved to some extent. Table 9 shows descriptive statistics of the above user experience ratings and the average quotation for the desktops purchased in each tendering batch. For the tendering batches 1A/2A, the efficiency and smoothness ratings for the winning bids are 3.51/3.90 points and 3.23/3.69 points, an increase of 11.11% and 14.24%, respectively. For the tendering batches 1B/2B, the efficiency and smoothness ratings for the winning bids are 3.40/3.93 points and 3.53/3.96 points, an increase of 15.59% and 12.18%, respectively. Although these rating results demonstrate the effectiveness of CpsMark in identifying office desktops with better user experience under modern office-oriented scenarios, the analysis so far tells only part of the story of introducing benchmark scores from CpsMark into centralized procurement, since pricier bids generally tend to deliver better system performance, which would imply a much higher budget. To this end, we also consider the average quotation for the winning bids from each tendering batch, which is 5316/5562 CNY and 6465/6948 CNY for the tendering batches 1A/2A and 1B/2B, an increase of 4.63% and 7.47%, respectively. Note that in this paper, charges for other services, e.g., logistics and insurance, are excluded from the average quotation. The modest increase in the average quotation of the winning bids is thus accompanied by a considerably larger increase in user experience ratings. This result demonstrates that the new bid evaluation method based on benchmark scores from CpsMark can help authorities select bids with better user experience and higher cost-effectiveness in the centralized procurement of office desktops. 5.3.3 Statistical analysis In this case study, we randomly selected 20 end users from each tendering batch to keep the survey efficient and to minimize rating deviation due to the subjective evaluation of user experience.
Hence, we perform further statistical analysis to explore whether significant changes in user experience ratings hold for the whole populations of the tendering batches 2A/2B. The results of the significance tests are shown in Table 10. According to the Shapiro–Wilk test, normality is accepted for all the distributions of user experience ratings, which indicates that the user experience of the winning bids from each tendering batch is concentrated within a certain range. We then conduct a two-tailed F test to assess the homogeneity of variance between the user experience ratings from 1A and 2A, as well as from 1B and 2B. All the results accept the null hypothesis, which is possibly attributable to the similar responsibilities of employees within the same department. The results of the Student's t-test also indicate a significant change in user experience ratings within the whole population of the tendering batch 2B. Specifically, the p-value of the Student's t-test for the efficiency ratings from the tendering batches 1B/2B is just 0.0001, which suggests that a significant change in the user experience perceived by all the employees of the tendering batch 2B exists with high probability. A possible reason is that the system performance of the winning bids from the tendering batch 2B breaks through the requirement bottleneck of the routine tasks in department B. 5.3.4 User experience of items excluded from CpsMark Although we have seen significant improvements in the user experience of the winning bids selected by the new bid evaluation method, the rating items for user experience evaluation partly overlap with the workloads of CpsMark. Without loss of generality, we therefore conduct a comparative analysis to further validate the effectiveness of CpsMark in simulating the user experience of tested computer systems with respect to software and applications that are not included in its workloads. Specifically, for each rating item that is not adopted as a workload application of CpsMark, we collect and average its user experience metrics over the winning bids selected by the original and the new bid evaluation methods, respectively. The results are shown in Fig. 7 and Table 11. According to these results, under the workloads that are not included in CpsMark, the user experience of the office desktops selected by the new bid evaluation method also improves to varying degrees. For example, in terms of heavy workloads, the average ratings for efficiency and smoothness of MySQL increase by 22.95% and 26.56%, respectively. A similar trend of user experience improvement is also observed for more lightweight workloads, e.g., Power BI, Internet Explorer, and Lark. These results suggest that the workloads of CpsMark are sufficiently representative to simulate the user experience of tested computer systems perceived under a wide range of workloads. 6 Conclusions This paper presents CpsMark, a scenario-oriented benchmark system that quantitatively evaluates the overall performance of office desktops in centralized procurement. Addressing the challenges of benchmarking desktops under practical usage scenarios for centralized procurement, the workloads of CpsMark are designed to be scenario-oriented and to simulate the user experience of tested computer systems as perceived by end users working in modern office scenarios. The metrics testing and the scoring methodology are flexibly adjusted for each individual workload.
Extensive experiments on multiple real-world tested computer systems demonstrate the high sensitivity and repeatability of benchmark scores from CpsMark, compared to SYSmark 2018 and PCMark 10. From the perspective of end users, in a practical centralized procurement of office desktops, by incorporating benchmark scores from CpsMark into the original bid evaluation method and comparing the user experience ratings for the winning bids selected by the two bid evaluation methods, we also demonstrate the effectiveness of using CpsMark to simulate the user experience of tested systems in modern office scenarios for better evaluation of office desktop performance in centralized procurement. Our work provides a general blueprint for designing computer benchmarks for other usage scenarios and helps further explore the benefits of introducing benchmark scores into traditional bid evaluation methods for the centralized procurement of office desktops. In the future, we will focus on designing parallel workloads that contain more complex interactions and on incorporating other metrics, e.g., battery life or energy efficiency. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work was supported by the National Key R&D Program of China under grant number 2018YFF0212106. We are thankful to the purchasing manager in the centralized procurement and all the employees who participated in the user experience survey.
REFERENCES:
1. NORF U (2019)
2. PIEPER S (2007)
3. (2014)
4. (2018)
5. (2017)
6. (2022)
7. MARTIN A (2016)
8. (2022)
9. BUCEK J (2018)
10. WOO S (1995)
11. MCCALPIN J (1995)
12. MCVOY L (1996)
13. LU G (2015)
14. FELICIA F (2019)
15.
16. MITTAL S (2015)
17. WANG Y (2019)
18. VAGSTAD S (2000)
19. HUPPLER K (2009)
20.
21. GORDON U (2016)
22. CHEN Y (2012)
23. (2021)
24. CROLOTTE A (2009)
25. NGUYEN T (2015)
26. TAHERI S (2015)
27.
28. VOS T (2015)
29. RIVEST R (1992)
30. FLEMING P (1986)
31. BEDEIAN A (2000)
32. ISO 9241
33. (2022)
34. MITTAL V (2010)
|
10.1016_j.pecon.2017.03.003.txt
|
TITLE: Assessing the consistency of hotspot and hot-moment patterns of wildlife road mortality over time
AUTHORS:
- Lima Santos, Rodrigo Augusto
- Ascensão, Fernando
- Ribeiro, Marina Lopes
- Bager, Alex
- Santos-Reis, Margarida
- Aguiar, Ludmilla M.S.
ABSTRACT:
Spatial and temporal aggregation patterns of wildlife-vehicle collisions are recurrently used to inform where and when mitigation measures are most needed. The aim of this study is to assess if such aggregation patterns remain in the same locations and periods over time and at different spatial and temporal scales. We conducted biweekly surveys (n = 484) on 114 km of nine roads, searching for road casualties (n = 4422). Aggregations were searched for using different lengths of road sections (500, 1000, 2000 m) and time periods (fortnightly, monthly, bimonthly). Our results showed that hotspots and hot-moments are generally more consistent at larger temporal and spatial scales. We therefore suggest using longer road sections and longer time periods to implement mitigation measures in order to minimize uncertainty. We support this finding by showing that the proportional costs and benefits of mitigating roadkill aggregations are similar when using different spatial and temporal units.
BODY:
Introduction Roads have a variety of ecological effects on their surrounding environment, and one of the most studied is wildlife-vehicle collisions (WVC) ( Forman et al., 2003; Ree et al., 2015 ). Several researchers have demonstrated that roadkills are often spatially and temporally aggregated, hereafter referred to as Wildlife-Vehicle Aggregations (WVA). WVA are generally related to species' biological traits (e.g. mating), road features (e.g. traffic volume), the surrounding landscape or climate conditions ( Gunson et al., 2011; Malo et al., 2004; Smith-Patten and Patten, 2008 ). Therefore, WVA may indicate preferential targets (hotspots and hot-moments) for implementing mitigation measures ( Malo et al., 2004; Morelle et al., 2013; Ree et al., 2015 ). The identification of WVA is one of the approaches most used by researchers and decision makers to implement mortality mitigation on roads ( Santos et al., 2015 ). Mitigation measures must be planned carefully to ensure effectiveness, given the high cost of installation and maintenance ( Ree et al., 2015 ). Thus, it is necessary to determine the best spatial scale(s) at which putative predictors indicate locations of WVA ( Langen et al., 2007; Ree et al., 2015 ). Ideally, WVA should be spatially restricted in length, since short road sections can be mitigated by faunal passages and drift fencing more easily than WVA segments distributed over a broader extent of the road ( Langen et al., 2007 ). On the other hand, understanding the role of seasonality in road mortality allows the identification of possible WVA in certain periods (hot-moments), and decision makers can direct mitigation measures toward the periods of higher WVA, which reduces costs ( Sullivan et al., 2004 ). The aim of this study was to investigate whether the spatial and temporal patterns of WVA recur in the same locations and time periods for the different taxonomic groups. If WVA occur consistently in the same location and time period, i.e. do not change over time, mitigation measures applied therein will probably be more cost-effective ( Costa et al., 2015 ). Additionally, we evaluated how different road segment lengths or time periods affected the consistency of spatial and temporal WVA patterns. We consider that a higher correlation of WVA patterns between consecutive years indicates higher reliability in using such locations as mitigation targets. Hence, we evaluate how cost-benefit effectiveness could vary when targeting mitigation to short/long road sections or time periods. Cost-benefit analysis can be complex in road ecology ( Costa et al., 2015 ). Here, we adopt a simple approach in which we count the number of casualties that could have been prevented if road mitigation had been implemented in WVA (assuming full effectiveness). Materials and methods Study area We conducted the study in Brasília (Federal District), located in the Cerrado biome of Central Brazil. A total of 114 km pertaining to nine different roads were surveyed. More details of the study area, including weather conditions, traffic, roads, protected areas monitored and a map, are provided in Text 1 in Appendix 1. Data collection We conducted road surveys biweekly (two surveys/week) for 5 years, surveying all 114 km per campaign (i.e., all road types were surveyed equally), between April 2010 and March 2015, totaling 480 roadkill surveys. Two observers and one driver searched for roadkills from a vehicle traveling at ca. 50 km/h.
The observers recorded the location of carcasses using a hand-held GPS (5 m accuracy). Carcasses were removed after data collection to avoid pseudo-replication and recounting of carcasses. Domestic animals were not considered in further analyses. Data analyses WVC records were aggregated by class (amphibians, reptiles, birds, and mammals) and year, and separate datasets for the spatial and temporal information were created. For the spatial dataset, we aggregated the records by road segments of 500, 1000 and 2000 m length. The temporal dataset was aggregated using fortnightly, monthly and bimonthly time periods. We considered a year of survey as the time between April and March of the following year. Hereafter we refer to the section lengths and time periods as units. For each class and year of survey we assumed that the observed number of roadkills per unit would follow a random Poisson distribution with a mean ( λ ) equal to the total number of roadkills divided by the total number of units. The probability of any unit having x collisions was therefore \( p(x) = \frac{\lambda^{x}}{x!}\, e^{-\lambda} \). A mean value ( λ ) was calculated for each taxon, considering roadkills per year. Because λ varied across taxa, the minimum count defining a WVA also varied: for 500-m road sections, three or more collisions defined a WVA for amphibians, four or more for reptiles, seven or more for birds, and three or more for mammals. These minimum values for WVA detection increased for the longer road sections (1000 m and 2000 m). For hot-moments, fortnightly periods with five or more collisions defined a WVA for amphibians, thirteen or more for reptiles, and thirty-three or more for birds. These minimum values likewise increased for the longer time units (monthly and bimonthly periods). We considered a unit to be a WVA when its roadkill count was improbably high under this Poisson model, i.e., when the cumulative probability of observing fewer roadkills, P(X < x), exceeded 0.95. We used the false discovery rate to reduce the likelihood of detecting false WVA (Type I error) due to multiple testing ( Benjamini and Hochberg, 1995 ). We used the same approach as Malo et al. (2004), as it permits easy comparison among sampling schedules using a fixed spatial scale. Moreover, this method seems to perform better than others at detecting fatality hotspots ( Gomes et al., 2009 ). We then transformed the consecutive units into a binary variable of presence/absence of WVA. Hence, for each year there is a hot-moment and a hotspot evaluation for each taxonomic class. The similarity of WVA patterns over time was assessed with correlation tests between consecutive years using the Phi coefficient ( r Phi ) ( Zar, 1999 ). The Phi coefficient measures the degree of association between two binary variables, and its interpretation is similar to that of common correlation coefficients. This process was performed for each aggregation unit (spatial and temporal). Finally, the cost-benefit analysis was performed for each taxon, year and unit, by relating the proportion of road sections or time periods classified as WVA to the proportion of casualties potentially avoided if those WVA were mitigated. All calculations and plots were performed using R software ( R Core Team, 2015 ) and the R packages Hmisc , vcd , cowplot and ggplot .
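The original analysis was implemented in R; the following is an illustrative re-expression in Python of the hotspot test just described, with hypothetical counts. The per-unit p-value is the upper-tail Poisson probability of the observed count, and the Benjamini-Hochberg correction uses statsmodels.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def find_hotspots(counts, alpha=0.05):
    """Flag road sections whose roadkill count is improbably high under a
    homogeneous Poisson model (the Malo et al. 2004-style test), controlling
    the false discovery rate across sections with Benjamini-Hochberg."""
    counts = np.asarray(counts)
    lam = counts.mean()  # lambda: total roadkills / number of units
    pvals = stats.poisson.sf(counts - 1, lam)  # P(X >= observed count)
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return reject

# Hypothetical counts for ten 500-m sections in one survey year
print(find_hotspots(np.array([0, 1, 0, 4, 0, 2, 0, 0, 7, 1])))
```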
Results We recorded 4422 non-domestic road-killed animals, of which 5% were amphibians ( n = 274, 9 species), 15% reptiles ( n = 690, 34 species), 71% birds ( n = 3009, 91 species), and 9% mammals ( n = 448, 24 species) (Tables S1 and S2 in Appendix 1). We detected several WVA in all classes for all spatial and temporal units considered, except for mammal hot-moments ( Fig. 1 A and B). Regarding the spatial dataset, when using units of 500 m and 1000 m, most WVA were identified only once in each class ( Fig. 1 A). However, this pattern was not consistent across the classes. For example, when using a unit of 1000 m, we detected only 4% of sections that were WVA for amphibians in more than one year, while for birds this proportion rose to 14%. Nevertheless, we found overall low correlation values ( r Phi < 0.5) between consecutive years in WVA patterns for all classes at these smaller unit lengths ( Fig. 2 A). Conversely, when using the longer unit length (2000 m), the number of sections classified as WVA more than once increased, e.g. to 9% for amphibians and 23% for birds. Likewise, the similarity in WVA patterns was higher, particularly for amphibians and reptiles, with values of r Phi well above 0.5 ( Fig. 2 A and Figure S1 in Appendix 1). Surprisingly, the WVA sections shared by all taxa (km 10 and 38 for roads split into 2000-m sections, Fig. 2 A) were located on four-lane roads (Figure S2 in Appendix 1). The cost-benefit evaluation suggests a similar pattern across unit lengths within each class. For example, mitigating 5–10% of the road network could potentially avoid 20–50% of casualties for amphibians, reptiles or mammals. In fact, for these classes, when using a unit length of 2000 m, the proportion of casualties potentially avoided (benefit) was generally fourfold the proportion of road mitigated (cost), while for birds the benefit was twofold ( Fig. 3 A). Hence, planning mitigation using larger road sections is apparently more effective, as it incorporates more WVA from different years and does not represent a decrease in the cost-benefit relation. Regarding the temporal dataset, we found higher similarity in WVA patterns between consecutive years when using all three time units, except for mammals, whose roadkills were more evenly distributed throughout the year ( Fig. 1 B). Higher correlations were detected when using longer time units (bimonthly), particularly for amphibians and birds (median r Phi > 0.75) ( Fig. 2 B). The periods of highest roadkill were between October and November for amphibians; between February and May (with peaks in December and January) for reptiles; and between October and March for birds. These aggregation periods were consistently highlighted across the different units ( Fig. 2 B and Figure S3 in Appendix 1). In general, using longer time units to detect WVA was also as effective as using shorter units. For example, applying mitigation for about two and a half months (20% of the year) would potentially avoid ca. 50–75% of amphibian roadkills. For reptiles, the identification of WVA using the longer time unit (bimonthly) highlighted 2–6 months of higher mortality, which is probably related to the diversity of species included in this class, which have different peaks of movement, and therefore mortality, throughout the year (e.g. turtles and lizards). In all cases, the proportion of casualties potentially avoided was at least twofold the proportion of the year under mitigation ( Fig. 3 B).
Therefore, the use of longer time periods is preferable, as it potentially includes WVA from different years and again does not represent a decrease in the cost-benefit relation. Discussion In this study we aimed to assess the consistency of hotspots and hot-moments over time, i.e., we questioned whether a significant proportion of WVA occur in the same sites/periods, and whether the pattern is maintained at different spatial and temporal scales. Our results showed that WVA patterns are more consistent when using larger spatial and temporal units. Moreover, although intuitively one may think that mitigation plans should target well-defined, short road sections or time periods to maximize cost-effectiveness, we show that the proportional costs and benefits are similar regardless of the spatial and temporal units used to detect WVA. Although more resources are required when mitigating longer sections or time periods, the number of collisions potentially avoided is also higher. These patterns are well illustrated by the numerous sections classified as WVA when using smaller spatial or time units, many of which are not consistent across the years. Hence, larger units may guarantee more reliable information on where and when to allocate mitigation measures. Importantly, within each WVA, mitigation should cover the full extent of the road section or period, as roadkills may occur at different points or moments in different years. Mitigation measures focused on single point locations (e.g., culverts) are unlikely to be sufficient to maintain the long-term viability of populations ( Patrick et al., 2012 ). We suggest that mitigation should favor broad-scale measures deployed over longer road sections and time periods, although these are more expensive to build and maintain ( Beaudry et al., 2008; Patrick et al., 2012 ). A few measures can be implemented at large scales, such as the reduction of speed limits ( Hobday and Minstrell, 2008 ), velocity reducers, and drift fences connected to faunal underpasses ( Ascensão et al., 2013; Ree et al., 2015 ). Different strategies can be adopted, depending on the financial resources available and the target species. For instance, many small underground crossings can be implemented if turtles are the target species ( Beaudry et al., 2008 ). Also, our results highlighted the four-lane sections as priority sections to mitigate, suggesting that the “true” WVA partly reflect high traffic, since these road segments show the highest traffic volumes in our study area. The temporal analyses revealed a strong association of the WVA of amphibians, reptiles and birds with the rainy season (October to March in our study area). This period corresponds to the occurrence of migratory events and/or the breeding season for many of the species recorded here ( Sick, 2001; Coelho et al., 2012 ). Previous works have also reported increased mortality rates during warm and wet seasons, while dry or cold seasons generally present lower values ( Coelho et al., 2012; Langen et al., 2007; Morelle et al., 2013 ). Identifying hot-moments of WVC using larger temporal periods may provide important information for implementing short-term mitigation measures such as temporary road closures or speed reduction ( Hobday and Minstrell, 2008; Sullivan et al., 2004 ). The lack of aggregation periods for mammals may stem from the fact that the dataset was composed mostly of highly mobile and generalist species.
These traits lead to a more uniform distribution of roadkills, which minimizes the chances of WVA occurring. It should be noted that both spatial and temporal variation in roadkills may be related to differences in vehicle traffic during the year or to fluctuations in population abundance ( Coelho et al., 2012; Smith-Patten and Patten, 2008 ). Unfortunately, to our knowledge, such data do not exist for our study area. Also, we worked at the taxonomic level of class, thereby precluding more specific analyses. At the species level, such patterns would probably be more stable over time. However, this would require a large volume of roadkill data for single species, which was not feasible with our dataset. Finally, we chose not to analyze scales greater than 2000 m, as the costs of implementing mitigation measures would become excessive. Conflicts of interest The authors declare no conflicts of interest. Appendix A Supplementary data Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.pecon.2017.03.003 .
REFERENCES:
1. ASCENSAO F (2013)
2. BEAUDRY F (2008)
3. BENJAMINI Y (1995)
4. COELHO I (2012)
5. COSTA A (2015)
6. FORMAN R (2003)
7. GOMES L (2009)
8. GUNSON K (2011)
9. HOBDAY A (2008)
10. LANGEN T (2007)
11. MALO J (2004)
12. MORELLE K (2013)
13. PATRICK D (2012)
14. R CORE TEAM (2015)
15. REE R (2015)
16. SANTOS S (2015)
17. SICK H (2001)
18. SMITHPATTEN B (2008)
19. SULLIVAN T (2004)
20. ZAR J (1999)
|
10.1016_j.phoj.2024.11.065.txt
|
TITLE: Glutamine mouthwash for preventing methotrexate-induced mucositis in children with acute lymphoblastic leukemia: A randomized cross-over trial
AUTHORS:
- Sankaran, Siva
- Dewan, Pooja
- Malhotra, Rajeev Kumar
- Kashyap, Bineeta
ABSTRACT: No abstract available
BODY:
Background and Aim - This study aimed to evaluate the effectiveness of glutamine mouthwash in preventing severe oral mucositis in children with Acute Lymphoblastic Leukemia (ALL) undergoing High Dose Methotrexate (HDMTX) therapy. It also assessed the efficacy and safety of oral glutamine in reducing painful mucositis, the duration of mucositis, dysphagia, and the incidence of oral infections. Method - Eighteen children with ALL were randomised to receive two consecutive courses of HDMTX with glutamine and two without, or vice versa. Glutamine was administered as an oral suspension, taken twice daily, beginning one day before HDMTX and continued for up to 7 days. All participants followed a standard oral hygiene protocol (SOHP) with supervised brushing, chlorhexidine mouthwash, and clotrimazole mouth paint application twice daily. The primary outcome was the proportion of courses with severe mucositis (grade III or IV), while secondary outcomes included the incidence, duration, and pain of mucositis, the need for rescue analgesia, and the incidence of oral infections. Results - Among the 64 HDMTX courses analysed, severe mucositis was significantly less frequent in the glutamine group (3.1% vs 44%, P < 0.001), although the overall incidence of mucositis was similar between groups. Glutamine significantly delayed the onset of mucositis (5 days vs 2 days, P < 0.001) and shortened its duration (2 days vs 5 days, P < 0.001). The median (IQR) pain scores were significantly lower in the glutamine group [4.5 (0, 6) vs 8 (5.25, 8), P < 0.001]. The duration of dysphagia was also significantly shorter in the glutamine group [2 (0, 4) days vs 6 (3, 6) days, P < 0.001]. The incidence of microbial growth was similar between the groups (30.5% vs 19.3%, P = 0.528). Conclusions - Glutamine mouthwash effectively reduces the severity and duration of mucositis in children with ALL undergoing HDMTX therapy, offering a promising adjunctive treatment.
REFERENCES:
No references available
|
10.1016_j.heliyon.2023.e19983.txt
|
TITLE: Indicators to measure implementation and sustainability of nursing best practice guidelines: A mixed methods analysis
AUTHORS:
- Aloisio, Laura D.
- Graham, Nicole
- Grinspun, Doris
- Naik, Shanoja
- Coughlin, Mary
- Medeiros, Christina
- McConnell, Heather
- Sales, Anne
- McNeill, Susan
- Santos, Wilmer J.
- Squires, Janet E.
ABSTRACT:
Background
The use of best practice guidelines (BPGs) has the potential to decrease the gap between best evidence and nursing and healthcare practices. We conducted an exploratory mixed method study to identify strategies, processes, and indicators relevant to the implementation and sustainability of two Registered Nurses’ Association of Ontario (RNAO) BPGs at Best Practice Spotlight Organizations® (BPSOs).
Methods
Our study had four phases. In Phase 1, we triangulated two qualitative studies: a) secondary analysis of 126 narrative reports detailing implementation progress from 21 BPSOs spanning four sectors to identify strategies and processes used to support the implementation and sustainability of BPGs and b) interviews with 25 guideline implementers to identify additional strategies and processes. In Phase 2, we evaluated correlations between strategies and processes identified from the narrative reports and one process and one outcome indicator for each guideline. In Phase 3, the results from Phases 1 and 2 informed indicator development, led by an expert panel. In Phase 4, the indicators were assessed internally by RNAO staff and externally by Ontario Health Teams. A survey was used to validate proposed indicators to determine relevance, feasibility, readability, and usability with knowledge users and BPSO leaders.
Results
Triangulation of the two qualitative studies revealed 46 codes describing the implementation and sustainability of BPGs, classified into eight overarching themes: Stakeholder Engagement, Practice Interventions, Capacity Building, Evidence-Based Culture, Leadership, Evaluation & Monitoring, Communication, and Governance. A total of 28 structure, process, or outcome indicators were developed. According to the validation survey, end users and BPSO leaders found the indicators acceptable.
Conclusions
Many processes and strategies can influence the implementation and sustainability of BPGs at BPSOs. We have developed indicators that can help BPSOs promote evidence-informed practice implementation of BPGs.
BODY:
What is already known • Best practice guidelines (BPGs) can help decrease the gap between research evidence and nursing/clinical practice. • Understanding the strategies, processes, and indicators that matter for the implementation and sustainability of BPGs is a key interest of health care organizations. • There is a knowledge gap regarding the strategies, processes and indicators related to the implementation and sustainability of BPGs at the Registered Nurses' Association of Ontario's Best Practice Spotlight Organizations® (BPSOs). What this paper adds • This study begins the seminal work of measuring indicators related to the implementation of nursing BPGs. • We determined 28 structure, process, or outcome indicators that will inform future implementation and sustainability of BPGs at BPSOs. • We identified eight overarching themes (46 codes) that describe the strategies and processes related to implementation and sustainability of BPGs at 21 BPSOs; 16 of the identified strategies and/or processes were statistically significantly correlated with established process and outcome indicators measuring the implementation success of the Prevention of Falls and/or Person- and Family-Centred Care guidelines. 1 Background The use of research evidence is essential for maintaining professional nursing standards and providing high quality care [ 1 , 2 ]. However, barriers continue to hinder the implementation and sustainability of evidence-based nursing practices [ 3–5 ], contributing to a knowledge-to-practice gap [ 6–8 ]. Furthermore, there is inconsistent implementation and sustainability of research evidence in clinical practice across health care disciplines and sectors [ 9 , 10 ]. Best practice guidelines (BPGs) are systematically developed evidence-based documents that summarize research and provide recommendations to health care providers, leaders and policy makers, patients and families regarding a clinical or health care topic [ 11 ]. BPGs can be used to bridge the gap between research evidence and clinical practice by synthesizing evidence for health professionals [ 4 , 12 ]. BPGs have become a common feature of health service organizations internationally and are of interest worldwide as a tool to facilitate more consistent, effective, and efficient practice [ 12 ]. Recent synthesis studies focused on the implementation of clinical practice guidelines reported that these tools can have a positive impact on providers' knowledge, behaviour and patient outcomes in the context of interdisciplinary and team-based care [ 13 ], arthritis, diabetes, colorectal cancer and heart failure care [ 14 ], cancer care [ 15 ], nursing care [ 16 ], broadly across 16 clinical topics [ 17 ], and health systems in low- and middle-income countries [ 18 ]. These synthesis studies summarized the implementation barriers and facilitators [ 18 ], the dissemination and implementation approaches and strategies used during guideline implementation [ 13–18 ], and the outcomes (changes in attitude, knowledge, or behaviour of health care providers, or improvements in patient outcomes) resulting from the application of implementation strategies [ 15 , 16 , 18 ]. Peters and colleagues [ 17 ] identified 11 theories and frameworks that were used in 25 studies in planning their implementation approach.
One of the frameworks listed was the Knowledge-to-Action Framework [ 19 ], which informed the development of the Registered Nurses' Association of Ontario (RNAO)'s Implementation Toolkit, which is composed of documents used by BPSOs to guide their implementation efforts [ 20 ]. None of the reviews evaluated or discussed the presence of indicators relevant to the implementation strategies used during guideline implementation [ 13–18 ]. Further, many authors advocated for evaluating which context attributes, implementation strategies, and processes are important for the implementation and sustainability of BPGs [ 4 , 21–24 ]. In 1999, the RNAO launched the Best Practice Guideline program and has since developed 49 clinical, system, and work environment BPGs. Among other innovative strategies, the Best Practice Spotlight Organization® (BPSO) program, launched in 2003, supports the creation of an evidence-based culture through the systematic implementation of multiple BPGs [ 4 ]. Three distinct and interrelated levels are targeted for BPG implementation: micro-level , which refers to individual health professionals; meso-level , which refers to organizations; and macro-level , which refers to health systems as a whole [ 4 ]. A key implementation strategy at the meso-level is the opportunity for organizations (e.g., hospitals) across the globe to partner with the RNAO to become BPSOs. BPSOs gain access to BPGs and implementation support from RNAO (e.g., training sessions and ongoing expert consultation on guideline dissemination, uptake, implementation, evaluation and sustainability) [ 25 ]. Indicators are widely used in many different fields. They are useful in highlighting problems, identifying trends, and contributing to priority setting, policy development and the evaluation and monitoring of progress [ 26 ]. Implementation strategies and outcomes can be measured by using administrative data and indicators [ 27 ]. Indicators have a long tradition in measuring the quality of health care [ 28 ]; however, there has been limited attention to indicator development and evaluation in implementation science [ 27 ]. There is an increasing interest by organizations and regulatory bodies in gaining a better understanding of the indicators of evidence-based practice uptake in nursing. For instance, there are systems established in the United States that provide a platform for comparing nursing-sensitive quality indicator data to improve patient outcomes, such as the American Nurses Association's National Database of Nursing Quality Indicators (NDNQI) [ 29 ], the California Nursing Outcomes Coalition [ 30 ], the Military Nursing Outcome Database (MilNOD) [ 31 ], and the Veterans Administration Nursing Outcomes Database [ 32 ]. The largest of these databases, the NDNQI, is maintained through survey data from registered nurses in over 2000 healthcare settings in the United States and captures nursing-sensitive structure, process, and outcome measures related to quality indicators and patient outcomes [ 33 ]. In Canada, there are similar efforts to collect nursing-sensitive indicators regarding quality (e.g., Continuing Care Meta-data from the Canadian Institute of Health Information [ 34 ]; Graham, 1998 [ 35 ]). In 2012, the RNAO initiated the Nursing Quality Indicators for Reporting and Evaluation® (NQuIRE®) database, a seminal quality improvement initiative that hosts a database of nursing-sensitive quality indicators derived from recommendations in the RNAO's BPGs.
The goals of this database are to enable evaluation of BPG implementation based on quality-of-care and patient outcome indicators and to demonstrate how nursing BPGs are valuable to patient, organizational, and health system performance. The NQuIRE® database provides a platform for the development of indicators to support BPG implementation and sustainability in BPSOs. The RNAO's MyBPSO qualitative database is another key component of BPSO evaluation. BPSOs narrate reflections on their progress toward deliverables in MyBPSO reports. These include developing champions' capacity to drive change; gap analysis between current practices and BPG recommendations; monitoring, evaluating, and disseminating the impact of BPG implementation; and sustainability of practice changes. MyBPSO reports provide opportunities for coaching by RNAO expert staff and for rapid learning cycles by organization leaders. We used MyBPSO reports in this study to facilitate a better understanding of evidence-based nursing-sensitive quality measures and context indicators that can best predict implementation and sustainability of BPGs, and of how these indicators can be operationalized. In this paper, we present the findings from a mixed-methods, multi-phased project that led to the development and operationalization of indicators of implementation and sustainability of BPGs to improve the use of RNAO's BPGs by BPSOs. These indicators could potentially apply across sectors and types of BPG but were developed by an expert panel with consideration that guideline implementation and sustainability can vary across sectors or the type of BPG. 2 Methods We conducted a four-phased exploratory mixed method study [ 36 ]. In exploratory mixed method studies, the results of the first method (qualitative) are used to help develop or inform the second method (quantitative) [ 36 ]. In our study, we used qualitative methods to identify descriptions of the implementation process from MyBPSO reports and knowledge user interviews and then integrated these qualitative descriptions with the quantitative indicators collected from NQuIRE® to develop indicators of implementation. A detailed description of the data collection and data analysis steps is provided in Table 1 . Data collection occurred between August 2018 and December 2020. We followed Lee and colleagues' [ 37 ] reporting guideline for mixed methods studies in presenting our methods and results (Supplemental Table 1). 2.1 Phase 0: organization recruitment and selection A delegate from the RNAO contacted representatives of BPSOs via email or telephone regarding their participation in our study if the organization had implemented at least one of the two selected BPGs of interest within the last 10 years. The two BPGs of interest were selected to include one guideline on a topic that is primarily clinically related, Preventing Falls and Reducing Injuries from Falls (Falls guideline) [ 38 ], and one guideline that is primarily relationship-oriented, the Person- and Family-Centred Care guideline [ 39 ]. BPSOs interested in participating in the study contacted the study team.
BPSOs were selected based on multiple criteria to ensure that the sample included: i) two organizations from each sector (acute care, home care, long-term care, and public health); ii) both high- and low-performing organizations with respect to their implementation success (defined below); and iii) designate and pre-designate organizations (designate organizations have attained BPSO designation status, while pre-designate organizations are in the three-year period during which they aim to achieve designation). Implementation success was based on the indicators identified as having the highest quality of data reporting in the NQuIRE® data across BPSOs. BPSOs submit aggregated, de-identified data monthly for selected process and outcome quality indicators for each clinical BPG implemented. One process and one outcome indicator for each BPG were considered. For the Falls guideline, "falls risk assessment on new admission" and "falls rate" were the most frequently reported process and outcome indicators, respectively. Falls risk assessment on new admission was measured as the percentage of newly admitted patients for whom a falls risk assessment was completed on admission using a valid and reliable fall risk assessment tool. Falls rate was defined as the ratio of the total number of falls to the total number of patient days/visits, expressed per 1000 patient-care days/visits. For the Person- and Family-Centred Care guideline, "person- and family-centred plan of care" was the process indicator, defined as the percentage of persons participating in developing their personalized plan of care; "rate of complaints received from the person receiving care" was the outcome indicator, defined as the ratio of the number of complaints received from persons receiving care to the total number of care days/visits, expressed per 1000 patient-care days/visits. BPSOs were included if they reported data for the specified process and outcome indicators for at least 12 consecutive months for at least one of the two BPGs examined in this study. 2.2 Phase 1: data triangulation of qualitative studies 2.2.1 Secondary analysis of MyBPSO reports Staff at each BPSO provide annual or bi-annual reports on their implementation and sustainability efforts in the MyBPSO database. We performed a secondary content analysis of the qualitative MyBPSO reports from the selected BPSOs [ 40 ]. The analysis was completed in NVivo 10 [ 41 ]. Data were analyzed independently by research team members trained in qualitative analysis (NG, MC, JES) using inductive qualitative thematic content analysis [ 42 , 43 ]. Our inductive analysis occurred in three systematic steps: (1) selection of utterances related to implementation and sustainability, (2) coding of these utterances, and (3) categorizing of the codes into higher-level themes of implementation and sustainability. In the first step, each report was sequentially reviewed by two research assistants (NG and MC), and utterances reflecting BPG implementation and sustainability strategies and processes were highlighted. In step two, the utterances were coded: each utterance was assigned a code and given an operational definition. Reports were coded independently by two research assistants (MC and NG), with weekly consensus meetings to resolve any conflicts in coding. Excerpts of the reports were coded under two criteria: i) the strategy or process being used (e.g., Resources and Tools for Staff); and ii) the type of guideline (i.e., Falls guideline, Person- and Family-Centred Care guideline, or BPSO activity in general).
This process was continued for each report from each sector. In step 3, we categorized codes into broader themes based on similarities among the strategies and processes of implementation and sustainability. This process was guided by end-user feedback and expert opinion, and each theme was given an operational definition. We examined the frequency of each code and theme within a site (i.e., at the organizational level). We also compared the frequency of codes and themes at the level of the health care sector (acute care, home care, long-term care, and public health) and between the two BPGs. 2.2.2 Interviews with implementers We interviewed individuals across eight BPSOs (for the selection process of these eight organizations, see above - Phase 0: Organization Recruitment and Selection). We aimed to elicit views from staff in all four health sectors: acute care, long-term care, home care, and public health. In qualitative research, there are no hard rules about sample size; while 6–8 participants often suffice for a homogeneous sample, 12–20 are commonly needed when trying to achieve maximum variation [ 44 ]. Based on findings from our secondary analysis of MyBPSO reports, we anticipated that views within a sector would be homogeneous. We therefore aimed for 6–8 interviews per sector (24–32 in total across the four sectors) and interviewed across eight BPSOs, two per health sector. Interviewees were selected first through purposeful and convenience sampling, and second through snowball sampling, as follows. A delegate from the RNAO emailed or called the implementation lead of eight purposefully chosen BPSOs from the 21 organizations in Phase 1, to recruit a range of individuals who were involved in BPG implementation (e.g., the leadership team, implementation coaches, champions, front-line staff) and willing to participate in interviews. The implementation leads were asked to participate in the interviews themselves or to refer other individuals from their organization involved in BPG implementation (snowball sampling). Interested individuals contacted the research team (MC and LA). Semi-structured, theory-informed interviews were conducted to elicit tacit knowledge about the strategies and processes used to support the implementation and sustainability of the two chosen BPGs, and participants' perceptions of whether the implementation strategies were effective and why. The interview guide was developed with input from all members of the research team and was informed by the Tailored Interventions for Chronic Diseases Checklist [ 45 ]. We used this checklist because it is a comprehensive, integrated checklist of the determinants of implementation success, developed through the synthesis of 12 checklists derived from theories and frameworks well utilized in implementation science (e.g., the Consolidated Framework for Implementation Research [ 46 ]) [ 45 ].
The interview guide covered the following domains: 1) guideline factors (e.g., quality of evidence and feasibility and accessibility of clinical interventions); 2) individual health professional factors (e.g., professional knowledge and expertise); 3) patient factors (e.g., patient knowledge and beliefs); 4) professional interactions (e.g., norms and individual mindsets); 5) incentives and resources (e.g., fall rates and patient and family satisfaction surveys); 6) capacity for organizational change (e.g., style of leadership and networks); and 7) social, political, and legal factors (e.g., monitoring and feedback and legislation). We conducted 30–45 min telephone interviews with participants at a time most convenient for them. Interviews were conducted by three research team members (LA, MC, and NG). All interviewers were graduate-level prepared nurses with training and experience in conducting qualitative research. They had no prior relationships with any of the interviewees and were independent of both the RNAO and all eight BPSOs. Digital recordings of the interviews were transcribed verbatim by a professional transcriptionist weekly and verified by the interviewers (LA, MC, and NG) prior to analysis. We followed the same coding framework as in our secondary analysis of MyBPSO reports (see Table 1 ). The codes derived through thematic analysis of the MyBPSO reports were used in the analysis of the interviews; analysis was conducted at the organizational level and according to the guideline that was implemented. New codes identified during the analysis of the interviews were added to the existing list of codes. The interviews were coded independently by two research assistants (NG and LA), with weekly consensus meetings to resolve any conflicts in coding. Approximately 4–6 interviews were conducted per sector, until no new codes emerged. No follow-up interviews for clarity or confirmation of findings were required. We examined the frequency of the codes across the 25 interviews (i.e., frequency was examined at the level of the interview, not within interviews). The relative occurrence rate of each code was calculated by dividing the number of interviews in which the code appeared by the total number of interviews conducted. The codes were ranked based on frequency of occurrence within each BPSO, and the ranks were then averaged across the eight BPSOs. For example, if the code "Champions" was ranked first in one BPSO but fourth in another, the average rank for that code was 2.5. Next, a two-way correlation analysis was conducted between code frequencies in the interviews and in the MyBPSO reports. 2.2.3 Phase 2: correlations between NQuIRE® data and codes from MyBPSO reports The codes found in the MyBPSO reports (qualitative data) were analyzed with respect to the NQuIRE® data (quantitative data) to determine correlations between implementation strategies and quality indicators. We did not evaluate correlations between interview data and NQuIRE® data because the interview data were not separated by guideline, which would be required for that comparison. We calculated descriptive statistics (including percentages, medians, and means) for each identified theme and code based on their occurrence within the thematic analysis of the MyBPSO reports for the two BPGs and each BPSO. We developed scores based on the relative improvement of time-series data collected on quality (process and outcome) indicators from NQuIRE® for each BPG.
For example, for the Falls BPG, we assessed whether there was relative improvement in "falls risk assessment on new admission" and "falls rate" before and after the Falls BPG was implemented. We constructed a frequency table comparing the number of occurrences of codes within the thematic analysis with the scores developed from the NQuIRE® performance evaluation of BPSOs. Chi-square analyses were used to evaluate statistically significant associations between the presence of strategies that promote guideline implementation and sustainability (codes based on MyBPSO reports) and the process and outcome quality indicators (scores based on NQuIRE® data); an illustrative sketch of this comparison is provided further below. 2.2.4 Phase 3: indicator development To develop indicators, we sought the tacit knowledge of an expert panel composed of RNAO staff members and end users spanning the same four sectors as the MyBPSO reports analysis. The RNAO staff members on the expert panel included three senior managers, the Chief Executive Officer, and a senior data scientist. Ten end users from the sectors were also panel members, including senior members of BPSOs such as champion leaders and members of the BPSOs' senior management teams. The panel met in person on two occasions to review and discuss the draft indicators, provide qualitative feedback on their wording, and make revisions. Further revisions to the indicators were also made virtually by email between and following the in-person meetings. To develop indicators, the expert panel considered whether the codes were: 1) measurable; 2) significantly correlated with NQuIRE® data; 3) frequently mentioned in the interviews and reports; and 4) suitable or feasible for indicator development. To ensure that all eight themes were represented, the panel selected one code per theme for indicator development, leading to a total of eight codes for which indicators were developed. The expert panel developed one to three indicators per code, with indicators categorized as structure, process, or outcome. 2.2.5 Phase 4: content validation of indicators After an initial draft of indicators was formulated, the expert panel met with Ontario Health Team leaders from BPSOs to receive feedback. After discussing the indicators with their respective teams, the same Ontario Health Team leaders reconvened to re-evaluate the indicators based on the feedback gathered from those discussions (i.e., team feedback). The expert panel revised the indicators again based on this feedback, after which the indicators were validated internally at the RNAO ( n = 4) and externally via end-user surveys ( n = 12). All participants completed the same online survey. The RNAO participants were members of senior management who were also members of the research team. End users held a variety of roles, including management, professional practice leaders, and educators. For end users, the survey was sent by an RNAO delegate to a subset of 35 BPSOs in a range of sectors in Ontario and internationally. Each participant in the internal and external validation exercise was asked to provide demographic information (see Supplemental Table 3) and to rate each of the indicators against four criteria (relevance, feasibility, readability, usability) on a six-point scale (strongly disagree (1), moderately disagree (2), mildly disagree (3), mildly agree (4), moderately agree (5), strongly agree (6)); for each indicator, a mean rating was calculated per criterion.
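Returning to the Phase 2 analysis referenced above, the following is a minimal, illustrative R sketch of the comparison between code presence and NQuIRE®-based improvement scores. All data are hypothetical, R is used purely for illustration, and the Fisher's exact fallback is our assumption for small expected cell counts, not a step reported by the study team.

set.seed(7)
n_bpso <- 21

# Hypothetical binary indicators: 1 = code observed in a BPSO's reports,
# and 1 = relative improvement in the quality indicator after implementation
code_present <- rbinom(n_bpso, 1, 0.6)
improved     <- rbinom(n_bpso, 1, 0.5)

# Frequency table of code presence against improvement score
tab <- table(code_present, improved)
print(tab)

# Chi-square test of association between strategy presence and improvement;
# with only 21 organizations, expected cell counts can be small, so
# Fisher's exact test is a common fallback
chisq.test(tab)
fisher.test(tab)

A test of this form would be repeated for each code, matching the code-by-code significance reported in Table 4.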
Following validation, the expert panel performed a final revision of the indicators. 2.2.6 Ethical approval and informed consent We received ethical approval from the University of Ottawa Ethics Board (file number: H09-17-13), The Ottawa Hospital Research Institute Ethics Board (file number: 20180366-01H), and the Toronto Public Health Research Ethics Board (file number: 2018-12). The data used in this study were provided by the RNAO following consent from each BPSO to use their MyBPSO reports and their data from the NQuIRE® database. We obtained informed consent from the 25 interview participants. 3 Results 3.1 Data sources The sample of data used in the analysis of the reports and interviews is summarized in Table 2 . The sample of 21 BPSOs was distributed across four sectors in Ontario: acute care ( n = 6, 29%), home care ( n = 3, 14%), long-term care ( n = 8, 38%), and public health units ( n = 4, 19%). In total, 14 organizations in the sample (67%) implemented the Falls guideline, 12 organizations (57%) implemented the Person- and Family-Centred Care guideline, and 8 organizations (38%) implemented both guidelines. The average number of MyBPSO reports per organization was six, with 126 reports analyzed in total. Interviews were conducted with individuals from eight BPSOs across the same four sectors in Ontario (two organizations per sector). In total, 29 individuals expressed interest in participating, but only 25 interviews across the eight organizations were completed: acute care ( n = 5, 20%), home care ( n = 7, 27%), long-term care ( n = 5, 20%), and public health ( n = 8, 32%). 3.2 Phase 1 results: codes and themes from MyBPSO reports and interviews Across the reports and interviews, a total of 47 codes of implementation and sustainability of BPGs were identified and classified into eight overarching themes ( Table 3 ). The number of codes per theme ranges from two ( Governance theme) to 11 ( Stakeholder Engagement theme). Codes were categorized by sector (acute care, home care, long-term care, or public health) and by whether they were identified in reports, interviews, or both. Supplemental Table 2 provides a full codebook with definitions. 3.3 Themes There was little variation in themes amongst the different sectors. Four themes were captured in all organizations ( n = 21, 100%) and the remaining four themes were captured in 19 organizations (95%). When separated by BPG, there was slightly more variation. For the Falls guideline, three themes were not captured in all organizations: Communication ( n = 10, 71%), Stakeholder Engagement ( n = 11, 79%), and Leadership ( n = 11, 79%). For the Person- and Family-Centred Care guideline, only two themes were represented in all 12 organizations ( Practice Interventions and Capacity Building ), although all eight themes were represented in 75% or more of the organizations that implemented this guideline. The two least-reported themes for the Person- and Family-Centred Care guideline were Leadership and Communication , each reported in nine organizations (75%). 3.4 Common codes across sectors and guidelines In contrast to the broader themes, we found more variability amongst the codes across sectors and guidelines. A subset of codes was found in high frequency across all organizations' reports, regardless of sector or guideline type.
Building Capacity of Staff ( Capacity Building theme) was coded in all organizations, for both the Falls and Person- and Family-Centred Care guidelines ( n = 14, 100%, and n = 12, 100%, respectively). Resources and Tools for Staff ( Practice Interventions theme) was mentioned by all organizations that implemented the Falls guideline and by 10 of 12 (83%) organizations that implemented the Person- and Family-Centred Care guideline. Resources and Tools for Staff was often linked to Building Capacity of Staff and included the addition of resources such as checklist tools or the availability of information and resources on an organization's intranet. Data Collection ( Evaluation & Monitoring theme) was reported by all organizations that implemented the Falls guideline ( n = 14, 100%) and by most organizations that implemented the Person- and Family-Centred Care guideline ( n = 9, 75%). The code Policy Changes ( Governance theme) was reported in most organizations for each guideline ( n = 14, 100% for the Falls guideline; n = 10, 83% for the Person- and Family-Centred Care guideline). 3.5 Low-frequency codes Some codes were infrequently reported across all organizations. The code Benchmarks ( Evaluation & Monitoring theme) was coded in only two organizations overall. The only other two codes mentioned by fewer than half of the organizations were BPSO® Lead ( Leadership theme, n = 9, 43%) and Use of Technology ( Practice Interventions theme, n = 9, 43%). 3.6 Codes by guideline type Four codes were reported for the Falls guideline but not for the Person- and Family-Centred Care guideline: BPSO® Coach/Mentor ( Capacity Building theme), Benchmarks ( Evaluation & Monitoring theme), External Partnerships ( Stakeholder Engagement theme), and Feedback to Staff ( Stakeholder Engagement theme). Three codes were reported for the Person- and Family-Centred Care guideline but not for the Falls guideline: Pilot Project and Program Expansion (both Practice Interventions theme) and Feedback to Managers ( Stakeholder Engagement theme). A subset of codes was more frequently reported in reference to the Falls guideline than the Person- and Family-Centred Care guideline. Incident Monitoring ( Evaluation & Monitoring theme) was reported in 10 organizations that implemented the Falls guideline (71%) compared with only two organizations that implemented the Person- and Family-Centred Care guideline (17%). Similar results were found for the code Monitoring Progress (also in the Evaluation & Monitoring theme), which was coded for all 14 organizations (100%) for the Falls guideline but for seven organizations (58%) for the Person- and Family-Centred Care guideline. A third code, Resources and Tools for Patients ( Practice Interventions theme), was more common for the Falls guideline, with a frequency of 11 organizations (79%) compared with only one of 12 organizations (8%) for the Person- and Family-Centred Care guideline. Two codes were significantly more common for the Person- and Family-Centred Care guideline than for the Falls guideline. Feedback from Patients ( Stakeholder Engagement theme) was mentioned in 10 organizations (83%) that implemented the Person- and Family-Centred Care guideline, in contrast to two organizations that implemented the Falls guideline (14%). Senior Leadership Engagement ( Leadership theme) was also more common during implementation of the Person- and Family-Centred Care guideline ( n = 6, 50%) than the Falls guideline ( n = 2, 14%).
3.7 Codes and themes according to BPG by sector When coding was separated by the organizations' sector, there was some variation. For the Falls guideline, all eight themes and 23 of 41 codes (56%) were reported in all four sectors by at least one organization that implemented the guideline. Only one code, Benchmarks, was unique to a single sector, reported in acute care. For the Person- and Family-Centred Care guideline, all themes and 18 of 41 codes (44%) were reported by at least one organization in each sector. Three codes had no coding for the Person- and Family-Centred Care guideline: BPSO® Lead ( Leadership theme), Benchmarks ( Evaluation & Monitoring theme), and External Partnerships ( Stakeholder Engagement theme). 3.8 New codes from interviews All codes identified in the reports were also identified in the interviews. Additionally, seven new codes, which mapped onto four of the themes identified from the MyBPSO reports, emerged from the interviews. These codes were Patient Engagement in Implementation and Staff Engagement ( Stakeholder Engagement theme); Resources and Costs for Implementation and Systems Approach ( Practice Interventions theme); Education of Patients and Families ( Capacity Building theme); and Adapt to Client and Population Needs as well as Staff Attitude toward BPG ( Evidence-based Culture theme). These codes were not specific to the Falls or Person- and Family-Centred Care guidelines. Five of the seven (71%) new codes emerged across all sectors; Staff Engagement ( Stakeholder Engagement theme) and Systems Approach ( Practice Interventions theme) were not identified in the home care and acute care settings, respectively. 3.9 Phase 2 results: correlations between NQuIRE® data and codes from MyBPSO reports The statistically significant correlations between the codes (abstracted from MyBPSO reports) and the scores (based on relative improvement of quality indicators from NQuIRE® data) for both the Falls and Person- and Family-Centred Care guidelines are presented in Table 4 . Of all the codes, 19 had statistically significant correlations ( p ≤ .05) with quality indicators for at least one of the guidelines. For both the Falls and the Person- and Family-Centred Care guidelines, 17 codes were statistically significantly correlated with quality indicators. Of the 19 codes significantly correlated with at least one guideline, 16 were significantly correlated with quality indicators for both guidelines. Unique to the Falls guideline, the codes Culture Change ( Evidence-Based Culture theme; ρ = 0.850, p < .001), Policy Changes ( Governance theme; ρ = 0.712, p = .0039), Monitoring Progress ( Evaluation and Monitoring theme; ρ = 0.520, p = .0498), and Collaborating Committees ( Stakeholder Engagement theme; ρ = 0.569, p = .0485) were statistically significantly correlated with quality indicators. For the Person- and Family-Centred Care guideline, Senior Leadership Engagement or Support ( Leadership theme; ρ = 0.611, p = .0254) was statistically significant for this guideline only. Communication was the only theme without a code statistically significantly correlated with quality indicators for either guideline.
Five of the eight themes (62.5%) had codes that were statistically significantly correlated with quality indicators for both guidelines; Communication , Governance , and Leadership had either one or no codes statistically significantly correlated with quality indicators for either guideline. 3.10 Phase 3 results: indicator development Following expert panel discussion, a total of eight codes were selected for indicator development. The reason for inclusion of each code is provided in Table 5 , and Supplemental Table 3 provides the list of developed indicators prior to internal and external validation. For example, the indicator for building capacity and education of councils asks staff to indicate the number of hours spent on formal education activities and to specify the method of delivery (e.g., online, in-person, or self-guided). 3.11 Key indicators Building Capacity. Building capacity is defined as the process by which individuals and/or the BPSOs harness or improve skills, knowledge, or resources to meet the performance expectations required to implement or sustain the BPGs. All four sectors reported using capacity-building processes for both BPGs. The Building Capacity of Staff & Education of Councils code was statistically significantly correlated with successful implementation of both the Falls and Person- and Family-Centred Care guidelines; it was the only code in the theme with a statistically significant correlation for both BPGs and had the highest frequency in the reports. Communication. Communication is defined as the transfer of information related to BPG implementation and sustainability. This theme included codes such as external dissemination, internal dissemination, and promoting BPSO status. Of these, external and internal dissemination were mentioned across all four sectors in the interviews and reports, and external dissemination was deemed especially important by the expert panel. Evaluation and Monitoring. Evaluation and monitoring are defined as the processes by which the progress of implementation and sustainability of BPGs is observed or measured over time. Codes within this theme included establishing indicators, incident monitoring, and monitoring progress. Of these, monitoring progress was the highest-frequency code in the reports and was significant for both BPGs. Monitoring progress includes reviewing the data collected and conducting chart audits to evaluate the progress of implementation. Evidence-Based Culture. An evidence-based culture in this study is defined as the collective identity of the organization or group of individuals that is grounded in the utilization of the best available evidence to support practice. Reference to evidence-based culture was reported to varying degrees across sectors in both interviews and reports. Interprofessional collaboration was chosen as the code for indicator development because it had the highest frequency in the qualitative analysis among the codes in this theme determined to be measurable. In both reports and interviews, there was mention of a need to validate an existing evidence-based culture to achieve successful implementation, or recognition of culture change when evidence-based practice became normalized, valued, and understood. Determining the baseline state of an organization was a common strategy for identifying evidence-to-practice gaps and for understanding priority recommendations for working groups within organizations.
A need to adapt the BPG to the local context was also often reported in the progress reports, understood as any change made to the BPG recommendations to meet the needs of the unit or setting within which it was being implemented. Governance. Governance is defined as the mechanisms by which the governing body of an organization provides a framework of rules and/or monitors the practice of its stakeholders, including BPG implementation and sustainability. Governance included two codes: organizational alignment and policy changes. Of these, policy changes was the only code with a statistically significant correlation with quality indicators. Policy change is defined as changes made to policies related to BPG implementation. Leadership. Leadership is defined as individuals or groups of individuals within an organization who provide direction and oversight of BPG implementation and sustainability, or who inspire, encourage, and motivate others to engage in BPG implementation and sustainability. This theme contained seven codes, including BPSO Lead, BPSO Committee, and Champions. Of these, Champions (identification and deployment of staff who strongly promote the use of the BPG in practice) had the highest frequency in the qualitative analysis. Practice Interventions. Practice interventions are defined as any modification to the way front-line staff practice related to the implementation or sustainability of a BPG. Results from the quantitative analysis revealed that several codes within this theme were significantly associated with the process and outcome quality indicators measured for the Falls and Person- and Family-Centred Care guideline implementations. Specifically, practice changes, program expansion, resources and tools for staff, and resources and tools for the public were statistically significantly correlated with quality indicators. For the Person- and Family-Centred Care guideline, practice changes and resources and tools for the public were the two codes most highly correlated with quality indicators; for the Falls guideline, resources and tools for the public had the second-highest correlation coefficient with quality indicators. Although Practice Interventions was a high-frequency theme, the literature related to this theme is specific to each individual intervention. The commonality among all practice interventions discussed in the MyBPSO reports was that they were grounded in the evidence from the BPG recommendations and in local data from environmental scans. Stakeholder Engagement. Stakeholder engagement is defined as the process by which individuals are involved in and influenced to buy in to the implementation or sustainability of BPGs. This theme is composed of eleven codes, ranging from collaborating committees and feedback to networking and staff engagement. Staff engagement, defined as fostering buy-in to the BPSO and BPG programs and reflecting the level to which staff are engaged or the means by which they are engaged to participate, was frequently mentioned in the qualitative analysis. 3.12 Phase 4 results: content validity of indicators The research team revised the indicators based on feedback received from the Ontario Health Teams. Indicators were rated as moderately to highly relevant (mean range = 4.08–6), feasible (mean range = 3.70–5.75), readable (mean range = 4.08–5.75), and usable (mean range = 3.25–5.75) in the pilot survey ( n = 16).
The detailed results of the validation survey with individuals internal and external to the RNAO, and their demographics, are outlined in Supplemental Tables 3 and 4, respectively. Results of the survey showed little variance across indicators, with no scores indicating strong or moderate disagreement. Therefore, any of the indicators created could be integrated into the NQuIRE® database for further testing. 4 Discussion 4.1 Summary of findings Overall, the findings reveal eight overarching themes and 47 codes identified from 126 MyBPSO reports from 21 organizations spanning four sectors in Ontario, as well as from qualitative analysis of interviews with 25 personnel (from eight organizations) involved in implementing or sustaining BPGs within the previous ten years. There was little variation amongst sectors at the theme level, with all eight themes captured in all or nearly all organizations. Moreover, while there was variability across organizations in the frequency with which the strategies and processes from each theme were used, all themes were present to some degree across all four sectors for both BPGs. This finding contrasts with the results of an international scoping review by Gagliardi et al. [ 14 ], which concluded that the choice of strategy was often not associated with guideline topic. A possible reason for the variation in the identified codes across BPGs could be the interpretation of the recommendations by BPSOs, which may have influenced the implementation process and the strategies needed or chosen to embed the evidence-informed practice in daily routine. The codes and eight themes drawn from the qualitative analysis of the reports and interviews, together with the quantitative analysis, were used to develop indicators related to the implementation and sustainability of the two BPGs. These indicators will be used by health service BPSOs in Ontario to achieve successful implementation and sustainment of a selection of BPGs, and they have been incorporated into the NQuIRE® database for routine collection by BPSOs. 4.2 Significance and applicability of findings The findings from this study are consistent with current research showing that multifaceted and tailored implementation strategies are an effective means of implementing BPGs in many sectors of healthcare [ 13 , 47 ]. Many of the strategies and processes identified in this study are in line with the current literature. For example, Peters and colleagues [ 17 ] conducted a scoping review ( n = 188 included studies) that identified a plethora of implementation strategies and approaches relevant to guideline implementation across 16 clinical topics. These included professional strategies (e.g., educating and providing feedback to providers), which coincide with our building capacity theme/indicators; patient/consumer-related strategies (e.g., counseling and engaging patients), which coincide with our stakeholder engagement theme/indicators; financial strategies (e.g., funding and incentives), which coincide with our practice interventions theme/indicators; organizational strategies (e.g., communication and human resources), which coincide with our communication and leadership themes/indicators; and structural changes (e.g., physical/organizational structure and evaluation processes), which coincide with our evaluation and monitoring, governance, and evidence-based culture themes/indicators [ 17 ].
Some of the strategies and processes we identified in this study have also been found to be associated with quality indicators. Studies originating from Australia that reported a reduction in pressure ulcers as a result of guideline implementation emphasized key elements relevant to successful implementation that align with the strategies and processes we identified [ 48–50 ]. These elements include involvement of inter-professional teams [ 48 ]; clinical leaders and champions [ 49 ]; adequate staff education and awareness campaigns [ 49 , 50 ]; simplification and incorporation of documentation into workflow [ 49 , 50 ]; and support from senior management and allocation of resources [ 48 ]. Our findings are also similar to other research that identified variability in strategies and processes between sectors. For example, from an international perspective, Egholm et al. [ 51 ] concluded that dissemination of guidelines must be supplemented with setting-specific initiatives to support implementation, in order to improve knowledge uptake by hospitals compared with municipalities. Behaviour-change techniques, for example implementation intentions [ 52 ] or self-formulated conditional plans [ 53 ], which have been shown to be effective for changing provider behaviour in clinical practice in a variety of contexts, were not identified as explicit strategies for implementation and uptake of the BPGs, with one exception: one team reported in their MyBPSO reports creating goals and action plans to address areas identified in their environmental scan. 4.3 Future research Currently, there is limited information about the strategies and processes used by nurses in the Canadian health care system to support implementation and sustainability of BPGs, let alone which are the most appropriate in any given context [ 14 ]. As such, this study is the beginning of important work in the field. A concept analysis of all identified strategies and processes used for the implementation of evidence-based knowledge and tools, such as protocols and guidelines, in Canadian health care settings is needed to bring clarity to this essential aspect of implementation. Furthermore, a distinction between process and strategy would provide more clarity to these discussions, as well as a better understanding of the utility of each. In future studies, the indicators will be piloted in regions with BPSOs, including Australia, China, Chile, Colombia, and Spain. Finally, successful indicators will be incorporated into the NQuIRE® database for long-term use. 4.4 Limitations There are some important limitations to be acknowledged. First, it was not possible to analyze the data separately as implementation-related themes and sustainability-related themes, as originally planned. Sustainability of organizational innovations is determined when, "after a period of time, the program, clinical intervention, and/or implementation strategies continue to be delivered and/or individual behavior change is maintained; the program and individual behaviour change may evolve or adapt while continuing to produce benefits for individuals/systems" [ 54 ]. The overlapping features of implementation and sustainability strategies and processes made it difficult to delineate between the two.
Further, demarcating strategies that promote sustainability would require longitudinal observations, as both the determinants and the strategies that are important for sustained use of guidelines change over time [ 55 ]. Second, data collection and analysis focused on only two BPGs; these BPGs were selected in close consultation with the research team from the RNAO as the BPGs most implemented by BPSOs. Our findings can potentially be applicable to the implementation and sustainability of similar BPGs developed by the RNAO. Other limitations of our study pertain to the disproportionate number of participants across sectors, sexes, and health systems. In Phase 1 (secondary analysis of MyBPSO reports; n = 21 organizations), a higher proportion of long-term care BPSOs participated compared with the other sectors ( n = 8 of 21, 38.1%); despite this, none of the codes found were unique to long-term care. Furthermore, the interview sample of implementers ( n = 25) was primarily female (80%), and the Phase 4 validation survey sample ( n = 16) was entirely female. The validation survey for the indicators was also conducted with a sample of individuals primarily working in hospitals. The study was also conducted in Ontario, which primarily has a publicly funded health care system. Further testing of the indicators will be completed with more diverse samples and settings after their integration into the NQuIRE® database. 5 Conclusion The successful implementation and sustainability of BPGs is crucial to their effectiveness in improving process and patient outcomes. In this study, using a systematic and rigorous approach, we developed a comprehensive set of implementation indicators targeting the structure, process, and outcomes of implementation. These indicators will allow frequent monitoring of the implementation of all BPGs, enabling organizations to see which strategies work and when, and assisting with the early identification of deviations and new problems arising in the implementation process. The indicators have been implemented in the RNAO Nursing Quality Indicators for Reporting and Evaluation® (NQuIRE®) database and are now being used by BPSOs to document and monitor their implementation of BPGs and to learn from these findings to tailor their implementation of future BPGs. Funding sources This study was funded by the RNAO, which was involved in all phases of the project. Author contribution statement Laura D Aloisio, Nicole Graham, Shanoja Naik, Mary Coughlin and Christina Medeiros: performed the experiments; analyzed and interpreted the data; wrote the paper. Doris Grinspun, Heather McConnell, Anne Sales, Susan McNeill and Janet E Squires: conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; wrote the paper. Wilmer J Santos: analyzed and interpreted the data; wrote the paper. Data availability statement Data will be made available on request. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Janet Elaine Squires reports financial support was provided by the Registered Nurses Association of Ontario.
Doris Grinspun reports a relationship with Registered Nurses Association of Ontario that includes: board membership. Shanoja Naik reports a relationship with Registered Nurses Association of Ontario that includes: employment. Christina Medeiros reports a relationship with Registered Nurses Association of Ontario that includes: employment. Heather McConnell reports a relationship with Registered Nurses Association of Ontario that includes: employment. Susan McNeill reports a relationship with Registered Nurses Association of Ontario that includes: employment. Acknowledgements The authors are very grateful to the RNAO team and BPSOs for the valuable input, suggestions, and support in building the indicators within the database. Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.heliyon.2023.e19983 .
REFERENCES:
1. GRINSPUN D (2021)
2. MELNYK B (2015)
3. FLEISZER A (2015)
4. GRINSPUN D (2018)
5. GRINSPUN D (2018)
6. HUTCHINSON A (2004)
7. MELNYK B (2018)
8. WALLIS L (2012)
9. BRAITHWAITE J (2020)
10. SQUIRES J (2022)
11. ALONSOCOELLO P (2010)
12. MELNYK B (2019)
13. MEDVES J (2010)
14. GAGLIARDI A (2015)
15. TOMASONE J (2020)
16. CASSIDY C (2021)
17. PETERS S (2022)
18. BRENEOL S (2022)
19. GRAHAM I (2006)
20. (2023)
21. AMENT S (2015)
22. GRIMSHAW J (2004)
23. MAY C (2014)
24. SOLBERG L (2000)
25. (2022)
26. (2002)
27. WILLMEROTH T (2019)
28. (1989)
29. (2010)
30. AYDIN C (2004)
31. PATRICIAN P (2010)
32. BUFFMAN M (2005)
33. GRAHAM I (1998)
34. (2023)
35. GRAHAM I (1998)
36. CRESWELL J (2017)
37. LEE S (2022)
38. (2017)
39.
40. SZABO V (1997)
41. (2012)
42. SANDELOWSKI M (2000)
43. HSIEH H (2005)
44. CRABTREE B (1992)
45. FLOTTORP S (2013)
46. DAMSCHRODER L (2009)
47. BAKER R (2010)
48. ASIMUS M (2011)
49. SENDELBACH S (2011)
50. YOUNG J (2010)
51. EGHOLM C (2018)
52. GOLLWITZER P (1999)
53. CASPER E (2008)
54. MOORE J (2017)
55. NADALINPENNO L (2022)
|
10.1016_j.jadr.2025.100899.txt
|
TITLE: Resilience as a protective factor to academic Burnout in adolescents during COVID-19
AUTHORS:
- Puig-Lagunes, Ángel Alberto
- German-Ponciano, León Jesús
- Varela-Castillo, Guerson Yael
- Ortiz-Cruz, Fabiola
- Rosas-Sánchez, Gilberto Uriel
- Ramírez-Rodríguez, Rodrigo
ABSTRACT:
Background
The COVID-19 pandemic increased academic concerns and the risk of developing academic burnout syndrome (ABS) among adolescents. In Mexico, little research has been conducted on the impact of resilience in adolescents as a potential coping strategy against ABS. Therefore, this study aimed to examine the link between resilience and ABS symptoms in Mexican high school adolescents during the COVID-19 pandemic.
Method
An analytical, observational, cross-sectional study was conducted with 2,194 adolescents from nine public high schools in Veracruz. Resilience and ABS were assessed between May and June 2021 via a Google Forms questionnaire, using the Mexican Resilience Scale and the Maslach Burnout Inventory - Student Survey. Using the resilience domains, we fitted a binomial logistic regression model to identify protective and risk factors for burnout syndrome.
Results
A total of 9.73 % of adolescents exhibited symptoms of ABS, and female gender was found to be significantly associated with ABS. Furthermore, low scores in the domains of strength and self-confidence (OR = 2.14, 95 % CI: 0.59–1.57), family support (OR = 1.89, 95 % CI: 1.47–2.44), and structure (OR = 1.62, 95 % CI: 1.22–2.16) were identified as risk factors for the development of burnout syndrome. In contrast, higher social support (OR = 0.59, 95 % CI: 1.22–2.16) emerged as a protective factor.
Conclusions
Resilience served as a crucial protective factor against ABS in high school adolescents, highlighting the need for interventions aimed at promoting their emotional well-being, particularly among females.
BODY:
1 Introduction Beyond the many changes that adolescents experience during adolescence, the norms implemented during the COVID-19 pandemic and their derivative effects, such as living in a low-income family ( Rothe et al., 2021 ) and decreased social interaction with peers ( Widnall et al., 2022 ), caused unprecedented disruptions in adolescents' lives that affected their mental health and created uncertainty and concern in the educational setting ( Cervantes-Cardona, et al., 2022 ). In the context of the health crisis, adolescents expressed concerns related to their academic future, their motivation to focus on and complete schoolwork, and the decreased support and communication with teachers that became a hallmark of the pandemic-era educational landscape. These concerns, along with a lack of emotional intelligence and resilience, have the potential to contribute to the emergence of school adjustment problems and the development of Academic Burnout Syndrome (ABS) among students ( Tang et al., 2021 ). ABS is a psychological state characterized by three main symptoms: long-term emotional exhaustion, cynicism (a distant, disinterested attitude toward studies), and reduced academic efficacy ( Schaufeli et al., 2009 ). The prevalence of ABS is estimated to be between 10 and 30 %, representing a significant public health concern with potential long-term consequences for developing adolescents ( Salmela-Aro and Tynkkynen, 2012 ; Cheung and Li, 2019 ; Lee et al., 2019 ; Gabola et al., 2021 ), including reduced engagement, impaired identity development, and diminished life satisfaction. ABS may be linked to problematic internet and social media use and is strongly associated with elevated levels of anxiety and depression ( Zhu et al., 2021 ; Vansoeterstede et al., 2023 ). It is imperative to study ABS in Mexican adolescents, given the multitude of challenges posed by the educational system: an onerous academic workload; inadequate home conditions for online education; social, economic, and occupational pressures; and the pervasiveness of anxiety and depression ( Rothe et al., 2021 ; Cervantes-Cardona, et al., 2022 ). Furthermore, deficiencies in coping and resilience skills ( Bravo-Sanzana et al., 2023 ) contribute to heightened academic stress and the emergence of burnout. These factors, both individually and in combination, significantly elevate the risk of ABS in adolescents. It has been demonstrated in other countries that resilience plays a crucial role in mitigating and preventing burnout ( Martínez-Ramón et al., 2021 ; Rodriguez-Unda et al., 2023 ); however, research on this topic is practically non-existent in Mexico, even in the adolescent population. Therefore, the objective of this study was to determine the association between resilience and symptoms of ABS in Mexican adolescents during the COVID-19 pandemic. 2 Materials and methods 2.1 Design of study and subjects An analytical and observational cross-sectional study was conducted to evaluate students enrolled in nine public high schools located in the municipalities of Acayucan, Chinameca, Cosoleacaque, Jaltipan, Juan Rodríguez Clara, Minatitlán, Sayula de Alemán, Soteapan, and Zaragoza in the southern part of Veracruz, Mexico. The selection of the schools was based on the cooperation and support of the local authorities, who facilitated the implementation of the research.
A non-probability convenience sample of approximately 4380 students from the designated public high schools was used, with students invited to participate based on their availability and willingness. The response rate was 53.85 %: of 2359 adolescents invited, 167 refused to participate, resulting in a sample of 2194 adolescents. Missing data were identified in the responses of four participants, leading to their complete exclusion from the analysis and a final sample size of 2190 participants (99.81 %). Students aged 14 to 18 years, of both sexes, enrolled in the February–June 2021 semester in the second to sixth semester of high school (equivalent to grades 10–12 in the U.S. system), whose parents had provided informed consent and who had themselves assented (by checking the corresponding boxes), were eligible to complete the questionnaire in Google Forms. They were assured of data confidentiality in accordance with the Federal Law for the Protection of Personal Data in the Possession of Individuals, approved by the National Chamber of Deputies ( México, 2011 ). Conversely, the study excluded adolescents undergoing psychiatric and/or psychological treatment, as this could potentially impact their perception of ABS and resilience; this information was obtained through a specific question on the administered form. 2.2 Measurements A questionnaire was used to collect general information from the participants, including sociodemographic characteristics such as age, gender, semester, family composition, living arrangements, exercise habits, hobbies, and occupational and academic pursuits. To identify the presence of ABS in high school adolescents, the Maslach Burnout Inventory - Student Survey (MBI-SS) was employed. This instrument consists of 15 self-administered items ( Schaufeli et al., 2009 ) grouped into three subscales: emotional exhaustion, cynicism, and academic efficacy. Exhaustion is measured by items related to fatigue, cynicism reflects a distant attitude toward studies, and academic efficacy indicates the student's ability to achieve the desired level of educational performance. High scores on exhaustion and cynicism together with low scores on academic efficacy indicate higher levels of burnout. The MBI-SS has been demonstrated to be a valid and reliable instrument, with a proven track record of use in diverse student populations across the world, including in Mexico; in the Mexican population, the scale has shown structural validity and acceptable reliability (Cronbach's α = 0.733) ( Banda Guzmán et al., 2021 ). Each item is scored from 0 (never) to 6 (always) in terms of frequency, and scores are classified by dimension as follows. Emotional exhaustion: low 0–9, moderate 10–14, high >14. Cynicism (depersonalization): low 0–1, moderate 2–6, high >6. Academic efficacy: low <22, moderate 23–27, high >28. For a student to be considered to meet the burnout criteria, they must have low academic efficacy scores together with high emotional exhaustion and depersonalization scores ( Schaufeli et al., 2009 ). The internal consistency of the MBI-SS in the present study was excellent, as indicated by McDonald's omega (ω = 0.930).
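To make the classification rule concrete, the following minimal R sketch bands hypothetical subscale sums using the cutoffs above and flags the burnout criterion (high exhaustion, high cynicism/depersonalization, and low academic efficacy). The function name and example scores are illustrative assumptions, not part of the study's materials; note that the reported bands leave the edge values 22 and 28 unassigned, which this sketch resolves by placing them in the adjacent upper band.

classify_mbi_ss <- function(exhaustion, cynicism, efficacy) {
  # Band each subscale sum with the cutoffs reported above
  exh <- cut(exhaustion, breaks = c(-Inf, 9, 14, Inf),
             labels = c("low", "moderate", "high"))
  cyn <- cut(cynicism, breaks = c(-Inf, 1, 6, Inf),
             labels = c("low", "moderate", "high"))
  eff <- cut(efficacy, breaks = c(-Inf, 21, 27, Inf),
             labels = c("low", "moderate", "high"))
  # Burnout criterion: high exhaustion AND high cynicism AND low efficacy
  data.frame(exhaustion = exh, cynicism = cyn, efficacy = eff,
             burnout = exh == "high" & cyn == "high" & eff == "low")
}

# Hypothetical subscale sums for three students
classify_mbi_ss(exhaustion = c(20, 8, 16),
                cynicism   = c(9, 1, 7),
                efficacy   = c(18, 30, 26))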
To determine resilience levels in the adolescents, we used the Mexican Resilience Scale (RESI-M), a questionnaire consisting of 43 items scored from 1 to 4 points (from "strongly disagree" to "strongly agree") and grouped into five factors: 1) strength and self-confidence; 2) social competence; 3) family support; 4) social support; and 5) structure (the ability to organize both activities and time). The scale has a total Cronbach's α of 0.93, explains 43.60 % of the variance, and has been used with Mexican adolescents ( Blanco et al., 2018 ). The internal consistency of the RESI-M in the present study was good (ω = 0.87). 2.3 Data analysis The Spearman rank correlation test was calculated to assess the strength of the association between the RESI-M and MBI-SS domains, with correlation strength classified as weak (0.3 to 0.5), moderate (0.5 to 0.7), strong (0.7 to 0.9), or very strong (0.9 to 1) ( Schober et al., 2018 ). For comparisons of categorical variables between groups, the Chi-square test was used, with Cramer's V calculated as the effect size for significant results: >0 (no or very weak), >0.05 (weak), >0.10 (moderate), >0.15 (strong), and >0.25 (very strong) ( Aoki, 2020 ; Hashim, 2019 ). To predict the likelihood of belonging to the burnout syndrome group, we reduced the number of domains in the RESI-M to enhance predictive power. For this, multiple binomial logistic models (1 = burnout [reference level], 0 = no burnout) were fitted and ranked using the Akaike Information Criterion (AIC) to evaluate model quality, with the glmulti function from the glmulti package in R. The model's goodness of fit was assessed using the Hosmer-Lemeshow test. Finally, Tjur's and McFadden's coefficients of determination (R²) were calculated to assess the discrimination and resolution capabilities of the generalized linear model ( Hughes et al., 2019 ). Data analysis and graphs were computed and designed using the RStudio integrated development environment for Macintosh. 2.4 Ethical considerations The project was evaluated and approved by the Research Bioethics Committee of the Faculty of Medicine of the Universidad Veracruzana, Campus Minatitlán (F-001-CI-2021). In addition, it complied with the guidelines established in the General Health Law, specifically articles 13, 14, 16, 20, and 36; the provisions of the Declaration of Helsinki ( World Medical Association, 2014 ); and Mexican legislation on research, chapters 96, 100, and 102 ( Secretary of Health, 2022 ). 3 Results A total of 2190 high school adolescents were included, of whom 1291 (58.95 %) were female and 899 (41.05 %) were male, with a mean age of 16.0 ± 1.03 years. Approximately 13.79 % ( n = 302) of the adolescents reported having a job in addition to their studies, 62.05 % ( n = 1359) reported practicing sports, and 82.78 % ( n = 1813) reported having a hobby. 3.1 Comparisons of burnout syndrome and healthy adolescents Overall, 9.73 % of the sample met the criteria for academic burnout syndrome, and our data indicate that women were more likely to experience ABS than men ( Table 1 ). In terms of resilience, adolescents with ABS showed significantly lower levels of strength and self-confidence (very strong effect), social competence (moderate effect), family support (strong effect), social support (weak effect), and structure (strong effect).
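A minimal R sketch of the analysis pipeline described in section 2.3 is shown below with simulated data. The glmulti and ResourceSelection::hoslem.test calls follow common usage of those packages, and all variable names and values are placeholders rather than the study's actual dataset.

library(glmulti)            # AIC-based ranking of candidate models
library(ResourceSelection)  # hoslem.test()

set.seed(1)
n <- 2190
d <- data.frame(
  strength       = rnorm(n), family_support = rnorm(n),
  social_support = rnorm(n), structure      = rnorm(n),
  sex            = sample(c("female", "male"), n, replace = TRUE),
  burnout        = rbinom(n, 1, 0.10)   # 1 = burnout (reference level)
)

# Spearman rank correlation between two domains
cor.test(d$strength, d$structure, method = "spearman")

# Chi-square test with Cramer's V as the effect size
tab <- table(d$sex, d$burnout)
chi <- chisq.test(tab)
cramers_v <- sqrt(unname(chi$statistic) / (n * (min(dim(tab)) - 1)))

# Rank all candidate binomial models over the resilience domains by AIC
rank_fit <- glmulti(burnout ~ strength + family_support + social_support +
                      structure, data = d, level = 1, crit = "aic",
                    fitfunction = glm, family = binomial,
                    plotty = FALSE, report = FALSE)

# Fit the retained model; exponentiated coefficients are odds ratios,
# so, e.g., OR = 1.62 corresponds to 62 % higher odds of burnout
m <- glm(burnout ~ strength + family_support + social_support + structure,
         data = d, family = binomial)
exp(cbind(OR = coef(m), confint(m)))

# Goodness of fit (Hosmer-Lemeshow) and pseudo-R2 measures
hoslem.test(d$burnout, fitted(m), g = 10)
null_m   <- glm(burnout ~ 1, data = d, family = binomial)
mcfadden <- 1 - as.numeric(logLik(m)) / as.numeric(logLik(null_m))
tjur     <- mean(fitted(m)[d$burnout == 1]) - mean(fitted(m)[d$burnout == 0])
c(McFadden = mcfadden, Tjur = tjur)

Tjur's R² is simply the difference in mean fitted probability between the burnout and non-burnout groups, which is why it serves as a discrimination measure alongside McFadden's likelihood-based coefficient.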
Adolescents with ABS were also more prone to high emotional exhaustion and moderate-to-high depersonalization than adolescents without ABS, with both characteristics showing very strong effect sizes; no significant differences were found between the groups in academic efficacy. 3.2 Correlation between RESI-M and MBI-SS domains The Spearman rank correlation test revealed significant associations between all domains of resilience and ABS. Social competence ( r = 0.55), family support ( r = 0.53), social support ( r = 0.51), personal structure ( r = 0.61), and general resilience ( r = 0.81) were moderately to strongly associated with lower levels of ABS ( Table 2 ). 3.3 Risk and protective factors for burnout syndrome Strength and self-confidence (OR = 2.14) acted as a risk factor, indicating that a decrease in strength and self-confidence more than doubles the odds of ABS (a 114 % increase). Family support (OR = 1.89) also emerged as a risk factor, with lower levels of family support associated with an 89 % increase in the odds of developing ABS. Similarly, structure (OR = 1.62) served as a risk factor, suggesting that lower structure in life increases the odds of experiencing ABS by 62 %. In contrast, social support (OR = 0.59) functioned as a protective factor, indicating that an increase in social support is associated with a 41 % reduction in the odds of experiencing ABS. Social competence (β = 0.02) was excluded from the model due to its low model-averaged importance ( Fig. 1 ). The Hosmer-Lemeshow test indicated that the model does not exhibit a poor fit (df = 8, χ² = 11.15, p = 0.19). Finally, the measures of predictive power, McFadden's pseudo-R² = 0.12 and Tjur's R² = 0.08, suggest that the model has a moderate ability to explain the variability of the dependent variable and to predict the probability of events. While these values are not high, they indicate an acceptable fit for a binomial logistic regression model, particularly in complex problems or scenarios with many unobserved factors that may influence the behavior of the dependent variable, in this case the risk of developing burnout syndrome. 4 Discussion The aim of the present study was to identify the relationship between resilience and ABS in Mexican adolescents during the COVID-19 pandemic. Our findings revealed that higher levels of resilience were associated with a lower likelihood of developing ABS. Low strength and self-confidence, low family support, and low structure were identified as the main risk factors for ABS, while social support was a protective factor. In this regard, higher levels of resilience and most of its components were also found in males compared with females. The paucity of available information on the prevalence of burnout in adolescents in Mexico is a limitation that prevents direct comparison. However, a study conducted by Martínez-Alvarado et al. (2021) demonstrated a burnout prevalence ranging from 1.88 % to 1.96 % among Mexican adolescent athletes aged 12 to 18 years. Conversely, the prevalence of ABS in our study (9.73 %) was lower than the prevalences approaching 20 % observed in adolescents in countries such as Finland, Korea, and China ( Salmela-Aro and Tynkkynen, 2012 ; Cheung and Li, 2019 ; Lee et al., 2019 ; Gabola et al., 2021 ).
It seems reasonable to suggest that this ethnic or cultural discrepancy is associated with the progression of the COVID-19 pandemic and other individual and social consequences thereof, including compliance with social restrictions, academic demands, educational disruptions, persistent concerns about illness, as well as economic and occupational pressures and concerns ( Rothe et al., 2021 ; Cervantes-Cardona et al., 2022 ; Graupensperger et al., 2022 ; Sexton et al., 2022 ). On the other hand, an interesting result in the present study was that 99.8 % of the students exhibited low levels of academic self-efficacy. These findings may be related to various factors highlighted in the literature, such as academic stress ( Gadzella and Masten, 2005 ), lack of self-regulation strategies ( Sari et al., 2020 ), absence of adequate academic support ( Charleston and Leon, 2016 ), and negative perceptions of one's capabilities ( Bresó et al., 2011 ). However, it is important to note that this study did not specifically assess these factors, and therefore it is not possible to establish a direct causal relationship between them and the observed results. Future research would be needed to explore these variables in greater depth in order to better understand the potential reasons behind the high prevalence of low academic self-efficacy in this sample. This study also identified the protective and risk factors associated with ABS, highlighting that social support reduced vulnerability to ABS, while a lack of strength, self-confidence, family support, and personal structure increased the risk of developing this syndrome. However, addressing these factors through isolated interventions is insufficient to guarantee a reduction in ABS symptoms. Given the multifaceted nature of resilience, these results underscore the importance of designing multicomponent strategies that incorporate various protective factors ( Llistosella et al., 2023 ). A recent study underscores the pivotal role of institutional support in fostering student resilience during crisis situations, emphasizing the significance of effective demand and resource management to ensure a positive educational experience, even in challenging times. The effective implementation of support strategies has the potential not only to prevent ABS but also to promote students' overall mental well-being, as indicated by previous research ( Awais et al., 2023 ). Furthermore, extant research has demonstrated the efficacy of resilience-based interventions, such as cognitive behavioral therapies, mindfulness-based interventions, social and emotional skills programs, and extracurricular activities like sports, in enhancing resilience in adolescents ( Llistosella et al., 2023 ). The findings of this study corroborate the applicability and benefits of these interventions within the school setting. We also observed that males exhibited higher levels of resilience and its components compared to females, suggesting females may be more vulnerable to developing ABS, consistent with previous research ( Jordan et al., 2020 ). Factors explaining sex differences in resilience include lower heritability of resilience in women ( Boardman et al., 2008 ), poorer health ( McDonough and Walters, 2001 ), and lower self-confidence, self-efficacy, and resources ( Costa et al., 2001 ). Women also tend to ruminate more on problems, prolonging depressive episodes and affecting resilience ( Nolen-Hoeksema, 1991 ).
In our context, ABS in women during the COVID-19 pandemic was predicted by several psychiatric disorders such as post-traumatic stress disorder, substance use disorder, mood disorders, depression, neurotic disorder, sleep disorder, borderline personality disorder, autism spectrum disorder, attention-deficit hyperactivity disorder, and postviral fatigue syndrome ( Wallensten et al., 2024 ). This underscores the importance of focusing on adolescent girls when studying resilience and designing interventions to prevent burnout, examining the presence of psychiatric comorbidities that undermine resilience. On the other hand, various studies have identified gender as a key factor in ABS ( Salmela-Aro and Tynkkynen, 2012 ; Herrmann et al., 2019 ; Rusandi et al., 2022 ) and resilience ( Singh et al., 2019 ; Grazzani et al., 2022 ) in adolescents, although few have explored its role as a moderator between these two variables. Regarding ABS, it has been observed that women report higher levels, which is attributed to greater pressure regarding academic performance, leading to increased concern about poor performance and academic failure ( Salmela-Aro and Tynkkynen, 2012 ). Additionally, the development of the hypothalamic-pituitary-adrenal (HPA) axis during puberty enhances adolescent sensitivity to stressors, which may partly explain the higher levels of academic burnout in this group. Some studies suggest that estradiol, a female sex hormone, has an excitatory effect on the HPA axis, intensifying the stress response in adolescent girls ( Vansoeterstede et al., 2023 ). It has been observed that this estrogen drops significantly in young women exposed to severe traumatic stress ( Rieder et al., 2022 ), which may be explained as a compensatory mechanism in response to elevated cortisol levels. To date, the hormonal levels of adolescent females with burnout syndrome remain unknown. Estradiol plays a crucial role during the second phase of brain organization that occurs during puberty, as it supports the proper arrangement of white and gray matter architecture ( Herting and Sowell, 2017 ; Zwaan et al., 2022 ). Disruption of sex hormones during puberty predisposes individuals to the development of psychiatric disorders in humans and animal models ( Oliveira et al., 2024 ). Therefore, chronic stress elicited by burnout syndrome might disrupt hormonal pulses and brain re-organization at a critical stage such as adolescence in females. Concerning resilience, women tend to report lower scores and often use emotional coping strategies, such as rumination or emotional isolation, which are considered less adaptive for managing stress. In contrast, men tend to employ more adaptive coping strategies, such as direct action or positive self-instruction, which may contribute to greater resilience and lower vulnerability to ABS ( Salmela-Aro and Tynkkynen, 2012 ; Herrmann et al., 2019 ; Rusandi et al., 2022 ). However, since most studies focus on these variables separately, further research is needed to determine whether gender truly acts as a moderator in the relationship between resilience and academic burnout. The study of resilience in relation to ABS in adolescents during the pandemic can provide valuable information not only for understanding how to manage stress and prevent burnout in times of crisis, but also for developing prevention strategies and effective tools to manage academic burnout. This will help to sustainably improve both the mental health and academic performance of adolescents in the future.
In addition, this approach will allow us to understand how resilience influences adolescents' ability to manage stress and cope with pressure without reaching academic burnout. Some limitations of our study should be considered, including the limited national research available on ABS and adolescent resilience, which makes comparisons to previous studies difficult and limits our understanding of the local context. In addition, the variability of the study group is another challenge: although the group consisted of a large sample of students from public schools, a significant proportion of the school population did not complete the survey. The diagnostic tools and criteria used also affect the interpretation of the results and limit the generalizability of the findings internationally, particularly given the cross-sectional design of the study. Longitudinal studies examining the causal links between resilience and ABS, as well as possible mediating factors, are therefore essential. Research is also needed on interventions that promote resilience and prevent ABS among adolescents in Mexico, including school programs that teach coping skills and stress management strategies, and community interventions that promote social support. Addressing these considerations and areas for future research will improve our understanding of ABS and resilience in students and help develop effective strategies to promote their emotional and academic well-being. 5 Conclusions Our findings show that resilience protected adolescents against ABS during the COVID-19 pandemic: higher resilience levels were associated with a lower likelihood of the syndrome. This highlights the urgent need for interventions to strengthen resilience and coping strategies. Given that female adolescents exhibit lower levels of resilience, they are more vulnerable to ABS, underscoring the importance of addressing gender differences and developing specific programs to promote emotional and academic well-being among women. In conclusion, focusing on building resilience is crucial to prevent burnout, with particular attention to the challenges faced by adolescent girls. Sources of financing This research was partially funded by the Sistema Nacional de Investigadores: Exp. 77,820 (AAP-L) and 84,949 (LJG-P). Conflicts of interest The authors declare no conflicts of interest. Events presented None. CRediT authorship contribution statement Ángel Alberto Puig-Lagunes: Writing – review & editing, Writing – original draft, Supervision, Methodology, Investigation, Formal analysis, Conceptualization. León Jesús German-Ponciano: Writing – review & editing, Writing – original draft, Methodology, Investigation. Guerson Yael Varela-Castillo: Writing – review & editing, Formal analysis. Fabiola Ortiz-Cruz: Supervision, Validation, Visualization, Writing – review & editing. Gilberto Uriel Rosas-Sánchez: Supervision, Validation, Visualization, Writing – review & editing. Rodrigo Ramírez-Rodríguez: Data curation, Formal analysis, Methodology, Supervision, Validation, Visualization, Writing – review & editing. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements None.
REFERENCES:
1. AOKI S (2020)
2. AWAIS A (2023)
3. BANDA-GUZMÁN J (2021)
4. BLANCO J (2018)
5. BOARDMAN J (2008)
6. BRAVO-SANZANA M (2023)
7. BRESÓ E (2011)
8. CERVANTES-CARDONA G (2022)
9. CHARLESTON L (2016)
10. CHEUNG P (2019)
11. COSTA P (2001)
12. GABOLA P (2021)
13. GADZELLA B (2005)
14. GRAUPENSPERGER S (2022)
15. GRAZZANI I (2022)
16. HASHIM M (2019)
17. HERRMANN J (2019)
18. HERTING M (2017)
19. HUGHES G (2019)
20. JORDAN R (2020)
21. LEE M (2019)
22. LLISTOSELLA M (2023)
23. MARTÍNEZ-ALVARADO J (2021)
24. MARTÍNEZ-RAMÓN J (2021)
25. MCDONOUGH P (2001)
26. (2011)
27. NOLEN-HOEKSEMA S (1991)
28. OLIVEIRA G (2024)
29. RIEDER J (2022)
30. RODRÍGUEZ-UNDA N (2023)
31. ROTHE E (2021)
32. RUSANDI M (2022)
33. SALMELA-ARO K (2012)
34. SARI H (2020)
35. SCHAUFELI W (2009)
36. SCHOBER P (2018)
37. SECRETARY OF HEALTH (2022)
38. SEXTON J (2022)
39. SINGH R (2019)
40. TANG X (2021)
41. VANSOETERSTEDE A (2023)
42. WALLENSTEN J (2024)
43. WIDNALL E (2022)
44. WORLD MEDICAL ASSOCIATION (2014)
45. ZWAAN I (2022)
46. ZHU X (2021)
|
10.1063_2.1403203.txt
|
TITLE: Role of wing morphing in thrust generation
AUTHORS:
- Ghommem, Mehdi
- Hajj, Muhammad R.
- Beran, Philip S.
- Puri, Ishwar K.
ABSTRACT:
In this paper, we investigate the role of morphing in the flight dynamics of birds by simulating the flow over rigid and morphing wings with the characteristics of two different species, namely the Giant Petrel and the Dove Prion. The simulation of a flapping rigid wing shows that the root of the wing must be placed at a specific angle of attack in order to generate enough lift to balance the weight of the bird. However, in this case the generated thrust is either very small or even negative, depending on the wing shape. Further, the results show that morphing of the wing enables a significant increase in thrust and propulsive efficiency. This indicates that birds actually utilize some form of active wing twisting and bending to produce enough thrust. This study should provide better guidance for the design of flapping air vehicles.
BODY: No body content available
|
10.1016_j.gecco.2025.e03760.txt
|
TITLE: African lion conservation requires adaption to regional anthropogenic threats and mitigation capacity
AUTHORS:
- Nicholson, Samantha K.
- Roxburgh, Lizanne
- Bauer, Hans
- Adams, Erin
- Asfaw, Tsyon
- Naude, Vincent N.
- Slotow, Rob
ABSTRACT:
Lion populations are declining rapidly throughout their range in Africa due to indirect threats such as habitat loss and fragmentation and more direct threats such as targeted poaching for body parts and illegal wildlife trade. Conservation strategies and resource deployment around mitigation require a comparable understanding of regional threat typology and severity, as well as an assessment of available resources for intervention. To inform such species-level planning, an online survey conducted with experienced landscape managers and lion researchers representing 132 subpopulations across Africa was used to develop standardised perceived threat severity and resource availability indices for comparison with biogeographic, socio-economic, and mitigation covariates. Lion subpopulations were perceived to be either increasing (38 %) or stable (37 %) over the last five years, with some decreasing (17 %) and several unknown (8 %) trends. Perceived threat severity differed significantly by region (i.e., highest in Central and lowest in southern Africa) and country (i.e., highest in Angola, the Democratic Republic of Congo, Cameroon and Ethiopia, and lowest in Rwanda, South Africa and Namibia). Further significant differences in the total threat index were related to variables such as communities living within the lion habitat, livestock grazing within the area, livestock competition with wildlife, and the level of fencing, community engagement and management resources. The most severe threats varied significantly across regions and countries. Lack of funding, human encroachment, and loss of prey base emerged as severe local threats, while climate change was identified as the most severe global threat. Perceived resource availability was highest in Rwanda, Chad and Benin and lowest in six countries: Angola, Burkina Faso, Niger, South Sudan, Sudan and Uganda. The perceived threats facing lion conservation in Africa vary with context, highlighting the need for tailored conservation strategies.
BODY:
1 Introduction The lion ( Panthera leo ) was once widespread across Africa but has experienced dramatic declines in both population size and range over the last several decades ( Loveridge et al., 2022 ). While most large carnivores have experienced similar declines, the decline of Africa's lions has been particularly severe ( Wolf and Ripple, 2017; Loveridge et al., 2022 ). This has led to the species being listed as Vulnerable on the IUCN Red List of Threatened Species ( Nicholson et al., 2023a ). These declines have been precipitated by a wide range of both global and local threats and pressures that vary in severity and scale across the continent ( Bauer et al., 2020; Lhoest et al., 2022 ). A global threat is a situation or event that does not necessarily originate in a localised area and can extend beyond country or region, posing a significant and widespread threat to the entire population, or a large part of it ( Hulme et al., 2001 ). One large-scale global threat that will likely affect all species is that of climate change ( Hulme et al., 2001; Thuiller, 2004; Araujo et al., 2005; Ziervogel et al., 2014; Carter et al., 2018 ). In Africa, it is expected that climate change will result in increased temperatures and decreased rainfall ( Peterson et al., 2014 ). Based on climate change models, few new areas would become suitable for lions, and much of the species' range in southern and West Africa will become less suitable, resulting in further anthropogenic range reduction ( Peterson et al., 2014 ). Another global threat, more applicable at a regional level, is that of civil unrest, local war or violent extremism, which can have both direct and indirect impacts on a species and its conservation ( Salafsky et al., 2008; Bouley et al., 2018; Bauer et al., 2020; Lhoest et al., 2022; Aglissi et al., 2023 ). Such forms of conflict can directly impact a species and its conservation through habitat destruction, disruption of conservation efforts, and targeted killing of individuals, while indirectly affecting it through displacement of local communities, illegal wildlife trade, loss of law enforcement capacity to combat poaching, and habitat degradation ( Lhoest et al., 2022; Aglissi et al., 2023 ). Another, albeit much debated, example of a potential global threat to lions could be an international ban on lion trophy hunting or the banning of imports of hunting trophies ( Lindsey et al., 2013; Makuyana, 2018; Dickman et al., 2019 ; but see Loveridge et al., 2007 ; Mweetwa et al., 2018 ). Local threats or pressures refer to those that affect a specific area or population and are within the control, to a certain extent, of the area managers to mitigate. These threats can either be through illegal killings ( Ikanda and Packer, 2008; Everatt et al., 2019; Loveridge et al., 2020 ) or related to inadequate local level management ( Lindsey et al., 2017, 2018; Loveridge et al., 2023 ). A well-studied and prevalent local threat is that of retaliatory killings, which refers to the intentional killing of lions (and other carnivores) in response to perceived or actual loss of livestock or human life ( Hazzah, Borgerhoff Mulder and Frank, 2009; Hazzah et al., 2014; Leflore et al., 2020; Felix et al., 2022 ). The "bushmeat poaching crisis" is a significant and increasing local threat to animal populations in Africa ( Luiselli et al., 2019; Loveridge et al., 2020; Mudumba et al., 2021 ).
Lions are frequently caught as by-catch in snares, and this has been recognized to negatively impact populations either by limiting population growth or by causing local population declines ( Loveridge et al., 2020; Mudumba et al., 2021; Montgomery et al., 2023 ). An emerging threat to lion populations is the targeted poaching of lions for parts and derivatives ( Everatt et al., 2019 ), which are either used in local traditional medicinal practices ( Coals et al., 2022 ) or for the international illegal wildlife trade ( Williams et al., 2017; Everatt et al., 2019 ). In South Africa, for example, lion parts are used in muthi practices, where reported uses of parts included "protection from evil spirits", "power", "healing", "protection" and "sexual health and wellbeing" ( Green et al., 2022 ). As tiger ( Panthera tigris ) populations declined, the availability of bones for Eastern traditional medicine decreased ( Williams et al., 2017 ). This has resulted in lions being poached specifically for parts in India's Gir National Park and in parts of Africa; these parts are smuggled into the international wildlife trade in the East as a substitute for the dwindling availability of tiger parts ( Williams et al., 2017 ). Local-level threats and pressures that do not involve direct mortalities, but do contribute to a decline in lion populations, include a lack of funding or appropriate management, as well as poorly managed and unregulated trophy hunting ( Loveridge et al., 2007; Lindsey et al., 2017, 2018 ). Wildlife areas require significant financial support and, without it, fundamental operations and management implementation are limited ( Lindsey et al., 2018, 2021; Bauer et al., 2020; Robson et al., 2022 ). Limited funds increase an area's risk or likelihood of being overcome by anthropogenic threats that could potentially have been mitigated with the proper resources ( Lindsey et al., 2021 ). Unsustainable trophy hunting levels, through poorly regulated management and high hunting quotas, have been found to cause population declines of lions ( Croes et al., 2011; Packer et al., 2011; Groom et al., 2014; Rosenblatt et al., 2014; Creel et al., 2016; Mweetwa et al., 2018 ). Individual anthropogenic pressures and threats can negatively impact lion populations, but when combined, they can be devastating ( Creel et al., 2016; Loveridge et al., 2023 ). One such example is Limpopo National Park in Mozambique, where high levels of retaliatory killings through human-lion conflict and targeted poaching of lions for parts have resulted in significant population declines in recent years ( Everatt et al., 2019; Almeida et al., 2025 ). To effectively conserve lions and other carnivores, these threats and pressures and their severity need to be understood ( Bauer et al., 2020; Nicholson et al., 2023b ). At a site level, this understanding allows effective mitigation measures to be implemented and enforced. At a global level, this understanding plays a critical role in providing high-level institutions, policy makers and multilateral conventions (such as the IUCN, CITES and CMS) with the information and knowledge required to guide decisions and policies. Conventions rely on scientific research to establish international regulations and guidelines for the conservation and management of wildlife species ( Bauer et al., 2018 ). Understanding where threats are more severe or where populations are more threatened can guide prioritization of areas that require more immediate conservation action.
To mitigate threats effectively, and to develop lion conservation plans with the highest likelihood of success, it is essential to understand the threats a species faces at the site/local level as well as those of a more global nature (and therefore potentially harder to mitigate). In addition, being aware of the severity of those threats and their population-level impacts is vital to developing strategies that prioritize more severe threats to prevent further population declines. Various factors influence the severity of threats and pressures experienced by local wildlife. For instance, areas where livestock can be predated upon are likely to experience increased human-wildlife conflict and increased carnivore killings ( Beck et al., 2019; Leflore et al., 2020; Sibanda et al., 2021 ). The severity of such a threat can be decreased, broadly, by implementing effective livestock barriers (e.g., kraaling, exclusion fencing) and community engagement programmes ( Sibanda et al., 2021 ). The level of protection offered to an area through increased resources, anti-poaching patrols and adequate fencing also decreases the severity of potential threats by mitigating them ( Packer et al., 2013; Bauer et al., 2015; Lindsey et al., 2018 ). The level of protection offered by the formal protection status of the park/concession, or by the authority that is mandated to conserve and protect it, could also affect the severity of threats to wildlife. In this study we aimed to determine which threats are perceived to be important for lions across their range, and to assess how these differ at a continental, regional, national and subpopulation level. Data were gathered through a structured questionnaire survey. Threats included direct threats derived from the World Conservation Union–Conservation Measures Partnership (IUCN-CMP) classification of direct threats to biodiversity ( Table 1 : Salafsky et al., 2008 ). Direct threats, enabling stressors (defined as "attributes of a conservation target's ecology that are impaired directly or indirectly by human activities"; Salafsky et al., 2008 ), and global conditions that directly hinder the effective conservation of lions were combined in our threat index (see Methods). We also included contributing factors (defined as "ultimate factors, usually social, economic, political, institutional, or cultural, that enable or otherwise add to the occurrence or persistence of proximate direct threats"; Salafsky et al., 2008 ) in the threat index ( Table 1 ). To broadly measure the severity of threats and pressures, we developed a "threat severity index" using the perceived severity rating of these threats by survey participants. In addition, we aimed to: • Understand the scale at which specific threats occurred. • Determine how prevalent targeted poaching for parts was, and what is known about the target markets and target body parts. • Gather perceptions on past, present and future lion population trends. • Determine the perceived level of resources available to prevent the illegal killing of lions. • Identify the key knowledge gaps whose closure would allow an increased understanding of these threats and, thereby, interventions likely to effectively improve the conservation status of lions. We expected to see variation in threat severity across subpopulations, with subpopulations under higher levels of protection, such as those that are routinely patrolled or fenced, reporting lower threat severity.
Conversely, we anticipated that sites with human-livestock presence, or limited community engagement, would experience elevated levels of threat ( Table 2 ). 2 Methods 2.1 Survey design To obtain information on the perceived anthropogenic threats and pressures to lions across their African range, and on the resources available to reduce anthropogenic mortalities, we conducted a structured questionnaire survey. This was done through online surveys, as this allowed a large-scale investigation to be conducted rapidly and potentially reduced some of the bias that is often present when questions are asked by an interviewer ( Bergen and Labonté, 2020; Stantcheva, 2023 ). The survey was conducted between August 2021 and July 2022, and responses were solicited from park managers and researchers of as many lion subpopulations in Africa as possible. We conducted our survey using the web-based platform SurveyMonkey, sharing the survey link directly with respondents. Respondents were selected based on their role as regular data contributors to the African Lion Database, ensuring they had up-to-date, site-specific knowledge of lion subpopulations. As this study focused on lions in Africa, the lion population in India was excluded. Surveys were completed at a subpopulation level, which we define as the non-transboundary lion area that is clearly separated by either political or formal protection boundaries ( Nicholson et al., 2023b ). One completed survey represented one lion subpopulation. We targeted managers of lion subpopulations within formal protected areas, recognised conservancies and game management areas, hunting concessions, and other wildlife areas. In total, 187 experts were invited to complete the online survey, and responses were received from 145, giving a response rate of 78 %. Ten responses (6.9 %) were removed as the surveys were incomplete. An additional five responses (3.4 %) were removed as the areas they were completed for were confirmed not to have lions. Two surveys (1.4 %) were removed as they were duplications of the same survey. Therefore, completed responses for 132 lion subpopulations were included in this study. The survey was divided into three parts ( Supplementary Material S1 ). Part one gathered information on the area, part two focused on the status of lions and perceived threats to them, and the final section focused on mitigation strategies implemented in the area. A pilot study was not conducted, as our methods were informed by other published studies that conducted similar surveys to obtain information relating to lion conservation and management or threats (e.g., Page et al., 2015 ; Williams et al., 2017 ; Lindsey et al., 2018 ; Robson et al., 2022 ). Thus, we built upon previous work by concentrating on a more in-depth assessment of the severity of threats to lions and how these differ spatially. Perception-based surveys can introduce bias due to individual interpretation and personal experience ( Podsakoff et al., 2003 ). Online self-reporting may also result in selection bias. Reported threat severity and resource availability reflect expert opinion and may not always align with objective measures; however, these limitations are traded off against an inherent lack of comparable range-wide data and the systemic value of these experience-based interpretations of lion conservation in the field.
This survey received ethics clearance from the University of KwaZulu-Natal Human and Social Sciences Research Ethics Committee (approval number: HSSREC/00003076/2021) in accordance with the principles of the Declaration of Helsinki, including voluntary participation, informed consent, confidentiality, and the right to withdraw at any time. 2.2 Threat indices From a list of threats, respondents were required to indicate whether a particular threat had occurred in the area over the past five years and to provide a score on its severity (ranging from 0, where the threat to lions does not occur in the area, to 4, where the threat is so severe that it could potentially result in local population extinction). Three indices were created based on the responses to part two of the questionnaire ( Table 1 ): (1) a Global Threat Index (GTI), which was made up of four threats; (2) a Local Threat Index (LTI), which was made up of 16 threats; and (3) a Total Threat Index (TTI) that combined all 20 threats. The severities for the global and local threat indices were calculated by summing all severity scores in that category, and the total threat index was determined by summing the local and global threat indices (similar to Page et al., 2015 ; Lindsey et al., 2017 ; Robson et al., 2022 ). The maximum total threat score that could be achieved was 80 ( Table 1 ). Higher values, for all indices, indicated more severe threat intensity, while lower scores (i.e., closer to 0) indicated less severity, or a more secure lion population. Severity was defined as low (indices more than one standard deviation below the mean), medium (between low and high), and high (indices more than one standard deviation above the mean). 2.3 Resource availability index An index was developed based on the responses to section three of the questionnaire to determine the resources available to park/area management to reduce the illegal killing of lions. This index was made up of a series of four trichotomous questions where respondents could answer either yes (scoring +1), unsure (scoring 0), or no (scoring −1), based on whether the area management had sufficient funding, staff, correct anti-poaching gear, or enough vehicles. The value of the index, for each of the respondents, was calculated as the sum of the scores of the resource-based questions ( Page et al., 2015 ). 2.4 Statistical analyses As our data for the three indices were not normally distributed (Shapiro-Wilk normality tests: global W = 0.85, p < 0.0001; local W = 0.977, p < 0.05; and total threat index W = 0.968, p < 0.05), Kruskal-Wallis H tests were used to determine differences in the index values within each variable, and statistical significance was set at 0.05. To identify significant differences among groups while minimizing the risk of making incorrect conclusions due to multiple testing, we used Dunn's test with the Holm correction ( Vickerstaff et al., 2019 ). Data analysis and statistical computations were performed using RStudio ( Kronthaler and Zöllner, 2021 ). R (version 4.2.1) was used for programming and data manipulation ( Chambers, 2008 ).
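A minimal R sketch of the index construction and tests in Sections 2.2-2.4, assuming a hypothetical 132 × 20 matrix severity of 0-4 scores (global threats in columns 1-4, local threats in columns 5-20) and a data frame sites with one row per subpopulation; all object and column names are illustrative, not taken from the study's code:

library(FSA)  # dunnTest(): Dunn's post-hoc test with p-value adjustment

# Threat indices: row-wise sums of the 0-4 severity scores
sites$gti <- rowSums(severity[, 1:4])    # Global Threat Index (max 16)
sites$lti <- rowSums(severity[, 5:20])   # Local Threat Index  (max 64)
sites$tti <- sites$gti + sites$lti       # Total Threat Index  (max 80)

# Severity bands: low < mean - 1 SD, high > mean + 1 SD, medium otherwise
m <- mean(sites$tti); s <- sd(sites$tti)
sites$band <- cut(sites$tti, breaks = c(-Inf, m - s, m + s, Inf),
                  labels = c("low", "medium", "high"))

# Resource Availability Index: four trichotomous items scored
# yes = +1, unsure = 0, no = -1, then summed (range -4 to +4)
score <- function(x) c(yes = 1, unsure = 0, no = -1)[as.character(x)]
sites$rai <- score(sites$funding) + score(sites$staff) +
             score(sites$gear) + score(sites$vehicles)

# Normality check, Kruskal-Wallis across regions, Dunn-Holm post hoc
shapiro.test(sites$tti)
kruskal.test(tti ~ region, data = sites)
dunnTest(tti ~ region, data = sites, method = "holm")

Note that dunnTest() adjusts the pairwise p-values with the Holm procedure, which is what allows the regional comparisons reported below to be interpreted at the 0.05 level.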
We assessed whether geographical variables (country and region), protection and governance variables (authority, formal protection), sociological variables (communities living within the area, livestock competition with wildlife, and people's attitudes towards wildlife) or mitigation variables (fencing, area resources, livestock barriers, routine patrols conducted, and community engagement) had any effect on the threat indices. In this study, we aimed to identify which factors had significant effects on the threat indices and thus pose threats and pressures to lion subpopulations ( Table 2 ). By assessing the influence of these variables, we sought to gain a deeper understanding of the complex factors contributing to anthropogenic threats and pressures on lion subpopulations across Africa. 3 Results More than half of surveyed subpopulations were from southern Africa (56 %; n = 74). Within the region, South Africa and Mozambique had the most responses (18 % and 9.9 % respectively). West Africa had the fewest responses (5.3 %; n = 7), but this was to be expected as few subpopulations remain there ( Henschel et al., 2014 ). Eighteen responses were from Central Africa (13.64 %), mostly from the Central African Republic (4.6 %; n = 6) and Cameroon (3.8 %; n = 5). Twenty-five percent of survey responses were from East Africa (n = 33). In total, we surveyed managers and researchers associated with 132 lion subpopulations totalling 1,183,435.26 km², or 75 % of remaining lion range. 3.1 Perceived population trends Most lion subpopulations were perceived to have increased (37.9 %; n = 50) or to have remained stable (37.1 %; n = 49) over the last five years ( Supplementary Figure 2 ). However, 17 % of surveyed subpopulations were perceived to be decreasing (n = 22), while trends were unknown for 8.3 % of subpopulations (n = 11). Generally, most surveyed lion subpopulations were expected to increase (39.4 %; n = 52) or remain stable (37.12 %, n = 49) over the next five years. Decreasing subpopulation trends were anticipated for 12.88 % (n = 17) of Africa's surveyed subpopulations. Central African subpopulations were mostly considered to have decreased in the last five years (38.9 %; n = 7), but future trends appeared more optimistic, as 61 % (n = 11) of surveyed subpopulations were expected to increase in numbers. East African subpopulations were largely considered to have remained stable for the last five years (42.2 %; n = 14), and the same trend was expected over the next five years (36.7 %; n = 12). Most surveyed subpopulations in southern Africa had an increasing trend (47.3 %; n = 35) but were anticipated to become more stable (43.2 %; n = 32). Over the past five years, West African lion numbers have exhibited diverse trends, with two subpopulations perceived to have experienced declines, two increased in numbers, two remained stable, and one subpopulation where trends were unknown. Of concern is that nine (6.8 %) subpopulations were expected to become extinct in the next decade, including five transient populations: Sibiloi National Park (Kenya), Kasanka National Park (Zambia), Coutada 13 (Mozambique), Vwaza National Park (Malawi) and Kasungu National Park (Malawi), and four resident populations: Bale Mountains (Ethiopia), Waza (Cameroon), Benoué (Cameroon) and W (Burkina Faso) National Parks. 3.2 Local Threat Index (LTI) The average LTI for all surveyed subpopulations ( Table 1 and Fig. 2 ) was 19.40 (1 SD ± 9.95, range 1 – 46).
There were significant regional differences in LTI ( Table 3 : χ² (3) = 17.95, P < 0.001), with Central African LTI significantly higher than in other regions (P < 0.05), and East African LTI significantly higher than in southern Africa (P < 0.05). There were significant differences among countries (χ² (25) = 55.56, P < 0.001), but not among subpopulations (χ² (129) = 130.379, P > 0.05). At a country level, Angola had the highest LTI (x̄ 42.00, n = 1), followed by Cameroon (x̄ 36.20 ± 2.28, n = 5) and the Democratic Republic of the Congo (x̄ 34.0 ± 2.65, n = 3). Benin had the lowest LTI (x̄ 11.5 ± 3.5, n = 2), followed by South Africa (x̄ 11.79 ± 5.9, n = 24). Ethiopia's Chebera Churchura NP and Omo NP both had the highest threat indices of all subpopulations (each scoring 46), followed by Mozambique's Coutada 13 (44) and Angola's Luengue – Luiana NP (42). Subpopulations with the lowest LTI were Ongava Game Reserve (1, Namibia), Rifa Zambezi Valley (2, Zimbabwe), Welgevonden Game Reserve (3, South Africa), Thanda Private Game Reserve (3, South Africa), and the Greater Lebombo Transfrontier Conservation Area (3, Mozambique) ( Fig. 1 ). Subpopulations that had communities living within the area had significantly higher LTI than those without ( Table 3 ; χ² (2) = 17.51, P < 0.001). Local threat indices (χ² (2) = 18.09, P < 0.001) were significantly lower for subpopulations that were fully fenced than for those that had no fencing at all (P < 0.001); areas that had partial fencing had significantly lower LTIs than non-fenced areas (P < 0.05) but higher LTI than fully fenced areas (P < 0.05). At a local level, therefore, subpopulations that were enclosed by some level of fencing had significantly lower local threat indices. Most populations surveyed (n = 80, 60.61 %) had no fencing at all, while 39.4 % of populations surveyed had some fencing (21.97 % fully fenced and 17.42 % with partial fencing). None of the surveyed populations in Central or West Africa were fully fenced, and only one population in East Africa (Kenya) was fully fenced. Of the fully fenced populations, 96 % (n = 28) occurred in southern Africa (21 in South Africa). A subpopulation's LTI varied significantly with the perceived level of resources available to managers (χ² (7) = 33.859, P < 0.001), and was significantly higher for subpopulations where resources were inadequate when compared to other resource availability levels ( Table 2 , P < 0.05). Surprisingly, subpopulations where engagement with the community was carried out had a higher LTI (χ² (1) = 8.102, P < 0.05) than those where community engagement was not carried out. However, this might be because community engagement was carried out where communities experienced conflict and/or were perceived to be a threat to lions. The local threat index did not vary significantly with authority (type of governance), formal protection, people's attitudes, or whether livestock barriers were erected (methods to prevent livestock grazing) or routine patrols were conducted ( Table 3 and Supplementary Table 1 ). When looking at local threat severity, most populations experienced medium severity ( Supplementary Table 2 ; n = 86, 65.2 %). Across Africa, 16.7 % (n = 22) of lion subpopulations surveyed faced high local threat severity, with most of Central Africa's populations in this category (n = 11, 61.1 %). Most lion subpopulations in East (n = 23, 69.7 %), southern (n = 51, 68.9 %) and West Africa (n = 6, 85.7 %) were in the medium local severity category.
Ten countries (Cameroon, Central African Republic, Chad, Democratic Republic of the Congo, Ethiopia, Kenya, Tanzania, Angola, Mozambique and Zimbabwe) had subpopulations that were considered to experience high local threat severity ( Supplementary Table 2 ). All lion subpopulations in Cameroon, the Democratic Republic of the Congo and Angola experienced high local threat severity. 3.3 Global Threat Index (GTI) The average GTI for all surveyed subpopulations ( Table 1 and Fig. 2 ) was 3.19 (1 SD ± 3.12) and varied significantly across regions (χ² (3) = 20.68, P < 0.001) ( Table 4 ). Central Africa and West Africa had the highest GTI (5.83 ± 3.54 and 5.14 ± 3.02 respectively), with the GTI for East Africa (3.88 ± 3.61) significantly lower than for Central Africa, and the GTI for southern Africa (2.07 ± 2.13) significantly lower than for all other regions (P < 0.05). GTI differed significantly among countries (χ² (25) = 53.201, P < 0.001) and was highest in Burkina Faso and Niger (GTI of 9), followed by the Democratic Republic of the Congo (8.33). GTI was lowest in Gabon (0, n = 1), Malawi (1 ± 2.13, n = 4) and Rwanda (0, n = 1). Twenty-one subpopulations received a GTI of 0, including five in South Africa, five in Mozambique, three in Tanzania and Zimbabwe, two in Kenya, and one each in Namibia, Kenya and Malawi. Four subpopulations had the highest global indices: Ethiopia's Chebera Churchura NP and Omo NP (index of 12 each), followed by Cameroon's Waza NP and Zimbabwe's Bubye Valley Conservancy (index of 11 each). GTI varied significantly depending on whether livestock competition with wildlife occurred, and whether livestock were grazing in the area ( Table 4 and Supplementary Table 1 ). Significant differences in GTI were otherwise only observed when comparing levels of whether respondents felt they were sufficiently equipped to reduce the illegal killing of lions ( Table 3 ). Respondents who strongly disagreed that sufficient resources were available had significantly higher GTI than other responses (P < 0.001). GTI did not vary significantly with different levels of authority, formal protection, or people's attitudes towards wildlife ( Table 4 and Supplementary Table 1 ). In terms of mitigation, whether livestock barriers, community engagement or routine patrols were conducted did not significantly affect GTI ( Table 4 ). When looking at GTI, most subpopulations were perceived to experience medium severity ( Supplementary Table 1 ; n = 56, 42.42 %). Half of Central Africa's subpopulations ( Supplementary Table 2 ; n = 9, 50 %) and 29 % (n = 2) of West African lion subpopulations were perceived to experience high GTI. Medium GTI was perceived for most subpopulations in West Africa (71 %; n = 5), 33.3 % of East African subpopulations (n = 11), and 45.9 % of southern African subpopulations (n = 34). All lion subpopulations in Burkina Faso, Niger, Angola, South Sudan and the Democratic Republic of the Congo experienced high GTI ( Supplementary Table 2 ). 3.4 Total Threat Index (TTI) The average TTI for all subpopulations was 22.6 (± 1 SD 12.06; range: 1–58, Fig. 3 and Fig. 4 ).
TTI differed significantly across regions (χ² (3) = 22.02, P < 0.0001), with the TTI in Central Africa (33.83 ± 1 SD 11.22) significantly higher than in East Africa (P < 0.005; TTI = 24.91 ± 12.85), West Africa (P < 0.05; TTI = 23 ± 6.83) and southern Africa (P < 0.001; TTI = 18.79 ± 10.36); the latter was also significantly lower than East Africa. Angola had the highest overall threat index (50.0 ± 0, n = 1), followed by the Democratic Republic of the Congo (42.33 ± 3.05, n = 3) and Cameroon (41.40 ± 6.02, n = 5). Two countries in southern Africa (South Africa (13.79 ± 6.88, n = 24) and Namibia (14.50 ± 9.83, n = 6)) and one in East Africa (Rwanda, 14.00 ± 0, n = 1) had the lowest TTI. TTI did not vary significantly among subpopulations (χ² (129) = 129.69, P > 0.05). Ethiopia's Chebera Churchura NP and Omo NP both had the highest TTI of all subpopulations (each scoring 58), with Angola's Luengue – Luiana NP and Cameroon's Waza NP next highest (50 and 48 respectively). Five subpopulations had the lowest TTI, all in southern Africa and fenced: Ongava Game Reserve (1, Namibia), Rifa Zambezi Valley (2, Zimbabwe), Welgevonden Game Reserve (3, South Africa), Thanda Private Game Reserve (4, South Africa) and the Greater Lebombo Transfrontier Conservation Area (4, Mozambique). Similar trends were observed in the TTI as in the LTI and GTI ( Supplementary Table 1 ). Region, country, level of fencing, communities living in the area, livestock grazing within the area, livestock competition with wildlife and the level of park resources all significantly affected TTI. A full description is available in the supplementary material . 3.5 Total threat severity and Resource Availability Index (RAI) Populations in Central Africa mostly faced high total threat severity ( Fig. 3 ) but with high resource availability (n = 5, 27.78 %). East African lion populations mostly faced medium total threat severity and low resource availability (n = 12, 39.19 %); the same was experienced in southern Africa (n = 29, 39.19 %). Interestingly, West Africa experienced medium total threat severity and high resource availability (n = 4, 57.14 %). Almost half of the surveyed lion populations (n = 65, 49.24 %) are reportedly not equipped with sufficient resources to adequately reduce the illegal killing of lions. Only a quarter of surveyed areas were perceived to have high resource availability. Almost half of the surveyed populations in southern Africa (n = 35, 47.3 %) had low resource availability. The Resource Availability Index (RAI) was lowest, indicating the least resources, in East Africa (−2 ± 2.94), followed by West Africa (−1.71 ± 3.09; Fig. 4 ). Southern Africa and Central Africa, while still not sufficiently equipped, scored −0.83 ± 3.4 and −0.17 ± 3.60 respectively ( Fig. 4 ), with Central Africa perceived to have more resources than southern Africa. Most lion populations surveyed in this study (n = 47, 35.61 %; Table 3 ) experienced medium total threat severity and low resource availability ( Fig. 4 and Supplementary Table 2 ). Cameroon, the Democratic Republic of the Congo, Ethiopia and Angola all had high total threat severity scores ( Fig. 4 ), while Benin, Rwanda, Namibia and South Africa had low total threat severity scores ( Fig. 4 ). 3.6 Perceived poaching metrics Lions were not considered safe from poachers in almost half of surveyed subpopulations (47.7 %, n = 63; Fig. 5),
and were considered safe in only 39 % (n = 52) of subpopulations. Regionally, lions were mostly considered unsafe from poachers (West Africa 42.87 % of subpopulations, Central Africa 61 %, East Africa 42.4 % and southern Africa 47.3 %). Of the 66 % of surveyed subpopulations that experienced poaching of lions, 20.7 % (n = 18) had an increasing poaching trend, 23 % (n = 20) felt that poaching trends were unchanged, 42.5 % (n = 37) were unsure of trends, and only 13.8 % (n = 12) had decreasing trends. However, detailed records of lion mortalities were reportedly only kept for 57.6 % of lion subpopulations surveyed (n = 76); 28.8 % (n = 38) did not keep records, and 13.6 % (n = 18) of respondents for those subpopulations were unsure. 3.7 Perceived threats of poaching across lion range Lions were poached for parts in 30 % (n = 40) of surveyed subpopulations ( Table 5 , Fig. 6 ). The parts most frequently sought were skin, claws, teeth, and fat. Most respondents were unsure of the market for these parts (n = 15, 37.5 %), and only two respondents felt they were solely for the international market. Generally, it was perceived that parts were collected for both local and international markets (n = 12, 30 %). However, 27.5 % (n = 11) of those respondents who thought lions were poached for parts thought that the parts were for local markets within the country. Based on the responses, uses for parts ranged from strengthening the body and curing diseases to increasing power and improving the immune system ( Table 5 , Table 6 ). Based on the severity scores, the most severe local threats across Africa were perceived to be a lack of, or inconsistent, funding for operations (average severity score of 1.79; Table 7 , Fig. 7 and Supplementary Figure 1 , which graphs the average severity score of each local and global threat for Africa and each region), human encroachment for agriculture or settlement (1.65), and a loss of natural prey base resulting from poaching (1.62) ( Table 7 ). The most severe threats in Central Africa were perceived to be human encroachment (2.67), loss of prey base from poaching (2.67) and retaliatory or pre-emptive killing of lions to protect livestock (2.67). In East Africa, the three most severe threats were a lack of or inconsistent funding (2.03), retaliatory or pre-emptive killings (1.91) and human encroachment (1.91). In southern Africa, the three most severe threats were lack of or inconsistent funding (1.51), human encroachment (1.38) and development of infrastructure adjacent to or within the area (1.33). In West Africa, severity of threats was perceived to be highest for the loss of natural prey base due to poaching (2.71), targeted poaching of lions for their parts (2.29), and small, isolated lion populations (1.86). Central Africa had the highest average severity score (1.75), followed by East Africa (1.32), West Africa (1.12) and southern Africa (1.05). The most severe global threat for Africa was perceived to be climate change (1.23; Table 7 and Fig. 7 ), and this was the most severe threat within East Africa (1.64) and southern Africa (0.93). Within Central Africa, the most severe global threat was civil unrest or local war that poses direct threats to lions (2.17), while in West Africa civil unrest and local war, which reduces the effectiveness of protected area management, was perceived to be the most severe threat (2.14).
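Formally, the figures quoted in this subsection are mean perceived severity scores. For threat t in a region r with n_r surveyed subpopulations (notation added editorially):

\[ \bar{s}_{t,r} = \frac{1}{n_r} \sum_{i=1}^{n_r} s_{t,i}, \qquad s_{t,i} \in \{0, 1, 2, 3, 4\} \]

so, for example, the Central African score of 2.67 for human encroachment (n_r = 18) corresponds to a summed severity of roughly 2.67 × 18 ≈ 48 across that region's surveyed subpopulations.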
For global threats, Central Africa had the highest average severity (1.46), followed by West Africa (1.29), East Africa (0.97) and southern Africa (0.52). 4 Discussion We used questionnaire surveys of park managers and researchers to assess the perceived severity of threats to lion subpopulations across the African continent. Lion subpopulations in Central Africa had the highest perceived threat severity scores. At the local scale, human encroachment, lack (or depletion) of natural prey, retaliatory killing by farmers, and lack of funding were seen as the primary threats to lions. On a global scale, climate change was seen as the primary threat in East and southern Africa, while civil unrest or local war was considered the greatest risk in Central and West Africa. Currently, limited population data exist for lions across their range, especially data that are reliable and comparable ( Nicholson et al., 2023a ). However, all evidence suggests that declines in African lion populations are expected to continue, and the recent Red List Assessment estimated that lion populations in Africa have a 41 % probability of declining by 33 % within three lion generations ( Nicholson et al., 2023a ). Regionally, the probability of a 33 % decline within three generations was estimated to be 74 % in West Africa, 36 % in East Africa, 33 % in Central Africa, and 20 % in southern Africa ( Nicholson et al., 2023a ). Our study found similar trends, with East and southern Africa perceived to have more optimistic past and future population trends compared to Central and West Africa. As human populations grow and the interface between wildlife and people becomes more pronounced, it is inevitable that anthropogenic threats and pressures on the natural environment will continue ( Nazarevich, 2015; Rull, 2022 ). Understanding the anthropogenic threats and pressures on individual subpopulations is critical to developing and implementing effective conservation action ( Bauer et al., 2020 ). Where subpopulations face combinations of threats, more targeted and intense conservation action will be required. This is especially so in areas where resources are inadequate, such as Cameroon, the Democratic Republic of the Congo, Ethiopia and Angola, as these were perceived to experience high threat severity, to have insufficient resources, and to hold isolated populations ( Nicholson et al., 2023 , this study). Subpopulations within these countries should therefore be prioritised for conservation funding and action. We identified nine subpopulations expected to become extinct in the next decade; as the lion is listed as Vulnerable, this is of concern, and these areas should also be prioritized. Considering that the lion in West Africa is regionally listed as Critically Endangered ( Henschel et al., 2014 ), it would have been expected that threat severity would be significantly higher and resource availability low. However, we found the opposite, which could be due to optimistic perceptions of future lion subpopulation trends. In addition, considerable attention has been given to West African lion subpopulations over the past two decades ( IUCN, 2006a; Bauer and Nowell, 2010; Henschel et al., 2010, 2014 ), potentially improving conservation action and outcomes, thereby reducing threats. However, our results indicate that subpopulations in West Africa still have inadequate resource availability and medium to high severity, indicating that this region should continue to be prioritized for conservation action.
As expected, areas where communities lived alongside lions, and where livestock grazing occurred, particularly where it was illegal, had higher severity scores than those without, indicating that the presence of livestock can play a significant role in increasing threats (see Patterson et al., 2004; Schuette et al., 2013; Tumenta et al., 2013; Boulhosa and Azevedo, 2014; Carvalho et al., 2015 ). As a result, increased threats such as human-lion conflict ( Blackburn et al., 2016; Lindsey et al., 2017; Leflore et al., 2020; Gueye et al., 2022 ), disease transmission ( Miguel et al., 2017 ), and habitat degradation ( Bauer et al., 2020; Frank, 2023 ) may occur. Fencing effectively reduces human-lion conflict, thereby reducing lion mortalities through retaliatory or pre-emptive killings ( Packer et al., 2013; di Minin et al., 2021 ). Indeed, we did find that areas that were fenced had significantly lower local threat severity scores. Several direct threats were identified in this study, including the prevalence of illegal grazing, which may result in increased human-lion conflict, with more lions killed ( Lindsey et al., 2017; Bauer et al., 2020 ). Illegal grazing is evidently a more notable problem in West Africa, where it occurred in all seven surveyed subpopulations. Poaching for parts has typically been understood to occur in select southern African subpopulations, for example, Limpopo National Park in Mozambique, where 35 % of human-caused mortalities were due to targeted poaching for parts ( Everatt et al., 2019 ). However, it is concerning to note how much more widespread this threat is, with lions being poached for parts in the W-Arly-Pendjari Complex (Niger, Benin, Burkina Faso), the Benoué Complex (Cameroon), the Eastern Central African Republic Wilderness (including Chinko Reserve), Kafue National Park (Zambia), Boma National Park (South Sudan), and Gambella National Park (Ethiopia). While this threat could potentially contribute to local population declines, as it has done in Limpopo National Park ( Everatt et al., 2019; Mole and Newton, 2020 ), there is limited information available on this threat and the scale at which it is happening. We recommend that more attention is given to understanding this threat to determine its local-level population impacts and scale, particularly in West and Central Africa. The impacts of civil war and unrest, most severe in Central and West Africa, generate insecurity and lawlessness, which amplify existing threats and can also reduce the efficiency of protected area management ( Lhoest et al., 2022; Aglissi et al., 2023 ). This has led to local extinctions of lions in Comoé National Park (Côte d'Ivoire), where, following the civil war in 2002, the park was abandoned, intensifying all existing threats and causing the demise of many species within Comoé ( Aglissi et al., 2023 ). This is an extremely challenging threat to mitigate, and it requires large-scale intervention from international stakeholders, not just from within the country ( Lhoest et al., 2022 ). Where management has sufficient resources to protect and conserve lions, the perceived threats to them were lower, and where resources are limited, lions possibly experience more negative human pressures and, as a direct consequence, have decreasing numbers ( Lindsey et al., 2017 ).
Where area management is considered to be poor, or funding is insufficient, existing threats and pressures are exacerbated ( Lindsey et al., 2017; Bauer et al., 2020 ). Resources for conservation and protected area management are disproportionately distributed ( Lindsey et al., 2017 ), and this was evident in our study. The lack of, or inconsistent, funding for area operations was perceived to be the most severe threat at a continental level, and for both southern and East Africa at a regional level. The challenge of insufficient, or a lack of consistent, funding is widespread ( Lindsey et al., 2018; Bauer et al., 2020; Nicholson et al., 2023b ). Protected areas with lions collectively require between USD 1.2 and USD 2.4 billion annually, but Lindsey et al. (2018) calculated that these areas only receive USD 381 million per annum. Between 88 % and 94 % of protected areas with lions are insufficiently funded ( Lindsey et al., 2018 ). Nicholson et al. (2023b) estimated that to adequately safeguard all areas with lions, approximately USD 3 billion would be required annually. Generating greater investment for areas with lions would not only increase conservation success for lions, but would have far-reaching social, economic and ecological benefits ( Lindsey et al., 2018; Nicholson et al., 2023b ). Areas that are well funded and have sufficient resources (e.g., vehicles, anti-poaching equipment, staff) are able to reduce the impact of threats, thereby protecting wildlife populations ( Bauer et al., 2020 ). One caveat to note in our study regarding the level of resources available is that we reported on perceptions. Most surveyed subpopulations were perceived to be under-resourced, but it must be noted that the measures respondents used to define sufficient resources were based purely on their own opinion. Data that are gathered through questionnaire-type surveys may present several challenges. Firstly, responses gathered in this manner are often based on personal experiences, perceptions and opinions rather than quantitative data, and as such are subjective ( Stantcheva, 2023; Podsakoff et al., 2003 ). Respondents may be influenced by personal biases, social desirability, or lack of expertise, leading to inaccuracies in their responses ( Bergen and Labonté, 2020; Stantcheva, 2023; Podsakoff et al., 2003 ). However, caution was taken in the interpretation of the data, and we have indicated in our aims and throughout the paper that this study presents the perceptions of the experts we surveyed. Importantly, responses were calibrated by having the same metric of severity across respondents, who are likely to have access to the most up-to-date information on the sampled sites. To date, most studies that assess threats are subpopulation specific ( Hazzah et al., 2009; Rosenblatt et al., 2014; Ogutu et al., 2016; Everatt et al., 2019; Aebischer et al., 2020 ), and we present the first continental overview of threats facing Africa's lions; we see the comparison across subpopulations not as a weakness but as a strength. Further, we acknowledge the qualitative nature of this analysis and the limitations in disentangling the relationships between threats, vulnerability, and adaptive capacity. Several variables used in the analysis (such as area resources, formal protection, fencing, and community presence) may be endogenous and correlated with perceived threat severity ( Kock et al., 2021 ).
Future studies should incorporate more detailed data on park characteristics (e.g., size, age, patrol effort) and apply advanced spatial or causal modelling approaches to better understand the dynamics between threats and adaptive capacity. 4.1 Conclusion We provide preliminary insight into the threats experienced across lion subpopulations in Africa, and how these differ across regions. We highlight the complex dynamics affecting lion subpopulations in Africa, emphasizing the importance of considering regional and local contexts in conservation efforts. We have provided insights into the perceived severity of these threats for a large subset of Africa’s lion subpopulations, and we recommend that this work be developed further using more robust measures of severity and impact. The threat indices developed in this study could serve as a template to support current and future research efforts, conservation planning and adaptive management of lion landscapes. The assessment of these threats should be integrated into regional conservation strategies, and we recommend that these be updated, as they were last done in 2006 ( IUCN, 2006b, 2006a; Nowell et al., 2006 ). In addition, very few mitigation and conservation strategies include aspects of climate change, and we recommend that measures such as habitat restoration and climate change adaptation initiatives be considered and included to enhance the resilience of lion subpopulations. In conclusion, this research provides a nuanced understanding of the current state of lion subpopulations in Africa, shedding light on their geographical distribution, perceived trends, and the multifaceted threats they face. The regional disparities in responses underscore the importance of considering diverse ecosystems and conservation challenges across the continent. Our findings emphasize the need for tailored conservation strategies, recognizing the unique challenges faced by different regions. Ethical Statement This survey received ethics clearance from the University of KwaZulu-Natal Human and Social Sciences Research Ethics Committee (approval number: HSSREC/00003076/2021) in accordance with the principles of the Declaration of Helsinki, including voluntary participation, informed consent, confidentiality, and the right to withdraw at any time. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.gecco.2025.e03760 . Appendix A Supplementary material Supplementary material
REFERENCES:
1. AEBISCHER T (2020)
2. AGLISSI J (2023)
3. ALMEIDA J (2025)
4. ARAUJO M (2005)
5. BAUER H (2010)
6. BAUER H (2015)
7. BAUER H (2020)
8. BAUER H (2018)
9. BECK J (2019)
10. BERGEN N (2020)
11. BLACKBURN S (2016)
12. BOULEY P (2018)
13. BOULHOSA R (2014)
14. BRUNER A (2001)
15. BRUSKOTTER J (2014)
16. CARTER N (2018)
17. CARVALHO E (2015)
19. COALS P (2022)
20. CREEL S (2016)
21. CROES B (2011)
22. DANCER A (2022)
23. DI MININ E (2021)
24. DICKMAN A (2019)
25. EVERATT K (2019)
26. FELIX N (2022)
27. FRANK L (2023)
28. FYNN R (2016)
29. GREEN J (2022)
30. GROOM R (2014)
31. GUEYE M (2022)
32. HAZZAH L (2009)
33. HAZZAH L (2014)
34. HENSCHEL P (2010)
35. HENSCHEL P (2014)
36. HULME M (2001)
37. IKANDA D (2008)
38. IUCN (2006)
39. IUCN (2006)
40. KISSUI B (2008)
41. KOCK F (2021)
43. LEFLORE E (2020)
44. LHOEST S (2022)
45. LINDSEY P (2013)
46. LINDSEY P (2017)
47. LINDSEY P (2018)
48. LINDSEY P (2021)
49. LOVERIDGE A (2007)
50. LOVERIDGE A (2020)
51. LOVERIDGE A (2022)
52. LOVERIDGE A (2023)
53. LUISELLI L (2019)
54. MAKUYANA P (2018)
55. MIGUEL E (2017)
56. MOLE K (2020)
57. MONTGOMERY R (2023)
58. MUDUMBA T (2021)
59. MWEETWA T (2018)
60. NAZAREVICH V (2015)
61. NICHOLSON S (2023)
63. NOWELL K (2006)
64. OGUTU J (2016)
65. PACKER C (2011)
66. PACKER C (2013)
68. PAGE S (2015)
69. PAGE-NICHOLSON S (2017)
70. PATTERSON B (2004)
71. PETERSON A (2014)
72. PODSAKOFF P (2003)
73. RIGGIO J (2013)
74. ROBSON A (2022)
75. ROSENBLATT E (2014)
76. RULL V (2022)
77. SALAFSKY N (2008)
78. SCHUETTE P (2013)
79. SIBANDA L (2021)
80. STANTCHEVA S (2023)
81. THUILLER W (2004)
82. TUMENTA P (2013)
83. VICKERSTAFF V (2019)
84. WILLIAMS V (2017)
85. WOLF C (2017)
86. WOODROFFE R (2000)
88. ZIERVOGEL G (2014)
|
10.1016_j.celrep.2024.113690.txt
|
TITLE: Documenting the diversity of the Namibian Ju|’hoansi intestinal microbiome
AUTHORS:
- Truter, Mia
- Koopman, Jessica E.
- Jordaan, Karen
- Tsamkxao, Leon Oma
- Cowan, Don A.
- Underdown, Simon J.
- Ramond, Jean-Baptiste
- Rifkin, Riaan F.
ABSTRACT:
We investigate the bacterial and fungal composition and functionality of the Ju|’hoansi intestinal microbiome (IM). The Juǀʼhoansi are a hunter-gatherer community residing in northeastern Namibia. They formerly subsisted by hunting and gathering but have been increasingly exposed to industrial dietary sources, medicines, and lifestyle features. They present an opportunity to study the evolution of the human IM in situ, from a predominantly hunter-gatherer to an increasingly Western urban-forager-farmer lifestyle. Their bacterial IM resembles that of typical hunter-gatherers, being enriched for genera such as Prevotella, Blautia, Faecalibacterium, Succinivibrio, and Treponema. Fungal IM inhabitants include animal pathogens and plant saprotrophs such as Fusarium, Issatchenkia, and Panellus. Our results suggest that diet and culture exert a greater influence on Ju|’hoansi IM composition than age, self-identified biological sex, and medical history. The Ju|’hoansi exhibit a unique core IM composition that diverges from the core IMs of other populations.
BODY:
Introduction The human gastrointestinal tract (GIT) harbors a dynamic population of bacteria, archaea, fungi, protozoa, and viruses, i.e., the intestinal microbiota. The human intestinal microbiome (IM) performs critical functions in digestion, development, and immunity. 1 Dysbiosis of the IM has been associated with inflammatory and auto-immune diseases, including allergies, 2 obesity, 3 diabetes, 4 and inflammatory bowel disease. 5 In addition, the IM played a key role in facilitating human adaptation to novel environments, in part enabling the global dispersal of our species. 6 The impacts of changing diets, lifestyles, and environmental exposure to novel pathogens and pollutants on the human IM, therefore, relate directly to long-term human health and well-being. 7 Prior to the Neolithic Age, humans subsisted solely by hunting and gathering. Although the lifestyle changes associated with the advent of sedentary communities and farming significantly impacted hunter-gatherer IM taxonomic composition and metabolic capacity, the impacts of the Industrial Revolution, from the 1700s onward, and the resultant process of global “Westernization” on the human IM are particularly marked. 8, 9, 10, 11, 12, 13 It is widely held that contemporary hunter-gatherers provide insight into the configuration of the preindustrial human IM, owing largely to their comparatively limited exposure to Western lifestyle factors such as novel food sources, allopathic medication, and pollutants. However, even communities such as the Tanzanian Hadza hunter-gatherers, 14 the Venezuelan Yanomami Amerindians, 15 the BaAka in the Central African Republic, 16 and the Arctic Inuit 17 are and have been subject to the influences of Westernization, including urbanization and industrialization. As such, these communities represent a small window of opportunity to study the evolution of the IM during the transition from a non-industrialized, rural subsistence-based lifestyle to an urban-industrialized one. 18, 19 Despite these transformations, differences in IM adaptations to diverse lifestyles remain prevalent between urban-industrialized societies and non-industrialized rural populations. Non-industrialized rural populations, whose subsistence is based primarily on foraging and small-scale subsistence-based agriculture and pastoralism, typically adhere to a high-fiber, low-fat, and low-sugar diet and generally have limited access to allopathic medication. Moreover, in these contexts, people typically tend to associate more closely with one another, their pets, livestock and wildlife, and environmental microbes. 20, 21, 22 In contrast, Western diets, most common among urban-industrialized societies, tend to comprise processed, high-fat, low-fiber foods, combined with increased sedentarism and easier access to allopathic medication. Western communities typically tend to experience less exposure to the natural environment and associated microbes. These socioeconomic and subsistence-related factors are understood to account for observed compositional differences between the IMs of rural non-industrial and urban industrial populations. 23, 24 Non-industrialized populations tend to harbor taxonomically more diverse IMs, containing higher abundances of short-chain fatty acid (SCFA)-producing bacteria, such as Prevotella, Succinivibrio, and Treponema. 24, 25 
The changes in taxonomic composition and metabolic capacity, including the loss of various “cornerstone” IM members resulting from urbanization and industrialization, are suspected to contribute to the increasing prevalence of inflammatory diseases typically seen in Westernized populations. 7, 26, 27, 28 Studies concerning the Hadza, 14 Amerindians, 15 the BaAka, 16 and the Inuit 17 have provided insight into the IM composition of non-industrialized forager-farmer societies. To date, comparable research has not been conducted in Namibia. Moreover, few studies explicitly investigate the influence of lifestyle factors, such as medical history and residential mobility, on observed IM taxonomic and metabolic variability. 18 The Ju|’hoansi of the Nyae Nyae To elucidate the taxonomic composition and metabolic capacity of the bacterial and fungal IMs of a former southern African hunter-gatherer community, we sequenced and analyzed the 16S rRNA gene (V3-V4) and the ITS1-ITS2 (internal transcribed spacer) region of fecal samples derived from 40 Juǀʼhoansi (pronounced “Zhu-t-wasi”) hunter-gatherers. The Ju|’hoansi, formerly known as the “!Kung Bushmen,” inhabit the Nyae Nyae Conservancy (NNC) in northeastern Namibia. Established in 1998, the NNC covers ∼9,000 km² and is home to 2,300 Juǀʼhoansi and Bantu-speaking Herero agro-pastoralists. Even though the Juǀʼhoansi have long been portrayed as “pristine” hunter-gatherers representative of prehistoric southern African foragers, it is emphasized that Neolithic Iron Age farmers have sporadically entered the Kalahari since at least 500 AD. Contact with agrarian societies, including the associated dietary aspects, has long been part of Kalahari hunter-gatherer history. Despite long-standing interactions with Bantu-speaking farmers and, since the 19th century, European hunters and traders, many Juǀʼhoansi do maintain hunting and gathering practices, particularly as Nyae Nyae is one of the few remaining regions where an indigenous African community retains rights to their lands and where they may still forage throughout the year. 29 The Ju|’hoansi have experienced a dietary transition, increasing their consumption of domestic plants, milk, and meat from livestock while reducing their dependence on wild plants and animals. Prior to the 1960s, the Ju|’hoansi subsisted by hunting and gathering no less than 85 species of wild plant foods. 30 Up to 80% of all identified food species consumed comprised plants, with the remainder of their diet comprising meat obtained via the hunting and trapping of approximately 50 different animal species. 31 From the 1960s onwards, several stores selling food, liquor, and other facilities, such as a medical clinic, were introduced to the NNC. This exposed the Ju|’hoansi to Western commodities, including sugar, canned foods, coffee, tea, and maize porridge. Since the 1980s, several foundations have assisted the Ju|’hoansi to plant vegetable gardens and raise livestock. 32 Such initiatives include the planting of papaya (Carica papaya), beetroot (Beta vulgaris), carrots (Daucus carota), onions (Allium cepa), and tomatoes (Solanum lycopersicum) and the ownership of cattle and goats. 
Currently, following the onset of the summer rains in December, the Juǀʼhoansi diet largely comprises “bush food”: various species of geophytes termed “wild onions” (such as Babiana, Dipcadi, Eulophia, etc.), wild melons (e.g., Citrullus lanatus and Cucumis hookeri), mongongo nuts (Schinziophyton rautanenii), Grewia berries, baobab (Adansonia digitata) and marula (Sclerocarya caffra) fruits, tree gums (e.g., Acacia, Combretum, and Terminalia), mushrooms (e.g., Terfezia), and honey. By July, foraging becomes less important as natural resources become less abundant, although rhizomes and Acacia tree resins are still collected. The Ju|’hoansi are very fond of meat and will consume it at every opportunity. 33 Hunting and trapping provide the Ju|’hoansi with kori bustard (Ardeotis kori), helmeted guineafowl (Numida meleagris), steenbok (Raphicerus campestris), springhare (Pedetes capensis), porcupine (Hystrix africaeaustralis), and other taxa. During the dry season, when foraging opportunities become scarcer, the Ju|’hoansi will more frequently hunt and trap animals. Sometimes, if their hunting is successful, they will eat meat up to three times a day. Given these seasonally dependent dietary variations, it is probable that the taxonomic composition of the Ju|’hoansi IM is influenced by seasonality, although this falls outside the scope of this analysis. It must be noted that sugar, tea, coffee, and, rarely, chocolate form part of their diet throughout the year, and increasing amounts of chili, pepper, and salt are consumed. Food sources available from the stores in Tsumkwe, the central village, have replaced a significant proportion of foraging as the primary means of subsistence; consequently, the Ju|’hoansi have become reliant on a combination of hunter-gatherer and contemporary market-based subsistence strategies. 34, 35 We aimed to determine how observed taxonomic variations in the Juǀʼhoansi IM might relate to eight variables, namely (1) the ages of research participants, (2) their former use of antibiotic treatment for tuberculosis, (3) their self-identified biological sex (i.e., male or female), (4) whether diarrhea is or had been experienced following the consumption of certain foods, (5) whether participants have ever experienced an intestinal infection, (6) their former or current use of malaria medication, (7) their exposure to local, regional, and international travel, and (8) the village of primary residence of each research participant (i.e., Duinpos, Mountain Pos, Den/ui, and !Om!o!o). We also aimed to ascertain whether a core bacterial and fungal Juǀʼhoansi IM could be identified and the degree to which this might approximate the core IM identified on a global scale. Results Characterizing the Ju|’hoansi IM by 16S rRNA and ITS sequencing In addition to the 40 fecal samples, two control samples, namely KIT-CTRL (kit control; i.e., the buffers of the used extraction kit) and CON-CTRL (a sampling container control), were analyzed. A total of 4,679,902 16S forward and reverse reads were imported into QIIME2-2021.2 and merged, resulting in a mean read count of 38,031 reads per sample. A total of 5,938,170 ITS forward reads were imported, resulting in a mean read count of 88,116 reads per sample. Six individuals were removed from the study on account of being outliers following quality control, resulting in 34 individuals. After filtering and quality control, 4,184 bacterial ASVs (amplicon sequence variants) remained. 
The fungal reads were clustered due to insufficient resolution for an ASV-level analysis, resulting in 167 OTUs (operational taxonomic units). 36 ASV/OTU and taxonomy tables were imported into R. Contaminant ASVs/OTUs were identified using decontam 37, 38 based on their prevalence in the control samples. Twelve ASVs from three bacterial species were identified as contaminants: Streptococcus salivarius, Parabacteroides merdae, and members of the Eubacterium coprostanoligenes group. Fungal contaminant identification yielded four OTUs, namely Malassezia globosa, Pleosporales sp., Saccharomycetales sp., and Candida albicans. Following the removal of these contaminants, 17 bacterial phyla were identified, with Firmicutes (66%) and Bacteroidota (25%) being the two most abundant, resulting in a Firmicutes/Bacteroidota (F/B) ratio of ∼2.64. Other phyla included Proteobacteria (7.4%), Spirochaetota (0.84%), and Actinobacteria (0.29%) ( Figure 1 ). In total, 125 bacterial genera and 120 bacterial species were identified in the Juǀʼhoansi IM, with the top five genera comprising Prevotella (23%), Blautia (9.53%), Faecalibacterium (7%), Succinivibrio (6%), and Ruminococcus (5%). Treponema occurred at an abundance of 0.42%. The two most abundant fungal phyla were Ascomycota (54%) and Basidiomycota (46%), with Chytridiomycota (0.01%) comprising the remainder ( Figure 2 ). In total, 82 fungal genera representing 95 species were identified, with the top three genera comprising Malassezia (21%), Candida (20%), and Naganishia (14%). Community composition and differentially abundant taxa of the Ju|’hoansi IM No statistically significant differences in α-diversity were detected between groups for the variable factors tested, i.e., age, biological sex, mobility, medication use, medical history, and the village of primary residence. Bacterial and fungal combined β-diversity was measured using the Bray-Curtis metric, and a significant difference was evident for communities between !Om!o!o and Mountain Pos (p = 0.004), Den/ui and Duinpos (p = 0.002), and Den/ui and Mountain Pos (p = 0.001) using ANOSIM ( Figure 3 A). β-Diversity was also significantly different when considering bacterial and fungal communities individually ( Figure S1 ). Differentially abundant genera were identified using the ALDEx2 Welch’s t test. Rikenellaceae RC9 gut group was more abundant in Mountain Pos than !Om!o!o (p = 0.02). Cladosporium was more abundant in !Om!o!o than Duinpos (p = 0.09) and more abundant in Den/ui than Duinpos (p = 0.04). Candida was more abundant in Duinpos than Den/ui (p = 0.08) ( Figure 3 B). Differential abundance for travel, the use of malaria medication, and whether participants experienced frequent diarrhea could not be tested due to class imbalance. There were no differentially abundant genera between participants of different ages or biological sexes, or by whether participants had experienced intestinal infections. We observed an interaction effect between the village of primary residence and antibiotic usage: most Duinpos (88%) and Den/ui (100%) villagers had used antibiotics, while far fewer villagers from !Om!o!o (13%) and Mountain Pos (50%) made use of antibiotics (chi-squared p = 0.0004). Core bacterial and fungal genera of the Ju|’hoansi IM The ongoing search for a core IM (a set of taxa shared across human populations) will further our understanding of the evolutionary pressures that govern host-microbe interactions, as well as the organization and functional importance of these interactions. 39, 40 
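Before turning to the core IM, the import and contaminant-screening steps described above can be sketched in R. This is a minimal sketch, not the authors' script: the qiime2R route, all file names, and the is_control metadata column are illustrative assumptions; only the decontam call and its 0.1 prevalence threshold come from the methods.

library(qiime2R)   # one possible route from QIIME2 artifacts into R (assumption)
library(phyloseq)
library(decontam)

# Hypothetical artifact and metadata file names
asvs     <- read_qza("table-16S.qza")$data
taxonomy <- parse_taxonomy(read_qza("taxonomy-16S.qza")$data)
meta     <- read.delim("metadata.tsv", row.names = 1)

ps <- phyloseq(otu_table(asvs, taxa_are_rows = TRUE),
               tax_table(as.matrix(taxonomy)),
               sample_data(meta))

# Prevalence-based contaminant screen against KIT-CTRL and CON-CTRL at the
# 0.1 threshold used in this study; `is_control` is a hypothetical logical
# column flagging the two control samples
contam   <- isContaminant(ps, method = "prevalence",
                          neg = sample_data(ps)$is_control, threshold = 0.1)
ps_clean <- prune_taxa(!contam$contaminant, ps)                          # drop contaminants
ps_clean <- prune_samples(!sample_data(ps_clean)$is_control, ps_clean)   # drop controls

# Convert to relative abundance for the downstream analyses
ps_rel <- transform_sample_counts(ps_clean, function(x) x / sum(x))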
The Ju|’hoansi bacterial and fungal core IM was elucidated at 90% (hard core), 70% (medium core), and 50% (soft core) prevalences, and a detection threshold of 0.1% was used to exclude very rare taxa from the core IM. The Ju|’hoansi bacterial and fungal intestinal medium-core IMs comprised five and six taxa ( Figure S2 ), respectively, whereas the soft core consisted of 11 bacterial and seven fungal taxa ( Figure 4 ). Noticeable were the high relative abundances of Blautia and Malassezia in the medium core and of Prevotella, Faecalibacterium, Malassezia, and Naganishia in the soft core. No intestinal hard core was detected at a 90% prevalence cutoff. A few bacterial genera uniquely comprised the soft-core microbiome of one or two villages only, such as the Ruminococcus gnavus group, a core member of Mountain Pos alone. This was also the case for several fungal soft-core genera, such as Candida being unique to Duinpos and !Om!o!o and Vishniacozyma being unique to Den/ui ( Tables S1 and S2 ). Metabolic enrichment of the Ju|’hoansi IM To explore the bacterial functional enrichment of the Ju|’hoansi IM, we established putative metabolically functional profiles for both fungal and bacterial datasets and determined which pathways are differentially expressed. Only two villages, Mountain Pos and !Om!o!o, showed differences in abundances for pathways involved in (1) amino acid biosynthesis, (2) biotin biosynthesis, (3) co-factor, carrier, and vitamin biosynthesis, (4) fatty acid biosynthesis, (5) proteinogenic amino acid biosynthesis, (6) sugar biosynthesis, and (7) other biosyntheses. Interestingly, we also detected pathways for pathogenicity, particularly for polymyxin resistance in E. coli and peptidoglycan biosynthesis (β-lactam resistance) in Enterococcus and Staphylococcus. It should also be mentioned that fewer participants living in Mountain Pos (50%) and !Om!o!o (11%) indicated they had used or were using antibiotics, whereas the opposite was noted for Duinpos (88%) and Den/ui (100%). Additionally, all villages except Mountain Pos reported intestinal infections, although they were more prevalent in Duinpos. Our results suggest that the IMs of Mountain Pos and !Om!o!o participants potentially have more pronounced amino acid, fatty acid and lipid, enzyme co-factor, and carbohydrate metabolisms than the other villages ( Figure S3 ). We detected the functional profiles of fungal IM inhabitants using FUNGuild. 41 Most of the fungal IM inhabitants were animal pathogens and wood or leaf saprotrophs. This included Fusarium, which is an opportunistic human pathogen, and genera such as Amyloporia, Botryobasidium, and Wojnowiciella, which are wood saprotrophs commonly found on dead plant material. 42 Other fungi, such as Podospora, can also be found on the dung of wild animals 43 ( Table S3 ). Interaction between the Ju|’hoansi bacterial and fungal IMs Since fungi and bacteria commensally co-inhabit the human IM, the interactions between these taxa are of interest. It has been shown that the diet-microbe association in the human IM is not exclusively limited to a particular microbial kingdom and that interactions with other microbes, such as fungi, also play a role in human health and disease. To explore the co-occurrence of fungal and bacterial taxa in the Ju|’hoansi IM, we performed Spearman correlation analysis and constructed fungal-bacterial co-occurrence networks. 
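Before the network results, the core-membership calculation reported at the start of this section can be sketched with the microbiome R package. The use of this particular package is an assumption; the 0.1% detection threshold and the 90%/70%/50% prevalence cutoffs are those stated above, and ps_rel carries over from the import sketch.

library(microbiome)

det <- 0.001   # 0.1% detection threshold used to exclude very rare taxa

core_hard   <- core_members(ps_rel, detection = det, prevalence = 0.90)
core_medium <- core_members(ps_rel, detection = det, prevalence = 0.70)
core_soft   <- core_members(ps_rel, detection = det, prevalence = 0.50)

length(core_hard)   # expected to be 0 here: no hard core was detected
core_soft           # taxa above 0.1% abundance in at least half of participants

Repeating the soft-core call on village-level subsets of ps_rel (after genus aggregation) would reproduce the per-village comparison summarized in Tables S1 and S2.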
Only statistically significant correlations (p < 0.01) with a high correlation coefficient (|ρ| ≥ 0.7) were selected and translated into a network 44 ( Figure 5 ). The network was further analyzed to identify the network statistics and modular structures of highly interconnected nodes. Due to the effects of antibiotics on the gut microbiome, the network was colored according to antibiotic use, and the nodes were shaped by village (see Figure S4 to visualize the network with its nodes identified by their microbial phyla of origin). The network was highly modular and consisted of 754 nodes (bacteria: 682 [90.45%] and fungi: 72 [9.55%]) and 5,887 edges (99.9% positive [5,886/5,887] and 0.01% negative [1/5,887]) ( Figure 5 ; Table S4 ). Interestingly, the network comprised several smaller groups of nodes (i.e., ASVs/OTUs) assigned to a specific village with or without antibiotic use. These smaller groups were linked by “universal” nodes, classified as those observed at multiple villages, irrespective of antibiotic use (i.e., some participants have used antibiotics and others have not). Expectedly, the groups were clustered into several modules by MCODE, 45 with village as the primary grouping factor; the largest module consisted of nodes only found in !Om!o!o with a few universal nodes (module I) ( Figure 5 ). Only the most important modules are illustrated in Figure 5 . The dominant phyla in the network included Firmicutes (56.36%), Bacteroidota (28.91%), and Ascomycota (7.82%), and the majority of interactions were also within and between these three groups ( Figure 5 ). For example, positive interactions between phylotypes within (1) Firmicutes (e.g., Faecalibacterium, Eubacterium sp., and Clostridia sp.), (2) Bacteroidota (e.g., Prevotella, Alloprevotella, and Bacteroides), and (3) Ascomycota (Aspergillus sp., Candida sp., Didymella sp., Epicoccum sp., and Fusarium sp.) and (4) between Firmicutes and Bacteroidota comprised 63.28% of the total interactions. Their prevalence in the network was not surprising since these are common taxa of the IM 39 and play important roles in carbohydrate and amino acid metabolism and energy production. 46, 47 Although these common taxa and their associations were prevalent throughout the network, we noticed interesting co-occurrences only for modules where participants had used antibiotics and/or had inflammation, specifically the genera Sutterella, Dialister, Alistipes, Epicoccum, Enterococcus, Escherichia-Shigella, Fusobacterium, Knufia, Paraprevotella, and Streptococcus. 48 We also identified keystone species, since alteration in their abundance is expected to induce changes in the abundance of other species and possibly impact host metabolism and health. Consistent with the topology of the network, most keystone species were positioned in the main module (module I) and included the three dominant taxa with genera Prevotella, Parabacteroides, Alloprevotella (Bacteroidota); Faecalibacterium, Phascolarctobacterium, Anaerovibrio, Blautia, UCG-002, CAG-352, Holdemanella, Eubacterium ventriosum group, Ruminococcus torques group, Lachnoclostridium, Clostridia UCG-014 (Firmicutes); and Aspergillus (Ascomycota), as well as three minor taxa (Elusimicrobiota, Elusimicrobium; Proteobacteria, Succinivibrio; and Basidiomycota, Tremellales). The remainder of the keystones were primarily universal nodes present in multiple villages. 
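A minimal sketch in R of the edge-selection and keystone screens just described. Here, abund is an assumed samples-by-taxa abundance matrix already restricted to taxa with relative abundance > 0.5% (as in the methods), and Hmisc plus igraph stand in for the Cytoscape tools actually used; the thresholds (|ρ| ≥ 0.7, BH-corrected p < 0.01; degree > 20, betweenness > 0.02) are those reported in this paper.

library(Hmisc)
library(igraph)

# Pairwise Spearman correlations between all bacterial and fungal taxa
rc  <- rcorr(as.matrix(abund), type = "spearman")
rho <- rc$r

# BH FDR correction over each unordered taxon pair (upper triangle only)
ut       <- upper.tri(rho)
padj     <- rho * NA
padj[ut] <- p.adjust(rc$P[ut], method = "BH")

# Keep strong, significant pairs as network edges
sel   <- which(abs(rho) >= 0.7 & padj < 0.01, arr.ind = TRUE)
edges <- data.frame(from = rownames(rho)[sel[, 1]],
                    to   = colnames(rho)[sel[, 2]],
                    rho  = rho[sel])
write.csv(edges, "network-edges.csv", row.names = FALSE)  # importable into Cytoscape

# Keystone screen: degree > 20 and normalized betweenness centrality > 0.02
g <- graph_from_data_frame(edges, directed = FALSE)
node_stats <- data.frame(taxon       = names(V(g)),
                         degree      = degree(g),
                         betweenness = betweenness(g, normalized = TRUE))
subset(node_stats, degree > 20 & betweenness > 0.02)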
Taxa for the latter differed slightly from the keystone species in module I and included CAG-873, Prevotellaceae NK3B31 group, Rikenellaceae RC9 gut group (Bacteroidota); UCG-005, Ruminococcus torques group, Roseburia, Subdoligranulum, Christensenellaceae R-7 group (Firmicutes); Succinivibrio (Proteobacteria); and Aspergillus, Didymella, Phaeosphaeria, Exserohilum, Wojnowiciella, and Sclerostagonospora (Ascomycota). Our results indicate that some keystone species of the Ju|’hoansi IM might be village dependent, while others are common between villages; nevertheless, Firmicutes, Bacteroidota, and Ascomycota appear to influence the IM of the Ju|’hoansi most strongly. Global comparative analysis of the Ju|’hoansi IM To determine how the Ju|’hoansi IM compares to the IM of a global cohort, we incorporated bacterial IM data from the BaAka hunter-gatherers and Bantu agriculturalists, 17 Papua New Guinean agriculturalists and United States industrialists, 49 as well as fungal IM data from rural and urbanized South Africans. 50 We identified the soft-core microbiome at a prevalence of 50% and a detection threshold of 0.1% ( Table S5 ). We first considered the global cohort as a whole to identify features unique to the Ju|’hoansi core IM. The Ju|’hoansi IM harbors several unique core IM residents, including 20 bacterial genera such as Bacteroides, Colidextribacter, Oribacterium, Desulfovibrio, and Sarcina, as well as four unique fungal genera: Malassezia, Fusarium, Naganishia, and Panellus. The only two fungal genera that were part of the core South African microbiome were Candida and Cladosporium, which also formed part of the Ju|’hoansi core fungal IM ( Figures 6 A and 6B ). To determine which bacterial features were shared between individual populations and the Ju|’hoansi, the core IMs of the populations were analyzed individually at the same detection and prevalence thresholds mentioned above. The Ju|’hoansi share only two core genera with the BaAka hunter-gatherers: Butyrivibrio and Anaerovibrio. Marvinbryantia is common between the Ju|’hoansi and the agriculturalists from Papua New Guinea ( Figure 6 C). These are the only core bacterial genera common between the Ju|’hoansi and other populations, suggesting that the Ju|’hoansi core IM is somewhat unique compared to the global cohort. Discussion Based on our results, we conclude that (1) the Ju|’hoansi IM is enriched for bacterial taxa commonly associated with other hunter-gatherer populations, (2) overall bacterial and fungal IM composition was significantly different between residents of different villages, (3) Rikenellaceae RC9 gut group, Candida, and Cladosporium were differentially abundant between participants from different villages of residence, and (4) unique Ju|’hoansi dietary and lifestyle characteristics are associated with a similarly unique core IM compared to those of other populations. The Ju|’hoansi bacterial IM broadly resembles that of other non-industrialized societies Firmicutes and Bacteroidota are the dominant phyla in the Ju|’hoansi IM, resulting in an F/B ratio of 2.64. While the significance of the F/B ratio is controversial, it has been associated with the onset of inflammation, obesity, and various metabolic diseases. The Ju|’hoansi F/B ratio broadly resembles those reported for Bantu-speaking Africans in Burkina Faso (2.8) 51 and the East African Hadza (2.6). 26 
The Ju|’hoansi F/B ratio does not resemble that reported for a preindustrial (i.e., archaeological) Neolithic agro-pastoralist South African IM, which has an F/B ratio of 0.4. 14, 15 The increased presence of Firmicutes in the Ju|’hoansi IM can be attributed to the fact that diets rich in starches have been shown to increase the F/B ratio, corresponding to increases in enzymatic pathways and metabolites involved in lipid metabolism. Additionally, the presence of Treponema 52 in the Ju|’hoansi IM is expected, as this taxon normally occurs in the IMs of rural forager-farmer societies, while it is rare in the IMs of urban-industrialized populations. 25, 53 The Ju|’hoansi IM harbors an abundance of bacterial taxa that ferment fiber and plant polysaccharides, including Prevotella, Blautia, Faecalibacterium, Succinivibrio, and Treponema. These convert fiber into metabolically advantageous SCFAs, namely propionate, acetate, and butyrate, which have anti-carcinogenic and anti-inflammatory properties. 54 The abundance of fiber-fermenting bacteria in the Ju|’hoansi IM reflects their fiber-rich diet, which is relatively low in animal protein and fat. It includes staple food items such as mongongo nuts, which have 3.5 and 2.7 g fiber per 100 g in the flesh and kernel, respectively. 55 The abundance of fiber-fermenting taxa in the Ju|’hoansi IM is comparable to the IMs of other rural forager-farmer societies that adhere to a similar lifestyle. Children from Burkina Faso 56 harbor high abundances of the same taxa, as do individuals from rural Nigeria 26 and the Tanzanian Hadza. 57 The Hadza are documented to consume tubers, berries, honey, baobab fruit, and wild animals, 15 which is similar to the primary dietary constituents of the Ju|’hoansi. 58 The parallels in diet across these populations may clarify the observed similarities in IM composition. 59 The Ju|’hoansi IM harbors functional potential relating to several metabolic pathways, of which amino acid and lipid metabolism are the greatest, followed by enzyme co-factor and carbohydrate metabolism (e.g., glucose, galactose, sucrose, starch, hemicellulose). Similar to the BaAka, 17 our results suggest that the Ju|’hoansi incorporate a considerable amount of meat into their diet during the dry season, when foraging becomes less prevalent. Inflammatory bowel disease or colonic inflammation is often associated with increased amino acid turnover and secondary bile acids due to the high consumption of red meat. 60 Interestingly, a moderate percentage of participants (35%) indicated they were or had been experiencing intestinal infections, and several pathways associated with bacterial pathogenicity, such as polymyxin resistance in E. coli 61 and peptidoglycan biosynthesis (β-lactam resistance) in Enterococcus and Staphylococcus, were identified, consistent with specific taxa (e.g., Sutterella, Dialister, Alistipes, Epicoccum, Enterococcus, Escherichia-Shigella, Fusobacterium, Knufia, Paraprevotella, and Streptococcus) related to villages with higher antibiotic use and/or inflammation. This is noteworthy since similar results have been observed in the BaAka, 17 and, interestingly, parasitism (e.g., Trichuris trichiura) was found both there and in the gut microbiomes of other rural African populations. 17 However, it is not clear if antibiotic use and/or evolutionary adaptations in genes in the Ju|’hoansi increase susceptibility to colon infection, as knowledge on this is still lacking. 
Future research should determine the impact of Ju|’hoansi host genetics in selecting the IM and potential pathogens, host-microbe interactions, and whether virulence-associated genes and those associated with host immune response are comparable with other African hunter-gatherers. 62 Bacterial-fungal interactions are known to occur in the human IM, and associations can influence bacterial/fungal growth and physiology and, ultimately, behavior and survival. 63 Since the majority of interactions (99%) between bacteria and fungi in this study were positive, we can infer mutualistic relationships where one species promoted the growth of the other, such as commensal bacteria/fungi influencing the availability of specific biologically important metabolites 64 or fungi (e.g., Candida 65 ) enhancing the environment for strict anaerobes like B. fragilis and B. vulgatus. For example, a recent study investigating the difference in gut IMs between Japanese and Indian participants showed higher abundances of Candida and Prevotella 66 in Indian subjects who consumed a plant-rich diet. The authors demonstrated the ability of Candida 67 to convert plant polysaccharides (e.g., cellulose and xylan) to arabinose, which enhances the growth of Prevotella. Similar deductions may be drawn in this study, as both Candida and Prevotella were dominant taxa in the Ju|’hoansi IM, but we also found positive interactions between Aspergillus and Prevotella. Aspergillus has plant polysaccharide degradation properties similar to those of Candida, 68 providing the necessary carbon source for bacterial growth. These results suggest a dietary-metabolite-mediated interaction between fungi and bacteria in the Ju|’hoansi IM, possibly influencing gut homeostasis. Interestingly, specific taxa associated with infection and disease, such as Alistipes, Dialister, Fusobacterium, Streptococcus, and Sutterella, which were primarily identified in villages with higher antibiotic use and/or inflammation, showed positive interactions with other commensal bacteria, although the number of interactions was fewer, while interactions with fungi were limited and mainly involved Aspergillus. A previous study has shown that commensal bacteria can promote the virulence of potential pathogens by cross-respiration, thereby enhancing the growth yield and persistence of the pathogen. This might be one factor supporting the positive interactions observed here; however, the exact mechanism(s) for this occurrence in the Ju|’hoansi is not yet known and should be elucidated in follow-up studies. Interactions of opportunistic pathogens (e.g., Trichosporon 69 ) with other taxa (e.g., Faecalibacterium and Roseburia) have also been observed for the BaAka, although these associations were mainly negative. Nonetheless, these and our results contribute to the growing body of evidence that clinically relevant bacterial-fungal interactions exist in hunter-gatherers, which could impact host health through pathogen or inflammation control. Our results warrant further exploration to determine how bacterial-fungal interactions in the Ju|’hoansi IM enhance bacterial and fungal virulence and how antagonistic or mutualistic relationships are linked to disease. 70 The Ju|’hoansi fungal IM is divergent from the global IM Thus far, the majority of studies have investigated the human mycobiome in the context of healthy vs. 
diseased patients, for example elucidating differences in mycobiome composition between patients with and without Crohn’s disease or between obese and healthy subjects. 71 The inclusion of the mycobiome in a study investigating the IM of a hunter-gatherer population is novel, and as such, the comparison of our results to the existing literature is challenging. 72 The healthy human fungal IM is generally lower in diversity than its bacterial counterpart and is frequently dominated by yeasts such as Candida and Malassezia. 73 Candida and Malassezia were the most abundant core fungal genera in the Ju|’hoansi IM, while Candida and Cladosporium were the only two core fungal genera common between the Ju|’hoansi and the urban and rural South African IMs. Cladosporium is also a common intestinal inhabitant, probably due to its abundance in air. The significance of the Ju|’hoansi fungal IM in relation to diet and geographic location remains unclear but presents an interesting avenue for future research. 74 Both Malassezia and Candida are characterized as commensals of the human IM that can become pathogenic upon immune dysfunction. 75, 76 The role of Candida in the human GIT has garnered some interest lately, and the research outcome has been mixed thus far. 77 Candida may be involved in training the immune system and preventing infections, but it has also been linked to increased inflammation and candidiasis. 78, 79 Malassezia may be involved in Crohn’s and inflammatory bowel disease. 80, 81 However, both Malassezia and Candida are common inhabitants of the IM, irrespective of the population. Their influence on the host might, therefore, be dependent on host health. Indeed, evidence suggests that fungi such as Candida can disseminate from the GIT to other organs, causing life-threatening diseases in immune-compromised individuals. 82 The Ju|’hoansi appear to have a less diverse fungal IM than what is typically reported. The Human Microbiome Project reports 247 named genera, while the Ju|’hoansi have only 82. However, of the top 15 most abundant fungal genera in Ju|’hoansi IMs, eight are not reported in the Human Microbiome Project study 73 : Naganishia, Issatchenkia, Stereum, Panellus, Mycena, Vishniacozyma, Neoascochyta, and Westerdykella, suggesting that the Ju|’hoansi GIT is inhabited by unique fungal taxa. Most fungal taxa inhabiting the Ju|’hoansi GIT are animal pathogens and wood or leaf saprotrophs. This includes Amyloporia, Botryobasidium, and Wojnowiciella. 42 Podospora, which is frequently found on wild animal dung, 43 is also present in the Ju|’hoansi IM. Fungal taxa prevalent in the IM seem to derive mostly from dietary and environmental factors. 83 The Ju|’hoansi spend most of their time outside, in close association with their environment. This might explain why their fungal IM composition is somewhat divergent from the population studied in the Human Microbiome Project, 84 which comprises participants from the United States 73 who likely subscribe to a more industrialized lifestyle than the Ju|’hoansi. 85 Village of primary residence equates to a significant taxonomic difference The only variable for which IM composition was significantly different was the village of primary residence. This may result from several factors, including variable socio-economic status, dissimilar ecological conditions at village locations, the use of different water sources at each village, and family and social networks. 
Participant villages exhibit varying degrees of affluence, with some possessing commodities such as vegetable and fruit gardens, cattle, and even motor vehicles, while others do not. Socio-economic status is known to influence IM composition, as it determines factors such as the type of food that is accessible and the level of psycho-social stress that is experienced. 86, 87 The geology and associated vegetation types surrounding each village also vary, resulting in changes in the types of species consumed most frequently. In addition, each village has its own unfiltered water source (i.e., boreholes) that may support different microbial taxa and which might, in turn, determine the range of taxa to which residents are exposed. 88 Familial and social networks may also influence shared bacterial lineages. Historically, Ju|’hoansi settlement patterns and social organization were characterized by close interaction between mostly related residents who tended to live in high-density “camps” or villages. 89 A similar pattern of close daily interpersonal interaction between co-inhabitants of villages can still be observed today; this may explain some of the differences in IM composition observed between Ju|’hoansi villages. 90 The Ju|’hoansi core IM is divergent from the global core IM To compare the Ju|’hoansi core IM to those of other populations, we first considered the global core IM as a combination of the IMs of other populations. Twenty bacterial and four fungal genera were unique to the Ju|’hoansi core IM compared to the combined bacterial core IMs of the BaAka, Bantu, Papua New Guineans, and Americans and the fungal core IMs of rural and urban South African populations. 17, 49, 50 This includes bacteria such as Butyrivibrio, Ruminobacter, and the Rikenellaceae RC9 gut group. Malassezia, Fusarium, Naganishia, and Panellus were not found in the core IMs of rural or urban South Africans, while they did form part of the core Ju|’hoansi IM. We then considered each population as an individual entity and discovered that the Ju|’hoansi only shared three bacterial genera with the populations sharing similar lifestyles: Butyrivibrio and Anaerovibrio with the BaAka and Marvinbryantia with Papua New Guineans. The Ju|’hoansi harbor a unique core IM compared to other populations. This could be due to genetic or environmental factors. Although the role of host genetics in shaping the IM is unclear, there are reports of ethnicity- and geography-specific variations in IM configuration, such as that among African Malawian and South American Amerindian communities. Factors such as dietary preferences and cultural practices may exert a more pronounced influence on Ju|’hoansi IM composition than factors such as medical history, age, biological sex, and the degree of exposure to microbes during travel. The Ju|’hoansi culture and their relatively isolated geographic location may contribute to their unique IM composition compared to a global cohort. 27 Limitations of the study The storage of specimens following sampling is known to influence DNA yield and microbial profiles. While the storage of fecal samples at −20°C presents an ideal scenario, this is unrealistic in the field. We endeavored to subject samples to immediate freezing at <0°C, in combination with a preservative, which has been shown to result in the least amount of taxonomic community changes. 
91, 92 Furthermore, sequencing 16S rRNA genes and ITS regions may result in lower taxonomic resolution, and over-estimation may occur; 93 however, these methods are cost effective and commonly used in microbiome research. 94 We wanted to incorporate a global IM comparison into this research; however, it came with certain limitations. While the comparative IMs also made use of a marker gene as opposed to whole-metagenome sequencing, different regions of the 16S rRNA gene and the ITS region were used. We tried to keep the data processing as close as possible to the Ju|’hoansi workflow; however, we acknowledge that inaccuracies may arise due to non-identical sequencing and processing procedures. STAR★Methods Key resources table
REAGENT or RESOURCE | SOURCE | IDENTIFIER
Critical commercial assays
DNA/RNA Shield-Fecal Collection Tube | Zymo Research | Cat. No. R1101
PowerLyzer® PowerSoil® Kit | Qiagen | Cat. No./ID: 12855-50
Deposited data
BaAka and Bantu intestinal microbiome data | MG-RAST | Accession number: 16608
Papua New Guinea and United States intestinal microbiome data | MG-RAST | Accession number: 4576511.3–4576572.3
Rural and urban South African intestinal microbiome data | NCBI | Accession number: PRJNA589500
Ju|'hoansi intestinal microbiome | NCBI | Accession number: PRJNA1029329
Software and algorithms
QIIME2 | https://docs.qiime2.org/2021.2/ | https://doi.org/10.1038/s41587-019-0209-9
SILVA-138-99 database | https://www.arb-silva.de/ | PMID: 23193283
UNITE version 8 dynamic database | https://unite.ut.ee/ | PMID: 30371820
R-4.2.1 | https://cran.utstat.utoronto.ca/bin/windows/base/ | N/A
Decontam | https://www.bioconductor.org/packages/release/bioc/html/decontam.html | https://doi.org/10.1101/221499
Phyloseq | https://www.bioconductor.org/packages/release/bioc/html/phyloseq.html | PMID: 23630581
Tidyverse | https://cran.r-project.org/web/packages/tidyverse/index.html | https://tidyverse.tidyverse.org/articles/paper.html
Fantaxtic | https://github.com/gmteunisse/fantaxtic | N/A
Vegan | https://cran.r-project.org/web/packages/vegan/vegan.pdf | N/A
ALDEx2 | https://bioconductor.org/packages/release/bioc/html/ALDEx2.html | PMID: 23843979
Cytoscape | https://cytoscape.org/download.html | PMID: 14597658
NetworkAnalyzer | https://apps.cytoscape.org/apps/networkanalyzer | https://doi.org/10.1038/nprot.2012.004
MCODE | https://apps.cytoscape.org/apps/mcode | PMID: 12525261
PICRUST2 | https://huttenhower.sph.harvard.edu/picrust/ | https://doi.org/10.1038/s41587-020-0548-6
ComplexHeatmap | https://bioconductor.org/packages/release/bioc/html/ComplexHeatmap.html | PMID: 27207943
circlize | https://cran.r-project.org/web/packages/circlize/index.html | PMID: 24930139
ggplot2 | https://cran.r-project.org/web/packages/ggplot2/index.html | https://doi.org/10.1080/15366367.2019.1565254
metagMisc | https://github.com/vmikk/metagMisc | N/A
Resource availability Lead contact Further information and requests for resources should be directed to and will be fulfilled by the lead contact and corresponding author, Riaan F. Rifkin ( riaanrifkin@gmail.com ). Materials availability This study did not generate any new unique reagents. Data and code availability • All raw sequencing data have been uploaded to the NCBI under accession number PRJNA1029329 . • This paper does not report the original code. • Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request. 
Experimental model and study participant details The four Ju|’hoansi villages from which our fecal samples are derived are located 18 km–28 km (x̄ = 23.3 km) from Tsumkwe, the primary village in the Otjozondjupa Region. Following informed consent, samples were acquired from an equal number of adult self-identified males (n = 20) and females (n = 20) ranging from 19 years to 69 years of age (median = 38 years). To make age a categorical variable, age was divided into two groups based on the median age of the participants. Samples were collected in July 2019, during the winter (dry) season (i.e., from May to November), when foraging is less important. In winter, the Juǀʼhoansi subsist mainly by purchasing food from the various stores in Tsumkwe, including starches (e.g., maize, rice, and macaroni) and meat (i.e., beef and goat). Research participants were recruited with the assistance of our co-researcher, research facilitator, and interpreter, Leon ≠Oma Tsamkxao, who is fluent in Juǀʼhoansi, Afrikaans, and English, and written informed consent was obtained from all participants. Along with the samples, metadata was collected to document (1) the ages of research participants, (2) their former use of antibiotic treatment for tuberculosis, (3) self-identified biological sex (i.e., male or female), (4) whether diarrhea is or had been experienced following the consumption of certain foods, (5) whether participants have ever experienced an intestinal infection, (6) their former or current use of malaria medication, (7) their exposure to local, regional and international travel, and (8) the villages of primary residence of each research participant. All participants provided informed consent for publication of study results of the collected biomaterials, agreeing that all information required for the study (i.e., their location, biological sex, age, and medical history), except for their names, could be disclosed in this study. Ethical clearance for this research was obtained from the Research Ethics Committee, Faculty of Health Sciences at the University of Pretoria. All the research methods occurred in accordance with the Declaration of Helsinki. Method details DNA extraction and sequencing Fecal samples were collected in collection tubes containing 9 mL DNA/RNA ShieldTM (Zymo Research Corp, Irvine, CA, USA) and stored at 4°C. After homogenizing the samples through vortexing, ∼1 mL was transferred to a clean 2 mL tube and centrifuged for 5 min at 10,000 x g, and the supernatant was removed. The resulting pellets (average weight 125 mg) were subsequently resuspended in 750 μL bead solution from the DNeasy PowerLyzer PowerSoil Kit (Qiagen GmbH, Hilden, Germany). The DNA isolation was performed according to the manufacturer’s protocol, with the following adaptations: two rounds of bead beating (1 min at 4,000 rpm, PowerLyzerTM, Mo Bio Laboratories, Inc., Carlsbad, CA, USA) followed by 5 min incubation on ice; subsequently, the bead-beating tubes were centrifuged for 5 min. After the addition of Solution C6 (elution buffer), the spin columns were incubated at room temperature for 5 min before centrifugation. Paired-end (2 x 300 bp) sequencing of the isolated DNA (V3-V4 16S rRNA for bacteria and ITS1 and ITS2 for fungi) was performed at Applied Biological Materials Inc., Richmond, B.C., Canada, using the MiSeq platform (Illumina, San Diego, CA, USA). Two controls were used in this study. 
CON-CTRL contained the DNA/RNA ShieldTM used to preserve the samples, while KIT-CTRL comprised the contents of the DNeasy PowerLyzer PowerSoil Kit. Data pre-processing and quality control Raw paired-end 16S and forward ITS reads were imported into QIIME2-2021.2. Quality control with DADA2, 36 including denoising, dereplication, and filtering of chimeras, yielded 4,184 and 1,271 ASVs for 16S and ITS data, respectively. The 3′ ends of the 16S forward reads were truncated to a length of 292 bp, and 25 bp were trimmed from the 5′ end. The 3′ ends of the 16S reverse reads were truncated to a length of 250 bp, and 25 bp were trimmed from the 5′ end. ITS forward reads were truncated to a length of 297 bp at their 3′ ends, and 26 bp were trimmed from the 5′ end. The rest of the parameters were set to default. As initial ITS taxonomic classification of the ASVs resulted in the identification of very few taxa, ITS reads were first clustered at 98% sequence similarity using qiime vsearch 95 closed-reference clustering and then re-classified, which resulted in 167 OTUs. 96 16S taxonomic classification was performed by extracting V3-V4 regions from the SILVA-138-99 database using q2 feature-classifier extract-reads based on the primer sequences used to amplify the 16S data. A naïve-Bayes classifier was then trained on the extracted SILVA sequences and full-length UNITE version 8 dynamic sequences 97 for 16S and ITS data, respectively. The classifiers were then used to taxonomically classify the respective datasets using the qiime fit-classifier naïve-Bayes plug-in. 98 Quantification and statistical analysis Data pre-processing Following import into R-4.2.1, the taxonomic, counts, and metadata tables were imported as phyloseq objects. 37 Six samples were identified as outliers due to either insufficient ASV/OTU count or very low diversity and removed from the analysis. Since analyzing the interaction between the fungal and bacterial IM necessitated equal sample sizes between the two groups, if an outlier was removed from one dataset, it was also removed from the other. The data was then inspected for contamination using decontam 99 at a prevalence threshold of 0.1. Decontam determines the likelihood of an ASV/OTU being a contaminant based on the prevalence of that ASV/OTU between controls and true samples. The control samples were subsequently removed. Phyloseq objects were converted to relative abundance and used in downstream analyses. Tidyverse 38 was instrumental in dataset manipulation. Relative abundance was visualized with Fantaxtic. 100 101 Community composition and differential abundance analyses Community composition was analyzed using the vegan package. The Kruskal–Wallis test was used to test α-diversity if there were only two-factor levels, and the Dunn test was used for more than two-factor levels. β-diversity was tested with ANOSIM. 102 Differential abundance was tested using ALDEx2 103 , 104 , by first filtering the data at a prevalence threshold of 0.1 using metagMisc 105 and then aggregating it to genus level. Two groups were compared at a time with mc.samples set to 16. Mc.samples was also set to 128 with no effect on results. All other ALDEx2 parameters were used at their default values. Genera were considered differentially abundant if their Benjamini-Hochberg corrected p values for Welch’s t test were <0.1. 
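A minimal sketch of the two composition tests described above, in R. Here, ra is an assumed samples-by-taxa relative-abundance matrix, counts an assumed genus-aggregated count matrix (taxa in rows, samples in columns), and village a grouping vector restricted to two villages at a time, as in the study; only the Bray-Curtis/ANOSIM pairing, mc.samples = 16, and the BH-corrected p < 0.1 cutoff are taken from the methods.

library(vegan)
library(ALDEx2)

# Beta-diversity: Bray-Curtis dissimilarity tested with ANOSIM
bray <- vegdist(ra, method = "bray")
anosim(bray, grouping = village)

# Differential abundance between two villages (ALDEx2, Welch's t test);
# mc.samples = 16 as reported, BH-corrected p < 0.1 read as significant
res <- aldex(counts, as.character(village), mc.samples = 16,
             test = "t", effect = TRUE)
subset(res, we.eBH < 0.1)

The we.eBH column holds the Benjamini-Hochberg corrected Welch's t test p values, matching the significance criterion stated above.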
106 Elucidation of the Ju|’hoansi core microbiome The medium and soft core microbiomes were analyzed by selecting all ASVs/OTUs at a prevalence of 70% and 50%, respectively, and a detection threshold of 0.1%. Relative abundances for ASVs/OTUs belonging to the same genus were collated and plotted as heatmaps with heatmap.2 107 in R. To investigate the soft core microbiome between Ju|’hoansi villages, the data was first divided into villages, then aggregated to genus level. Cytoscape 107 was used to generate the core microbiome network. 108 Metabolic enrichment of the Ju|’hoansi IM To obtain functional profiles of the Ju|’hoansi IM, data were exported from QIIME2-2021.2 and filtered to include only taxa that were prevalent in at least two individuals with a count of more than two reads, which resulted in 485 ASVs. This was used as input into the PICRUST2 36 full pipeline with default settings. Since PICRUST2 does not provide accurate functional enrichment of fungal data, we performed fungal functional prediction with FUNGuild. 109 41 Differentially abundant pathways between multiple groups were assessed with the aldex.glm module in ALDEx2. Effect sizes were plotted for each village with the corrected Benjamini-Hochberg corrected p values. Co-occurrence network between fungi and bacteria A co-occurrence network was generated comprising consistently detected and highly abundant ASVs (16S) and OTUs (ITS) across all villages: the community data were filtered using only ASVs and OTUs with a relative abundance >0.5%. This filtering step resulted in a core community of 834 bacterial ASVs and 91 fungal OTUs. Spearman correlations were calculated between all ASVs and OTUs in the filtered dataset (absolute abundances) with Benjamini-Hochberg FDR p value correction. Significant relationships with a correlation coefficient (ρ) ≥ ±0.7 and p < 0.01 were selected and translated into a network in Cytoscape. The topological properties of the network were subsequently analyzed with the NetworkAnalyzer 108 tool. Modular structures and groups of highly interconnected nodes were identified using the MCODE 110 application with standard parameters. Taxa with the highest degree (>20) and betweenness centrality (>0.02) values were considered keystone taxa as determined by scatterplots. 45 Global IM comparison To compare the Ju|’hoansi core IM with that of a global cohort, we downloaded quality-controlled sequences from MG-RAST using the accession numbers 4576511.3–4576572.3 (PNG and USA) and 16608 49 (BaAka and Bantu). The South African fungal IM was downloaded from the NCBI using accession number PRJNA589500. 17 In each case, the most processed data available were downloaded and further processed using the same workflow as the Ju|’hoansi data. This was done to minimize variability in workflow both between the respective authors and us and between the global populations and the Ju|’hoansi. The core microbiome of the global cohort was analyzed at the same detection and prevalence threshold as the Ju|’hoansi (0.1% and 50%, respectively) and compared. The global IM was first considered collectively and then as individual populations. Heatmaps were constructed using ComplexHeatmap, 50 111 , and the IM network was constructed in Cytoscape. 112 108 Ggplot2 was instrumental in figure creation. 113 Acknowledgments We thank all our study participants and the Juǀʼhoansi Traditional Authority (JUTA) for providing consent for the enrollment of project participants in the study. 
We thank Loide Uahengo, Edgar Mowa (Namibian National Commission on Research and Technology), and Benetus Nangombe (Namibian Ministry of Health and Social Services) for issuing the relevant research permit. We thank Helvi Elago (National Heritage Council of Namibia) and Alma Nankela (Archaeology Unit, National Heritage Council of Namibia) for providing guidance and supporting documentation concerning our permit application. We thank Manda Smith and members of the Research Ethics Committee, Faculty of Health Sciences at the University of Pretoria, for providing clearance to conduct the research. We acknowledge funding generously provided by a National Geographic Society Scientific Exploration grant (no. NGS-371R-18 ) and the Benjamin R. Oppenheimer Trust , which supports the Oppenheimer Endowed Fellowship in Molecular Archaeology at the University of Pretoria and Oxford Brookes University . Finally, we acknowledge BioRender.com , which was used to create the graphical abstract and compile the figures. Author contributions Conceptualization, ethics approval, sample collection, interview conduction, R.F.R., L.O.T., and S.J.U.; DNA extraction, J.E.K.; bioinformatic analysis, M.T. and K.J.; writing, M.T., K.J., R.F.R., J.E.K., J.-B.R., S.J.U., and D.A.C.; figure creation, M.T. and K.J.; supervision, J.E.K. and R.F.R. Declaration of interests The authors declare no competing interests. Supplemental information Supplemental information can be found online at https://doi.org/10.1016/j.celrep.2024.113690 . Supplemental information Document S1. Figures S1–S4 and Table S4 Table S1. List of soft-core bacterial genera within the Ju|’hoansi villages, detected at a prevalence of 50% and a detection threshold of 0.1%, related to Figure 4 Table S2. List of soft-core fungal genera within the Ju|’hoansi villages, detected at a prevalence of 50% and a detection threshold of 0.1%, related to Figure 4 Table S3. Fungal functional profiling using FUNGuild, related to the results section “metabolic enrichment of the Ju|’hoansi IM” Table S5. Bacterial core microbiome of the Ju|’hoansi, American, Papua New Guinea, BaAka, and Bantu populations, detected at 50% prevalence and a detection threshold of 0.1%, related to Figure 6 Document S2. Article plus supplemental information
REFERENCES:
1. LEDERBERG J (2001)
2. THURSBY E (2017)
3. HANSEN R (2019)
4. JOHN G (2016)
5. DUNNE J (2014)
6. VICHVILA A (2018)
7. AMATO K (2019)
8. WALTER J (2011)
9. SEGATA N (2015)
10. BLASER M (2009)
11. GILLINGS M (2014)
12. SCHNORR S (2016)
13. ROSASPLAZA S (2022)
14. RIFKIN R (2020)
15. SCHNORR S (2014)
16. CLEMENTE J (2015)
17. GOMEZ A (2016)
18. GIRARD C (2017)
19. CRITTENDEN A (2017)
20. PONTZER H (2018)
21. ROOK G (2014)
22. THORBURN A (2014)
23. CARRERABASTOS P (2011)
24. GUPTA V (2017)
25. DAVENPORT E (2017)
26. DEFILIPPO C (2017)
27. YATSUNENKO T (2012)
28. SONNENBURG E (2014)
29. JSS (1992)
30. HITCHCOCK R (2020)
31. ROBBINS L (2000)
32. GARGALLO E (2020)
33. IMAMURAHAYAKI K (1996)
34. SMITS S (2017)
35. DENKER H (2012)
36. BOLYEN E (2019)
37. (2021)
38. DAVIS N (2018)
39. TURNBAUGH P (2007)
40. RISELY A (2020)
41. NGUYEN N (2016)
42. CANNON P (2007)
43. BELL A (1983)
44. NAGPAL R (2018)
45. BADER G (2003)
46. ZHANG F (2022)
47. OTTMAN N (2012)
48. HOFFMANN C (2013)
49. MARTINEZ I (2015)
50. KABWE M (2020)
51. MARCHESI J (2016)
52. MAIER T (2017)
53. ANGELAKIS E (2019)
54. CORDAIN L (2005)
55. SIVAPRAKASAM S (2016)
56. LEE R (1973)
57. AYENI F (2018)
58. MARLOWE F (2002)
59. LEE R (1968)
60. LOUIS P (2014)
61. GE J (2015)
62. JARVIS J (2012)
63. SAM Q (2017)
64. MAAS E (2023)
65. PATERSON M (2017)
66. PEREZ J (2021)
67. PAREEK S (2019)
68. DEVRIES R (2001)
69. STACY A (2016)
70. SHARMA A (2022)
71. LI Q (2014)
72. (2012)
73. NASH A (2017)
74. SUHR M (2015)
75. KUMAMOTO C (2020)
76. DAWSON T (2019)
77. MUSUMECI S (2022)
78. QUINTIN J (2012)
79. PAPPAS P (2018)
80. LIMON J (2019)
81. YANG Q (2022)
82. ALONSOMONGE R (2021)
83. KONG H (2017)
84. HALLENADAMS H (2016)
85. AAGAARD K (2013)
86. BOWYER R (2019)
87. AMATO K (2021)
88. WARD J (1992)
89. DRAPER P (1973)
90. MARSHALL L (1960)
91. EZZY A (2019)
92. SONG S (2016)
93. MIZRAHIMAN O (2013)
94. BAILEN M (2020)
95. CALLAHAN B (2016)
96. ROGNES T (2016)
97. QUAST C (2013)
98. NILSSON R (2019)
99. MCMURDIE P (2013)
100. WICKHAM H (2019)
101. TEUNISSE G (2022)
102. OKSANEN J (2020)
103. GLOOR G (2016)
104. FERNANDES A (2014)
105. FERNANDES A (2013)
106. MIKRYUKOV V (2022)
107. LAHTI L (2017)
108. SHANNON P (2003)
109. DOUGLAS G (2020)
110. DONCHEVA N (2012)
111. GU Z (2016)
112. GU Z (2014)
113. WICKHAM H (2016)
|
10.1016_S0873-2159(15)30564-X.txt
|
TITLE: Body mass index, airflow obstruction, dyspnea, and exercise capacity in Chronic Obstructive Pulmonary Disease (COPD): the BODE index
AUTHORS:
- Celli, B.
- Cote, C.
- Marin, J.
- Casanova, C.
- Oca, M.
- Mendez, R.
- Plata, V.
- Cabral, H.
ABSTRACT:
SUMMARY
The study was carried out between 1997 and 2002 in 3 countries (USA, Spain and Venezuela) in an initial group of 207 patients with COPD, among whom mortality was 12%.
Of the various parameters analyzed and compared between survivors and deaths, those with the greatest predictive value for mortality were: body mass index (B for BMI), FEV1 as a percentage of the reference value (O for obstruction), degree of dyspnea on the MRC (Medical Research Council) scale (D for dyspnea), and the distance in meters in the 6-minute walk test (E for exercise). Each variable has a different score and weight, but it is reported that structuring the index differently did not correspond to a different predictive value.
BODY: No body content available
REFERENCES:
|
10.1016_j.rinp.2024.107837.txt
|
TITLE: Null geodesics, QNMs, emission energy and thermal fluctuation of charged T-duality black hole with simple logarithmic correction
AUTHORS:
- Javed, Faisal
- Alshehri, Mansoor H.
ABSTRACT:
This research investigates the properties of charged T-duality black holes within the context of string theory. The investigation focuses on the implications of T-duality, electric charge, and thermodynamic parameters on the curvature and behavior of a black hole. The findings demonstrate that physical variables like μ and H have significant impacts on the position of the event horizon, photon radius as well as shadow radius. The study calculates the values of quasinormal modes and observes through the graphical behavior. Additionally, the paper investigates the entropy and free energies of the system by considering simple logarithmic correction and also demonstrates how the T-duality parameter enhances the thermodynamic stability of the Bardeen black hole geometry. We explore the emission energy of the considered black hole. Overall, this research contributes to our understanding of black holes and their role in the universe within the context of string theory.
BODY:
Introduction The nature of gravity has been the subject of numerous fascinating studies, and recent developments in general relativity have had a profound impact on our comprehension of a wide range of astrophysical phenomena in the context of the cosmos. Black holes (BHs) are exceptional manifestations of extremely strong gravitational forces that are of considerable interest in modern science. Because of their strong gravitational attraction, BHs absorb everything in the vicinity and have an event horizon from which nothing can escape. The physical properties of BH geometries have been regularly affected in astonishing ways by quantum fluctuations. The occurrence of singularities, which are center regions of spacetime where density and curvature become infinite and traditional physical principles are no longer valid, is a significant barrier to understanding the physics of BHs. The fundamental principles of quantum physics do not apply to general relativity, despite its extraordinary consequences in astronomy. Curvature singularities in general relativity result from a fundamental problem with the classical interpretation of the gravitational field, which breaks down at small scales. Quantum gravity has made several attempts to solve this problem, but to yet they have not been successful. In particular, the formulation of a singularity-free BH geometry based on quantum gravity principles remains incomplete. Researchers have paid close attention to the theory that spacetime curvature is finite. When applied to BH situations, the tenable theory of nonlinear electrodynamics (NLED) may provide extraordinary results. In the case of BH magnetic charge, the presence of an NLED Lagrangian offers a cutoff for singularities [1,2] . Furthermore, BH engineering which considers conventional BH models with a de Sitter central core has been used in the literature to solve this issue [3–6] . A significant advancement in the elimination of singularities in BHs has been made with the creation of a family of regular BHs by the use of noncommutative geometry-based string-inspired techniques [7,8] . Apart from eliminating the singularity, these BHs provide a fascinating scenario for the ultimate phase of evaporation. They undergo cooling to a zero-temperature extremal configuration, even in the absence of charge and angular momentum, as opposed to continuously producing Hawking radiation [9] . A non-local gravity context that mitigates the curvature singularity is also produced via noncommutative effects [10–14] . It is also possible to use other string-based setups, such as those found in [15] . Among these solutions, a key formalism of T-duality from string theory shows promise in solving the singularity of the BH. It is anticipated that T-duality will enhance BH solutions [16] by making string theory more ultraviolet finite [17] . It should be noted that these corrections to BH solutions cannot be obtained from particular terms in the perturbative series [18] , instead, they can only be obtained through non-perturbative effects in string theory because of T-duality. In string theory, T-duality is a well-known characteristic that manifests as an association between two theories of strings with dissimilar histories. According to Padmanabhan [19,20] , the existence of a field in spacetime is represented by a duality in the route integral, where the integral’s contributions hold if the insignificant path length is exchanged. 
To achieve this T-duality, the field propagator d S → 1 / d S [21] must have a zero-point length that is aligned to the charge of the Bardeen solution [22] . T-duality also gives rise to finite electrodynamics ruled by a T-dual photon field propagator [23] , which fractalizes to achieve the desired ultraviolet regularity. From contemporary techniques in other frameworks [24] , to attempts to resolve cosmological singularities in early universe models [25–27] , T-duality has been applied. The study of BHs is acknowledged as one of the most significant subjects in the current scenario of research. The most defining characteristics of powerful classical and quantum gravitational fields are shown by such thermodynamical objects. On the classical grounds, it is considered that BH possesses such remarkable gravitational influences that avert any sort of particles and radiations to move across the event horizon and consume each and everything from its environment. By endorsing the quantum mechanical effect, the BHs can produce and release thermal radiations labeled as Hawking radiations [28] . These radiations gradually reduce the BH mass which ultimately results in its evaporation. Being thermal objects, BHs exhibit Hawking temperatures and entropy that vary for different classes of BHs. The entropy and temperature of BHs show how the principles of black holes and classical thermodynamics are interdependent. Energy is related to the BH’s mass, but surface gravity is related to its temperature. A thermodynamic system’s entropy is a key parameter for studying its thermal properties and for understanding the environment around a BH’s event horizon [29] . BHs must possess greater entropy than any other objects of similar volume to avoid violating the second law of thermodynamics. Logarithmic corrections based on thermal fluctuations in the entropy-area relation was proposed by Bekenstein [30] , however, it is impossible to achieve thermal equilibrium between BHs and thermal radiations. A detailed study of the bulk laws of BH chemistry and the laws of the dual conformal field theory is presented in [31] . The fascinating field of study into the quantum oscillatory effects on the geometric characteristics of BHs is worth exploring. Thermal fluctuations, which are variations caused by statistical disturbances in compact celestial bodies, may show interesting features of BH geometry. Many researchers think that BHs become hotter and shrink in size because of Hawking radiation. Porchassan and Faizal [32] improved the logarithmic correction factor for BH entropy computation and investigated the effect of thermal fluctuations on thermodynamic parameters in their study of a spinning Kerr-AdS BH. For higher-dimensional BHs with extra higher-order correction terms, the findings were also similar [33,34] . Pourhassan et al. [35] showed that in the modified Hayward BH, logarithmic corrections affect many parameters including Hawking temperature, entropy, and heat capacity. In their study on non-minimal regular BHs, Jawad and Shahzad [36] used the cosmological constant to analyze their thermodynamic stability and thermal oscillations. Their research shows that regular BHs remain stable as the cosmological constant becomes greater. Zhang [37] studied corrected entropy utilizing Kerr–Newman-AdS and Reissner–Nördström BHs, which changed the thermodynamic features of smaller BHs. Further, for the three-dimensional Godel BH [38] , it was discussed the Hawking temperature and vector particle tunneling. 
Pradhan [39] examined the stability of thermal oscillations in connection to charged BHs. Upadhyay et al. [40] examined the thermodynamical properties of rotating and charged BTZ BHs by considering leading-order perturbed entropy. The detailed study on the quantum corrections to the thermodynamic properties of charged AdS BH is presented in [41] . The primary focus was on calculating the leading order quantum corrections to the entropy of the charged AdS BH and observed the corrected entropy against the event horizon radius for varying correction parameter values by comparing corrected and uncorrected entropy densities through graphically. In order to determine the transmission probability of tunneling across the event horizon barrier, Chougule et al. [42] examined Hawking radiation for a dilatonic BTZ BH solution. Additionally, the authors have examined the chemistry of the BH solution in relation to thermal fluctuation and have concluded that, in the case of BHs with tiny horizon radiuses, thermal fluctuation is a major factor [42] . Delgado [43] described the possible modifications to the evaporative evolution of a four-dimensional, non-extremal, non-rotating, charged black hole by using quantum gravity as an effective field theory. He arrived up a set of coupled differential equations that, with some approximations, described the mass and charge of the black hole as a function of time [43] . The behavior of the Dirac particles in the charged BH background was investigated by Kanzi and Alipour [44] . Additionally, the thermal characteristics are examined, including energy, heat capacity, and entropy. Accordingly, the evaluation of information theory and holography close to the event horizon confirms that every particle in a close-in redshift black hole contains four bits of information [44] . The literature is rich with studies that have investigated phase transitions and temperature fluctuations in different types of BHs [45–54] . Examining the eigenvalues of quasi-normal modes (QNMs), which stand for damping modes and frequency fluctuations, allows one to examine BH dynamics. Zerilli [55] was the second to present the idea of perturbing BHs, after Regge and Wheeler [56] . Through the interaction of gravitational waves with Schwarzschild BHs, Vishveshwara [57] investigated QNMs. Leaver [58] investigated gravitational QNMs in BHs that were spinning and those that were not, and he also established the analytical foundation for QNM wave functions. Also, in Reissner–Nordström BHs, a connection between QNMs and second-order transitions has been shown [59] . They concluded that the QNMs derived from null geodesics might be understood by using the photon sphere’s angular velocity and Lyapunov exponent. The emission energy of BHs is a topic of interest in quantum gravity. Bekenstein suggests that BHs have a discrete energy spectrum, while Hawking’s semi-classical prediction suggests a continuous spectrum. Hod addresses this discrepancy by considering the zero-point quantum gravity fluctuations of black-hole spacetime and derives the effective temperature of the quantized black-hole radiation spectrum, which agrees with Hawking’s temperature [60] . Additionally, Corda [61] analyzes BH QNMs and interprets them as quantum levels. By improving previous analyses, Corda obtains corrected expressions for the formulas of the horizon’s area quantization, Bekenstein–Hawking entropy, and the number of micro-states. 
This analysis holds for both scalar, gravitational, and vector perturbations, and is consistent with the idea that BHs represent highly excited states in quantum gravity. The logarithmic nature of the corrections to the Bekenstein–Hawking entropy of black holes in AdS resulting from both energy and volume fluctuations was observed by Ghosh et al. [62] . Afterwards, they extended the general form of these entropy corrections to the situation of four-dimensional Kerr-AdS black holes [62] . For a more general class of their electromagnetic coupling parameters, Ghosh and SenGupta [63] explored the various thermodynamic features for a dilaton–axion linked black hole solutions. The thermodynamic properties are shown to be diverse for various parts of the coupling parameter space [63] . Further David et al. [64] explored logarithmic corrections to asymptotically AdS5 supersymmetric extremal, rotating, electrically charged BHs and black strings. Jawad [65] investigates the thermal stability of BHs in the presence of thermal fluctuations and develops various thermodynamical quantities for different types of BHs. Researchers also explored the various characteristics of well-known BH solutions [66–79] . It has been demonstrated that the NLED candidates are effective testing grounds for avoiding several issues with the classical Maxwell theory. The curiosity in NLED began with the early work of Born and Infeld [80] , whose main goal was to alter the classical Maxwell theory to solve the issue of the electron’s infinite energy. Currently, the reason behind the huge attraction of the NLED theories is because of their appearance in the framework of the less-energy limit of heterotic string theory in which a Gauss–Bonnet factor, coupled with quartic contractions of Maxwell field strength, occurs and solutions of BHs can be determined. It is also considered that these theories act as a strong gadget for the construction of BH solutions. Some other NLED theories like the exponential, logarithmic, and the power law Maxwell fields have attained a greater attraction than those presented by Born–Infeld [81] . Under the influence of NLED, Balart and Vagenas [82] presented the distinct charged regular BH solutions and observed that the conduct of a few BHs is similar to RN BH. The intriguing thermal properties of nonlinear electrodynamic theories are the subject of current research. A second-order phase transition was discovered by Tharanath et al. [83] while studying the thermal parameters of four typical BHs. The impact of nonlinear electrodynamic theories on several aspects related to BHs, such as shadows, deflection angles, QNMs, and greybody terms, has been investigated [84] . The study of thermal fluctuations, QNMs and BH stability of various BH solutions are explored in [85–91] . In the present paper, we examine the T-duality charged BH by using logarithmic correction. The current T-duality charged BH and the observation of null geodesics, QNMs, as well as BH shadows are covered in Section “Physical Characteristics of charged T-duality BH”. In Section “Thermal Fluctuations with simple logarithmic corrections”, we delve further into the thermodynamics by examining the impact of logarithmic correction. Also, the phase transition and emission energy were explored. We present some conclusions in the last Section “Concluding Remarks”. 
Physical characteristics of charged T-duality BH Charged T-duality BH refers to a specific type of BH solution in string theory that exhibits electric charge and is invariant under T-duality transformations. This means that the BH solution remains unchanged when certain spatial dimensions are compactified and interchanged with their dual counterparts. The concept of charged T-duality BHs plays a crucial role in understanding the interplay between string theory, gravity, and the behavior of fundamental particles at the quantum level. The existence of charged T-duality black holes can have far-reaching astrophysical and cosmological consequences, offering insights into fundamental phenomena such as dark matter, dark energy, and the early universe. Studies like [92] have shown that these black holes may serve as valuable probes for understanding the gravitational effects associated with dark matter and potentially influencing models of dark energy dynamics. Additionally, the presence of charged T-duality black holes can inform our understanding of the early universe's evolution, impacting theories related to cosmic inflation and structure formation [21,93,94]. By examining the properties and behaviors of these black holes, we gain a unique perspective on the underlying mechanisms governing these cosmological phenomena. The line element of the charged T-duality BH can be written as [21] (1) \( ds^2 = F(r)\,dt^2 - F(r)^{-1}\,dr^2 - r^2\,d\theta^2 - r^2\sin^2\theta\,d\phi^2 \), where the metric function is defined as (2) \( F(r) = 1 - \frac{2m\,r^2}{(\mu^2+r^2)^{3/2}} + \frac{B(r)\,H^2 r^2}{(\mu^2+r^2)^{2}} \), with \( B(r) = \frac{3\mu^2}{8r^2} + \frac{3\pi}{16\mu}\sqrt{\mu^2+r^2}\left(1 - \frac{2}{\pi}\frac{r^3}{(\mu^2+r^2)^{3/2}}\tan^{-1}\frac{r}{\mu}\right) + \frac{5}{8} \). Here μ represents the zero-point length, m denotes the BH mass, and H denotes the T-duality parameter. It is interesting to mention that the zero-point length coincides with the nonlinear electromagnetic charge of the Bardeen BH. We derive the following spacetimes for various physical parameter selections. • The Bardeen BH is recovered if μ ≠ 0 and H = 0. • The Schwarzschild BH is recovered if μ = 0 = H. • For non-zero values of μ and H, the metric represents the charged T-duality BH [21]. Schwarzschild BHs are affected by the T-duality transformation and NLED, as seen in Fig. 1. On the left side of the figure, we can see the Schwarzschild BH setup without H and μ. It is noted that the physical parameters H and μ greatly affect the position of the BH horizon. The horizon radius for supermassive BHs becomes obvious at bigger H values. As shown in Fig. 1, the event horizon shifts towards the central position of the BH geometry for massive BHs. Null geodesics In curved spacetime, massless particles follow trajectories called null geodesics, whose tangent vectors are of zero magnitude. We derive the equations of motion from the Hamiltonian for the trajectory of a massless photon in the BH solution (2), restricted to the equatorial plane with θ = π/2 [95–97]. It is feasible to obtain the canonically conjugate momenta of the charged T-duality BH by using the following formula: (3) \( 2\mathcal{H} = -\frac{p_t^2}{F(r)} + F(r)\,p_r^2 + \frac{p_\phi^2}{r^2} \).
Here \( p_t = F(r)\,\dot t = E \), \( p_r = F(r)^{-1}\,\dot r \), \( p_\theta = r^2\dot\theta \), and \( p_\phi = r^2\sin^2\theta\,\dot\phi = L \), where the angular momentum is denoted by the letter L, while the energy is denoted by the letter E. As a result, the equations of motion can be deduced as follows: \( \dot t = E/F(r) \), \( r^2\dot r = \pm\sqrt{R} \), \( r^2\dot\theta = \pm\sqrt{\Theta} \), \( \dot\phi = L/r^2 \). Now, we get the equation of radial null geodesics in the form \( \dot r^2 + V_{eff}(r) = 0 \), where the potential function becomes (4) \( V_{eff} = F(r)\,\frac{L^2}{r^2} - E^2 \). Null circular geodesics obey the restrictions (5) \( V_{eff} = 0 \) and \( \partial V_{eff}/\partial r = 0 \), which, for the metric function (2), reduce to the circular-photon-orbit condition (6) \( 2F(r_p) - r_p\,F'(r_p) = 0 \); after substituting Eq. (2), this becomes a lengthy transcendental equation that does not yield an analytical solution for the photon radius \( r_p \). The numerical values for the photon radius \( r_p \) are displayed in Table 1. These values are different for each of the distinct values of the magnetic charge μ and of H. It is important to take note of the fact that larger values of both H and μ result in a decrease in the photon radius. BH shadow The term "BH shadow" refers to the region of space that surrounds a BH and is characterized by the fact that light is unable to escape due to the intense gravitational pull. To produce this shadow, light is bent around the event horizon of the BH, which results in the formation of a dark region at the location of the black hole. The pioneering image of a BH shadow revealed in 2019 offered a fresh and astonishing depiction of these mysterious happenings in the cosmos. Through the use of calculations, the size of the shadow radius of the BH can be estimated as [98] (7) \( r_{sh} = \frac{r_p}{\sqrt{F(r_p)}} \). The numerical values of the shadow radius are presented in Table 1. These values correspond to the photon radius values that were produced by adjusting μ and H. The dashed circles denote the photon radius and the solid circles represent the shadow radius, as shown in Figs. 2 and 3. An increase in both μ and H leads to a reduction in the shadow radius as well. Quasinormal modes QNMs are oscillatory solutions of the wave equations describing the behavior of perturbations in compact objects and BHs. To comprehend the stable configurations of these systems, one must understand the modes that gradually diminish over time and are crucial for this process. As part of our investigation into the stable configurations of the BH solution that is now under consideration, we are concentrating on the crucial role that QNMs play. It is possible to divide QNM frequencies into real and imaginary components, as shown by the equation \( \omega = \omega_R + i\,\omega_I \).
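The photon and shadow radii reported in Table 1 are obtained numerically rather than in closed form. A minimal sketch of that step is given below, assuming the uncharged limit H = 0 of Eq. (2) and illustrative values m = 1 and μ = 0.3; the bracketing interval and the finite-difference step are likewise assumptions made for the example rather than values taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def F(r, m=1.0, mu=0.3):
    # Metric function of Eq. (2) in the uncharged limit H = 0 (Bardeen-type form)
    return 1.0 - 2.0 * m * r**2 / (r**2 + mu**2) ** 1.5

def photon_sphere(F, lo=1.0, hi=10.0, h=1e-6):
    # Solve the circular-photon-orbit condition 2 F(r) = r F'(r) by bracketing
    g = lambda r: 2.0 * F(r) - r * (F(r + h) - F(r - h)) / (2.0 * h)
    return brentq(g, lo, hi)

r_p = photon_sphere(F)
r_sh = r_p / np.sqrt(F(r_p))      # shadow radius, Eq. (7)
Omega = np.sqrt(F(r_p)) / r_p     # photon-sphere angular velocity used in the eikonal QNM formula below
print(r_p, r_sh, Omega)
```

For μ → 0 the same routine reproduces the Schwarzschild values r_p = 3m and r_sh = 3√3 m, which provides a convenient sanity check before the charged metric function is used.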
To comprehend the dynamics and stability of BHs, as well as to determine whether or not the predictions of general relativity about BH behavior are accurate, it is essential to have a solid understanding of these frequencies. A reflection of the decay rate of disturbances that occur near a BH can be seen in the imaginary part of the overall frequency. When the imaginary component is negative, it shows that the system is stable, which means that disturbances are decreasing exponentially. In contrast, a positive imaginary part is indicative of instability, which indicates that disturbances get more severe over time. When it comes to the dynamics of BHs, stability is denoted by a negative imaginary component of the QNM frequency. Through determining the solution of the scalar field equation presented in Ref. [99], it is possible to compute these frequencies while taking the geometry of the BH into account: (8) \( \frac{1}{\sqrt{-g}}\,\partial_\mu\left(\sqrt{-g}\,g^{\mu\nu}\partial_\nu\phi\right) = 0 \). When the separation of variables is taken into consideration, the corresponding solution takes the form (9) \( \phi = \frac{1}{r}\sum_{lm} e^{i\omega t}\,u_{lm}(r)\,Y_{lm}(\theta,\phi) \), where \( Y_{lm} \) represents the spherical harmonics in this context. In Schrödinger-like form, the radial equation, written in terms of the tortoise coordinate \( dr_* = dr/F(r) \), becomes (10) \( \left[\frac{d^2}{dr_*^2} + \omega^2 - V_0(r_*)\right]u(r) = 0 \), with \( V_0(r_*) = F(r)\left(\frac{F'(r)}{r} + \frac{l(l+1)}{r^2}\right) \). The WKB formula is considered in the large-l limit to construct the QNFs, as stated in Refs. [100–104]. As a result, we obtain (11) \( \omega = l\,\Omega - i\left(n+\frac{1}{2}\right)|\lambda| \), with \( \Omega = \frac{\sqrt{F(r_p)}}{r_p} \) the angular velocity of the photon sphere and \( \lambda = \sqrt{\frac{F(r_p)}{2}\left(\frac{2F(r_p)}{r_p^2} - F''(r_p)\right)} \) the corresponding Lyapunov exponent; substituting Eq. (2) turns both into lengthy closed-form expressions in \( r_p \), μ, H and m. According to the information provided in Table 2, we perform calculations to find the respective values of the QNMs together with their associated real as well as imaginary parts. Fig. 4 provides a visual representation of the components of the QNM frequencies. The fact that the imaginary part of the QNMs shows a downward trend reflects the fact that the BH configuration is consistently stable. As can be seen in the left plot of Fig. 4, the real part of the QNMs grows when the values of μ and H increase. Smaller values of μ increase the size of the imaginary component, while higher values decrease it; Fig. 4 shows this in the right plot. Furthermore, the magnitude of the imaginary component reduces as the value of H increases. Thermal fluctuations with simple logarithmic corrections Incorporating basic logarithmic corrections into the study of thermal fluctuations of charged BHs with T-duality is crucial for improving our knowledge of the system's behavior. A dual description of charged BHs is made possible by T-duality, a basic symmetry in string theory, which shows how many physical phenomena are interrelated. To better understand how thermal variations impact the charged BH system, researchers can incorporate basic logarithmic corrections into their study.
By making these adjustments, we can better anticipate changes across different temperature regimes, which clarifies the complex interplay of charge, temperature, and entropy in the thermodynamics of BHs. By including basic logarithmic corrections in this framework, we may enhance the accuracy of thermal fluctuation modeling and gain a significant understanding of the charged BH's thermodynamic characteristics. In general, the application of basic logarithmic adjustments to the investigation of charged BH thermal fluctuations with T-duality provides a more complete picture of the system dynamics and helps to enhance theoretical models in BH physics. Furthermore, in terms of the event horizon radius \( r_h \) (defined by \( F(r_h) = 0 \)), the mass of the BH is given as (12) \( m = \frac{(r_h^2+\mu^2)^{3/2}}{2r_h^2} + \frac{H^2 B(r_h)}{2\sqrt{r_h^2+\mu^2}} \). By utilizing the Bekenstein area–entropy connection [105], we can obtain the BH entropy as (13) \( S = \frac{1}{4}\int_0^{2\pi}\!\int_0^{\pi}\sqrt{g_{\theta\theta}\,g_{\phi\phi}}\;d\theta\,d\phi = \pi r_h^2 \). We can use the formulation \( T = \frac{dM}{dS} \) [28] to determine the Hawking temperature. Mathematically, it can be obtained as (14) \( T = \frac{H^2 r_h^2\left[(r_h^2+\mu^2)\,B'(r_h) - r_h\,B(r_h)\right]}{4\pi\,(r_h^2+\mu^2)^{3}} + Z(r_h) \), where \( Z(r_h) = \frac{r_h^6 - 3r_h^2\mu^4 - 2\mu^6}{4\pi\,r_h\,(r_h^2+\mu^2)^{3}} \). In the BH geometry, the heat capacity is given by the expression \( \mathbb{C} = T\,\frac{\partial S}{\partial T} \) [105,106]; as computed from Eqs. (13) and (14), it is a lengthy closed-form expression in terms of \( B(r_h) \), \( B'(r_h) \) and \( B''(r_h) \). To ascertain whether or not the BH configuration is thermodynamically stable, it is essential to identify the phase transition or Davies point, which is the critical point at which \( \mathbb{C} \) diverges. The zone in which \( \mathbb{C} \) is positive before the Davies point represents stability, whereas the region in which it is negative after the Davies point represents instability; this shift from one phase to the other might take place in either direction [105,106]. The concept of corrected entropy is one approach that can be utilized to incorporate the thermal fluctuations that are associated with a BH. The quantum effects that occur close to the event horizon are taken into account by the corrected entropy. These effects are important for understanding the behavior of BHs. In addition to providing a more accurate portrayal of thermal fluctuations, these corrections also provide significant insights into the dynamics of these strange cosmic phenomena. In this context, we determine the influence of heat fluctuations on the T-duality charged BH by applying the partition function approach to obtain the system's corrected entropy. This allows us to analyze the T-duality charged BH via [106–108] (15) \( R(\xi) = \int_0^{\infty}\rho(E)\,\exp(-\xi E)\,dE \), where the density of states is denoted by the symbol \( \rho(E) \), and the average energy is denoted by the symbol E. Utilizing the inverse Laplace transform of the partition function described above, one can arrive at the solution (16) \( \rho(E) = \frac{1}{2\pi i}\int_{\xi_0 - i\infty}^{\xi_0 + i\infty}\exp(\xi E)\,R(\xi)\,d\xi = \frac{1}{2\pi i}\int_{\xi_0 - i\infty}^{\xi_0 + i\infty}\exp(\tilde S(\xi))\,d\xi \), where \( \tilde S(\xi) = \xi E + \ln R(\xi) \) is greater than zero.
With the Taylor series expansion about ξ , we are able to obtain the following: ξ 0 In addition, it is necessary to verify the following conditions in order to determine the equilibrium entropy (17) S ̃ ( ξ ) = S + 1 2 ( ξ − ξ 0 ) 2 ∂ 2 S ̃ ( ξ ) ∂ ξ 2 | ξ = ξ 0 + O ( ξ − ξ o ) 2 . . This condition states that S and ∂ S ∂ ξ = 0 . By utilizing Eq. ∂ 2 S ∂ ξ 2 > 0 (17) in (16) , we are able to obtain Further, it yields (18) ρ ( E ) = 1 2 π i exp ( S ) ∫ d ξ exp ( 1 2 ( ξ − ξ 0 ) 2 ∂ 2 S ̃ ( ξ ) ∂ ξ 2 ) . [106–108] which becomes (19) ρ ( E ) = 1 2 π exp ( S ) ( ( ∂ 2 S ̃ ( ξ ) ∂ ξ 2 ) | ξ = ξ 0 ) − 1 2 , We may use a broader parameter (20) S ̃ = S − 1 2 ln ( S T 2 ) + η S . without losing generality, except for the factor γ , which increases the influence of correction terms on the entropy of BH. Taking into consideration of this circumstance, the corrected entropy can be expressed as 1 2 [109] which leads (21) S ̃ = S − γ ln ( S T 2 ) + η S , (22) S ̃ = η π r h 2 + π r h 2 − γ log π r h 2 r h 2 H 2 r h 2 + μ 2 B ′ ( r h ) − r h B ( r h ) 4 π r h 2 + μ 2 3 + Z ( r h ) 2 . As shown in Fig. 5 , we investigate the influence that logarithmic corrections and T-duality have on the Bardeen black hole, taking into account a range of different values for and μ . According to the findings of our investigation, the entropy of the system continuously increases across the full parameter range for all of the scenarios that involve larger BHs. In the presence of the T-duality parameter, we make the interesting observation that the graph that depicts the fluctuation moves away from the BH center. To be more specific, as the value of H is increased, the corrected entropy displays a greater degree of fluctuation away from the larger BHs in comparison to the smaller ones, as demonstrated in the right plot of μ Fig. 5 . Following this, we will determine the corrected energies of the BH geometry that is being studied. Using the following relations, the Helmholtz free energy is calculated as [106,110] : yields (23) F ̃ = − ∫ S ̃ d T , with F ̃ = − 1 4 π 2 ∫ 3 r h 2 H 2 r h 2 − μ 2 B ( r h ) + r h 2 + μ 2 Y 4 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 r h 2 r h 2 + μ 2 4 d r h , Y 4 ( r h ) = r h H 2 r h r h 2 + μ 2 B ′ ′ ( r h ) + 2 μ 2 − 3 r h 2 B ′ ( r h ) + 4 π r h 2 + μ 2 3 Z ′ ( r h ) , Y 5 ( r h ) = r h 2 H 2 r h 2 + μ 2 B ′ ( r h ) − r h B ( r h ) 4 π r h 2 + μ 2 3 . There is an effect that fluctuations in temperature have on the physical features of the geometry of the BH. It is worth noting that the graph of Helmholtz free energy for the Bardeen BH exhibits a constant drop for small radii values. However, this tendency changes when the T-duality parameter is included, as illustrated in Fig. 6 . The charged T-duality BH configuration is highlighted by this behavior, which displays the unique properties of the configuration. Within the context of corrected entropy, the BH internal energy is determined from the expression : U ̃ = S ̃ T + F ̃ [106,111] . This can be discovered as U ̃ = Y 5 ( r h ) + Z ( r h ) η π r h 2 + π r h 2 − γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 − 1 4 π 2 ∫ 3 r h 2 H 2 ( r h − μ ) ( r h + μ ) B ( r h ) + r h 2 + μ 2 Y 4 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 r h 2 r h 2 + μ 2 4 d r h . We study the internal energy of BHs under a variety of parameter selections within the context of logarithmic corrections, as shown in Fig. 7 . 
When is at its smallest value, the internal energy exhibits a declining tendency for larger BHs across all μ values. This effect is observed for all black holes. The internal energy, on the other hand, becomes positive and displays a growing pattern for larger BHs, as shown in the right plot of H Fig. 7 . This is the case for higher values of both and H . μ Also, BH volume is determined as [106,109] and the respective expression for the pressure turns out to be (24) V = 4 3 π r h 3 , hence (25) P ̃ = − d F ̃ d V = − d F ̃ d r h d r h d V , P ̃ = 3 r h 2 H 2 ( r h − μ ) ( r h + μ ) B ( r h ) + r h 2 + μ 2 Y 4 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 16 π 3 r h 4 r h 2 + μ 2 4 . BH enthalpy ( ) become ( H ̃ = U ̃ + V P ̃ ) [109] H ̃ = π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 − 6 r h 3 μ 2 H 2 B ( r h ) + r h Y 6 ( r h ) + 12 π r h 2 + μ 2 4 Z ( r h ) 12 π 2 r h 2 r h 2 + μ 2 4 − 3 12 π 2 ∫ 3 r h 2 H 2 ( r h − μ ) ( r h + μ ) B ( r h ) + r h 2 + μ 2 Y 4 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 r h 2 r h 2 + μ 2 4 d r h . In Fig. 8 , we investigate the influence that logarithmic corrections have on the enthalpy of BHs over a range of parameter options. For any possible combination of physical factors, the mathematical representation of enthalpy steadily increases, reaching its highest point for BHs that are in the bigger size range. Moreover, it is worth noting that the enthalpy increases as the T-duality parameter is increased, as demonstrated in the right plot of Fig. 8 . It is possible to obtain the Gibbs free energy by using the equation ( G ̃ = − S ̃ T + H ̃ ) [109] : G ̃ = − 3 12 π 2 ∫ 3 r h 2 H 2 ( r h − μ ) ( r h + μ ) B ( r h ) + r h 2 + μ 2 Y 4 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 r h 2 r h 2 + μ 2 4 d r h + 3 r h 2 H 2 ( r h − μ ) ( r h + μ ) B ( r h ) + Y 6 ( r h ) π 2 r h 4 + η − π r h 2 γ log π r h 2 Y 5 ( r h ) + Z ( r h ) 2 12 π 2 r h r h 2 + μ 2 4 . Throughout all T-duality parameter selections, the Gibbs free energy of the BH that was investigated displays a downward tendency with smaller values of . According to the data presented in μ Fig. 9 , as the value of is increased, smaller BHs exhibit a diminishing pattern, but larger BHs exhibit a rising trend. In the case of smaller BHs, the Gibbs free energy is at its highest for the Bardeen BH. However, as shown in μ Fig. 9 , the maximum value of the Gibbs free energy declines with larger BHs. It is interesting to mention that our obtained results are good agreement with the already published research paper on quantum corrected of charged AdS BH [41] . Phase transition To determine the local thermodynamic stability of the BH, one alternative is to estimate the specific heat, which may be expressed as . At the point where ℂ S = d U ̃ d T equals zero, the phase transition point is discovered. Furthermore, it is found that the BH is locally unstable when ℂ S remains less than zero. The BH is considered to be locally stable if the quantity ℂ S . 
Within the realm of mathematics, it can be stated as \( \mathbb{C}_S > 0 \) [106,111], where \( \mathbb{C}_S \), after substituting the corrected quantities, becomes a lengthy closed-form expression in terms of \( B(r_h) \), \( B'(r_h) \) and \( B''(r_h) \) together with the correction parameters γ and η. We evaluate the specific heat for four different values of μ and of the T-duality parameter, as shown in Fig. 10, which allows us to investigate the thermodynamic stability or instability of the system. It is important to note that the inclusion of the T-duality parameter helps to improve the stability of the Bardeen BH. The BH is shown to demonstrate stability for smaller sizes, with the stability rising for greater values of H and μ. This is a very interesting observation. The stability of the BH closely approaches that of the T-duality charged BHs, as demonstrated in the right plot of the second panel in Fig. 10; this is the case when the value of μ is significantly raised. When the T-duality parameter is incorporated into the Bardeen BH geometry, the result is an improvement in the thermodynamically stable configuration of the BH. Emission energy Quantum fluctuations inside BHs are thought to be the source of the emission energy that is responsible for the discharge of extra particles close to the horizon. Quantum phenomena close to the event horizon of a BH are responsible for the production of this kind of energy, which is referred to as Hawking radiation. One particle of the pair created near the horizon is absorbed by the BH, while the other is released as radiation. The mass of the BH and the pace at which it is accreting matter both have an effect on this process, which is known as Hawking radiation. The investigation of the emission energy of charged T-duality black holes is the primary topic of our present study. The absorption cross-section of these BHs tends to fluctuate around a constant value \( \sigma_{lim} \), which is strongly tied to the event horizon; this occurs as the BH approaches very high energy. It can be written as [112–114] (26) \( \sigma_{lim} \approx \pi r_h^2 \). Through the use of the Hawking temperature of the BH that is being studied, we can write the energy emission rate as [112–114] (27) \( \frac{d^2\varepsilon}{d\omega\,dt} = \frac{2\pi^2\,\sigma_{lim}\,\omega^3}{\exp(\omega/T) - 1} \), which, upon substituting Eq. (26), becomes (28) \( \frac{d^2\varepsilon}{d\omega\,dt} = \frac{2\pi^3\,r_h^2\,\omega^3}{\exp(\omega/T) - 1} \), with T given by Eq. (14). Fig. 11 demonstrates the correlation between the emission energy rate and the frequency based on suitable physical parameter values. The graph illustrates that, as the frequency rises, the rate of energy output increases, reaches its maximum point, and then declines. Without the T-duality parameter, there is a notable change in the emission energy behavior, which subsequently declines for larger values of μ. As both parameters H and μ grow, the emission energy rate goes down. Concluding remarks The goal of studying charged T-duality BHs is to comprehend their characteristics within the framework of string theory. A basic symmetry of string theory that connects various string backgrounds is called T-duality.
This symmetry indicates that BHs in string theory can be invariant under T-duality transformations and exhibit electric charges, which has significant consequences for BH physics. We can learn more about how T-duality, electric charge, and the thermodynamic characteristics of BHs interact in string theory by examining charged T-duality BHs. This contributes to a better comprehension of the underlying properties of BHs and their place in the cosmos. In the framework of string theory, the study concentrated on the thermal fluctuation, emission energy, null geodesics, and QNMs of this BH solution. The findings provide insight into how the BH's curvature behaves and reacts to various disturbances. The results advance our knowledge of the complex interactions that exist in string theory between electric charge, T-duality transformations, and the thermodynamic characteristics of BHs. Some important features of the present study are itemized below: • The research indicates that the physical parameters H and μ have a significant impact on the location of the event horizon in the Schwarzschild BH. Specifically, larger values of H cause the event horizon to move closer to the center of the BH geometry (see Fig. 1). • The shadow radius and photon radius, which are displayed numerically in Table 1, are the results of the study's manipulation of the parameters H and μ. The shadow radius (solid circles) decreases when H and μ are raised, while the photon radius (dashed circles) remains at its computed values, as illustrated by the graphical depiction in Figs. 2 and 3. • For given values of the physical parameters, the research calculated the values of the QNMs and displayed their graphical behavior. The findings suggested that the stable properties of the BH configuration are reflected in the negative behavior of the imaginary component of the QNMs. It is observed that as μ and H grow, the real component of the QNMs increases, while as μ and H increase, the size of the imaginary part reduces. These results shed light on how black holes behave when T-duality transformations and nonlinear electrodynamics are present (see Fig. 4). • The analysis demonstrates that, in the presence of T-duality and the logarithmic correction, the system's entropy grows monotonically for larger BHs. For higher values of μ, the corrected entropy graph exhibits more fluctuation and shifts away from the center, as shown in Fig. 5. • It is noted that the Helmholtz free energy decreases for the choice of the Bardeen BH while it increases for the charged T-duality BH (see Fig. 6). As can be seen in Figs. 7 and 8, the internal energy and enthalpy increase for higher values of μ and H for larger BHs. As seen in Fig. 9, it is found that the Gibbs free energy for the Bardeen BH is maximal for smaller BHs and decreases for larger ones. • The study indicates that the thermodynamic stability of the Bardeen BH geometry is improved by the T-duality parameter, and BH stability is enhanced for higher H and μ values (Fig. 10). For higher μ values, similar stability is observed for BHs charged with T-duality. • The rate of emission energy is observed, and it is found that when both H and μ increase, it decreases (see Fig. 11). The examination of charged T-duality BHs in the context of string theory sheds light on the complicated interactions that exist between electric charge, T-duality transformations, and BH thermodynamic properties.
The paper illustrates how physical factors affect the photon, shadow, and event horizons as well as QNMs, emission energy, entropy, and free energies. It also shows how the T-duality parameter enhances the thermodynamic stability of the Bardeen BH geometry. In general, findings enhance our comprehension of BHs and their role in the universe. CRediT authorship contribution statement Faisal Javed: Writing – original draft, Visualization, Supervision, Software, Project administration, Methodology, Investigation, Funding acquisition, Data curation, Conceptualization. Mansoor H. Alshehri: Writing – review & editing, Methodology, Funding acquisition. Declaration of competing interest There is no conflict of interest. Acknowledgments F. Javed acknowledges Grant No. YS304023917 to support his Postdoctoral Fellowship at Zhejiang Normal University . This research is also supported by Researchers Supporting Project number RSP2024R411 , King Saud University, Riyadh, Saudi Arabia .
REFERENCES:
1. AYONBEATO E (2000)
2. AYONBEATO E (1998)
3. AURILIA A (1989)
4. FROLOV V (1990)
5. DYMNIKOVA I (1992)
6. HAYWARD S (2006)
7. NICOLINI P (2006)
8. NICOLINI P (2010)
9. NICOLINI P (2011)
10. MODESTO L (2011)
11. ISI M (2013)
12. NICOLINI P (2014)
13. FROLOV V (2015)
14. FRASSINO A (2016)
15. CANO P (2019)
16. EDELSTEIN J (2019)
17. HOSSENFELDER S (2013)
18. DOUGLAS M (1997)
19. POURHASSANA B (2020)
20. PADMANABHAN T (1997)
21. PADMANABHAN T (1998)
22. GAETEA P (2022)
23. NICOLINI P (2019)
24. GAETE P (2022)
25. MONDAL V (2022)
26. ALDAY L (2007)
27. BRANDENBERGER R (1989)
28. VENEZIANO G (1991)
29. GASPERINI M (1993)
30. HAWKING S (1975)
31. BEKENSTEIN D (1973)
32. MORE S (2005)
33. MANN R (2024)
34. POURHASSAN B (2016)
35. POURHASSAN B (2017)
36. POURHASSAN B (2018)
37. POURHASSAN B (2016)
38. JAWAD A (2017)
39. ZHANG M (2018)
40. GONZALEZ P (2018)
41. PRADHAN P (2019)
42. UPADHYAY S (2022)
43. NADEEMULISLAM (2023)
44. CHOUGULE S (2023)
45. DELGADO R (2023)
46. KANZI S (2022)
47. WEI S (2018)
48. BHATTACHARYA K (2019)
49. SOROUSHFAR S (2019)
50. SHARIF M (2020)
51. SHARIF M (2020)
52. SHARIF M (2021)
53. POURHASSAN B (2021)
54. SHARIF M (2022)
55. AMATULMUGHANI Q (2022)
56. AMATULMUGHANI Q (2022)
57. ZERILLI F (1970)
58. REGGE T (1957)
59. VISHVESHWARA C (1970)
60. LEAVER E (1985)
61. JING J (2008)
62. HOD S (2015)
63.
64. GHOSH A (2022)
65. GHOSH T (2008)
66. DAVID M (2022)
67. JAWAD A (2020)
68. JAVED F (2023)
69. SHARIF M (2021)
70. JAVED F (2023)
71. DITTA A (2023)
72. ASHRAF A (2024)
73. MUSTAFA G (2024)
74. JAVED F (2024)
75. JAVED F (2023)
76. MUSTAFA G (2023)
77. WASEEM A (2023)
78. JAVED F (2023)
79. JAVED F (2024)
80. JAVED F (2024)
81. LIU Y (2023)
82. BORN M (1934)
83. HENDI S (2015)
84. DEHGHANI M (2016)
85. BALART L (2014)
86. THARANATH R (2015)
87. OKYAY M (2022)
88. JAVED F (2023)
89. JAVED F (2023)
90. JAVED F (2023)
91. YASIR M (2024)
92. GULZODA R (2023)
93. JAVED F (2024)
94. FATIMA G (2024)
95. RUSLI S (2013)
96. JUSUFI K (2023)
97. PARTOUCHE H (2012)
98. BELHAJ A (2021)
99. BELHAJ A (2022)
100. BELHAJ A (2021)
101. PERLICK V (2015)
102. SINGH D (2022)
103. SCHUTZ B (1985)
104. IYER S (1987)
105. KONOPLYA R (2003)
106. MYRZAKULOV Y (2023)
107. CARDOSO V (2009)
108. DAS S (2002)
109. SHARIF M (2022)
110. POURHASSAN B (2018)
111. PRADHAN P (2019)
112. POURHASSAN M (2015)
113. JAWAD A (2017)
114. POURHASSAN B (2021)
115. PAPNOI U (2014)
116. WEI S (2013)
117. ESLAMPANAH B (2020)
|
10.1016_j.heliyon.2024.e40318.txt
|
TITLE: Synthesis, crystal structure, Hirshfeld surface analysis, and DFT calculation of 4-(5-(((1-(3,4,5-trimethoxyphenyl)-1H-1,2,3-triazol-4-yl)methyl)thio)-4-phenyl-4H-1,2,4-triazol-3-yl)pyridine
AUTHORS:
- El-Naggar, Mohamed
- Hasan, Kamrul
- Khanfar, Monther A.
- Delmani, Fatima-Azzahra
- Shehadi, Ihsan A.
- Al-Qawasmeh, Raed
- Elmehdi, Hussein M.
ABSTRACT:
Triazole is considered a privileged scaffold in medicinal chemistry by virtue of its diverse biological activity, and several drugs currently on the market possess a triazole moiety. In this study, click chemistry was performed on the pyridine-based 1,2,4-triazole-tethered propargyl moiety to afford 4-(5-(((1-(3,4,5-trimethoxyphenyl)-1H-1,2,3-triazol-4-yl)methyl)thio)-4-phenyl-4H-1,2,4-triazol-3-yl)pyridine. The new compound was fully characterized by 1H NMR, 13C NMR, HRMS and X-ray diffraction (XRD). XRD data indicated that the structure is triclinic, space group P −1, with a = 6.4427(3) Å, b = 11.4352(4) Å, c = 15.4510(5) Å, α = 97.980(2)°, β = 96.043(2)°, γ = 92.772(2)°, V = 1118.75(7) Å3, Z = 2, T = 152(2) K, μ(MoKα) = 0.094 mm−1, Dcalc = 1.364 g/cm3. The density functional theory (DFT) method, along with Hirshfeld analysis of the optimized X-ray structure of the final product, was used to confirm the molecular and electronic structure of the reported compound.
BODY:
1 Introduction Heterocyclic organic chemistry is one of the most important and well-studied branches of medicinal chemistry. Nitrogen-based heterocyclic compounds display fascinating pharmacological and biological activities, acting as antibacterial, antifungal, antitubercular, antioxidant and antitumor agents [ 1 , 2 ]. Among the nitrogen heterocyclic compounds are the triazoles, a family of five-membered heterocyclic compounds that are an important motif of many compounds with medicinal applications [ 3–6 ]. The five-membered-ring triazoles are of special interest due to the diversity of their biological activities as well as the diversity of their synthesis [ 7–16 ]. Classes of triazoles such as the 1,2,3- and 1,2,4-triazoles continue to inspire the scientific community [ 3 , 11 , 16–20 ]. Furthermore, sulfur-linked 1,2,4-triazoles are considered an important class of sulfur-containing compounds with diverse potential applications in drug discovery, especially the 1,2,4-triazole-3-thione ring system [ 21 ]. Several biologically active 1,2,4-triazole-3-thione derivatives have been reported with a broad spectrum of bioactivities, such as antioxidant [ 22 ], antiviral [ 23 ], and anticancer [ 24 ] activity. On the other hand, 1,2,3-triazoles are considered a privileged structural scaffold. Owing to their linker properties and ease of synthesis, diverse 1,2,3-triazole derivatives have been prepared and screened for biological activities [ 25 ]. Based on the above, and in continuation of our research on the copper(I)-catalyzed alkyne-azide cycloaddition (CuAAC) reaction (click chemistry) [ 26 ], we report the synthesis, crystal structure, Hirshfeld surface analysis, and DFT calculation of 4-(5-(((1-(3,4,5-trimethoxyphenyl)-1 H -1,2,3-triazol-4-yl)methyl)thio)-4-phenyl-4 H -1,2,4-triazol-3-yl)pyridine. 2 Experimental section 2.1 General All chemicals and reagents were purchased from Sigma-Aldrich and Acros and used without further purification. The melting point was measured with a Stuart melting point apparatus and was uncorrected. Nuclear magnetic resonance (NMR) spectra were measured on a Bruker Avance ІІІ-500 MHz spectrometer; 13 C spectra were measured at 125 MHz in deuterated dimethylsulphoxide (DMSO- d 6 ) as the solvent. Chemical shift values ( δ ) are reported in parts per million (ppm) with reference to the residual resonance of the solvent used. The high-resolution mass spectrum (HRMS) was measured (in positive mode) using the electrospray ion trap (ESI pos low mass) technique by collision-induced dissociation on a Bruker APEX-IV (7 T) instrument. Single-crystal X-ray diffraction data were collected using a Bruker D8 Venture single-crystal diffractometer, equipped with Mo and Cu X-ray sources (λ = 0.71073 Å), at 100–293 K. 2.1.1 Procedure for the synthesis of 4-(4-phenyl-5-(prop-2-yn-1-ylthio)-4 H -1,2,4-triazol-3-yl)pyridine (1) Compound 1 was synthesized according to the published procedures [ 4 , 27 ], in brief: Isoniazid (1 equiv.) and phenylisothiocyanate (1.5 equiv.) were refluxed in CH 3 OH for 3 h; upon completion, the solution was cooled and the precipitate was collected. The produced intermediate (a thiosemicarbazide derivative) was refluxed in 2 N NaOH. Thereafter, the solution was cooled and acidified with hydrochloric acid to pH = 5–6. The precipitate formed was collected by filtration. The collected precipitate (1 equiv.) was mixed with triethylamine (1 equiv.) in methanol and stirred at 0–5 °C, and propargyl bromide (1 equiv.) was added slowly. The reaction mixture was stirred at room temperature and monitored by TLC.
Upon completion, the solution was evaporated under vacuum and the precipitate formed was collected and recrystallized from ethanol to afford compound 1 . 2.1.2 Procedure for the preparation of 5-azido-1,2,3-trimethoxybenzene (2) The desired azide derivative was prepared following the standard published procedure [ 26 ], where 1,2,3-trimethoxyaniline (10.0 mmol) was dissolved in cooled HCl (aq) (6.0 mL, 6 M) T = 0–5 °C. To this cold solution, cold solution of sodium nitrite (10.0 mmol) was added, followed by the dropwise addition of an aqueous solution of NaN 3 (10.0 mmol). The mixture was stirred for 20 min at r.t. The produced azide was extracted with ethylacetate. The organic phase was dried over anhydrous sodium sulfate, filtered and evaporated at room temperature to afford the crude azide which was used without further purification. 2.1.3 Procedure for the preparation of 4-(5-(((1-(3,4,5-trimethoxyphenyl)-1 H -1,2,3-triazol-4-yl)methyl)thio)-4-phenyl-4 H -1,2,4-triazol-3-yl)pyridine (3) The target compound 3 , was synthesized following the published procedure [ 4 ], in which, a mixture of 1 (1.0 eq) and azide 2 (272 mg, 1.3 mmol, 1.3 eq) was stirred in DMF (10.0 mL). To this mixture, sodium ascorbate (0.5 mmol) and CuSO 4 ·5H 2 O (0.16 mmol) was added. The reaction progress was monitored using tlc, upon completion, the formed precipitate was filtered off and recrystallized from EtOH/H 2 O to produce a nice pure crystalline white product 3 . Yield 83 %, m. p. 151–152 °C. 1 H NMR (500 MHz, DMSO- d ): 6 δ = 8.77 (s, 1H, triazole H), 8.76–8.50 (br s, 2H, Ar-H), 7.65–7.53 (m, 3H, Ar-H), 7.50–7.73 (m, 2H, Ar-H), 7.30 (br s, 2H, Ar-H), 7.17 (s, 2H, Ar-H), 4.49 (s, 2H, SCH 2 ), 3.86 (s, 6H, ArOCH 3 ), 3.70 (s, 3H, ArOCH 3 ). 13 C NMR (125 MHz, DMSO- d ): 6 δ = 153.98, 152.95, 150.61, 143.85, 137.86, 134.24, 133.80, 132.88, 131.00, 130.65, 128.04, 122.83, 98.60, 60.68, 56.78, 27.29. HRMS (ESI) ( m / z ): 502.1661 [M+H] + calcd. C 25 H 24 N 7 O 3 S, found 502.1649. 2.2 Results and discussion The designed compound was synthesized from the propargyl tagged 1,2,4-triazole derivative 1 through the well-established CuAAC reaction click methodology with the azide derivative 27 2 as shown in 26 Scheme 1 . Compound 3 was isolated as a white solid in 83 % yield. The success of this synthesis was clear from the 1 H and 13 C NMR analysis of the produced product, the non-existence of any terminal alkyne proton which is characteristic signal for compound 1 along with the downfield singlet signal resonating at δ = 8.77 ppm which is assigned to the H-4 proton of the newly formed 1,2,3 triazole ring, indicated the successful approach and the formation of the new ring system, on the top of that the 13 C NMR of compound 3 showed no signals for any sp-hybridized carbons. Furthermore, 1 H NMR of compound 3 showed the three methoxy groups signals resonating at δ = 3.86 ppm (6H) and δ = 3.70 ppm (3H). Compound 3 showed moderate activity against Escherichia coli ATCC 25,922, and Candida albicans NRRL Y–477 at a concentration of 2 mg/100 μL. No significant cytotoxic activity was observed for compound 3 when tested in vitro against human colon carcinoma (HCT116), human cervix carcinoma (HeLa) and human breast adenocarcinoma (MCF7)) at 10 μM concentration [ 4 ]. 2.3 Structure determination The molecular structure for compound 3 is shown in Fig. 1 with atom labelling. Single crystal of C 25 H 25 N 7 O 4 S was obtained through crystallization of pure compound from ethanol /water . 
A suitable needle-like colorless specimen was selected and mounted on a Bruker D8 Venture diffractometer. The frames were integrated with the Bruker SAINT software package using a narrow-frame algorithm. The crystal was kept at 100(2) K during data collection. Using the APEX 5 software, the structure was solved and refined with the Bruker SHELXTL software package; further refinements were performed using Olex2. Data were corrected for absorption effects using the multi-scan method (SADABS). The structure was solved by intrinsic phasing and refined with the SHELXL refinement package using least-squares minimization. ORTEP plots were generated using the ORTEP-3 program (version 2020.1 for Windows). The crystal structure data, parameters and measurement conditions for compound 3 are listed in Table 1. Crystallographic data for the structure of compound 3 have been deposited with the Cambridge Crystallographic Data Centre under depository No. CCDC 2355975. Copies of these data can be obtained free of charge on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (fax: +44-1223-336033, e-mail: deposit@ccdc.cam.ac.uk or http://www.ccdc.cam.ac.uk). The asymmetric unit of compound 3 comprises a single molecule with one lattice water, as depicted in Fig. 1. It is composed of five unsaturated rings: two triazole rings, one pyridyl ring and two phenyl rings. The molecule is not planar owing to steric repulsion between the rings. The pyridyl ring makes an angle of 28.35(7)° between its normal plane and that of the 1,2,4-triazole, while the corresponding angle between the normal plane of the phenyl group attached to C6 and that of the 1,2,4-triazole is 77.65(8)°. In the 1,2,3-triazole, the angle between the normal plane of the phenyl group attached to N2 and that of the 1,2,3-triazole is 23.79(7)°. The N–N [N5–N6: 1.390 Å; N1–N7: 1.310 Å; N1–N2: 1.356 Å] and C–N [N2–C16: 1.354 Å; N7–C15: 1.362 Å; C7–N5: 1.315 Å; C7–N3: 1.370 Å; C6–N3: 1.378 Å; C6–N6: 1.315 Å] bond lengths of the triazole rings are within the values reported for N–N and C–N bonds in triazole rings. The angle around the sulfur atom (C14–S1–C7) shows a normal bent angle of 99.20°. The water molecule of crystallization in 3 forms a hydrogen bond with the pyridyl nitrogen N4 at a distance of 1.94 Å. The structure shows several additional hydrogen-bonding interactions, as shown in Fig. 2. The molecules are linked in the crystal via hydrogen-bonding interactions between N5 and N6 of the 1,2,4-triazole and the hydrogen at the 1,2,3-triazole, H16 (1-X, 1-Y, 1-Z), and a hydrogen on the phenyl at the 1,2,3-triazole, H22 (1-X, 1-Y, 1-Z), at distances of 2.735 and 2.559(2) Å, respectively. A further hydrogen-bonding interaction links these dimers between the sulfur atom S1 and a hydrogen of the phenyl ring attached to the 1,2,4-triazole, H9 (1-X, 2-Y, 1-Z), at a distance of 2.7694(6) Å. 2.4 Computational studies Density functional theory (DFT) and time-dependent DFT (TD-DFT) methods were used to provide insight into the ground and excited states of 3 in the gas phase. The structure of the final product 3 was optimized at the B3LYP level of theory with the 6-311+G(2d,p) basis set [28–30]. Single-point excited-state energies of the optimized gas-phase structure were determined at the same level of theory using TD-DFT. All DFT calculations were conducted using Gaussian 16 (Linux) through input files prepared with the GaussView 6 (GV6) software.
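The TD-DFT results reported in the next section relate the dominant electronic transition, at 308.73 nm, to the frontier-orbital energies. As a quick, illustrative consistency check (not part of the authors' workflow), the standard photon energy conversion E(eV) = hc/λ ≈ 1239.842/λ(nm) can be applied in a few lines of Python; the orbital energies used in the second function are hypothetical placeholders in hartree.

```python
# Minimal sketch: relating TD-DFT excitation wavelengths to energies in eV.
# hc = 1239.842 eV·nm and the hartree-to-eV factor are physical constants;
# the example orbital energies below are hypothetical.

HC_EV_NM = 1239.842          # photon energy (eV) times wavelength (nm)
HARTREE_TO_EV = 27.2114      # 1 hartree in eV

def wavelength_to_ev(lambda_nm: float) -> float:
    """Convert an excitation wavelength in nm to energy in eV."""
    return HC_EV_NM / lambda_nm

def orbital_gap_ev(e_homo_hartree: float, e_lumo_hartree: float) -> float:
    """HOMO-LUMO gap in eV from orbital energies given in hartree."""
    return (e_lumo_hartree - e_homo_hartree) * HARTREE_TO_EV

if __name__ == "__main__":
    # The reported transition at 308.73 nm corresponds to ~4.016 eV,
    # consistent with the 4.0159 eV HOMO-LUMO gap quoted in the text.
    print(f"308.73 nm -> {wavelength_to_ev(308.73):.4f} eV")
    # Hypothetical orbital energies (hartree), for illustration only:
    print(f"gap -> {orbital_gap_ev(-0.220, -0.072):.3f} eV")
```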
The X-ray crystal structure was used as input to the CrystalExplorer platform [31] to perform the Hirshfeld surface analysis, thereby providing insight into the types of interactions between the different groups. 2.4.1 Results and discussion A comparative structural analysis between the optimized and X-ray structures revealed no major discrepancies in bond lengths, angles or dihedral angles, as shown in Table 2. In addition, the frontier molecular orbital (FMO) analysis using TD-DFT showed that the major transition occurs from the HOMO (orbital 131, n) to the LUMO (orbital 132, π*), with an absorption at 308.73 nm and an oscillator strength of 0.3455 (Fig. 3). Transitions from lower-lying occupied orbitals (HOMO−m) to higher virtual orbitals (LUMO+m) were not observed in the TD-DFT results, where the recorded oscillator strengths were zero. Furthermore, the FMO analysis revealed that the main contribution to the n-type (donor) orbital comes from the lone pairs of electrons on the sulfur atoms, whereas the main constituent of the π* (acceptor) orbital is the extended π system of the aromatic rings. The Hirshfeld analysis showed that the bulk of the interactions is due to hydrogen bonding, as hydrogen atoms constitute the major share of the surface area exposure (63%). The sulfur and triazole nitrogen atoms are exposed to external interactions, as revealed on the Hirshfeld surfaces, where the red-highlighted regions signify exposure in terms of de and di (the distances from the surface to the nearest external and internal nuclei, respectively) (Fig. 4). The Mulliken charge distribution showed that the negative charges are located on the sulfur and triazole nitrogen atoms (Fig. 5). N36 carried a positive Mulliken charge due to the electron-withdrawing effect of the negatively charged oxygens on the adjacent phenyl group. In addition, all hydrogen atoms were positively charged. 3 Conclusion The electronic and molecular properties of the optimized product 3 obtained through the DFT and TD-DFT calculations, along with the Hirshfeld surface analysis of the X-ray structure, provide a clear insight into the role of sulfur and the triazole nitrogens in the crystal packing and reactivity. The C–H…N interactions were confirmed by analysis of the fingerprint plots of the Hirshfeld molecular surface, where the H–H contact contribution was found to be 62.6%. Moreover, the Hirshfeld surface analysis also showed that the reactive electrophilic and nucleophilic sites are located around the nitrogen and hydrogen atoms, respectively. In addition, the molecular structure optimized using DFT methods with a high level of theory and basis set was in excellent agreement with the X-ray determined structure, providing additional support for the conclusions reached. The FMO analysis predicted the HOMO–LUMO gap to be 4.0159 eV, indicating the considerable stability of the obtained structure. CRediT authorship contribution statement Mohamed El-Naggar: Writing – review & editing, Writing – original draft, Methodology, Funding acquisition, Conceptualization. Kamrul Hasan: Writing – review & editing. Monther A. Khanfar: Writing – review & editing, Software, Methodology, Investigation, Conceptualization. Fatima-Azzahra Delmani: Writing – review & editing, Data curation. Ihsan A. Shehadi: Writing – review & editing, Writing – original draft, Validation, Software, Methodology. Raed Al-Qawasmeh: Writing – original draft, Methodology, Investigation, Conceptualization. Hussein M.
Elmehdi: Writing – review & editing, Methodology, Data curation. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements The authors are grateful to the Deanship of Scientific Research, University of Sharjah, for funding this work through Competitive Research Projects Nos. 21021440109 and 22021440134.
REFERENCES:
1. ALQAWASMEH R (2014)
2. ELNAGGAR M (2023)
3. ABDELLI A (2021)
4. ELNAGGAR M (2024)
5. SALAMEH B (2020)
6. HERSI F (2020)
7. VALDOMIR G (2018)
8. CARREIRO E (2022)
9. GRACIANO I (2022)
10. ALSHEIKHALI A (2021)
11. ELNAGGAR M (2020)
12. DAI J (2022)
13. KUMAR S (2024)
14. KUMAR A (2024)
15. DIXIT D (2021)
16. KUMAR V (2023)
17. VIJESH A (2013)
18. ZAMPIERI D (2019)
19. PALASKA E (2002)
20. BALAKIT A (2020)
21. KUCUKGUZELS G (2015)
22. AYHANKILCIGIL G (2007)
23. KHARB R (2011)
24. SHIVARAMAHOLLA B (2003)
25. BOZOROV K (2019)
26. SABER S (2023)
27. RADWAN A (2013)
28. DENNINGTON R (2016)
29. FRISCH M (2016)
30. BECKE A (1993)
31. SPACKMAN P (2021)
|
10.1016_j.aej.2020.05.025.txt
|
TITLE: Computational Fluid Dynamics (CFD) simulation of liquid column separation in pipe transients
AUTHORS:
- Warda, H.A.
- Wahba, E.M.
- El-Din, M. Salah
ABSTRACT:
The present study deals with the computational simulation of transient events of water hammer and column separation in a water pipeline system. Three-dimensional CFD simulations based on the Finite-Volume (FV) approach are performed to predict pressure fluctuations and to visualize liquid column separation/rejoining caused by the sudden closure of a valve located at the upstream end of the pipe. Explanation of vaporous cavitation phenomena is also presented. The Volume of Fluid (VOF) model and Schnerr-Sauer cavitation model are used to describe the multiphase flow and the transient vaporous cavitation, respectively. Moreover, the shear-stress transport (SST) turbulence model is applied to model the Reynolds stresses using the Boussinesq hypothesis. Present results are compared with available experimental and numerical results from the literature. The comparisons show that the present method gives adequate results. Also, the 3D model adopted is deemed physically superior to the existing 1D models as it removes the restriction of the 1D models that vapor cavities, when formed, fill the whole cross-section of the pipe without radial variation. In addition, 1D models are not able to predict the stratification effect due to density variation of the two phases. Consequently, the 3D model can better visualize the phenomenon of liquid column separation/rejoining in pipes than 1D models.
BODY:
Nomenclature
ρ: density (kg/m³)
α: volume fraction (dimensionless)
μ: molecular viscosity (kg/m·s)
μ_t: turbulent viscosity (kg/m·s)
δ: Kronecker delta function (dimensionless)
ω: turbulence specific dissipation rate (1/s)
Γ_k: effective diffusivity of k (kg/m·s)
Γ_ω: effective diffusivity of ω (kg/m·s)
ν: kinematic viscosity (m²/s)
t: time (s)
n_b: number of bubbles per unit volume of liquid (m⁻³)
R_B: bubble radius (m)
R_e: mass transfer term due to evaporation (kg/m³·s)
R_c: mass transfer term due to condensation (kg/m³·s)
u: velocity (m/s)
p: static pressure (Pa)
P_v: vapor pressure (Pa)
P_b: bubble surface pressure (Pa)
g: gravitational acceleration (m/s²)
k: turbulence kinetic energy (m²/s²)
G_k: production of k (kg/m·s³)
G_ω: generation of ω (kg/m³·s²)
Y_k: dissipation of k due to turbulence (kg/m·s³)
Y_ω: dissipation of ω due to turbulence (kg/m³·s²)
D_ω: cross-diffusion term (kg/m³·s²)
H: pressure head (m)
H_0: pressure head at steady state (m)
a: wave speed (m/s)
V_0: velocity at steady state (m/s)
L: pipe length (m)
Subscripts: v: water vapor phase; l: water liquid phase; i, j, k: directions of the Cartesian coordinates.
1 Introduction Water hammer is a transient fluid phenomenon that occurs in hydraulic systems as a result of a sudden change of fluid velocity. The sudden shutdown of a pump or closure of a valve in pressurized pipeline systems causes fluid transients which may produce extremely high pressures. Very low pressures may also result during transient events, leading to column separation. Column separation is the breaking of liquid columns in fully filled pipelines [1], and it occurs when the fluid pressure drops to the vapor pressure of the liquid, assuming that the liquid is gas-free [2]. Two different types of column separation may occur [1,3]: (1) a localized vapor cavity with a large void fraction (the ratio of the volume occupied by the vapor to the total volume of the liquid/vapor mixture); (2) distributed vaporous cavitation with a small void fraction (close to zero). A localized vapor cavity may form at a pipe boundary (e.g., a closed valve) or at high points along the pipe, where it grows and diminishes according to the dynamics of the system. The collision of two liquid columns, or of one liquid column with a closed end, may result in a high and nearly instantaneous pressure rise. A distributed vaporous cavitation zone forms over long sections of the pipe and is compressed back to the liquid phase when a shock wave front moves into it. Consequently, both water hammer and column separation effects may be encountered during fluid transient events. Water hammer and column separation may impose severe structural loadings on pipeline systems that, unless fully understood and catered for in the design phase, can lead to catastrophic consequences; a renowned example is the accident at the Oigawa hydropower station in Japan in 1950, which resulted in three casualties [1]. Amongst all models used to simulate liquid column separation, the Discrete Vapor Cavity Model (DVCM) is considered the most widely used [1]. It is simply applied together with the classical one-dimensional (1D) water hammer equations, and it can capture many of the physical aspects of column separation phenomena [1]. Simpson and Wylie [4] implemented the DVCM to analyze a horizontal reservoir-pipeline-valve system during the occurrence of column separation.
Warda et al. [5] investigated the failure of a major steel oil transport pipeline in the Middle East using a coupled fluid hammer/low-cycle fatigue analysis. In their study, they used the DVCM to simulate the successive formation and collapse of cavities caused by the repeated shutdown/startup operations of the pipeline, and they concluded that the failure was due to the high pressure rise following the collapse of cavities at the peak point of the pipe. Other well-known methods for the simulation of liquid column separation include the Discrete Gas Cavity Model (DGCM) and the Generalized Interface Vaporous Cavitation Model (GIVCM). The GIVCM can be used to designate the different column separation phenomena (i.e., discrete cavities and vaporous cavitation zones) [3]. Bergant and Simpson [3] presented a comparison between experimental measurements and the numerical results obtained by the DVCM, the DGCM and the GIVCM for different flow conditions following the sudden closure of a valve at the downstream end of a straight sloping copper pipe. Soares et al. [6] studied transient vaporous cavitation in an elastic pipe experimentally and numerically; their numerical study included implementation of the DGCM and the DVCM. They inferred that accounting for unsteady friction losses was vital to accurately calculate the pressure traces over the course of the transient event. Adamkowski and Lewandowski [7] used their own New Single-Zone DVCM to simulate liquid column separation. The cavitation characteristics of the closing valve and the unsteady friction losses were considered in their model. Furthermore, they experimentally visualized liquid column separation by installing a short transparent pipe section behind the closing valve. Simpson and Bergant [8] studied the numerical consistency of the DVCM and several of its adapted models documented in the literature [9–11]; they also analyzed the consistency of the DGCM. Soares et al. [12] presented results for water hammer accompanied by column separation in viscoelastic pipes. Tijsseling and Vardy [13] conducted experimental tests to report the coupled effects of column separation and Fluid-Structure Interaction (FSI) in a closed tee pipe system. Recently, Wang et al. [14] carried out a two-dimensional CFD simulation to depict water hammer with column separation events; the work was based on the FV discretization method and the homogeneous mixture multiphase model. The aforementioned brief review reveals that almost all of the computational efforts to simulate water hammer together with the anticipation of vaporous cavitation are based on 1D models. Despite the improvements imposed on 1D models to better predict pressure fluctuations, there is always a discrepancy between the predicted cavity behavior and the physical cavity visualized in experiments [15]. This discrepancy is attributed to the inherent 1D assumption in the existing models. In this context, the current work aims at removing that 1D assumption by utilizing the Volume of Fluid (VOF) CFD model for the prediction of water hammer and column separation phenomena in pipeline systems, especially the stratification pattern of the two phases due to their differing densities. The adopted CFD procedure is first tested against the numerical/experimental results of laminar and turbulent fluid transients, without column separation, available in the literature.
Then, the computed results of turbulent water hammer events accompanied by column separation are validated against the experimental and DVCM results of Bergant and Simpson [16]. 2 Mathematical model ANSYS Fluent software is used in the analysis, and the details of the adopted numerical procedure are provided in the following subsections. 2.1 VOF model The VOF model is used to capture the interface between the liquid water and water vapor phases. The model is typically used in applications involving stratified flows, the motion of large bubbles in a liquid, and the steady or transient tracking of any liquid–gas interface [17]. In the VOF model, a volume fraction is introduced to represent the respective volume occupied by each phase in every cell of the fluid domain; in each cell, the sum of the volume fractions of all phases is equal to unity. The continuity equation for the volume fraction of the secondary (water vapor) phase has the following Cartesian tensor form [17]:

(1) $\frac{\partial}{\partial t}(\alpha_v \rho_v) + \frac{\partial}{\partial x_i}(\alpha_v \rho_v u_i) = R_e - R_c$

where $R_e$ and $R_c$ are the mass transfer terms due to evaporation and condensation, respectively. In the current study, Eq. (1) is solved using implicit time discretization. The volume fraction of the primary (liquid water) phase is calculated from the following relation [17]:

(2) $\alpha_v + \alpha_l = 1$

2.2 Schnerr and Sauer cavitation model The Schnerr and Sauer cavitation model [17] is included to calculate the mass transfer terms associated with the growth and collapse of the vapor bubbles ($R_e$ and $R_c$, respectively). This model, together with the simultaneous application of the VOF model, was validated for simulating unsteady cavitating flows in turbomachinery applications [18]. The model uses Eq. (3) to correlate the vapor volume fraction ($\alpha_v$) to the number of bubbles per unit volume of liquid ($n_b$), together with Eq. (4), the Rayleigh-Plesset equation for spherical bubble dynamics [19], to model the evaporation and condensation processes as follows:

(3) $\alpha_v = \dfrac{n_b \frac{4}{3}\pi R_B^3}{1 + n_b \frac{4}{3}\pi R_B^3}$

A default value of $n_b = 10^{13}$ is used.

(4) $\dfrac{D R_B}{D t} = \sqrt{\dfrac{2}{3}\,\dfrac{P_b - P}{\rho_l}}$

where $P_b$ is the bubble pressure at its surface, and $P$ is the local far-field pressure. When $P_v \geq P$:

(5) $R_e = \dfrac{\rho_v \rho_l}{\rho}\, \alpha_v (1-\alpha_v)\, \dfrac{3}{R_B} \sqrt{\dfrac{2}{3}\,\dfrac{P_v - P}{\rho_l}}$

When $P_v \leq P$:

(6) $R_c = \dfrac{\rho_v \rho_l}{\rho}\, \alpha_v (1-\alpha_v)\, \dfrac{3}{R_B} \sqrt{\dfrac{2}{3}\,\dfrac{P - P_v}{\rho_l}}$

where $\rho$ in Eqs. (5) and (6) denotes the mixture density. 2.3 Governing equations and numerical methods The unsteady Reynolds-averaged Navier-Stokes (URANS) equations, implementing the Boussinesq hypothesis, are used to simulate the flow turbulence. A single set of continuity and momentum equations is shared by all fluid phases and applied throughout the domain using volume-fraction-averaged fluid properties (i.e., density and viscosity) that depend on the volume fraction of each phase in every control volume, as follows [17,20]:

(7) $\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i) = 0$

(8) $\frac{\partial}{\partial t}(\rho u_i) + \frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[(\mu + \mu_t)\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k}\right)\right] + \rho g_i$

(9) $\rho = \alpha_v \rho_v + \alpha_l \rho_l$

(10) $\mu = \alpha_v \mu_v + \alpha_l \mu_l$
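To make Eqs. (3), (5), (6) and (9) concrete, the following is a minimal Python sketch (not the solver's internal implementation; Fluent evaluates these source terms itself) that inverts Eq. (3) for the bubble radius and evaluates the evaporation/condensation mass transfer rate. All input values in the example are hypothetical.

```python
import math

N_B = 1.0e13  # bubbles per unit volume of liquid (default value, m^-3)

def bubble_radius(alpha_v: float, n_b: float = N_B) -> float:
    """Invert Eq. (3): bubble radius R_B from the vapor volume fraction."""
    return ((alpha_v / (1.0 - alpha_v)) * 3.0 / (4.0 * math.pi * n_b)) ** (1.0 / 3.0)

def mass_transfer(alpha_v, p, p_v, rho_v, rho_l):
    """Evaporation (R_e) or condensation (R_c) source term, Eqs. (5)-(6).

    Returns a signed rate: positive for evaporation (p < p_v),
    negative for condensation (p > p_v), in kg/(m^3 s).
    """
    rho = alpha_v * rho_v + (1.0 - alpha_v) * rho_l  # mixture density, Eq. (9)
    r_b = bubble_radius(alpha_v)
    common = (rho_v * rho_l / rho) * alpha_v * (1.0 - alpha_v) * 3.0 / r_b
    dp = p_v - p
    rate = common * math.sqrt((2.0 / 3.0) * abs(dp) / rho_l)
    return rate if dp > 0 else -rate

if __name__ == "__main__":
    # Hypothetical state: 1% vapor, pressure slightly below vapor pressure.
    print(mass_transfer(alpha_v=0.01, p=1500.0, p_v=1760.0,
                        rho_v=0.014, rho_l=998.0))
```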
The shear-stress transport (SST) turbulence model, developed by Menter [21], is chosen to predict the turbulent (eddy) viscosity. This model is adopted in the study because it can effectively capture the boundary layer alongside freestream independence in the far field [17], and it is considered accurate and reliable for a wide range of flows (for example, adverse-pressure-gradient flows) [21]. The transport equations of the SST turbulence model are given by:

(11) $\frac{\partial}{\partial t}(\rho k) + \frac{\partial}{\partial x_i}(\rho k u_i) = \frac{\partial}{\partial x_j}\left(\Gamma_k \frac{\partial k}{\partial x_j}\right) + G_k - Y_k$

(12) $\frac{\partial}{\partial t}(\rho \omega) + \frac{\partial}{\partial x_j}(\rho \omega u_j) = \frac{\partial}{\partial x_j}\left(\Gamma_\omega \frac{\partial \omega}{\partial x_j}\right) + G_\omega - Y_\omega + D_\omega$

For full details of the different terms of the SST turbulence model, refer to [17,21]. The pressure-based coupled algorithm is used to solve the momentum equations and the pressure-based continuity equation simultaneously. Interpolation of pressure values at cell faces from cell centers is achieved using the pressure staggering option (PRESTO!) scheme. Spatial discretization uses a second-order upwind scheme for the convective terms of all equations and a second-order central-differencing scheme for the diffusive terms, whereas first-order implicit discretization is applied in time with a time step size of 0.0001 s for all reported cases. 2.4 Physical properties of the fluid phases The densities of the vapor/liquid phases are determined using a User Defined Function (UDF) in ANSYS Fluent; the vapor phase density is obtained from the isothermal ideal gas law, whereas the density of the liquid phase is calculated by considering the bulk modulus of elasticity and the wave speed of the slightly compressible liquid as follows [14,22]:

(13) $\rho_l = \rho_{l0}\left(1 + \dfrac{P - P_0}{K_{leq}}\right)$

where $\rho_{l0}$ is the density of the liquid at the reference liquid pressure $P_0$ (absolute), $P$ is the absolute liquid pressure, and $K_{leq}$ is the equivalent bulk modulus of elasticity of the liquid.

(14) $a_l = \sqrt{\dfrac{K_l/\rho_{l0}}{1 + \dfrac{K_l}{E}\dfrac{D}{e}}} = \sqrt{\dfrac{K_{leq}}{\rho_{l0}}}$

where $a_l$ is the wave speed, $K_l$ is the bulk modulus of elasticity at the liquid reference pressure, $D$ is the pipe diameter, $E$ is the Young's modulus of elasticity of the pipe, and $e$ is the pipe wall thickness. 3 Test cases The numerical procedure is used to simulate the experimental cases of laminar fluid transients of Holmboe and Rouleau [23] and turbulent fluid transients of Bergant et al. [24]. For both cases, the experimental setup is a reservoir-pipe-valve system and the transient events are initiated by the sudden closure of the downstream valve. Table 1 presents the parameters considered for the two test cases. For each test case, the numerical simulation is performed in two steps: (1) a steady-state simulation with a pressure-inlet boundary representing the total head of the upstream reservoir and a velocity-inlet at the downstream boundary; (2) a transient-state simulation with the obtained steady-state results as the initial conditions, in which the downstream boundary is changed to a wall boundary condition to represent the sudden closure of the outlet valve. A set of three different grids is used to investigate grid independence. For each case, the fine grid consists of 1,638,000 (273 × 6000) hexahedral cells. Grid refinement is applied near the wall of the internal diameter of the pipe. A cross-section of the fine grid is illustrated in Fig. 1. Numerical uncertainty is estimated using the grid convergence index (GCI) approach [25]. Figs. 2 and 3 show the simulated pressure–time history for all three grids, for the cases of laminar flow and turbulent flow, respectively. Tables 2 and 3 provide the numerical uncertainties associated with the obtained 3D results at the pipe midpoint for the laminar and turbulent flow cases, respectively, which are shown to be minimal.
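As an illustration of the GCI approach of [25] referenced above, the following Python sketch computes the observed order of accuracy by Richardson extrapolation and the fine-grid GCI from three grid solutions; the sample values are hypothetical, not those of Tables 2 and 3.

```python
import math

def fine_grid_gci(f1: float, f2: float, f3: float, r: float, fs: float = 1.25):
    """Grid convergence index for the fine grid (Roache-style estimate).

    f1, f2, f3: solutions on the fine, medium and coarse grids;
    r: constant grid refinement ratio; fs: safety factor.
    Returns (observed order p, GCI_fine as a fraction of f1).
    """
    # Observed order of accuracy from Richardson extrapolation.
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    # Relative error between the two finest grids.
    e21 = abs((f1 - f2) / f1)
    gci = fs * e21 / (r**p - 1.0)
    return p, gci

if __name__ == "__main__":
    # Hypothetical peak pressure heads (m) on fine/medium/coarse grids.
    p, gci = fine_grid_gci(f1=151.8, f2=151.2, f3=149.5, r=1.5)
    print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci:.2f}%")
```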
The 3D results of the fine grid from the present study are validated against available experimental data from the literature [23,24] in Figs. 4 and 5, for the laminar flow and turbulent flow cases. Moreover, Fig. 5 shows the comparison between the simulated results and the results of the 1D Method of Characteristics (MOC) models [26] for the case of turbulent flow. Figs. 4 and 5 show that the results of the 3D model are in good agreement with the experimental data for both test cases. Also, Fig. 5 demonstrates that the numerical procedure of the present study predicts the transient pressure fluctuations, in terms of phase and peak damping, more accurately than the 1D models for the case of turbulent water hammer events. 4 Water hammer with column separation case The 3D procedure, including the cavitation and VOF models, is applied to simulate the turbulent water hammer events accompanied by column separation of the Bergant and Simpson experiment [16]. The main components of the case, illustrated in Fig. 6, are an upstream ball valve, a downstream pressurized tank, and a downward-sloping pipe connecting the inlet valve and the outlet tank. The pipe is made of copper with a length of 37.23 m and an internal diameter of 22.1 mm. The pipe slope is 1 (vertical) to 18.3 (horizontal). The fluid used is water with $\nu = 1.184 \times 10^{-6}\ \mathrm{m^2/s}$. The Reynolds number is 28,000 and the measured wave speed is 1319 m/s. The transient events are initiated by the fast closure of the upstream valve in 0.009 s. The fine grid of the test cases (1,638,000 hexahedral cells) is used for this simulation. Similar to the two test cases, the simulation is performed in two steps: (1) a steady-state simulation with a velocity-inlet boundary condition and a pressure-outlet boundary corresponding to the static head of the downstream tank; (2) a transient-state simulation with the obtained steady-state results as the initial conditions, in which the velocity-inlet magnitude is decreased linearly to zero over the duration of the valve closure by using a UDF. 4.1 Model validation The comparison between the simulated results and the experimental/DVCM results of Bergant and Simpson [16] is given in Fig. 7. The results presented are at two locations: at the valve and at the midpoint of the pipe. The comparison shows that the results of the 3D model are in good agreement with the experimental data and the DVCM results. Fig. 7 shows that the 3D model is slightly faster than the DVCM in predicting the timing of the pressure peaks, which is most likely attributed to the dissipative nature of the deployed first-order temporal discretization scheme [27]. Also, the 3D model produces more damping of the overestimated second pressure peak calculated by the DVCM at the valve location. The most significant benefit of the 3D model is its ability to provide a physical explanation of vaporous cavitation phenomena, which cannot be accurately visualized by the DVCM. Consequently, the proposed 3D model is a feasible method to simulate transient column separation events in pipes. 4.2 Discussion Upon the sudden closure of the inlet valve, the pressure head decreases to the vapor head of the water (−9.8 m at 15.5 °C). The velocity is not reduced to zero, since complete stoppage of the flow would require, according to the Joukowsky equation ($\Delta H = a V_0 / g$), a head decrease of 202 m, which would lead to a head less than the vapor head. Consequently, the velocity only decreases, as the minimum head is not permitted to drop below the vapor pressure.
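The head values quoted above can be reproduced directly from the parameters given in Section 4. The short Python check below recovers the initial velocity from the Reynolds number and evaluates the Joukowsky head rise and the wave reflection period; only the standard g = 9.81 m/s² is assumed beyond the values stated in the text.

```python
# Consistency check of the quoted figures using the stated pipe data.
G = 9.81              # gravitational acceleration (m/s^2), assumed standard
NU = 1.184e-6         # kinematic viscosity (m^2/s), from the text
D = 0.0221            # internal pipe diameter (m)
L = 37.23             # pipe length (m)
A = 1319.0            # measured wave speed (m/s)
RE = 28_000           # Reynolds number

v0 = RE * NU / D                      # initial steady velocity
dh = A * v0 / G                       # Joukowsky head change for full stoppage
print(f"V0 = {v0:.2f} m/s")           # ~1.50 m/s
print(f"Joukowsky dH = {dh:.0f} m")   # ~202 m, as quoted in the text
print(f"2L/a = {2 * L / A:.3f} s")    # ~0.056 s wave reflection period
```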
The liquid separates from the valve/pipe walls at the inlet and a vapor cavity starts to develop in the initial flow direction, as shown in Fig. 8(a). Distributed vaporous cavitation zones form and extend over long sections of the pipe. The produced negative pressure wave propagates towards the downstream reservoir, where it reflects off at t = L/a and at successive time periods (3L/a, 5L/a, etc.). The vapor cavity at the inlet acts as a pressure boundary condition and reflects off the pressure waves travelling from the downstream reservoir at t = 2L/a and at successive time periods (4L/a, 6L/a, etc.). The cavity at the inlet, as shown in Fig. 9, does not occupy the full cross-section of the pipe; there is a thin liquid water film separating the cavity from the bottom wall of the pipe. The cavity reaches a maximum length of approximately 0.15 m at about t = 0.17 s. Then the water flows in the reverse direction (i.e., towards the closed valve), as shown in Fig. 8(b). Hence, the vapor cavity stops growing and begins to shrink, and it collapses at the inlet valve wall at about t = 0.297 s. The short-duration pressure head at the valve, caused by the cavity collapse superimposed on the positive pressure wave reflected off the reservoir, is approximately 253 m, and occurs when the positive pressure wave travelling from the reservoir arrives at the inlet valve at about t = 0.347 s. Due to the relatively long time interval (0.05 s, about 2L/a) between the cavity collapse and the arrival of the reflected pressure waves from the reservoir, the short-duration pressure peak of the first high-pressure cycle has a narrow pattern. At about t = 0.35 s a rarefaction pressure wave propagates from the inlet valve towards the downstream reservoir. As a consequence, a localized vapor cavity forms over the first few meters downstream of the valve and distributed vaporous cavitation zones reach approximately the whole pipe length. The localized vapor cavity downstream of the inlet valve is thinner than the one formed in the first low-pressure cycle, as shown in Fig. 10. Distributed vaporous cavitation zones extend through a very thin layer adjacent to the top wall of the pipe only, as shown in Fig. 11. The pressure rise from the cavity collapse at the inlet valve, superimposed on the pressure waves propagating from the downstream reservoir, is less than the preceding pressure rise and happens at t = 0.64 s. 5 Concluding remarks A three-dimensional CFD method is used to simulate laminar and turbulent fluid transients, and water hammer events accompanied by column separation, in pipes. The results of the present method are in good agreement with the experimental data and with 1D models available in the literature. Moreover, the 3D results of the turbulent fluid transient case are shown to be superior to the 1D models in predicting pressure fluctuations, as shown in Fig. 5. For the case of liquid column separation, the main advantage of the present method lies in removing the 1D assumption of the DVCM in depicting cavity development and collapse, which shows a physical contradiction with experimental observations. Hence, the proposed 3D model is deemed physically superior to the existing 1D DVCM as it can better visualize liquid column separation/rejoining in pipes and give more insight into the dynamics of the system, as depicted in Figs. 8–11. Declaration of Competing Interest The authors declared that there is no conflict of interest.
REFERENCES:
1. BERGANT A (2006)
2.
3. BERGANT A (1999)
4. SIMPSON A (1991)
5.
6.
7. ADAMKOWSKI A (2013)
8. SIMPSON A (1994)
9. SAFWAT H (1973)
10. KOT C (1978)
11. MIWA T (1990)
12. SOARES A (2012)
13. TIJSSELING A (2006)
14. WANG H (2016)
15.
16.
17.
18.
19. BRENNEN C (1995)
20.
21. MENTER F (1994)
22. WYLIE E (1978)
23. HOLMBOE E (1967)
24.
25.
26. WAHBA E (2016)
27.
|
10.1016_j.ttbdis.2023.102218.txt
|
TITLE: Function-guided selection of salivary antigens from Ornithodoros erraticus argasid ticks and assessment of their protective efficacy in rabbits
AUTHORS:
- Carnero-Morán, Ángel
- Oleaga, Ana
- Cano-Argüelles, Ana Laura
- Pérez-Sánchez, Ricardo
ABSTRACT:
The identification of new protective antigens for the development of tick vaccines may be approached by selecting antigen candidates that have key biological functions. Bioactive proteins playing key functions for tick feeding and pathogen transmission are secreted into the host via tick saliva. Adult argasid ticks must resynthesise and replace these proteins after each feeding to be able to repeat new trophogonic cycles. Therefore, these proteins are considered interesting antigen targets for tick vaccine development. In this study, the salivary gland transcriptome and saliva proteome of Ornithodoros erraticus females were inspected to select and test new vaccine candidate antigens. For this, we focused on transcripts overexpressed after feeding that encoded secretory proteins predicted to be immunogenic and annotated with functions related to blood ingestion and modulation of the host defensive response. Completeness of the transcript sequence, as well as a high expression level and a high fold-change after feeding, were also scored, resulting in the selection of four candidates: an acid tail salivary protein (OeATSP), a multiple coagulation factor deficiency protein 2 homolog (OeMCFD2), a Cu/Zn-superoxide dismutase (OeSOD) and a sulfotransferase (OeSULT), which were later produced as recombinant proteins. Vaccination of rabbits with each individual recombinant antigen induced strong humoral responses that reduced blood feeding and female reproduction, providing, respectively, 46.8%, 45.7%, 54.3% and 31.9% protection against O. erraticus infestations and 0.7%, 3.9%, 3.1% and 8.7% cross-protection against infestations by the African tick, Ornithodoros moubata. The joint protective efficacy of these antigens was tested in a second vaccine trial, reaching 58.3% protection against O. erraticus and 18.6% cross-protection against O. moubata. These results (i) provide four new protective salivary antigens from argasid ticks that might be included in multi-antigenic vaccines designed for the control of multiple tick species; (ii) reveal four functional protein families never tested before as a source of protective antigens in ticks; and (iii) show that multi-antigenic vaccines increase vaccine efficacy compared with individual antigens. Finally, our data add value to the salivary glands as a protective antigen source in argasids for the control of tick infestations.
BODY:
1 Introduction Tick infestations represent a growing medical and veterinary concern because ticks are efficient vectors of a large range of microbial pathogens, which cause severe diseases in wild and domestic animals, pets and humans, resulting in significant economic losses worldwide (Schorderet-Weber et al., 2017; Rashid et al., 2019). Ornithodoros erraticus is an argasid tick distributed in the Iberian Peninsula, northern and western Africa and western Asia (Boinas et al., 2014). This tick is the main vector of the African swine fever (ASF) virus and of the tick-borne human relapsing fever (TBRF) spirochetes, Borrelia hispanica and Borrelia crocidurae, in the Mediterranean Basin (Arias et al., 2018; Talagrand-Reboul et al., 2018). In this region, O. erraticus colonises anthropic environments and lives closely associated with swine on free-range pig farms, buried inside and around pig premises, which facilitates the transmission and persistence of ASF and TBRF in the affected areas (Oleaga et al., 1990; Pérez-Sánchez et al., 1994; Boinas et al., 2014). O. erraticus is also the type-species of the 'O. erraticus complex', which includes species such as O. asperus, O. lahorensis, O. tartakovsky and O. tholozani. These species are distributed through the Middle East, the Caucasus, the Russian Federation and the Far East, where they transmit local species of Borrelia spirochetes that cause TBRF (Masoumi et al., 2009; Chen et al., 2010; Schorderet-Weber et al., 2017). In recent years, the ASF virus has spread across this wide region, where it is suspected that local tick species belonging to the O. erraticus complex might be competent vectors for the ASF virus (EFSA, 2014, 2015; Jurado et al., 2018; Dixon et al., 2019; Tao et al., 2020). Should this suspicion be experimentally confirmed, the presence of these ticks in anthropic environments would contribute to the transmission and persistence of ASF throughout this region. Thus, an effective strategy for the prevention and control of ASF and TBRF should include the elimination of Ornithodoros vectors from anthropic environments at least. Most strategies for tick control rely on chemical acaricide application, but this selects for resistant tick strains and accumulates chemical residues in animal products and the environment (Abbas et al., 2014). In addition, acaricide application against Ornithodoros ticks is inefficient because acaricides do not reach these parasites inside their shelters (Astigarraga et al., 1995). Hence, alternative methods for tick control are urgently needed, and tick vaccines have emerged as an effective and sustainable method for the control of tick infestations and tick-borne diseases (Šmit and Postma, 2016; de la Fuente, 2018; Valle and Guerrero, 2018; Ndawula and Tabor, 2020). The first step in tick vaccine development is the identification of highly protective tick antigens. This task may be approached by selecting candidate protective antigens that have essential biological functions for ticks and share conserved structural and sequence motifs, which would enable the simultaneous control of several tick species (de la Fuente and Contreras, 2015; de la Fuente et al., 2016). In this approach, the molecules and biological processes specifically evolved by ticks to adapt to their strict haematophagous lifestyle are considered attractive vaccine targets.
These processes include host attachment, blood ingestion and modulation of the host defensive responses, which are carried out by salivary proteins secreted into the host in tick saliva (Simo et al., 2017; Nuttall et al., 2019; Neelakanta and Sultana, 2022). Also included are the processes related to blood digestion, such as nutrient transport and metabolism, management of iron and haem groups, detoxification and oxidative stress responses, which are accomplished by proteins mainly expressed in the midgut (Kocan et al., 2004; Sojka et al., 2013; Chmelar et al., 2016; Araujo et al., 2019). Argasid ticks, including O. erraticus, are typically fast feeders that complete blood ingestion in minutes. This means that most of the salivary proteins necessary to enable blood feeding must have been synthesised and stored in the tick salivary glands, ready to be secreted, before tick specimens access the host. After feeding, O. erraticus adult ticks must synthesise and replace all the bioactive proteins consumed during blood ingestion to be able to repeat a new trophogonic cycle. Accordingly, it can be expected that the salivary genes differentially upregulated between two consecutive blood meals are those that encode the bioactive proteins necessary to complete blood ingestion, and therefore these proteins are regarded as interesting antigen targets for drugs and vaccines aimed at preventing O. erraticus infestations. Recently, the transcriptome of the salivary glands of O. erraticus female ticks taken before feeding and at 7 and 14 days post-feeding has been assembled and functionally annotated (BioProject PRJNA666995) (Pérez-Sánchez et al., 2021). Subsequently, this transcriptome was used as a reference database to obtain the proteome of the saliva of O. erraticus adult ticks (Pérez-Sánchez et al., 2022). These studies have provided abundant information on the physiology of blood ingestion and the functionally relevant genes/proteins that are differentially upregulated in the O. erraticus salivary glands after blood feeding. These omics datasets can thus be mined for salivary antigen targets for vaccine development, following a function-guided selection strategy. In this way, amongst the functionally relevant proteins that are secreted into saliva and upregulated in the O. erraticus salivary glands in response to blood feeding, we have focused on the so-called acid tail salivary proteins (ATSP), the multiple coagulation factor deficiency protein 2 homologs (MCFD2) and the superoxide dismutases (SOD). ATSPs, together with basic tail and tailless proteins, constitute a superfamily of tick-specific proteins abundantly found in the sialomes of both ixodid and argasid species (Francischetti et al., 2009; Chmelar et al., 2019; Ribeiro and Mans, 2020; Oleaga et al., 2021a, 2021b; Pérez-Sánchez et al., 2021, 2022). They are thought to play important and specific roles at the tick-host feeding interface, but only very few members of this family have been functionally characterised, as anti-coagulants (Narashiman et al., 2002, 2004; Assumpção et al., 2018) and specific complement inhibitors (Schuijt et al., 2011), while most of them have as yet unknown functions. Thus, vaccines targeting the tick-specific ATSPs might be useful, but no studies on the protective efficacy of tick ATSPs have been undertaken hitherto.
MCFD2 is a 16 kDa soluble protein with two calcium-binding EF-hand domains in its C-terminal half, which is conserved in vertebrates and invertebrates (Liu et al., 2013). Human MCFD2 is involved in the transport and secretion of the newly synthesised coagulation factors FV and FVIII, and missense mutations in MCFD2 induce combined deficiency of FV and FVIII (Guy et al., 2008; Liu et al., 2013). Homologs of MCFD2 have been identified in up to 18 tick species, with 35 and 48 MCFD2 tick sequences accessible in the UniProt and NCBInr databases, respectively (last accessed on 22 December 2022). However, neither the functional role of MCFD2 in ticks or tick saliva nor its protective efficacy as a vaccine target has been explored hitherto. SODs are antioxidant metalloenzymes, present in aerobic and anaerobic organisms, which protect cells against oxidative stress from elevated levels of the superoxide anion (O2−). SODs catalyse the conversion of superoxide to molecular oxygen (O2) and hydrogen peroxide (H2O2), which is then decomposed by catalase to water and oxygen. SODs are classified into four groups according to the metal present in the active site. Amongst them, Cu/Zn-SODs are present in the cytoplasm of eukaryotic organisms and are secreted to the extracellular medium (Ibrahim et al., 2013; Sabadin et al., 2019). Tick blood meal digestion generates high levels of reactive oxygen species (ROS), including superoxide anions. ROS help maintain the balance between the natural tick microbiota and potential pathogens and participate in fighting off invading pathogens, but ROS may also interact with tick proteins, lipids and DNA, causing severe detrimental effects (Hernández et al., 2022). Thus, by regulating the levels of superoxide, tick SODs have an important function in (i) the regulation of the bacterial microbiota associated with ticks and (ii) the protection of tick tissues against oxidative stress, and (iii) they may regulate tick vector competence by promoting the colonisation of tick tissues by tick-borne pathogens (Crispell et al., 2016; Hernández et al., 2022). These functions make tick SODs attractive targets for new vaccines or drugs aimed at controlling ticks and their associated pathogens. Besides these three proteins, we were also interested in an additional one, annotated as a hypothetical secreted protein, which is highly upregulated after blood feeding and abundantly detected in saliva. Despite having no annotated function, this protein showed sequence similarity to tick sulfotransferases (SULT), encouraging further study. Sulfotransferases catalyse the transfer of a sulfonyl group (SO3−) from the universal donor 3′-phosphoadenosine-5′-phosphosulfate (PAPS) to hydroxyl or amine groups in substrate molecules, including macromolecules and many small molecules such as endogenous hormones and neurotransmitters, as well as xenobiotics such as drugs, environmental chemicals and natural products (James and Ambapadi, 2013; Esposito Verza et al., 2022). Tick SULTs are thought to be involved in the control of blood clotting in the host, protein secretion in the tick, and the regulation of steroid and neurotransmitter activity in the host or tick (Yalcin et al., 2011). Two cytosolic SULTs from Ixodes scapularis have recently been demonstrated to sulfonate the neurotransmitters dopamine and octopamine. As dopamine is known to stimulate salivation in ticks, this suggests that the I.
scapularis SULTs may be involved in the regulation of saliva secretion by inactivation of the salivation signal (Pichu et al., 2011). All of the above underscores the potential of the tick ATSP, MCFD2, SOD and SULT protein families as candidate protective antigens and encouraged the study of their vaccine efficacy. Herein, one member of each of these protein families, including the SULT-like hypothetical secreted protein, was selected from the O. erraticus sialome, and their individual and joint vaccine efficacies were tested against O. erraticus and the African soft tick vector of ASF and TBRF, Ornithodoros moubata. 2 Material and methods 2.1 Ticks and tick material The O. erraticus and O. moubata ticks used in this study came from two laboratory colonies kept at IRNASA (CSIC), Spain. The O. erraticus colony originated from specimens captured in nature in the Salamanca province (Spain). The O. moubata colony was established from specimens kindly provided by the Institute for Animal Health in Pirbright (Surrey, UK). The ticks were regularly fed on New Zealand white rabbits and maintained at 28 °C, 85% relative humidity and a 12 h light-dark cycle. O. erraticus salivary glands were obtained from unfed females as described by Pérez-Sánchez et al. (2021). Tick dissection and salivary gland removal were carried out in cold (4 °C) phosphate-buffered saline (PBS) at pH 7.4 treated with 1% diethyl pyrocarbonate (DEPC), and the salivary gland tissue was immediately stabilised in RNAlater (Sigma) until RNA extraction. Tick saliva was obtained from O. erraticus and O. moubata females after stimulation of secretion with pilocarpine, according to the protocol described by Díaz-Martín et al. (2013) with minor modifications. Ticks were first washed by successive immersions in tap water, 3% hydrogen peroxide, two washes in distilled water, 70% ethanol and two final washes in distilled water. Then, the ticks were dried on paper towel and immobilised ventral-side up on a glass plate using double-sided adhesive tape, in groups of five individuals. One microlitre of a solution of 1% pilocarpine hydrochloride (Sigma) in PBS was administered through the female genital pore using a 5 µl Hamilton syringe (33 gauge, 25 mm length). Soon after stimulation, the ticks started to move their chelicerae and emit small droplets of clear viscous saliva. To collect the saliva droplets, 1 μl of PBS was placed on the tick mouthparts using a micropipette, and immediately harvested and deposited in 50 μl of ice-cooled PBS. Saliva was continuously collected until perceptible emission stopped, usually 15–20 min after stimulation. The protein concentration in the saliva samples was measured using the Bradford assay (Bio-Rad) and the samples were stored at −20 °C. Additionally, midguts from O. erraticus and O. moubata unfed females and from fed females at 48 h post-feeding (hpf) were also obtained and used to prepare midgut protein extracts as previously described (Oleaga et al., 2015). Briefly, batches of 25 midguts from each species and physiological condition were suspended in fresh PBS supplemented with proteinase inhibitors (Roche Diagnostics), homogenised on ice using an Ultra-Turrax T10 disperser (IKA-Werke), and then sonicated six times for 60 s each in ice-cold PBS. The tissue homogenates were then centrifuged for 20 min at 10,000 g and 4 °C to remove particulate remnants, and the supernatants were recovered.
Protein concentration in these extracts was assessed using the BCA Protein Assay Reagent kit (Thermo-Fisher), and the extracts were stored at −20 °C until use. 2.2 Selection and analysis of O. erraticus ATSP, MCFD2, Cu/Zn-SOD and SULT-like hypothetical secreted protein The recently obtained salivary transcriptome (BioProject PRJNA666995) and saliva proteome of O. erraticus female ticks ( Pérez-Sánchez et al., 2021 , 2022 ) were screened for the selection of protective candidate antigens by sequentially applying the following filters. First, transcripts differentially upregulated (log2 of Fold-change > 1, FDR < 0.05) in female salivary glands at seven days after feeding versus unfed females. Second, transcripts encoding secreted proteins: namely, proteins predicted to bear secretory signals (SignalP) without transmembrane (TM) domains or GPI anchors, and/or proteins that were detected in the proteome of saliva. Third, transcripts encoding proteins predicted to be immunogenic. For this, the VaxiJen 2.0 software ( http://www.ddg-pharmfac.net/vaxijen/VaxiJen/VaxiJen.html ) was applied using the 0.5 immunogenicity threshold established by default for parasites ( Doytchinova et al., 2007a , 2007b ; Flower et al., 2010 ). The resulting list of candidates was then manually inspected and refined, and a set of candidates were selected for recombinant production and vaccine efficacy testing. In this selection, priority was given to candidates predicted to be extracellularly expressed and to have functions related to blood feeding and modulation of host defence mechanisms. Additional criteria such as the completeness of the transcript sequence, as well as a high expression level and a high fold-change after feeding were also scored. Topographical predictions of the amino acid sequence of candidates were performed with the DeepTMHMM software ( https://dtu.biolib.com/DeepTMHMM ) ( Hallgren et al., 2022 ). The presence of signal peptides, non-classical secretion signals and GPI anchors on the candidates were predicted using SignalP-5.0 ( https://services.healthtech.dtu.dk/service.php?SignalP-5.0 ) ( Almagro Armenteros et al., 2019 ), SecretomeP 2.0 ( https://services.healthtech.dtu.dk/service.php?SecretomeP-2.0 ) ( Bendtsen et al., 2004 ) and GPI-SOM ( http://gpi.unibe.ch/ ) ( Fankhauser et al., 2005 ), respectively. BLASTp was used to search the UniProt and NCBInr databases for orthologues of each candidate in argasid and ixodid tick species. Multiple sequence alignment of orthologous sequences and identification of conserved regions was done using the Clustal Omega alignment tool ( https://www.ebi.ac.uk/Tools/msa/clustalo/ ). Phylogenetic and molecular evolutionary analyses of orthologues were conducted using MEGA version 11 ( Tamura et al., 2021 ). The neighbour-joining method was used to build the phylogenetic trees, where gaps were treated as pairwise deletions, amino acid distances calculated using the Poisson model and branch supports assessed by bootstrap analysis (10,000 bootstraps). Secondary structure prediction and three-dimensional (3D) modelling of the candidate proteins were done at the Phyre2 ( http://www.sbg.bio.ic.ac.uk/phyre2 ) ( Kelley et al., 2015 ) and Swiss model ( https://swissmodel.expasy.org/ ) servers. The resulting 3D models were visualised using the PyMOL package ( DeLano, 2002 ). 
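The phylogenetic workflow above uses MEGA 11. For readers who prefer a scriptable route, the following is an analogous (not identical) neighbour-joining sketch with Biopython, assuming a hypothetical alignment file orthologues.aln in Clustal format; it uses a simple identity-based distance rather than MEGA's Poisson-corrected distances and omits bootstrapping for brevity.

```python
# Minimal neighbour-joining sketch with Biopython (analogous to, not a
# reproduction of, the MEGA 11 workflow described in the text).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: a Clustal-format multiple sequence alignment.
alignment = AlignIO.read("orthologues.aln", "clustal")

# Pairwise distances from fractional identity (MEGA used a Poisson model).
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, method="nj")

tree = constructor.build_tree(alignment)
Phylo.draw_ascii(tree)  # quick text rendering of the NJ tree
```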
Continuous linear B-cell epitopes on the selected candidate proteins were predicted with ABCpred (http://www.imtech.res.in/raghava/abcpred/index.html) (Saha and Raghava, 2006); BCEpred (http://www.imtech.res.in/raghava/bcepred/) (Saha et al., 2004); and BepiPred-2.0 (https://services.healthtech.dtu.dk/service.php?BepiPred-2.0) (Larsen et al., 2006). The overlapping amino acid sequences in B-cell epitopes predicted by at least two of these tools were defined as the consensus predicted epitopes. The presence of MHC class I and II binding peptides (T-cell epitopes) was also predicted, using NetMHCpan 4.1 (https://services.healthtech.dtu.dk/service.php?NetMHCpan-4.1) and NetMHCIIpan 4.0 (https://services.healthtech.dtu.dk/service.php?NetMHCIIpan-4.0). Information on MHC alleles from pigs and other vertebrate hosts of O. erraticus is currently limited to pig MHC class I; accordingly, the available, well-characterised mouse MHC class II allelic datasets were used to extrapolate to vertebrate hosts with unknown MHC-II alleles, such as the pig. 2.3 Molecular cloning of candidates and production as recombinant proteins The cDNAs encoding the mature forms of the candidates, i.e. without the signal peptide (tOeATSP, tOeMCFD2, tOeSOD and tOeSULT-like), were cloned and expressed as recombinant proteins following standard procedures described elsewhere (Oleaga et al., 2022). Briefly, total RNA was purified from tick salivary glands using the RNeasy Mini Kit (Qiagen) and reverse-transcribed using the 1st Strand cDNA Synthesis kit (Roche). For PCR, specific primers were designed with the Primer3Plus software (Untergasser et al., 2012) from the nucleotide sequences obtained in previous studies (BioProject PRJNA666995) (Pérez-Sánchez et al., 2021): O. erraticus ATSP (OeATSP) (GenBank, GIXX02003429.1), O. erraticus MCFD2 (OeMCFD2) (GenBank, GIXX02002475.1), O. erraticus SOD (OeSOD) (GenBank, GIXX02013702.1) and O. erraticus SULT-like hypothetical secreted protein (OeSULT-like) (GenBank, GIXX02011269.1). To facilitate later subcloning of the cDNAs into the expression vector, BamHI or KpnI restriction sites were included in the primers. Supplementary Table 1 shows the primer sequences and the PCR conditions. The PCR products were purified from agarose gels and cloned into the pSC-A sequencing vector using the StrataClone PCR Cloning kit (Stratagene). After verifying their sequences, the cDNA inserts were digested, subcloned into the pQE-30 expression vector (Qiagen) and transformed into Escherichia coli M15 cells (Qiagen). After induction of protein expression with 1 mM IPTG (Fisher Scientific), the recombinant proteins rtOeMCFD2, rtOeSOD and rtOeSULT-like were all expressed in a 100% insoluble form. Hence, they were solubilised with 8 M urea, purified by nickel affinity chromatography under denaturing conditions, and dialysed against PBS pH 7.4 for 24 h at 4 °C according to the procedure described by Díaz-Martín et al. (2011). By contrast, rtOeATSP was expressed in a 100% soluble form and purified by nickel affinity chromatography under native conditions, without adding urea. The purity and concentration of the recombinant proteins were measured by band densitometry in Coomassie blue-stained SDS-PAGE gels, followed by interpolation into a bovine serum albumin (BSA) standard curve, using ChemiDoc XRS+ equipment and Image Lab software (BIO-RAD). The purified recombinant proteins were preserved at −20 °C.
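The densitometric quantification described above interpolates band intensities into a BSA standard curve. A minimal sketch of that interpolation step in Python follows; the intensity values are hypothetical, and a simple least-squares line stands in for whatever curve model Image Lab fits.

```python
import numpy as np

# Hypothetical BSA standard curve: known amounts (ug) vs band intensities.
bsa_ug = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
bsa_intensity = np.array([1.1e4, 2.3e4, 4.4e4, 8.9e4, 17.6e4])

# Fit a straight line: intensity = m * amount + b (least squares).
m, b = np.polyfit(bsa_ug, bsa_intensity, deg=1)

def protein_amount(intensity: float) -> float:
    """Interpolate a sample band intensity into the BSA standard curve."""
    return (intensity - b) / m

# Hypothetical recombinant protein band:
print(f"{protein_amount(6.5e4):.2f} ug per lane")
```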
2.4 Vaccine trial 1 The purpose of this trial was to evaluate the individual capacity of each recombinant antigen to protect rabbits against infestations by O. erraticus and O. moubata ticks. 2.4.1 Rabbit immunisation Each candidate antigen was formulated in Montanide ISA 65 VG (Seppic) and administered to a group of three New Zealand white rabbits. One additional group of three rabbits was treated with the adjuvant alone and used as the control group. Each animal was immunised with three subcutaneous doses of 100 µg of recombinant antigen administered 15 days apart. The antigen dose and immunisation schedule were based on those used in similar studies (Obolo et al., 2018; Pérez-Sánchez et al., 2019b). Blood samples were taken from all rabbits before the first antigen dose (pre-immune sera), at 14 days post-immunisation before tick challenge (14 dpi) and at 28 days post-immunisation (28 dpi), that is, 14 days after tick challenge. Blood samples were allowed to clot at room temperature, and the sera were removed and stored at −80 °C. 2.4.2 Analysis of humoral response The antibody titre of the immune sera to the homologous recombinant antigen was assessed by serial dilution ELISA following standard procedures (García-Varas et al., 2010; Pérez-Sánchez et al., 2019b). For this, the ELISA plates were coated with 100 ng/well of recombinant antigen in 100 µl/well, the sera were diluted in TPBS (PBS supplemented with 0.05% Tween 20) in twofold dilutions from 1/100 to 1/12,800, and anti-rabbit IgG was used at a 1/10,000 dilution. The serum titre was defined as the highest dilution giving more than twice the reactivity of the corresponding pre-immune serum at the same dilution. After titration, the reactivity of the immune sera to saliva and midgut protein extracts from both species (O. erraticus and O. moubata) was tested by ELISA and western blotting following standard procedures (García-Varas et al., 2010) with minor modifications (Pérez-Sánchez et al., 2019b). For ELISA, plates were coated with 1 µg/well of saliva or midgut extract, sera were used at a 1/100 dilution and anti-rabbit IgG at a 1/10,000 dilution. For western blotting, sera were used at a 1/50 dilution, anti-rabbit IgG at a 1/2000 dilution, and the reactions were revealed using a chemiluminescent substrate (Clarity Western ECL Substrate, BIO-RAD). 2.4.3 Tick challenge and evaluation of vaccine efficacy The tick specimens used in this experiment were taken from populations homogeneous with regard to species, developmental stage, sex, age and physiological condition. For both species, adult ticks that had undergone only one previous trophogonic cycle were used; that is, they had fed, mated and reproduced only once, and had then fasted for 3 months before being allowed to feed on the vaccinated rabbits. Newly moulted nymphs-3 that had fasted for 2 months were also used for rabbit infestation. Fifteen days after the last antigen dose, homogeneous batches of 15 females, 30 males and 50 nymphs-3 of O. erraticus, and similar batches of O. moubata, were weighed and allowed to feed on each rabbit for a maximum of 2 h. Most ticks completed feeding in less than 1 h; after that time, any tick remaining on the animal was removed. The fed ticks were kept at 28 °C and 85% relative humidity for 24 h to allow emission of coxal fluid. After that, the ticks were weighed in batches, and each female was placed in a plastic vial together with two fed male ticks to allow mating and reproduction. Nymphs-3 fed on the same rabbit were placed together in the same vial.
All the ticks were then kept at 28 °C, 85% relative humidity and a 12 h light–dark cycle. The effect of the vaccine on these specimens was determined by measuring the following parameters: (i) quantity of blood ingested, calculated as the difference in weight before and 24 h after feeding; (ii) female oviposition and fertility rates, namely, the number of eggs laid per female and the subsequent number of newly hatched larvae per female (or nymphs-1 for O. moubata ); (iii) moulting rate of nymphs-3; and (iv) mortality rates of all tested developmental stages. The values obtained for the parameters analysed were summarised as the mean ± standard deviation per group. Statistical differences between the vaccinated and control groups were assessed using one-way ANOVA followed by Dunnett's T-test. Differences were considered significant at p < 0.05 (i.e., a 95% confidence level). All statistical analyses were performed using SPSS v. 29 software (IBM, Armonk, USA). The vaccine efficacy (E) of each individual antigen was calculated according to the formula established by Contreras and de la Fuente (2016) , which is based on the reduction in the studied developmental processes in ticks fed on vaccinated animals compared to ticks fed on controls. Herein, vaccine efficacy was calculated as E = 100 × [1 − (S × F × N × M)], where S and F represent reductions in survival and fertility of females, and N and M are reductions in survival and moulting of nymphs-3 ( Pérez-Sánchez et al., 2019 ). 2.5 Vaccine trial 2 The purpose of this trial was to measure the vaccine efficacy of a multi-antigenic cocktail vaccine made of the four recombinant antigens tested in Trial 1. For this, three doses of a cocktail containing 50 μg/dose of each recombinant antigen (for a total of 200 μg/dose) were formulated in 1 ml of PBS, emulsified in an equal volume of Montanide ISA 65 VG and administered subcutaneously to a group of three rabbits following the same procedures described for Trial 1. Three additional rabbits were treated with Montanide ISA 65 VG alone and used as control group. At 14 and 28 days after the last antigen dose, two consecutive tick infestations similar to those described in Trial 1 were performed. The effect of the cocktail vaccine on these specimens and the vaccine efficacy were evaluated in each infestation following the same procedures described in Trial 1. 3 Results The salivary gland transcriptome of O. erraticus (BioProject PRJNA666995) consisted of 18,959 annotated transcripts, 2088 of which were differentially upregulated in the first seven days after feeding ( Pérez-Sánchez et al., 2021 ). Amongst the proteins encoded by these transcripts, 69 were detected in the tick saliva proteome ( Pérez-Sánchez et al., 2022 ), and 23 of them showed both predicted antigenicity (VaxiJen score > 0.5) and classical secretion signals (SignalP) without bearing transmembrane domains (TM) or GPI anchor sites (Supplementary Table 2). After manual inspection of these 23 candidates, those showing complete sequences, high expression levels, high fold-change and annotated functions related to blood feeding were prioritised, and four of them were selected for further analysis, production as recombinant proteins and vaccine efficacy testing, namely: OeATSP, OeMCFD2, OeSOD and OeSULT-like.
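The efficacy formula quoted above, E = 100 × [1 − (S × F × N × M)], is straightforward to compute once the group means are in hand. In the hedged Python sketch below, each factor is read as the vaccinated-to-control ratio of the corresponding parameter, following the Contreras and de la Fuente (2016) convention; all input values are invented for illustration.

# Vaccine efficacy from vaccinated-to-control ratios of four parameters.
def vaccine_efficacy(vaccinated, control):
    S = vaccinated["female_survival"] / control["female_survival"]
    F = vaccinated["fertility"] / control["fertility"]
    N = vaccinated["nymph_survival"] / control["nymph_survival"]
    M = vaccinated["nymph_moulting"] / control["nymph_moulting"]
    return 100.0 * (1.0 - S * F * N * M)

control    = {"female_survival": 0.95, "fertility": 120.0,   # invented means
              "nymph_survival": 0.90, "nymph_moulting": 0.85}
vaccinated = {"female_survival": 0.85, "fertility": 70.0,
              "nymph_survival": 0.80, "nymph_moulting": 0.75}
print(f"E = {vaccine_efficacy(vaccinated, control):.1f}%")   # -> E = 59.1%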
3.1 Features, phylogeny and predicted B- and T-cell epitopes of the candidate proteins 3.1.1 Acid tail salivary protein (OeATSP) Transcript OE_8323 (GenBank GIXX02003429.1) encoded a protein of 176 amino acids and 19.1 kDa (GenBank MBZ3998647.1), with a signal peptide comprising the first 20 amino acids and without TM domains or GPI anchors, which was annotated as an acid tail salivary protein (Supplementary Table 2). NCBInr and UniProt database searches for tick orthologues of OeATSP retrieved 11 top matches, 10 from argasid and one from ixodid species. All of them had an E-value < 10 −7 , amino acid sequence identity to OeATSP between 34% and 47%, and sequence coverage between 26% and 49%. Multiple sequence alignment of these 12 ATSP family members did not reveal conserved motifs, except for six cysteine residues in the central part of the alignment (Supplementary Fig. 1a). The phylogenetic analysis of these ATSPs clustered most of the Ornithodoros sp. sequences together with OeATSP into one main cluster supported by an 89% bootstrap value. The Argas monolakensis and Ixodes scapularis sequences remained outside this cluster (Supplementary Fig. 1b). Linear B-cell epitope predictions for OeATSP are shown in Fig. 1 a. Each immuno-informatics tool predicted a similar set of linear B-cell epitopes. The overlapping sequences between predictions were considered the consensus linear B-cell epitopes, which virtually covered the whole protein sequence except the signal peptide. T-cell epitope predictions anticipated the presence of eight peptides that can bind to MHC-I molecules and seven peptides that can bind to MHC-II molecules and be potentially presented to T cells. These peptides were distributed throughout the OeATSP sequence, and most of them overlapped with the predicted B-cell epitopes ( Fig. 2 a). Three-dimensional (3D) modelling of OeATSP covered up to 78 amino acid residues (aa 22–99), 44% of its amino acid sequence, with 99.8% confidence by the single highest scoring 3D template (Supplementary Fig. 1c). The resulting 3D model consisted mostly of unfolded loops and two short β-sheets. According to this model, the linear B-cell epitopes predicted inside the modelled region would be exposed on the protein surface, where they could easily be accessed by host antibodies. This prediction should be taken with caution, since less than half the protein was modelled. 3.1.2 Multiple coagulation factor deficiency protein 2 (OeMCFD2) Transcript OE_5173 (GenBank GIXX02002475.1) encoded a protein of 139 amino acids and 15.9 kDa (GenBank MBZ3997693.1), with a signal peptide comprising amino acids 1 to 25 and without TM domains or GPI anchors, which was annotated as a putative EF-hand domain-containing protein (Supplementary Table 2). NCBInr and UniProt database searches for tick orthologues of this protein retrieved more than 60 sequences annotated as ‘multiple coagulation factor deficiency 2 protein’, all of them from ixodid species. The top 10 matches, with an E-value lower than 10 −33 , sequence identity between 44% and 54%, and sequence coverage from 79% to 99%, were selected for further comparisons. The amino acid sequence alignment of these 11 MCFD2 family members showed low conservation in their N-terminus but high conservation in their central and carboxy-terminal regions, where the two EF-hand domains characteristic of this protein family are localised (Supplementary Fig. 2a). The calcium-binding motif in each EF hand was formed by six amino acid residues ( Guy et al., 2008 ).
In the OeMCFD2 protein, the calcium-binding residues were Asp79, Asp81, Asn83, Leu85, Asp87 and Glu90 for the first EF hand, and Asp123, Asp125, Asp127, Tyr129, Asn131 and Glu134 for the second EF hand. These residues showed very high conservation amongst the tick MCFD2 orthologues analysed (Supplementary Fig. 2a). The phylogenetic analysis of these tick MCFD2s grouped them into two tight clusters of isoforms supported by 99% and 95% bootstrap values, respectively, with OeMCFD2 falling between the two clusters (Supplementary Fig. 2b). Immuno-informatic tools predicted four linear B-cell epitopes for OeMCFD2. One long epitope spanned the N-terminal half of the protein. The other three epitopes were shorter and located on the C-terminus, where two of them overlapped with the calcium-binding motifs ( Fig. 1 b). T-cell epitope predictions found six peptides that can bind to MHC-I molecules and three overlapping peptides that can bind to MHC-II molecules. Together, these T-cell epitopes covered most of the protein sequence ( Fig. 2 b). Three-dimensional modelling of OeMCFD2 covered 72 C-terminal residues (aa 64–136), 51.8% of its amino acid sequence, with 99.8% confidence by the single highest scoring 3D template (Supplementary Fig. 2c). The resulting 3D model showed the two consecutive EF hands of the protein, each composed of two α-helices connected by a loop, which holds the six amino acid residues that bind the Ca 2+ ion. According to this 3D model, the B-cell epitopes predicted for the C-terminal half of OeMCFD2 were all exposed on the protein surface, where they could easily be accessed by host antibodies. 3.1.3 Cu/Zn-Superoxide dismutase (OeSOD) Transcript OE_57782 (GenBank GIXX02013702.1) encoded a protein of 185 amino acids and 19.3 kDa (GenBank MBZ4008920.1), with a signal peptide comprising amino acids 1 to 25 and without TM domains or GPI anchors, which was annotated as a superoxide dismutase [Cu-Zn]-like isoform X4 (Supplementary Table 2). This transcript was also detected in the O. erraticus midgut transcriptome (Bioproject PRJNA401392, GenBank accession GFWV01010682.1), where it was highly upregulated in response to blood feeding ( Oleaga et al., 2018 ). NCBInr and UniProt database searches for tick orthologues of OeSOD retrieved up to 80 sequences of Cu/Zn-SODs, all of them from ixodid species except one from Argas monolakensis . We selected the top 18 matches, including the A. monolakensis orthologue, for further comparisons. All of them showed an E-value lower than 10 −45 , sequence identity to OeSOD between 46% and 62%, and sequence coverage from 72% to 100%. These 19 Cu/Zn-SODs showed high conservation in their central and carboxy-terminal regions. Particularly well conserved were the OeSOD amino acid residues His77, His79, His94 and His151, which are involved in Cu 2+ ion binding, and residues His102, His111 and Asp114, which together with His94 are involved in Zn 2+ ion binding (Supplementary Fig. 3a). Also highly conserved were the OeSOD Cys88 and Cys177 residues, equivalent to Cys57 and Cys156 of human cytoplasmic Cu/Zn-SOD, which form a disulphide bond that contributes to stabilising the secondary structure of the protein ( Miller, 2004 ; Culotta et al., 2006 ) (Supplementary Fig. 3a). The phylogenetic analysis of these tick SODs grouped them into two tight clusters of isoforms supported by 97% and 96% bootstrap values, respectively.
Cluster 1 consisted of isoforms from the genera Dermacentor and Rhipicephalus , while cluster 2 included OeSOD and isoforms from the genera Dermacentor, Ixodes and Rhipicephalus . The only SOD from the genus Argas fell outside both clusters (Supplementary Fig. 3b). Linear B-cell epitope predictions identified two short B-cell epitopes on the N-terminal end, downstream of the signal peptide, and three more epitopes on the C-terminal half of OeSOD, covering the Cu/Zn-binding region ( Fig. 1 c). T-cell epitope prediction found five peptides that can bind MHC-I molecules, distributed through the N-terminal and central parts of the protein, and four peptides that can bind to MHC-II molecules and overlap with the MHC-I peptides ( Fig. 2 c). Three-dimensional modelling of OeSOD covered 152 residues (aa 32–182), 82.2% of its amino acid sequence, with 100% confidence by the single highest scoring 3D template (Supplementary Fig. 3c). The resulting 3D model was a conserved homodimer structure, typical of Cu/Zn-SODs. Each monomer consisted of an eight-stranded β-barrel with two large loops, the so-called ‘electrostatic’ and ‘metal-binding’ loops, the latter containing many of the residues necessary for binding of the metals. To form the homodimer, the monomers orientate their active sites (metal-binding loops) towards opposite sides while interacting with each other primarily through hydrophobic contacts at the interface of the two monomers, which reduces the solvent-accessible surface area and greatly increases the dimer's stability ( Rakhit and Chakrabartty, 2006 ). According to this 3D model, all the predicted B-cell epitopes would be exposed on the dimer surface, covering both the electrostatic and metal-binding loops, where they could easily be accessed by host antibodies (Supplementary Fig. 3d). 3.1.4 Sulfotransferase (OeSULT-like) Transcript OE_39965 (GIXX02011269.1) encoded a protein of 218 amino acids and 23.7 kDa (GenBank MBZ4006487.1), with a signal peptide comprising amino acids 1 to 18 and without TM domains or GPI anchors, which was annotated as putative secreted protein 1669 (Supplementary Table 2). NCBInr and UniProt database searches for tick orthologues of this protein retrieved numerous related sequences, from which the top 13 were selected. They included four argasid sequences annotated as sulphotransferases, one ixodid sequence annotated as a sulphotransferase and eight ixodid sequences annotated as uncharacterised proteins. They all showed E-values lower than 10 −32 , sequence identities to our O. erraticus query protein between 30% and 40%, and sequence coverage between 92% and 100%. Consequently, we designated it OeSULT-like. Multiple sequence alignment of OeSULT-like and these 13 proteins showed that all of them aligned to the C-terminal domain of the EEC08826 sulfotransferase of I. scapularis (Supplementary Fig. 4a). This I. scapularis SULT is 503 amino acids long and consists of two domains, an N-terminal sulphotransferase domain and a C-terminal domain with no annotated function (UniProt B7PQF4 ). By contrast, OeSULT-like and the other 12 orthologues were shorter proteins, close to 220 amino acids long, which aligned to the C-terminal domain of EEC08826. This suggests that OeSULT-like and its short orthologues may be incomplete proteins lacking their N-terminal domains. The phylogenetic analysis of these putative tick SULTs grouped them into three tight clusters supported by 98%, 82% and 99% bootstrap values, respectively.
Cluster 1 included four sulfotransferases previously found in the O. erraticus midgut (Bioproject PRJNA401392) ( Oleaga et al., 2018 ). Cluster 2 consisted of sequences from the genus Ixodes only, including the sulfotransferase EEC08826. Cluster 3 included OeSULT-like together with uncharacterised orthologous proteins from the genera Ixodes, Rhipicephalus and Dermacentor (Supplementary Fig. 4b). Supplementary Fig. 4c compares the 3D models for the sulfotransferase EEC08826 of I. scapularis and OeSULT-like, showing that the C-terminal domain of EEC08826 and OeSULT-like share the same 3D structure. The OeSULT-like 3D model comprised 190 residues (aa 21–211), 87.2% of its amino acid sequence, and was obtained with 100% confidence by the single highest scoring 3D template. Additionally, Supplementary Fig. 4d shows that the N-terminal sulfotransferase domain of EEC08826 shares a 3D structure similar to that of a well-characterised arthropod sulfotransferase, the AGAP001425-PA cytosolic sulfotransferase from Anopheles gambiae ( Esposito Verza et al., 2022 ). Linear B-cell epitope predictions for OeSULT-like identified up to seven B-cell epitopes that covered the whole sequence except the signal peptide ( Fig. 1 d). T-cell epitope prediction found seven peptides that can bind MHC-I molecules, distributed along the whole sequence of the protein, and five additional peptides that can bind to MHC-II molecules ( Fig. 2 d). According to the 3D model of OeSULT-like, the predicted B-cell epitopes were all exposed on the molecular surface, where they could easily be accessed by host antibodies (Supplementary Fig. 4e). 3.2 Recombinant protein production The four selected candidates were successfully cloned into the pQE-30 vector and expressed as recombinant truncated forms lacking the signal peptide (Supplementary Fig. 5). All of them migrated in SDS-PAGE gels as single bands, with estimated purities of 95.6%, 99.8%, 96.2% and 96.6% for rtOeATSP, rtOeMCFD2, rtOeSOD and rtOeSULT-like, respectively. Only rtOeMCFD2 showed an experimental molecular weight (MW) similar to its predicted MW (14.5 kDa). Recombinants rtOeATSP, rtOeSOD and rtOeSULT-like showed experimental MWs somewhat larger than their predicted ones (25, 22 and 26 kDa vs 18, 18.5 and 23 kDa, respectively). Hence, the identity of these recombinants was further confirmed by mass spectrometry analysis (peptide mass fingerprinting, PMF) of the corresponding gel bands (not shown). 3.3 Vaccine trial 1 3.3.1 Humoral immune response The four recombinant antigens induced strong antibody responses in the vaccinated rabbits. The sera obtained at 14 dpi, before infestation, showed antibody titres higher than 1/12,800 and optical densities (OD) over 0.5 ( Fig. 3 a). The sera obtained at 28 dpi (14 days post-infestation) showed slightly higher ODs than the sera obtained at 14 dpi, suggesting that the tick challenge of immunised animals provided a natural boosting effect, observed as an increase in serum reactivity ( Fig. 3 a). The reactivity of the rabbit IgG antibody response to saliva from O. erraticus and O. moubata was analysed using ELISA and western blot. ELISA results showed that the 14 dpi sera reacted to the saliva of O. erraticus , although with low (anti-ATSP, anti-MCFD2) or medium (anti-SOD, anti-SULT-like) intensity, revealing the presence of native forms of OeATSP, OeMCFD2, OeSOD and OeSULT-like in this fluid ( Fig. 3 b). The anti-MCFD2 and anti-SULT-like sera also reacted with the O.
moubata saliva, though with low intensity, suggesting the presence of cross-reacting epitopes on orthologous proteins from this species ( Fig. 3 c). As expected, the 28 dpi sera (obtained 14 days after tick challenge) reacted with the saliva of both species with medium–high intensity ( Fig. 3 b, 3 c). On western blot, the 14 dpi sera recognised some bands in the range of 15–24 kDa in the O. erraticus saliva, compatible in size with the native forms of the antigens ( Fig. 4 b). These sera also recognised bands of similar size in the O. moubata saliva ( Fig. 4 d), indicating the presence of cross-reacting epitopes on the O. moubata orthologous proteins. Moreover, the reactivity of the rabbit IgG antibody response to midgut extracts from O. erraticus and O. moubata was analysed using ELISA ( Fig. 5 ) and western blot ( Fig. 6 ). High background reactivity in both the ELISA and western blot was due to the unspecific recognition of dietary rabbit IgG in these extracts, which was also revealed by the pre-immune sera ( Fig. 6 b and d). After subtracting the unspecific reactivity (i.e. the OD of pre-immune sera), ELISA results showed low or very low serum reactivity to the O. erraticus midgut (OD around 0.1), except for the anti-SOD sera, which showed medium reactivity (0.3–0.5 OD) to the midgut of unfed females ( Fig. 5 a). The reactivity to the O. moubata midgut was also low, except for the anti-SOD sera, which showed around 0.2 OD to the midgut of unfed females ( Fig. 5 b). On western blot ( Fig. 6 ), the anti-ATSP sera revealed a faint band of 38 kDa in the unfed extracts of both species that was not revealed by the pre-immune sera, suggesting specific recognition of, and epitope sharing between, OeATSP and intestinal proteins of O. erraticus and O. moubata . The anti-MCFD2 sera did not reveal any perceptible band in any extract, in agreement with their negligible or absent reactivity in ELISA ( Figs. 5 and 6 ). The anti-SOD sera recognised two intense bands of 15 and 40 kDa in the unfed midgut extracts from both species, which may be compatible with monomer and oligomer forms of native SOD ( Fig. 6 ). This reactivity is in agreement with that observed in ELISA ( Fig. 5 ) and indicates that OeSOD and its native orthologue in O. moubata share cross-reacting epitopes and are both expressed in saliva/salivary glands and the midgut. The presence of OeSOD in the midgut correlates with detection of its coding mRNA in the O. erraticus midgut transcriptome ( Oleaga et al., 2018 ). Finally, the anti-SULT-like sera recognised a band of 30 kDa in the unfed midguts of both species and the fed midgut of O. moubata ( Fig. 6 b and d). As occurred for OeATSP, this result suggests that OeSULT-like shares cross-reacting epitopes with intestinal proteins of O. erraticus and O. moubata . 3.3.2 Protective effects of the immune response on the ticks In O. erraticus ticks, the immune response induced by each recombinant antigen caused (i) generalised reductions in the amount of blood ingested by most developmental stages, which were significant for females fed on rabbits vaccinated with rtOeSOD and rtOeSULT-like, and (ii) significant reductions in female oviposition and fertility in all vaccinated groups. Additionally, all the antigens induced increases in nymph mortality, which were significant for rtOeATSP and rtOeMCFD2, while rtOeATSP also induced a significant increase in the mortality of females. Accordingly, the vaccine efficacy of the rtOeATSP, rtOeMCFD2, rtOeSOD and rtOeSULT-like antigens against O.
erraticus infestations was 46.8%, 45.7%, 54.3% and 31.9%, respectively ( Table 1 ). In O. moubata ticks, the immune response to the recombinant antigens produced less intense, non-significant protective effects ( Table 2 ). All the antigens induced reductions in tick feeding and female tick reproduction, with no, or no significant, effect on tick mortality and nymph moulting. This resulted in low vaccine efficacies against O. moubata infestations for each individual candidate, namely 0.7%, 3.9%, 3.1% and 8.7% for rtOeATSP, rtOeMCFD2, rtOeSOD and rtOeSULT-like, respectively. 3.4 Vaccine trial 2 3.4.1 Humoral immune response All the immune sera taken at 14 dpi showed IgG antibody titres higher than 1/12,800 and ODs higher than 0.5 to each single recombinant antigen, confirming that all animals developed strong humoral responses ( Fig. 7 ). The reactivity level of the sera remained the same at 28 dpi, 14 days after tick challenge, suggesting that antibody levels may have reached a plateau in the immunised animals. 3.4.2 Protective effects of the immune response on the ticks Table 3 summarises the protective effects induced by this multi-antigenic vaccine against O. erraticus and O. moubata in both infestations. The anti- O. erraticus protective response significantly reduced female feeding and fertility, resulting in 58.3% vaccine efficacy in the first infestation, which was 7% higher than the best protection reached with the candidates tested individually ( Table 1 ). Unexpectedly, the vaccine efficacy dropped to 46.6% at the second tick challenge, although this protection was still above the average protection obtained with the individual candidate antigens. Regarding O. moubata , the protective effects induced by the multi-antigenic vaccine were similar to, but weaker than, those observed for O. erraticus ; that is, low but significant reductions in feeding performance and a low, non-significant reduction in female reproduction, which resulted in 18.6% and 16.4% vaccine efficacy in the first and second infestations, respectively. Although these protections were still low, they were more than twice the best protection reached with the candidates tested individually ( Table 2 ). 4 Discussion A well-recognised approach to identifying highly protective antigens for tick vaccine development is the selection of candidates that have key functions in tick biology and share conserved sequence motifs, to allow the simultaneous control of different tick species ( de la Fuente and Contreras, 2015 ; de la Fuente et al., 2016 ; Pérez-Sánchez et al., 2019a , 2019b ). The salivary proteins inoculated into the host via tick saliva have important biological functions for tick feeding and pathogen transmission and are therefore considered promising targets for anti-tick and transmission-blocking vaccines ( Nuttall et al., 2019 ; Neelakanta and Sultana, 2022 ). A number of salivary proteins have been tested as vaccine targets to date, several of which have provided different degrees of protection against tick infestations ( Díaz-Martín et al., 2015 ; Abbas et al., 2023 ; Antunes and Domingos, 2023 ). This partial success has encouraged the search for and validation of new and more protective tick salivary antigens. With this aim, in the current work we applied a function-guided approach to identify new protective antigens from the O. erraticus sialome.
Our candidates were selected from four protein families that have biological functions related to the processes of tick feeding and tick modulation of host defensive responses, namely ATSPs, MCFD2s, Cu/Zn-SODs and SULTs. Topological analysis of the four antigen candidates, OeATSP, OeMCFD2, OeSOD and the OeSULT-like protein, showed that they all have classical secretory signals and lack membrane anchor sites, either GPI or transmembrane domains, indicating that all of them are secreted into tick saliva via the classical secretory pathway ( Díaz-Martín et al., 2013 ; Pérez-Sánchez et al., 2022 ). According to the definition by Nuttall et al. (2006) , they can be considered ‘exposed’ antigens and, therefore, are expected to boost the vaccine-induced immune response after each tick feeding. Linear B-cell and T-cell epitope predictions verified the presence of both kinds of epitopes throughout the whole sequence of the mature secreted proteins, supporting their antigenicity ( Figs. 1 and 2 ). Additionally, the 3D models of the four candidates showed that the predicted B-cell epitopes are localised on the protein surface, where they would be easily accessible to host antibodies (Supplementary Figs. 1c, 2c, 3d and 4e). The multiple alignment of each candidate with its orthologues in other argasid and ixodid tick species showed that, except for OeATSP, tick orthologues share conserved structural and sequence motifs, and most of these motifs are located in the predicted antigenic regions of the candidates (Supplementary Figs. 1a–4a). We assume that the protective effects of the vaccine are the result of an antibody-mediated loss of function of the antigen target. Hence, conservation of epitopes on antigenic motifs that are easily accessible to antibodies may facilitate the simultaneous targeting of different tick species if the candidates induce cross-protective immune responses. All candidates induced strong immune responses in rabbits, demonstrating high immunogenicity ( Fig. 3 a), in agreement with the linear B- and T-cell epitope predictions and the VaxiJen prediction of antigenicity. As expected, these responses recognised the inducing recombinant protein as well as the native forms of the candidates expressed in the saliva of O. erraticus ( Figs. 3 b and 4 b). Not surprisingly, these native forms acted as boosting antigen doses during tick challenge, increasing the vaccine-induced humoral response ( Fig. 3 a), which in turn is expected to increase the protection against subsequent tick challenges ( Díaz-Martín et al., 2015 ). Most immune sera also recognised native forms of the candidates in the O. erraticus midgut ( Fig. 6 ). This may indicate that these candidates are expressed in both the salivary glands and the midgut. This appears to be the case for OeSOD, as its coding mRNA was detected in the O. erraticus midgut transcriptome ( Oleaga et al., 2018 ). Another possibility is that these salivary candidates share epitopes with midgut antigens. This has been reported for well-known tick salivary antigens such as the Rhipicephalus appendiculatus cement antigen 64P ( Trimnell et al., 2005 ), and this could also be the case for OeATSP and OeSULT-like. The absence of mRNAs encoding OeATSP and OeSULT-like from the O. erraticus midgut transcriptome ( Oleaga et al., 2015 ) and the different sizes of the bands revealed by the immune sera on saliva and midgut extracts ( Figs. 5 b and 6 b) support this possibility. Both possibilities, i.e.
multiple tissue expression or epitope sharing, make these candidates interesting dual-action antigens according to the definition by Trimnell et al. (2002) . Namely, they elicited cross-reacting antibodies targeting exposed and concealed antigens due to conserved epitopes, and they have the potential to act on several tick organs and physiological processes. All the candidates tested herein induced immune responses that provided significant protection against O. erraticus , validating them as protective antigens for anti- O. erraticus vaccine development. The main protective effects consisted of reductions in the amount of blood ingested by ticks and subsequent reductions in female oviposition and fertility, while other parameters, such as moulting and mortality, were affected to a lesser extent ( Table 1 ). The reductions in ingested blood confirmed that the candidates have biological functions related to the processes of tick feeding and modulation of host defensive responses, such that an antibody-mediated loss of their function results in impaired tick feeding. However, the reductions observed in blood ingested (7.4–29.5%), and consequently in the nutrients available for egg laying, would not completely account for the reductions in reproduction, which were much more pronounced (36.2–49.3%). This suggests that the vaccine-induced immune response may have impacted other tick processes besides blood feeding, such as those related to blood digestion, nutrient transport and metabolism, or the management of waste products and oxidative stress, which are performed by proteins mainly expressed in the midgut ( Chmelar et al., 2016 ; Sojka et al., 2016 ; Araujo et al., 2019 ). The recognition of antigenic bands on the tick midgut extracts by the immune sera supports this hypothesis. The least effective candidate was OeSULT-like (31.9%). Its molecular characterisation and 3D modelling showed that this protein was incomplete and lacked its amino-terminal domain, which holds the sulfotransferase active site (Supplementary Fig. 4). Hence, we assumed that the antibodies raised against OeSULT-like were not directed to the protein's active site and may have been poorly effective in blocking the molecular function of the antigen target. Despite this, the immune response induced by this incomplete recombinant version of OeSULT-like provided significant protection. Therefore, it can be supposed that rabbit immunisation with the complete OeSULT-like protein, including its sulfotransferase domain, could significantly increase the protective efficacy of this antigen. Future experiments will clarify this point. The results of Trial 2 demonstrated that the joint administration of the four candidates targeting different molecular functions increased vaccine efficacy against O. erraticus compared with the individual antigens ( Table 3 ). Similar increases in vaccine efficacy were reported for other multicomponent vaccines containing salivary antigens of O. moubata ( Díaz-Martín et al., 2015 ) and midgut antigens of O. erraticus ( Pérez-Sánchez et al., 2019b ). However, the efficacy of the multi-antigenic vaccine decreased in the second tick challenge of vaccine Trial 2. This was unexpected because the study by Díaz-Martín et al. (2015) with O. moubata salivary antigens demonstrated increasing vaccine efficacy with successive tick challenges of the vaccinated rabbits. The reason for this is unknown, but according to the results shown in Fig.
7 , it can be supposed that three doses of the multicomponent vaccine were enough to raise antibody levels to a very high point, at which they reached a plateau. Thus, the antigenic boosting provided by the first tick challenge would not have increased the antibody levels further, and the antibody-mediated protection would not necessarily have increased in the second tick challenge. Despite this observation, these results demonstrate higher efficacy for multicomponent vaccines than for individual antigens and support the usefulness and convenience of developing multicomponent vaccines for the control of ticks. Since O. moubata is the main African soft tick vector of ASF and TBRF, the development of vaccines for its control is worth pursuing ( Obolo et al., 2018 ), and a vaccine targeting both Ornithodoros species would be desirable. This is why we also tested the protective effect of the O. erraticus candidates against this species. The recognition of salivary ( Figs. 3 c and 4 d) and midgut ( Figs. 5 b and 6 d) antigens of O. moubata by the immune sera raised against O. erraticus indicated epitope sharing between proteins of both species and anticipated the possibility of some cross-protection against O. moubata . Conversely, the lack of O. moubata sequences amongst the tick orthologues retrieved by BLASTp searches using the O. erraticus candidates as queries (Supplementary Figs. 1a–4a) made cross-protection less likely. In fact, the cross-protection provided by the individual candidates was low or insignificant ( Table 2 ), but it increased noticeably when O. moubata ticks were fed on the rabbits vaccinated with the multicomponent vaccine ( Table 3 ). These results encourage further studies on cross-species protection and lend additional support to the usefulness and convenience of developing multicomponent vaccines for the control of ticks. Multi-antigenic vaccines can be used to enhance the protection efficacy against a particular tick species, increase the range of tick species against which hosts are protected, or interfere with tick-borne pathogen infection and transmission ( Ndawula and Tabor, 2020 ). The development of efficient multi-antigen vaccines requires a number of issues to be carefully addressed and refined. These include the rational selection and combination of the target antigens. Combining salivary and midgut antigens seems a very promising way to increase multi-antigen vaccine efficacy by targeting a wider range of tick physiological processes and organs. Including antigens of several tick species and pathogen antigens would be a way to increase the range of protection and prevent tick-borne disease transmission. Also important for multi-antigen vaccine development are the protein expression system and the delivery platform. Recent pioneering work has shown that the mRNA-lipid nanoparticle vaccine platform provides stronger and more protective immune responses against ticks than recombinant- and DNA-based vaccines, opening new ways to develop highly effective multi-antigen vaccines against ticks ( Matias et al., 2021 ; Sajid et al., 2021 ). 5 Conclusions The recently obtained transcriptome of the salivary glands and proteome of the saliva of O. erraticus have allowed us to apply a function-based approach to select several exposed salivary antigens that are potentially protective: an acid tail salivary protein, a multiple coagulation factor deficiency protein 2 homolog, a Cu/Zn-superoxide dismutase and a sulfotransferase.
Vaccination of rabbits with these antigens confirmed their predicted immunogenicity, since they all induced robust humoral immune responses. Individually, the antigen candidates showed medium–high protection against O. erraticus ticks and low cross-protection against O. moubata ticks. The protective effects are thought to be the result of an antibody-mediated loss of function of the antigen targets. The present study provides new protective antigens from argasids belonging to protein families never before tested as a source of protective antigens against ticks. Hence, these families may contain additional protective antigens deserving further investigation. The results of this study also demonstrate that multi-antigen vaccines increase protective efficacy compared to individual antigens, underscoring the usefulness and convenience of multi-antigen vaccines for tick control. New and more protective antigens from Ornithodoros spp. are still needed and will probably be identified by targeting tick proteins with significant biological functions for tick survival and pathogen–tick–host interactions. Proteogenomics approaches focusing on pathogen–tick–host interactions will allow abundant data to be acquired, whose integration and analysis will facilitate the identification of potential antigen targets. Novel vaccine platforms such as mRNA-lipid nanoparticles will facilitate the testing and validation of new candidates, boosting the development of effective anti-tick and transmission-blocking vaccines. Animal welfare statement The procedures for tick feeding and experimentation with rabbits were approved by the Ethical and Animal Welfare Committee of the Institute of Natural Resources and Agrobiology (IRNASA) and the Ethical Committee of the Spanish National Research Council (CSIC, Spain) (Permit Number 742/2017). Funding This work was supported by project ‘RTI2018-098297-B-I00’ (MCIU/AEI/FEDER, UE), granted by the Spanish Ministry of Science, Innovation and Universities, the State Research Agency (AEI) and the European Regional Development Fund (ERDF), and project ‘CLU-2019-05-IRNASA/CSIC Unit of Excellence’, granted by the Junta de Castilla y León and co-financed by the European Union (ERDF ‘Europe drives our growth’). CRediT authorship contribution statement Ángel Carnero-Morán: Formal analysis, Investigation, Conceptualization, Data curation, Methodology, Software, Writing – review & editing. Ana Oleaga: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Validation, Visualization, Writing – review & editing. Ana Laura Cano-Argüelles: Validation, Data curation, Methodology, Software, Writing – review & editing. Ricardo Pérez-Sánchez: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Writing – original draft, Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The authors are grateful to Rocío Vizcaíno Marín and María González Sánchez from the Instituto de Recursos Naturales y Agrobiología de Salamanca (IRNASA, CSIC) (Spain) for their skilful technical assistance.
We acknowledge support of the CSIC Open Access Publication Support Initiative. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.ttbdis.2023.102218 .
REFERENCES:
1. ABBAS R (2014)
2. ABBAS M (2023)
3. ALMAGRO ARMENTEROS J (2019)
4. ANTUNES S (2023)
5. ARAUJO R (2019)
6. ARIAS M (2018)
7. ASSUMPCAO T (2018)
8. ASTIGARRAGA A (1995)
9. BENDTSEN J (2004)
10. BOINAS F (2014)
11. CHEN Z (2010)
12. CHMELAR J (2016)
13. CHMELAR J (2019)
14. CONTRERAS M (2016)
15. CRISPELL G (2016)
16. CULOTTA V (2006)
17. DÍAZ-MARTÍN V (2011)
18. DÍAZ-MARTÍN V (2013)
19. DÍAZ-MARTÍN V (2015)
20. DE LA FUENTE J (2015)
21. DE LA FUENTE J (2016)
22. DE LA FUENTE J (2018)
23. DELANO W (2002)
24. DIXON L (2019)
25. DOYTCHINOVA I (2007)
26. DOYTCHINOVA I (2007)
27. ESPOSITO VERZA A (2022)
28. (2014)
29. (2015)
30. FANKHAUSER N (2005)
31. FLOWER D (2010)
32. FRANCISCHETTI I (2009)
33. GARCÍA-VARAS S (2010)
34. GUY J (2008)
35. HALLGREN J
36. HERNANDEZ E (2022)
37. IBRAHIM M (2013)
38. JAMES M (2013)
39. JURADO C (2018)
40. KELLEY L (2015)
41. KOCAN K (2004)
42. LARSEN J (2006)
43. LIU H (2013)
44. MASOUMI H (2009)
45. MATIAS J (2021)
46. MILLER A (2004)
47. NARASIMHAN S (2002)
48. NARASIMHAN S (2004)
49. NDAWULA C (2020)
50. NEELAKANTA G (2022)
51. NUTTALL P (2006)
52. NUTTALL P (2019)
53. OBOLO-MVOULOUGA P (2018)
54. OLEAGA A (1990)
55. OLEAGA A (2015)
56. OLEAGA A (2018)
57. OLEAGA A (2021)
58. OLEAGA A (2021)
59. OLEAGA A (2022)
60. PÉREZ-SÁNCHEZ R (1994)
61. PÉREZ-SÁNCHEZ R (2019)
62. PÉREZ-SÁNCHEZ R (2019)
63. PÉREZ-SÁNCHEZ R (2021)
64. PÉREZ-SÁNCHEZ R (2022)
65. PICHU S (2011)
66. RAKHIT R (2006)
67. RASHID M (2019)
68. RIBEIRO J (2020)
69. SABADIN G (2019)
70. SAHA S (2006)
71. SAHA S (2004)
72. SAJID A (2021)
73. SCHORDERET-WEBER S (2017)
74. SCHUIJT T (2011)
75. SIMO L (2017)
76. SMIT R (2016)
77. SOJKA D (2013)
78. SOJKA D (2016)
79. TALAGRAND-REBOUL E (2018)
80. TAMURA K (2021)
81. TAO D (2020)
82. TRIMNELL A (2002)
83. TRIMNELL A (2005)
84. UNTERGASSER A (2012)
85. VALLE M (2018)
86. YALCIN E (2011)
|
10.1016_j.jshs.2023.07.002.txt
|
TITLE: Effects of personalized exercise prescriptions and social media delivered through mobile health on cancer survivors’ physical activity and quality of life
AUTHORS:
- Gao, Zan
- Ryu, Suryeon
- Zhou, Wanjiang
- Adams, Kaitlyn
- Hassan, Mohamed
- Zhang, Rui
- Blaes, Anne
- Wolfson, Julian
- Sun, Ju
ABSTRACT:
Purpose
This study aimed to examine the effects of a multi-component mobile health intervention (wearable, apps, and social media) on cancer survivors’ (CS') physical activity (PA), quality of life, and PA determinants compared to exercise prescription only, social media only, and attention control conditions.
Methods
A total of 126 CS (age = 60.37 ± 7.41 years, mean ± SD) were recruited from the United States. The study duration was 6 months and participants were randomly placed into 4 groups. All participants received a Fitbit tracker and were instructed to install its companion app to monitor their daily PA. They (1) received previously established weekly personalized exercise prescriptions via email, (2) received weekly Facebook health education and interacted with one another, (3) received both Conditions 1 and 2, or (4) were part of the control condition, meaning they adopted usual care. CS PA daily steps, quality of life (i.e., physical health and mental health), and PA determinants (e.g., self-efficacy, social support) were measured at baseline, 3 months, and 6 months.
Results
The final sample size included 123 CS. The results revealed that only the multi-component condition had greater improvements in PA daily steps than the control condition post-intervention (95% confidence interval (95%CI): 368–2951; p < 0.05). Similarly, those in the multi-component condition had significantly greater improvements in physical health than the control condition (95%CI: –0.41 to –0.01; p < 0.05) over time. In addition, the social media condition had significantly greater increases in perceived social support than the control condition (95%CI: 0.01–0.93; p < 0.05). No other significant differences in outcomes were identified.
Conclusion
The study findings suggest that the implementation of a multi-component mobile health intervention had positive effects on CS PA steps and physical health. Also, offering a social media intervention has the potential to improve CS' perceived social support.
BODY:
1 Introduction Cancer remains a critical public health issue in the United States. Studies have demonstrated that engaging in regular physical activity (PA) following a cancer diagnosis can offer numerous health benefits, including reducing the risk of all-cause and cancer-related mortality as well as cancer events among cancer survivors (CS). 1,2 Adopting a healthy and active lifestyle can also minimize the risk of cancer and improve the prognosis and quality of life for individuals with cancer. 3,4 However, like the general population, the majority of CS do not meet the recommended minimum of 150 min of moderate-to-vigorous PA (MVPA) per week. 5 Therefore, it is crucial to develop innovative and scalable PA interventions to promote healthy behaviors and provide appropriate supportive care and guidance to CS. 6 Advancements in technology, including wearables, apps, and big data analysis, have enabled the delivery of mobile health (m-health) interventions to encourage PA in CS. These interventions are well-suited for broad distribution and offer individualized and timely feedback on behavior modification, and thus are a promising area of technology focused on increasing healthy behaviors. M-health utilizes modern technologies, such as smartphone apps, wearables, and social media, to improve the quality of healthcare. These commonly used technologies offer various ways to support the self-regulation of health behavior. Users can monitor different metrics, such as daily steps, activity duration, and energy usage, which can be accessed on the device or a connected mobile app. They can also receive immediate feedback or reminders, set objectives, log activities, and track their advancement. Additionally, these devices can integrate personal data into a social network to encourage self-motivation and peer support. 7–13 Recently, researchers have applied such technologies to promote health by increasing PA and reducing sedentary behavior in CS, and the findings have been promising. 14–18 Despite the positive findings, the literature does have some limitations, such as small sample sizes, a lack of personalized exercise prescriptions, and a lack of use of big data from mobile devices. 19–29 Over the past few decades, the Social Cognitive Theory has frequently been applied to promote health and facilitate changes in PA behavior. The Social Cognitive Theory incorporates 3 interdependent factors: (1) personal factors such as age, weight, and self-efficacy; (2) environmental factors including social support; and (3) behavior, such as levels of PA. 30,31 Researchers who utilize the Social Cognitive Theory have aimed to enhance individuals’ PA by addressing and improving both personal and environmental determinants. 32 For example, when an individual has a high level of self-efficacy and enjoyment for PA and is in a setting that encourages PA, that person is more likely to participate in PA. According to the behavior change literature, multi-component interventions typically yield better outcomes than single-component interventions. 27,33–37 Yet few studies have examined the interactive effects of multi-component m-health technologies on PA and other outcomes in CS. 4,27,38–43 This highlights a major gap for advancing tailored PA interventions with multi-component m-health programs. 44
The purpose of this project, therefore, was to examine the effects of a combination of a personalized exercise prescription and Facebook health education intervention, as compared to personalized exercise prescriptions only, Facebook health education only, and attention control conditions, on PA and health outcomes in CS. Based on the literature review and previous studies, we formulated the following specific aims: (1) Examine the effects of the 3 m-health interventions on PA in CS as compared to the control condition over 6 months (Hypothesis 1a: CS in the 3 m-health groups would show greater increases in PA at 6 months compared to those in the control condition; and Hypothesis 1b: CS in the multi-component intervention group would show greater increases in PA than those in the exercise prescriptions only and Facebook only groups); and (2) Determine the effects of the 3 m-health interventions on CS’ health-related quality of life (HRQoL) and PA determinants (Hypothesis 2a: CS in the interventions would show greater increases in these outcomes at 6 months compared to those in the control condition; and Hypothesis 2b: CS in the multi-component intervention would show greater increases in these outcomes at 6 months compared to the other 2 m-health conditions). This project attempted to examine innovative remote m-health interventions on PA and health outcomes in CS while offering personalized exercise prescriptions based on smart device data. If successful, it could significantly impact the development of effective and remote PA programs to promote health and protect against diseases in CS. Moreover, its findings can guide health professionals and cancer communities to initiate these feasible remote intervention programs to promote PA and health in CS. 2 Methods 2.1 Participants To detect a mean difference in changes of PA (the primary outcome) across the 4 intervention and control conditions with power at 80% and a significance level of 5%, assuming an SD of 1500 steps/day, this project required 30 participants per condition. Recruitment occurred in the United States. Inclusion criteria for CS were: (1) aged ≥50 years old (this age group was selected because age is a risk factor for CS); (2) had 1 or more of the cancers of interest (i.e., breast, colon, bladder, prostate, endometrium, esophagus, lung, kidney and renal pelvis, stomach, etc .) because research evidence suggests that regular participation in PA is beneficial in preventing and managing these types of cancers; (3) completed active cancer treatment at least 3 months prior to enrollment, with the exception of anti-hormonal therapy; (4) had an Android or iOS operating system smartphone; (5) possessed basic English communication capability; (6) had a Facebook account, or were willing to make one; (7) male or female CS; (8) was willing to provide consent and accept randomization assignment; and (9) engaged in some type of PA as assessed by the Physical Activity Readiness Questionnaire. Exclusion criteria for participation in this study were: (1) diagnosed with Stage IV cancer; (2) completed primary cancer treatment (e.g., surgery, radiotherapy) less than 3 months ago with a new cancer diagnosis or recurrence; and (3) declined completion of the informed consent and/or the Physical Activity Readiness Questionnaire. 45 2.2 Design and procedures This study was registered at ClinicalTrials.gov (Identifier: NCT05069519).
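The sample-size reasoning above can be reproduced approximately with a standard ANOVA power calculation. The Python sketch below is a rough illustration only: the paper reports the power (80%), significance level (5%), and assumed SD (1500 steps/day), but not the assumed mean difference, so the standardized effect size (Cohen's f) used here is an assumption chosen purely to land near 30 participants per condition.

from statsmodels.stats.power import FTestAnovaPower

assumed_f = 0.30  # hypothetical Cohen's f; not reported in the paper
n_total = FTestAnovaPower().solve_power(effect_size=assumed_f, alpha=0.05,
                                        power=0.80, k_groups=4)
print(f"total N = {n_total:.0f} (~{n_total / 4:.0f} per condition)")  # ~30/group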
In this study, all CS received a Fitbit Charge 5 tracker (Google, San Francisco, CA, USA) and installed its companion app on their mobile phones. We then randomized 126 CS to 4 groups for a 6-month intervention period: (1) a personalized exercise prescriptions group (tracked daily PA via Fitbit (Google), shared PA data remotely, and received personalized exercise prescriptions from researchers); (2) a Facebook health education group where participants had access to weekly health education and interacted with one another on a private page; (3) a combination of personalized exercise prescriptions and Facebook health education group (tracked daily PA via Fitbit, received personalized exercise prescriptions, had access to weekly health education, and interacted with one another on Facebook); and (4) an attention control group where participants continued with standard care. The primary outcome was daily steps, and secondary outcomes included HRQoL and PA determinants. Each condition lasted 6 months. All CS underwent identical assessments at baseline (pre-test), 3 months (mid-test), and 6 months (end-point or post-test). After obtaining institutional review board approval in April 2021, we worked closely with cancer communities and used social media (e.g., Facebook cancer groups, Nextdoor, etc.) and email to send the study information and/or flyer to potentially eligible participants. All eligible participants joined a remote Zoom meeting with the research staff to collect demographic data. During this virtual meeting, the participants also discussed their use of the apps and their respective functions with the researchers. Questionnaires assessing HRQoL and PA determinants were administered remotely via Qualtrics (Qualtrics XM, Provo, UT, USA) at baseline and follow-up tests. Along with the study instructions, the research staff mailed the Fitbit trackers (Google) to participants at the beginning of this study to record weekly average PA steps during the 3 waves of data collection. With assistance from several research assistants, the project coordinator led the process of tracking and checking PA steps during each period to ensure data consistency and validity. To minimize data contamination and impact on intervention implementation, we advised that, during the 6-month intervention period, the participants could not discuss protocol-directed PA and study interventions with participants from another group, and vice versa. They could communicate with participants (within the survivor network) from other groups on non-project-related issues. All participants received monetary incentives (a total of USD210) via prorated ClinCards as compensation for following the protocols of each of the different conditions (as measured by process evaluation) and taking part in the assessments. 2.3 Intervention conditions The m-health technologies utilized in this study were the Fitbit Charge 5 tracker (Google), Facebook, and their companion apps, as well as emailed weekly personalized exercise prescriptions based on the previous week's Fitbit PA data. All participants received a Fitbit tracker (Google) and were instructed to install its companion app only to monitor their daily PA; they did not use other Fitbit app features. 2.3.1 Personalized exercise prescription condition Throughout the intervention period, participants assigned to this condition continued with standard care and were encouraged to participate in at least 150 min of MVPA per week if their body condition allowed.
Participants tracked their PA using Fitbit trackers (Google) and synchronized the Fitbit PA data to its app, from where they were uploaded to the Fitbit server. Researchers retrieved the data from Fitbit's Application Programming Interface, then used a predetermined algorithm to select appropriate levels of exercise prescriptions based on participants’ average daily steps for the previous week. They then emailed the previously established weekly exercise prescriptions to participants for the next week. Exercise prescriptions included aerobic exercises, strength training exercises, flexibility exercises, and balance exercises. Among them, the flexibility and balance exercises were mainly adopted from 2 fitness books for CS. 46,47 For detailed personalized exercise prescriptions and weekly PA contents, see Zeng et al. 29 (Pages 11–16). 2.3.2 Facebook condition Participants continued with standard care but also received health education tips (developed in our previous studies 4,34 ) from a private Facebook group that only group members and researchers could access. Additionally, the researchers tracked CS’ login counts and encouraged engagement and peer interactions to facilitate social support. The researchers built 2 private Facebook groups, one for the Facebook only group and another for the combination group, in which participants received weekly health education tips for improving 4 PA beliefs: (1) increasing self-efficacy; (2) improving outcome expectancy; (3) promoting social support; and (4) enhancing enjoyment. Please note that participants simply needed to use their Facebook account to join the groups and that they did not need to disclose any of their medical diagnoses to other participants in the study. No personal health information was visible in the Facebook groups. Rather, Facebook gave participants a platform to receive health education and interact with one another regarding the contents. The project coordinator and several research assistants posted health education content 3 times; they also posted a single PA-related question every week to encourage interactions among participants ( Supplementary Fig. 1 ). Members could only interact with peers within their own private Facebook group. The health education tips were only available to 2 of the private groups (the Facebook and multi-component groups), and non-members could not access them. Additionally, participants were encouraged to access the health education via the Facebook app on their smartphones or mobile devices (i.e., iPad) as opposed to logging into their Facebook account on a public computer, personal laptop, or home computer. Finally, the researchers reminded participants not to share their medical information in the groups. As such, participants’ Facebook accounts and personal information were not breached. The research team monitored the Facebook groups weekly for safety and privacy reasons. 2.3.3 Multi-component intervention Participants assigned to this condition received both the personalized exercise prescriptions and the Facebook health education program. That is, researchers emailed weekly personalized exercise prescriptions formulated from the Fitbit PA data and also created a private Facebook page for the health education of participants in this group. 2.3.4 Attention control Participants assigned to the control condition continued with standard care (not changing their current routine) and did not receive any other intervention during the intervention period.
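Section 2.3.1 describes a weekly loop: retrieve the previous week's Fitbit steps through the Application Programming Interface, average them, and select a prescription level. Below is a hedged Python sketch of that loop. The access token, dates, step cut-offs, and level labels are all invented for illustration; the study's actual algorithm and prescription contents were the previously established ones (Zeng et al.), not these tiers. The endpoint is the standard Fitbit Web API daily-steps time series, and real use requires per-participant OAuth2 consent.

import requests

def weekly_mean_steps(access_token, start, end):
    # Fitbit Web API daily-steps time series for the authorized user ("-").
    url = (f"https://api.fitbit.com/1/user/-/activities/steps/"
           f"date/{start}/{end}.json")
    resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"},
                        timeout=30)
    resp.raise_for_status()
    days = resp.json()["activities-steps"]   # [{"dateTime": ..., "value": ...}]
    return sum(int(d["value"]) for d in days) / len(days)

def prescription_level(avg_steps):
    # Invented cut-offs; the study's real algorithm was predetermined.
    if avg_steps < 5000:
        return "level 1: light aerobic + flexibility"
    if avg_steps < 7500:
        return "level 2: moderate aerobic + strength"
    if avg_steps < 10000:
        return "level 3: moderate-to-vigorous aerobic + strength + balance"
    return "level 4: maintain volume, progress intensity"

avg = weekly_mean_steps("PLACEHOLDER_TOKEN", "2021-06-01", "2021-06-07")
print(prescription_level(avg))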
For process evaluation, monthly phone check-ups were conducted for all the groups to confirm whether participants were following the research protocols. The project coordinator and a research assistant checked CS’ Fitbit engagement by obtaining the Fitbit step data of all participants every Wednesday and Saturday through an Application Programming Interface from the Fitbit server (Google). These data were then processed using Python 3.4 (Python Software Foundation, Beaverton, OR, USA). In cases where participants failed to synchronize their data with the server or did not wear the Fitbit tracker (Google), reminders were sent via email, followed by phone calls or text messages. Through these efforts, it was ensured that all participants wore the Fitbit tracker (Google) and synchronized their data on a weekly basis. In addition, the project coordinator and 3 research assistants monitored the names of participants who read each post, as well as the likes and comments made by participants in both the Facebook and combination groups. More information is available upon request. Overall, intervention fidelity was continuously monitored for all intervention components in this project. 2.4 Measures Participants self-reported their height and weight at home. Body mass index (BMI) was then calculated from the height and weight provided by each participant. 43 2.4.1 PA CS' 1-week daily steps were assessed using Fitbit trackers (Google). Participants were instructed to wear the Fitbit on their non-dominant wrist at all times throughout the study. Their daily steps at the 3 time points were used as the primary outcome. Fitbit-generated data have been widely used in assessing PA among cancer clinical populations. 44 In this study, the researchers retrieved the Fitbit data from its Application Programming Interface feature. The change scores between baseline and 6 months were used as outcomes in this study. 2.4.2 HRQoL The Patient Reported Outcome Measurement Information System was used for the assessment of HRQoL—physical and mental health. We defined physical health according to 3 factors, namely physical function, pain interference, and capability to participate in social roles and activities; to determine mental health, we looked at anxiety and depressive symptoms. This scale has demonstrated acceptable validity and reliability in clinical populations. 48 Details of the subscale items and scorings can be found in Gao et al. 4,33,34 It is important to note that lower scores meant better physical and mental health in the present study. 2.4.3 PA determinants CS’ PA determinants (i.e., self-efficacy, outcome expectations, social support, enjoyment) were assessed by standardized self-report through previously established questionnaires. 27,49–53 All questionnaires used a 5-point Likert scale (e.g., 1 = almost never , 5 = almost always for the Social Support Scale; 1 = strongly disagree , 5 = strongly agree for the Enjoyment Scale). They have demonstrated acceptable validity and internal consistency in previous studies as well as in this one (Cronbach's α ranged from 0.72 to 0.86). 34 These outcomes were measured at baseline, mid-test, and post-test. The change scores of these 4 variables between baseline and 6 months were used as outcomes in the present study. 2.5 Data analysis In this study, data were imported into SPSS 27.0 (IBM Corp., Armonk, NY, USA) for analysis. A descriptive analysis was conducted to describe the characteristics of CS. The unit of analysis was the individual participant.
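As a small aside before the inferential models, the internal-consistency figures quoted in Section 2.4.3 (Cronbach's α between 0.72 and 0.86) can be computed with a few lines of Python. The sketch below uses an invented participant-by-item Likert matrix; it is an illustration of the statistic, not the study's actual data or script.

import numpy as np

def cronbach_alpha(items):
    # items: (participants x items) matrix of Likert responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(30, 1))                    # shared trait component
items = np.clip(base + rng.integers(-1, 2, size=(30, 5)), 1, 5)  # 5 invented items
print(f"alpha = {cronbach_alpha(items):.2f}")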
Aim 1: The 4-way analysis of covariance with repeated measures was performed to examine changes in CS’ daily steps from baseline to 6 months, with age and BMI as the covariates. Aim 2: Two separate 4-way multivariate analyses of covariance with repeated measures were used to examine changes in the outcome variables of CS’ HRQoL and PA determinants from baseline to 6 months, respectively. In the present study, the within-subject factor was time (3 measurement points), the between-subject factor was group membership, and the covariates were age and BMI. The multiple comparisons among the 4 groups were adjusted by the Bonferroni approach. Pairwise group comparisons were also performed between the 3 intervention groups and the control group, as well as among the 3 intervention groups. The significance level was set at 0.05 for all analyses, and effect sizes were reported for each comparison. Eta-squared ( η 2 ) was used as the index of effect size, with values of 0.01, 0.06, and 0.14 designating small, medium, and large effects, respectively. 54 3 Results The CONSORT diagram illustrates detailed information regarding participant flow in this study ( Fig. 1 ). Two CS dropped out from the study and another did not strictly follow research protocols prior to the post-intervention data collection and so was removed from further data analysis. Our final sample consisted of 123 CS (age = 60.37 ± 7.41 years, mean ± SD). Full demographic and anthropometric information for the sample at baseline is displayed in Table 1 . The vast majority were women ( n = 120). Nine were born outside of the USA (Canada, China, Colombia, Germany, Japan, the Netherlands, South Africa, UK, and Vietnam). In terms of educational background, 2 had a high school education; 13 participants received some college/technical school education; 59 were college graduates; and 49 had a graduate school education. With regard to their annual income, 6 earned less than USD 40,000; 2 participants’ salaries ranged from USD 40,001 to USD 50,000; 17 earned USD 50,001–USD 74,999; 19 made USD 75,000–USD 99,999; 64 earned greater than USD 100,000 annually; and 15 participants did not respond to this question. A total of 84 CS had private health insurance (e.g., Blue Cross, Kaiser), 9 had Medicare (government insurance for people aged 65 and over), 15 had both private health insurance and Medicare, and the rest had other insurance. In terms of cancer types, 110 had breast cancer, including 11 who also had another type of cancer (ovarian, endometrial, cervical, etc.); the rest had other types of cancer, such as endometrial ( n = 3), ovarian ( n = 1), fallopian tube ( n = 3), thyroid ( n = 2), prostate ( n = 1), anal ( n = 1), and neck/throat ( n = 1) cancers. In the last 12 months, 107 CS visited their primary care doctor (or a family doctor), medical oncologist, or hematologist, and 16 did not visit any doctor. Among those who visited a doctor, 2 visited a specialist once, 79 visited twice, 4 visited 3 times, 1 visited 4 times, 10 visited 5 times, and 11 visited other health care providers, such as a nurse practitioner, homeopathic doctor, acupuncturist, or naturopathic doctor. Table 2 displays the descriptive results for CS’ daily steps, HRQoL, and PA determinants across the 4 groups at 3 time points. On average, CS displayed a moderate level of daily steps, averaging greater than 7000 steps at baseline.
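As a companion to the analysis plan in Section 2.5, the sketch below shows one way to run an analysis of covariance on baseline-to-post-test step change scores with age and BMI as covariates and to obtain a partial eta-squared effect size in Python (using the pingouin package). It approximates, rather than reproduces, the SPSS models described above; the group labels, variable names, and simulated data are placeholders.

```python
# Minimal sketch (assumed variable names, simulated data): one-way ANCOVA on
# 6-month step change scores with group as the between-subject factor and age/BMI
# as covariates, approximating the SPSS analysis in Section 2.5. Needs `pingouin`.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_per_group = 30
groups = ["prescription", "facebook", "multi_component", "control"]

df = pd.DataFrame({
    "group": np.repeat(groups, n_per_group),
    "age": rng.normal(60, 7, n_per_group * 4),
    "bmi": rng.normal(27, 4, n_per_group * 4),
    "step_change": rng.normal(500, 1500, n_per_group * 4),
})

# ANCOVA on change scores; the 'np2' column is partial eta-squared.
aov = pg.ancova(data=df, dv="step_change", between="group", covar=["age", "bmi"])
print(aov[["Source", "F", "p-unc", "np2"]])

# Bonferroni-adjusted alpha for the 6 pairwise comparisons among 4 groups.
n_comparisons = 6
print("Bonferroni-adjusted alpha:", 0.05 / n_comparisons)
```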
Their HRQoL, namely physical and mental health, demonstrated relatively moderate levels, since the mean values of these 2 outcomes (scores ranging from 1.50 to 1.79) were lower than the median scores (3.00; reverse scored) across time. Notably, CS displayed low-to-moderate levels of PA determinants at baseline (scores ranging from 1.99 to 3.84). A series of analyses of variance were performed on the study outcomes by group at baseline, and no significant differences were identified ( p > 0.05), meaning that CS had equivalent daily steps, HRQoL, and PA determinants prior to the interventions across these 4 groups. As shown in Fig. 2 , the group effect for PA daily steps in the analysis of covariance approached the significance level ( F (1,117) = 2.22, p = 0.09, η 2 = 0.09). Pairwise comparisons were then conducted to evaluate the differences in step changes across groups. The multi-component intervention group had a significantly greater increase in steps compared to the control group from baseline to post-test ( p < 0.05, 95% confidence interval (95%CI): 368–2951). Although the other 2 intervention groups had increased steps and the control group displayed decreased steps from baseline to 6 months, no significant inferential differences were identified among these groups over time. The 4-way multivariate analysis of covariance did not yield a significant group effect for HRQoL (Wilks' lambda = 0.93, F (2,116) = 1.34, p = 0.24, η 2 = 0.03). A significant group effect was observed for physical health ( p < 0.05) but not mental health ( p = 0.23). The pairwise comparison suggested that the multi-component intervention group had a greater improvement in physical health compared to the control group from baseline to post-test ( p < 0.05, 95%CI: –0.41 to –0.01). In terms of PA determinants, our data did not yield a significant group effect (Wilks' lambda = 0.94, F (6,232) = 1.19, p = 0.31, η 2 = 0.03). Yet the pairwise comparison indicated that the social media group had a greater increase in social support compared to the control group from baseline to post-test ( p < 0.05; 95%CI: 0.01–0.93). No other significant differences were identified. 4 Discussion PA among CS is one of the most effective lifestyle factors in disease prevention and management. 43, 55 M-health interventions with personalized exercise prescription may improve PA and health outcomes in this population. 56 Yet, to our knowledge, few studies have integrated multiple m-health components to facilitate PA and health promotion in CS. 4, 33, 34, 37 In response, the present study attempted to fill this knowledge gap by investigating the efficacy of a multi-component intervention (personalized exercise prescription and social media) on PA steps, HRQoL, and PA determinants among CS. Overall, CS displayed a moderate level of daily steps and HRQoL at baseline, as well as low to moderate levels of PA determinants. Study observations yielded mixed results. 44 For the first aim, we hypothesized that CS in the 3 m-health groups would have greater increases in PA steps at 6 months compared to those in the control group. The descriptive analysis indicated that CS in all intervention groups showed a greater increase in steps than their control counterparts over the course of 6 months. However, the inferential tests indicated that only the multi-component intervention group had a significantly greater increase in steps than the control group.
The findings partially support our first hypothesis by corroborating the postulation that multi-component PA interventions are effective at promoting PA in clinical populations, including CS. 4, 9, 33 Specifically, empirical evidence has suggested that health wearables and exercise apps, in partnership with social media programs, could help to improve CS’ PA participation. 4, 57 Although the other 2 single-component interventions did not display significantly greater increases in steps than the control group, they did further illustrate the potential that m-health programs show for promoting PA in CS. We also hypothesized that CS in the multi-component intervention group would have greater increases in steps than those in the single-component intervention groups at 6 months. However, although the multi-component intervention group had a greater increase in steps compared to the social media group at the mid-test, our data failed to support this hypothesis at post-test. It has been postulated that multi-component m-health interventions have advantages over single-component interventions. 4, 33 For example, in a recent network meta-analysis, McDonough et al. 9 suggested that a multi-component accelerometer/pedometer intervention was the most effective strategy for reducing BMI among clinical populations, followed by a commercial health wearable-only intervention. One plausible explanation is that all CS in this study received a commercial health wearable (Fitbit) and its companion app; thus, their healthy behaviors might have improved even when the wearable was paired with only a single additional m-health component. The findings could have practical implications for researchers and clinicians alike. Offering remote multi-component m-health programs, such as apps and personalized exercise prescription, along with health wearables could be a feasible and effective way to facilitate PA participation in CS. 9 Our data suggested that, post-intervention, the multi-component intervention group had a greater improvement in physical health than the control group and the social media group had a greater increase in social support than the control group. This partially supports our second hypothesis, which states that CS in the interventions would show greater increases in HRQoL and PA determinants at 6 months compared to those in the control condition. This finding is in line with previous studies suggesting that m-health PA programs could improve physical health or fatigue among cancer populations. 33 Studies also reported that m-health interventions enhanced individuals’ self-efficacy and social support. 58 For example, Gao et al. 59–61 used exergaming as the intervention channel and found that children's self-efficacy and social support significantly improved post-intervention. No significant difference in mental health was observed over the course of 6 months across groups. One explanation could be that participants’ mental health was already moderately high at the baseline assessment and, consequently, a ceiling effect might have masked the inferential difference across groups. It is also possible that many CS have had cancer for years and have learned effective strategies for dealing with stress and depression through various stress reduction programs, including yoga and meditation. The finding deserves further investigation; perhaps in-depth interviews would provide insight into a more detailed explanation.
In addition, our second hypothesis predicted that the multi-component intervention would show greater increases in HRQoL and PA determinants at 6 months compared to the other 2 m-health conditions. As seen, slight changes occurred in the HRQoL and PA determinants from baseline to post-test among CS in all 3 intervention groups. Interestingly, the results showed that only the multi-component intervention group had a greater improvement in physical health than the social media group, which also partially supports the hypothesis. We believe that CS from both the personalized exercise prescription only and the multi-component m-health groups benefited from the PA engagement promoted by the exercise plans, as their physical health improved. Meanwhile, CS from the social media only group received health education tips and social support from peers but no personalized PA encouragement, which may have led to the difference in physical health between the multi-component intervention group and the social media group. Furthermore, since most m-health interventions use bundled packages with multiple components (i.e., wearables, apps, support calls, and social media) delivered simultaneously, 4, 33 it is difficult to disentangle the intervention components to determine at which levels and in what combination(s) individuals’ PA and health outcomes are influenced. For this reason, the present study used a traditional experimental design (a 2 × 2 factorial design) to test each component individually along with their interaction effects. It is recommended that, in future studies, the Multiphase Optimization Strategy framework be adopted to evaluate intervention components’ individual and combined effects and to compare the individual effects of multiple intervention components simultaneously. 44 The present study has the following strengths: (1) all participants received the base intervention with a Fitbit and its companion app, and as a result they all benefited from the PA monitoring and promotion in this study; (2) it was the first study to apply multiple novel m-health components (i.e., fitness wearable, exercise apps, social media app) and big data applications to home-based practice in CS during the pandemic, as well as to explore the individual and combined effects of different intervention components; (3) it offered weekly personalized exercise prescriptions to CS according to objective Fitbit data from the previous week; and (4) intervention fidelity was ensured through process evaluation, which included weekly Fitbit data check-ups and monthly phone follow-ups. Nevertheless, study findings should be interpreted with caution given the following limitations. First, the vast majority of CS were breast CS. There is a need to recruit CS of other cancer types in the future. Second, the majority of participants were well-educated Caucasian women who were of relatively high socioeconomic status, which may limit the external validity of the findings. A few CS knew each other from a cancer network and many had average daily steps of over 7000 at baseline. Hence, more diverse, less-active samples are needed in the future. Third, personalized exercise prescriptions were offered manually via email every week after research staff retrieved Fitbit data and allocated the weekly personalized exercise prescriptions based on weekly steps. Future studies should consider utilizing sophisticated apps, guided by algorithms and artificial intelligence, to automate these processes.
To this end, the researchers recently developed a precision exercise app, iFitRx (iRecOO Mobile Technology, Lewes, DE, USA), to deliver automated personalized exercise prescriptions based on big data analysis and predetermined algorithms. Finally, since the programs were remotely delivered and all data except for the Fitbit data were self-reported, we missed the opportunity to assess objective outcomes such as BMI, MVPA, and functional fitness. In the future, we may include objective instruments for assessing physical and physiological outcomes (e.g., MVPA via accelerometers, body composition). 5 Conclusion According to the study observations, the implementation of a multi-component m-health intervention had positive effects on the PA steps and physical health of CS at the 6-month follow-up. The social media m-health intervention also showed promise for enhancing perceived social support among CS. These findings shed new light on the roles remote m-health could play in terms of encouraging PA and health in CS. They are particularly meaningful results given that CS are at risk for low PA and HRQoL, particularly during and beyond the COVID-19 pandemic. M-health with multiple emerging technologies has the potential to improve PA and hence reduce cancer-related deaths and cancer events (e.g., recurrence) in CS. According to our findings, professionals and practitioners may choose to utilize remote precision PA programs because these m-health interventions have proven to be feasible and cost-effective ways of promoting health among CS. 33, 34 Acknowledgments This study was funded by a College of Education and Human Development Acceleration Research Award at the University of Minnesota Twin Cities, USA. At the time of this study, the first author was with the School of Kinesiology at the University of Minnesota-Twin Cities, USA. Authors’ contributions ZG developed the project, wrote the manuscript, and prepared the figures; SR coordinated the intervention implementation, was involved in data collection, helped with the manuscript preparation, and critically revised it; WZ, KA, and MH coordinated the intervention implementation and were involved in data collection; RZ, AB, JW, and JS helped with the manuscript preparation and critically revised it. All authors have read and approved the final version of the manuscript and agree with the order of presentation of the authors. Competing interests The authors declare that they have no competing interests. Supplementary materials Supplementary materials associated with this article can be found in the online version at doi:10.1016/j.jshs.2023.07.002 .
REFERENCES:
1.
2. IRWIN M (2004)
3. ROCK C (2012)
4. POPE Z (2018)
5. LAHART I (2016)
6. MAMA S (2020)
7. RITVO P (2017)
8. QUINTILIANI L (2016)
9. MCDONOUGH D (2021)
10. RODRIGUEZGONZALEZ P (2022)
11. RODRIGUEZGONZALEZ P (2022)
12. BLOUNT D (2022)
13. EMBERSON M (2021)
14. EYSENBACH G (2001)
15. LYONS E (2016)
16. BALBIM G (2021)
17. RINGEVAL M (2020)
18. CHOUDHURY A (2021)
19. HARTMAN S (2019)
20. LYNCH B (2019)
21. REEVES M (2012)
22. ROGERS L (2015)
23. VALLE C (2017)
24. HARTMAN S (2018)
25. POPE Z (2019)
26. GRESHAM G (2018)
27. GAO Z (2022)
28. BLOUNT D (2021)
29. ZENG N (2020)
30. BANDURA A (1986)
31. BANDURA A (2004)
32. BANDURA A (1991)
33. STACEY F (2016)
34. STACEY F (2015)
35. ROGERS L (2008)
36. BARBER F (2012)
37. MAILEY E (2014)
38. DING D (2012)
39. GILESCORTI B (2003)
40. GILESCORTI B (2002)
41. PATRICK K (2005)
42. BANDURA A (2004)
43. DONG X (2020)
44. PHILLIPS S (2022)
45. KOHL H (2020)
46. KAELIN C (2006)
47. MICHAELS C (2013)
48.
49. CARLSON J (2012)
50. SECHRIST K (1987)
51. RODGERS W (2008)
52. HARTER S (1978)
53. ROGERS L (2005)
54. RICHARDSON J (2011)
55. PHILLIPS S (2015)
56. DONG X (2019)
57. POPE Z (2022)
58. KEADLE S (2021)
59. PHILLIPS S (2014)
60. GAO Z (2012)
61. VALLE C (2013)
|
10.1016_S1658-077X(17)30085-1.txt
|
TITLE: Inside Front Cover -Editorial Board
AUTHORS:
- No authors listed
ABSTRACT: No abstract available
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.tjog.2023.10.011.txt
|
TITLE: High-level mosaicism for 45,X in 45,X/46,XX at amniocentesis in a pregnancy with a favorable fetal outcome and postnatal decrease of the 45,X cell line
AUTHORS:
- Chen, Chih-Ping
ABSTRACT: No abstract available
BODY:
Dear Editor , A 25-year-old, gravida 3, para 2, woman underwent elective amniocentesis at 17 weeks of gestation because of anxiety. Amniocentesis revealed a karyotype of 45,X[26]/46,XX[12]. Simultaneous array comparative genomic hybridization (aCGH) analysis on the DNA extracted from uncultured amniocytes revealed arr (X) × 1–2, (1–22) × 2, consistent with 56 % mosaicism for 45,X. Prenatal ultrasound findings were unremarkable. She was referred for genetic counseling at 20 weeks of gestation. No repeat amniocentesis was suggested, and continuing the pregnancy was strongly advised. A phenotypically normal 2320-g female baby was delivered at 37 weeks of gestation. The karyotypes of the cord blood, umbilical cord, and placenta were 45,X[18]/46,XX[22], 45,X[27]/46,XX[13] and 45,X[26]/46,XX[14], respectively. At follow-up at age 11 months, the infant showed normal development. The peripheral blood had a karyotype of 45,X[11]/46,XX[29], and interphase fluorescence in situ hybridization (FISH) analysis on 118 buccal mucosal cells showed that 17.8 % (21/118 cells) had monosomy X. High-level mosaicism for 45,X in 45,X/46,XX at amniocentesis associated with a favorable outcome and postnatal progressive decrease of the 45,X cell line has been previously reported [ 1 ]. Genetic counseling of high-level mosaicism for 45,X in 45,X/46,XX at amniocentesis remains difficult and challenging because of the concern of a Turner syndrome phenotype after birth, and genetic counselors may overemphasize the possibility of postnatal occurrence of Turner syndrome, which can lead to the parental decision to terminate the pregnancy. However, the present case and the report of Chen et al. [ 1 ] provide evidence that high-level mosaicism for 45,X in 45,X/46,XX at amniocentesis can be a benign and transient condition. Therefore, repeat amniocentesis is not necessary, and termination of the pregnancy should not be advised. This information is very useful for genetic counseling of parents who are of very advanced maternal age, who have undergone assisted reproductive technology, and who wish to keep the baby under such a circumstance. In the present case, amniocentesis at 17 weeks of gestation revealed 68.4 % mosaicism for 45,X in 45,X[26]/46,XX[12] in cultured amniocytes, and aCGH analysis on uncultured amniocytes revealed 56 % mosaicism for monosomy X. However, at birth, the cord blood had 45 % mosaicism for 45,X, or 45,X[18]/46,XX[22]; at age 11 months, the peripheral blood had 27.5 % mosaicism for 45,X, or 45,X[11]/46,XX[29], and FISH revealed 17.8 % (21/118 cells) mosaicism for monosomy X. The postnatal progressive decrease of the 45,X cell line in the present case with high-level mosaicism for 45,X in 45,X/46,XX at amniocentesis implies that genetic counseling of the prognosis of high-level mosaicism for 45,X in 45,X/46,XX at amniocentesis based simply on the prenatal cytogenetic result is neither reliable nor feasible. Conflicts of interest The authors have no conflicts of interest relevant to this article. Acknowledgements This work was supported by research grant NSTC-112-2314-B-195-001 from the National Science and Technology Council, Taiwan .
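For readers who want to verify the mosaicism levels quoted above, the short calculation below simply re-derives each percentage from the cell counts reported in the letter (e.g., 26 of 38 cultured amniocytes, 21 of 118 buccal cells); it adds nothing beyond the figures already given.

```python
# Re-derivation of the mosaicism percentages from the cell counts reported above.
counts = {
    "cultured amniocytes 45,X[26]/46,XX[12]": (26, 26 + 12),
    "cord blood 45,X[18]/46,XX[22]": (18, 18 + 22),
    "peripheral blood at 11 months 45,X[11]/46,XX[29]": (11, 11 + 29),
    "buccal mucosa FISH (monosomy X cells)": (21, 118),
}
for label, (x_cells, total) in counts.items():
    print(f"{label}: {100 * x_cells / total:.1f} % 45,X")
# Output: 68.4 %, 45.0 %, 27.5 %, and 17.8 %, matching the values in the text.
```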
REFERENCES:
1. CHEN C (2022)
|
10.1016_j.foar.2016.07.001.txt
|
TITLE: Multi-identity planning process in a studio course: Integrative planning in multi-identity environments
AUTHORS:
- Shach-Pinsly, Dalit
- Porat, Idan
ABSTRACT:
The planning process in a planning studio demonstrates a microcosm of diverse ideologies and identities seeking acknowledgment and spatial recognition. In a modern world of multiple and dynamic identities and ideologies, in which regions, towns, and communities aspire to self-recognition, place-based identity has become a core aspect that needs to be taken into planning consideration. The analytic planning method used iterates between top–down and bottom–up approaches, thereby creating multi-dimensional and coherent planning alternatives in which spatial solutions arise from communities as they change. We present two alternative spatial plans that were developed in the studio course and are based on this line of thinking. The resulting plans were dynamic, ambitious, and complex, yet also highly applicable and flexible, thereby addressing a wide range of ideologies and identities.
BODY:
1 Need for a place-based identity-ideology planning process Variety is an issue that needs to be considered in the modern world of multiple and dynamic identities. This recognition is an output of the place-based identity of regions, towns, communities, and individuals. Thus, it should be a core aspect of planning. This spatial diversity corresponds to the theories of multiculturalism that point out the advantage of variety ( Goldberg, 1994 ; Sandercock and Lysiottis, 1998 ), individualism ( Healey, 1997 ; Bellah et al., 2007 ), pluralism ( Davidoff, 1965; Hayden, 1994 ), and cosmopolitan ( Binnie et al., 2006 ; Bloomfield and Bianchini, 2003 ). However, in contrast to the theories that modern societies aim to adopt, we identify a lack of planning tools that address the main issues of multiculturalism, pluralism, and individualism on the regional level. This article discusses the outcome of a regional planning studio that deals mainly with the development of a long-term comprehensive regional plan (50 years forward) and offers a multi-identity planning process developed by several student teams, allowing reference for different and diverse communities and identities. This process displays products that combine conceptual pluralism and the regional perspective of a bottom–up planning approach based upon the integration of spatial information technology and a multi-parametric analysis of regional planning. 1.1 Planning studio The course methodology includes comprehensive planning that addresses complex and integrated questions of development, conservation, spatial justice, economy, transport, employment, and demographics. The planning process needs to present solutions for places situated in a dynamic process of change, where their place-based identity and self-recognition are changing and the future of their identity/identities is still unclear. This intensive course requires high investments in both methodology and technique within a limited period. The course is built upon five phases, as follows: (1) a review of the current situation (according to a spatial capital assets model); (2) individual planning concept development based on their ideological perceptions; (3) students work in teams for a comprehensive regional program development; (4) students work in teams to develop the spatial plan; and (5) an evaluation of the diverse spatial plans using analytical and political tools. The studio methodology allows students to address different aspects of comprehensive spatial planning as part of their training as planners and as part of the formulation of a “professional voice” and planner identity. The students are required to develop a working model that answers their “professional voice” and references the characteristics and constraints of reality. The planning work is done in a given area, wherein each student׳s team provides a planning alternative. In Phase 5 of the course, all the alternatives are evaluated by the students, and the pros and cons of each alternative are discussed. During the course, a number of student teams deal with a planning dilemma that relates to the inflexibility of the planning process of representing multi-identities or societies/communities and the difficulty of addressing differences between groups, types of identities, or communities. Criticism in planning has intensified as the real world has become more pluralistic, diverse, and multicultural. 
The following are the questions we asked in a metropolitan planning studio that gave our students a chance to translate and transform their conceptual ideas into spatial policy plans. Is it possible to combine the differences between communities and types of identities that characterize the complexity and the pluralistic world we live in today in a coherent planning process that places the principle of multiculturalism as a working model premise? Can we plan for a “reasonable” person in a multicultural world or a multi-identity world? Can we combine different ideologies/identities in integrative coherent planning? 1.2 Multicultural assumptions in planning The growth of a multicultural society is one of the known trends in the global and dynamic world and constitutes a challenge for planning. Planning has succeeded in the past by characterizing the uniform local cultural characteristics of a region or characterizing nationality with a common direction, alignment, and commitment to a common vision. Currently, such definitions are less common and agreed upon in a multicultural society, which is composed of a mosaic of communities. This mosaic of communities has been addressed in the spatial capital assets model, which declares that a region is defined by the unique mixture of the diverse forms of capital it possesses (see e.g. Friedmann, 2002 ; Kitson et al., 2004; Frenkel and Porat, 2013 ). This complex view of multi-capital assets is essential to the understanding and analysis of urban and regional complexity and the dimensions of sustainability ( Friedmann, 2002 ; Nilsson, 2007 ). Each community has its own unique mixture of capital assets, identity, and needs to fulfill its vision and goals. Communities need space and amenities to fulfill shared values, but even communities aspiring for similar goals and visions will require different needs because of spatial differences and differences in local authority policies ( Walters and Brown, 2004 ). The number of processes of creating a multicultural society is increasing. On one hand, the spatial outcomes are segregations; on the other hand, a need for integration exists ( Burayidi, 2000 ). However, both need a planning process that will identify their uniqueness in the first place and will address them in comprehensive planning ( Hague and Jenkins, 2005 ; Devine-Wright, 2009 ). This spatial diversity corresponds with several theories, such as multiculturalism ( Goldberg, 1994 ; Sandercock and Lysiottis, 1998 ), individualism ( Healey, 1997 ; Bellah et al., 2007 ), pluralism ( Davidoff, 1965; Hayden, 1994 ), and cosmopolitan ( Binnie et al., 2006 ; Bloomfield and Bianchini, 2003 ) theories that point out the advantage of variety. However, in contrast to these theories, which modern societies aim to adopt, we identify a lack of planning tools addressing multiculturalism, pluralism, and individualism at the regional level that can be adopted in the planning process. Another tool that may assess the advantage of variety is the public participatory, which is a bottom–up practice that allows connecting to multiculturalism and the needs of the stakeholders. Public participation (P2) involves diverse communities and stakeholders (usually from the same location) to understand the needs and preferences of different kinds of end-users and innovation better and to influence the planning process of a region ( Rowe and Frewer, 2000 ). 
In a studio process, we identify a lack of geographic information system (GIS) planning tools addressing the connection between different communities aspiring for multiculturalism and end-users’ need for the regional level. PGIS, an emergent practice from participatory approaches to planning, can be adopted in this planning process; it combines a range of geo-spatial information management tools and methods to represent peoples’ spatial knowledge in the form of physical maps that are used as interactive tools for spatial analysis, discussion, information exchange, and decision-making ( Corbett and Keller, 2005 ). Therefore, planning is obligatory in mitigating the negative aspects and developing the positive aspects of diversity. The negative aspects refer to aspects of segregation and closings, and positive aspects refer to the opening of possibilities, variety, choice, and the possibility of self-determination and self-expression. Only a few cities, i.e., those that are recognized as world-class cities, have succeeded in providing multiculturalism and a hyper-diverse life for their populations. Most regions and cities prefer to differentiate themselves from other regions and have adopted homogeneous identities. Under this approach, the goal of the studio planning process is to provide the optimal combination of diversity on the one hand and not to impose pluralism on the other hand. 2 Multicultural planning model in a planning studio Diversity and variety are addressed in planning mainly by refining the diversity of land use. In most planning fields, an increase in urban usage exists, such as green areas of different types, residential diversity and textures, density levels and urban fabric types, variety of public service, and mixed uses. This increases the variety of usage types as an outcome of the need to address a more complex existence and the need for a unique policy in a growing number of land parcels. Currently, there is a shift in the way the urban planning process is developed around the world. This change occurs because of several transformations and changes. One example is technology change, such as implementing the GIS platform in planning ( Talen, 2000 ). The development of new methods and tools for planning as form-based code (FBC) comes as an alternative to conventional zoning planning. At its base is the idea of a neighborhood or city as a whole, rather than its division by specific land uses ( Parolek et al., 2008 ). The Buffalo Green Code was developed for a new approach to guiding development. This new approach is called “place-base planning,” which is a way to shape the city by concentrating on the look and potential of places and their forms and characters instead of focusing only on the conventional categories of land use. Using this code will map the entire city by place type. This will be followed by a comprehensive zoning ordinance that will create a new set of rules to encourage development that fits with the desired character of the place ( http://www.buffalogreencode.com/what-is-place-based-planning/ ). The studio projects react to these changes in the planning field, especially in the studio program. For example, the change in planning terminology developed as an outcome of a top–down planning approach. The diversity of urban usages increases the variety of zoning types and serves as a solution to complex and wide variety of laws and building regulations. 
The planning process tries to cope with the wide variety of laws and regulations by developing innovative and creative terminologies for the land use types. Zoning terminology is inherently general and relates to a wide variety of regions and plans; it is not created to address place-based planning. Zoning terminology is not an outcome of a bottom–up process in most planning processes, but given as a fact. Zoning terminology assumes that its richness of land uses can cope with the variety and complexity of reality; otherwise, the plan will reduce the variety to an existing terminology. In addition, a bottom–up place-based planning approach that addresses the existing and future varieties in a specific area requires a different model of zoning. It requires a model that will be sufficiently flexible to develop an appropriate spatial policy for different communities and ideologies. Despite the high-level of flexibility, the model should be generic and based on space-based analysis. Hence, these specifications require a new planning model. A classic planning model is built as a double funnel or a sand clock. First, information and knowledge on the region are collected. This knowledge is abstracted and simplified for representation of urban systems. This knowledge is further abstracted into planning principles, alternatives, and finally, a chosen alternative. The chosen alternative undergoes a process of expansion, deepening, and development of policy measures and spatial details that mature the process to a complete comprehensive plan ( Altshuler, 1966 ; Hax and Majluf, 1996 ; Chadwick, 2013 ). The “narrow waist” of the planning process should be expanded to produce a bottom–up plan that addresses a variety of identities and different ideologies. Multiple alternatives should be reflected throughout the planning process instead of a single chosen alternative, side by side from the initial stages of data collection to the development of a comprehensive plan. Different alternatives fit different ideologies and identities that provide a plan for the exact identity it serves. All plans have their own internal logic and an overall view that incorporates and integrates a comprehensive plan on a spatial and conceptual level. This place-based planning process creates a mosaic of programs at different levels. Thus, every level will provide a harmonious plan. The outcome plans will expose all communities and local identities and create links between communities and regions and between neighbors and similar neighbors, all at the same time. A sample of identifying future trends can be described as a combination of the following: accessibility, transportation, access to employment, housing, economic capital; education level, marital status, spatial environment, social relationship, spatial relationships; nationality, religion, gender, and language. Furthermore, different levels of the region plan will be addressed, as follows: (a) the relationship between nearby places that share community life; and (b) ideological or conceptual system between distant places. The latter is similar to the concept of an ecological corridor that connects regions to create an ecosystem. This principle will be used to connect places of common ground ideology in the formation of an ideo-system, which might be a knowledge base of similar conceptions. This planning approach is based upon the use of GIS spatial information technology as an integral part of the planning process. 
This technology enables the development of multiple local plans of different characteristics and policies in one region and creates mutual connections between them. Spatial information systems can contain hundreds of attributes of information for all spatial objects and examine spatial adjustments between objects depending on their spatial and functional relationships. A similar approach can be seen in the “bottom–up GIS (BUGIS)” model developed by Talen (2000) , which included an understanding of residents’ perceptions and preferences of local issues based on a GIS planning analysis of spatial complexity, spatial context, interactivity, and interconnection. This place-based planning process fits ideology to an identity with an appropriate set of policies of its own and for neighboring communities. The methodology is based upon an analytic process of development of an identity matrix and a generator/influence matrix, which define the development characteristics for different ideologies and identities as well as the mutual influences between different identities and between identities and their surroundings. The methodology may be perceived as too rigid. However, the reference to a wide variety of different parameters allows extensive and complex variation of land use, a feature that encourages plan flexibility. Parallel to the development of the influence matrix generator, an extensive set of typical identities is formed as part of the planning process. This planning model concept allows the representation of each identity’s needs in the matrix. The set of identities represents the diversity of the population in the region and addresses future variations. The future variation of identities is set according to the current trends of global societies. The major trends that have been introduced to the influence matrix generator represent a strengthening of self-definition based on nationality, religion, language, gender, education, economic capital, sexual orientation, marital status, and geographic identity. These processes increase the range of property needs of the society. The system may seem too rigid, categorically defining characteristics for existing and future identities. However, the opposite is true. The development of various land-use policies for an extensive set of typical identities increases the diversity across the region. This bottom–up planning relates to the characteristics of personalities in a complex and profound way. Furthermore, this type of planning creates a wide variety of residential and employment areas, education amenities, and leisure spaces and places. This variety fits the needs of the place and also enables wide choices for future communities. Bottom–up planning has an added value at the regional level. The classification and characterization of identities allow for identifying opportunities and conflicts, communities’ synergistic elements, and possible collaborations. Identifying opportunities, conflicts, synergies, and potential collaborations during the planning process allows conflict mitigation and region management in a way that encourages cooperation by development. These elements play a significant role in the regional planning process, which allows and supports a variety of space-place developments and copes with the regional challenges of economies of scale, efficient transportation, employment mix, a variety of services, and optimal spatial management. Fig.
1 describes in a schematic way the differences between a classic planning process featuring the convergence of decision-making-based planning, multi-criteria analysis of different ideologies and identities, and place-based planning process. The following section will demonstrate the place-based planning methodology on two examples of students׳ studio course demonstration of plans. The metropolitan plans were conducted in different regions of Israel and each illustrates in its own planning process way the principles presented above. 3 Description of the planning process in the studio 3.1 Two planning areas The alternative plans presented are located in heterogeneous areas at the outskirts of metropolitan core areas, one at the north of Israel, the Galilee area, and the second at the southern area of Israel, called the Northern Negev region (see Fig. 2 ). The Galilee area is a rural region that includes small towns of different Jewish and Arab cultures, with traditional employment, agriculture industries, and rural villages. The Northern Negev region is an intermediate region between the three main metropolitan areas of Israel, which are Jerusalem, Tel Aviv, and Beer-sheba, and is characterized by small- and medium-size cities, traditional employment and lifestyle, and diverse communities of Jews, Arabs, and ultra-religious Jews. The students learn and review the current situation in the planning region according to the spatial capital assets model. The data collection for this model is related to different spatial capital assets ( Frenkel and Porat, 2013 ) and based (in this studio course) on open and available data sources from the Israeli Bureau of Statistics, National Insurance Institute, Ministry of Interior, Ministry of Transportation, and other open data sources and also on local survey and knowledge. According to this data analysis, the students define the spatial capital of each region and sub-regions and identified the groups, narratives, conflicts, and social trends that emerge from the analysis. The motivation for both planning processes comes from a critical approach to the existing statutory planning that choose to ignore the variety of populations׳ narratives, identities, conflicts, and residence types in the alternative areas. The existing statutory plan defines most of the areas as rural and ignores the complexity and uniqueness of the nature of both regions. The spatial policy suffers from a lack of reference to the distinctive features that create identity and internal unity. The towns and villages in these regions are scattered and separated because of the absence of unique identity. 3.2 Definition of existing and future identities in both regions The first stage of the planning process includes the definition of different identities and distinction between the identities in the region. The definition of identities and ideologies in the studio are based on local knowledge and common narratives of different social groups and their social trends in the public space (in the region) and on their shared characters, desires, vision, and goals. In this studio, the student involves stakeholders only as part of the planning administration and not as part of the identities. 
In both alternatives, the current planning needs of various communities are examined and defined according to different routes: 3.2.1 The Galilee alternative The plan provides a unique planning statement and individual treatment for each settlement in the space/place and for the multi-dimensional connection between them. The idea is to create alternatives for people searching for a different lifestyle and to provide a generic solution for the types of settlements that people are looking for. The place-based models provide solutions for the differences that were identified between the “needs” and “wants” of people in search of alternatives to the conventional urban lifestyle. The main emphasis is on developing models that offer conditions required for specific lifestyles that will be provided at maximum flexibility to the residents in accordance with their wants and needs. The main idea based on this notion was to categorize the existing planning process behind these settlements, which according to the old model are developed according to a specific available area with appropriate space. Different community types are defined by their different characteristics, such as population size, self-sufficiency, self-organization level, mix of employment types, and typical local economy. The results are based on four different types of towns, as follows: (1) Ecological – According to research, ecological towns are viable only for small populations (up to 1000 residents). The distances to the city can be long because most of the residents work within the settlement. Being located close to environmentally sensitive areas suggests that these settlements have minimal environmental effect and tourism makes up a significant part of their income. (2) Cooperative – Distances from the city can be long because the towns can provide work, food (from their agriculture), and services. (3) Coexistence – The same as a house in a countryside, a community apparatus that is more complex and requires employment in the settlement is necessary. (4) Country side, house in the countryside – The town can be larger and closer to the city and to employment centers. Each combination displays a different level of performance in each characteristic, as shown in Fig. 1 a. 3.2.2 The Negev alternative In this plan, the students identify six different types of generic personal identities in the region desiring different needs and characteristics. These identities are determined through the following process. First, an analysis of the existing population characteristics is conducted. The students provides an overview of the populations living in the region and deliberately defined six stereotype figures that represent different identities living in the region, as manifested in different dimensions, such as religion, ethnicity, gender, and age. Each of the six identities has difficulties or is in conflict with the reality in the region today. These six figures are characteristics and are defined as parameters in a multi-parametric matrix. For example, Jubel (identity) is a student and a farmer׳s son deliberating between staying on the farm and leaving the farm; Phatma is a young Bedouin woman; J′aklin is a single mother living in a small traditional town; Oleg is an immigrant from the former Soviet Union who works as an engineer; Raz is a gay person who tries to find a suitable community for his modern family; and Sara-Rivka is an ultra-orthodox woman and a mother of five who works as a software programmer. 
Each of these generic identities provide possible future developments of these identities, as shown in Fig. 1 b. Each of the ideologies and identities also have spatial characteristics, such as transportation accessibility and access to employment, and social characteristics, such as nationality, religion, gender, language, identity, education, marital status, and economic capital. This identity analysis characterizes the variety of present communities and their future development needs. The place-based planning process tries to cope with a range of future possibilities, such as strengthening existing traditional elements, providing a mix of traditional and modern life, or a drastic change and abandonment of social tradition and adoption of some existing trends in Israeli society and global perspectives. The analysis refers to current trends in Israeli society and global trends that will be intensified in the future, such as women׳s employment, the growth of new family types, and the blurring of gender, individualism, liberalism, and education. Each of the identities/ideologies has undergone a similar expansion process that tried to outline the future possibilities. Obviously, this process cannot predict all directions of personal development, but this kind of thinking creates a vast mosaic of various mixtures that may characterize the majority of the population (see Fig. 3 b). The next step in the planning process includes the characterization of the towns and villages, which will provide the needs of the different communities, such as town size and type, occupation structure, social structure, regional identity, and relationship to the natural environment. These provide the basic elements of the identity matrix generator. Each alternative plan uses a different methodology for their planning performance, as follows. (1) The Galilee alternative metropolitan plan is based on an ARCview GIS model builder that assists in creating spatial locations in a region, where each town and village can define its own spatial future and still be a part of a collective ideology (see Fig. 4 and Table 1 ). (2) The Negev alternative metropolitan plan simulates spatial locations for a mixture of future identities based on present identities and the different routes each identity may take during its growth process and possibilities of identity changes over time (see Table 2 ). 3.3 Development of identity matrix generator The next stage of the methodology is based on a generator of ideologies and their transportation accessibility, access to employment, and social characteristics, such as nationality, religion, gender, language, identity, education, marital status, and economic capital. All of these are translated to define the spatial environment for each community and type of housing, employment, social relationship, and spatial relationships between different ideologies. These matrices are known as the “ identity matrix generator .” The matrix contains ideology types that were identified in the first stage and future ideologies that were developed based on social trends. The Galilee alternative developed the generator according to geographical characteristics (see Fig. 4 and Table 1 ), and the Northern Negev alternative developed the generator according to social indicators (see Table 2 ). Diverse ideologies are identified to cater to the characteristics of each existing and future identity and the towns and villages that will provide an opportune environment for communities. 
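To illustrate the kind of matching the identity matrix generator performs, the toy sketch below scores hypothetical settlement types against the needs of the generic identities named above. Only the identity names come from the text; every attribute, weight, and score is invented for illustration, and the students' actual ArcView GIS model builder and matrix values are not reproduced here.

```python
# Toy illustration of an "identity matrix generator": score settlement types against
# identity needs and pick the best match. All numbers and attributes are invented;
# only the identity names (Jubel, Phatma, Oleg, Sara-Rivka) come from the Negev alternative.
import pandas as pd

attributes = ["employment_access", "transport_access", "community_tradition", "proximity_to_nature"]

# Rows: settlement types (hypothetical). Columns: how well each type provides an attribute (0-1).
settlements = pd.DataFrame(
    [[0.9, 0.9, 0.3, 0.2],   # regional city
     [0.5, 0.6, 0.8, 0.5],   # traditional town
     [0.3, 0.3, 0.6, 0.9],   # agricultural village
     [0.2, 0.2, 0.4, 1.0]],  # ecological village
    index=["regional city", "traditional town", "agricultural village", "ecological village"],
    columns=attributes,
)

# Rows: identities. Columns: how strongly each identity needs an attribute (0-1).
identities = pd.DataFrame(
    [[0.8, 0.7, 0.4, 0.6],   # Jubel - student and farmer's son
     [0.5, 0.4, 0.9, 0.3],   # Phatma - young Bedouin woman
     [0.9, 0.8, 0.2, 0.2],   # Oleg - immigrant engineer
     [0.6, 0.5, 0.7, 0.4]],  # Sara-Rivka - ultra-orthodox software programmer
    index=["Jubel", "Phatma", "Oleg", "Sara-Rivka"],
    columns=attributes,
)

# Suitability = identity needs x settlement provision (a simple weighted dot product).
suitability = pd.DataFrame(
    identities.values @ settlements.values.T,
    index=identities.index,
    columns=settlements.index,
)
print(suitability.round(2))
print(suitability.idxmax(axis=1))  # best-matching settlement type per identity
```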
This matching process is accompanied by a development of policy measures for required adjustments that include, e.g., the establishment of specialized employment centers, educational institutions, and entertainment directed toward the needs of the communities. The policy measures also include a basis for future establishment of new settlements designated if required. The implementation of this methodology for large regional areas includes hundreds of different kinds of settlements that will allow specific reference to versatile ideologies and different communities that were the result of individual plans (see Fig. 4 ). The spatial distribution of towns and villages allowed a versatile identities mix for different communities, complex spatial pattern, and differentiated spatial policy for different types of ideologies. The implementation of the spatial identity matrix generator methodology over a large area includes hundreds of different kinds of identity combinations that allow specific reference for versatile ideologies, where different communities receive individual plans, thereby providing spatial patterns of ideologies in the Galilee and the Northern Negev alternatives. This specific methodology treatment for hundreds of options is possible because of GIS technologies. The pattern of ideologies, as shown in Fig. 5 , support a complex of sub-regional divisions, providing opportunities for regional collaboration and cooperation, conflict recession, identifying opportunities, and synergies between localities and communities. The Galilee alternative addresses the northern part of Israel and defines the core areas that will assist in connecting towns and villages of similar characteristics and empower them in areas, such as economics of scale, transportation efficiency, and employment mix. In addition, core areas of specific aspects that will support the ideologies based pattern are developed, as follows: (1) establishing a core area for community towns in Carmiel; (2) strengthening four core areas of mixed community towns in Maalot, Tarshiha, Massada, and Bet Shean; (3) developing core areas for agriculture-based villages in Kryat-Shmona, Shfaram, and Kafar Kama; and (4) developing core areas for ecological villages in Kazrin and Tiberious. The Negev alternative addresses the southern part of Israel and emphasizes opportunities and conflicts in that area, as follows: (1) religious/secular conflict in the town of Bet-Shemes; (2) social decline in the very homogeneous city of Kiryat-Malahi; (3) social opportunity in a heterogeneous city, Kiryat-Gat, adjacent to an agricultural landscape; (4) opportunity for heterogeneous city on the seashore of Ashkelon; (5) social conflict between old conceptions and new ones in the village of Nehora; (6) opportunity for spatial interaction in new Bedouin villages; and (7) opportunity for ecological communities. Both alternative plans (see Fig. 6 ) represent a vast variety of towns and village types, having complex spatial plans that were created in an analytic process and offer a spatial pattern that supports a variety of ideologies. Mapping the spatial distribution of different types of towns and villages allows planners to connect and link communities of similar characteristics or complementing characteristics. Both pattern alternatives support a complex of sub-regional divisions, which provide opportunities for regional collaboration and cooperation, conflict recession, identifying opportunities, and synergies between localities and communities. 
The product of this analysis allows for the examination of the spatial role of different communities in future situations (see Fig. 6 ). 4 Discussion and conclusion The planning process that was developed in the metropolitan planning studio shows that both plans relate to the tension between planning at the individual level (community and ideology) and are involved in comprehensive planning. Both alternative plans indicate that vital and extensive information related to ideologies and a regional way of life may be lost in classic comprehensive planning approach, including information that deals with the needs and desires of individuals. In addition, both alternative plans began by assuming that this personal information needs to be at the core of the planning process, and a future approach to decision-making and planning models should keep this information throughout the planning process until the formulation of the plan ( Ho et al., 2010 ). In both alternative plans, the planning policy relates differently to various ideologies and regional policies that point out different characteristics of the land uses in every location in the region, considering the characteristics of existing and future types of ideologies. The plans tell personal stories of various ideologies found in the region, trying to deal with the different paths each identity will take in the future by drawing alternative paths of diverse personal development of various identities according to global and local trends. Future relationships between identities and communities in the region are defined by the influence of the matrix generator. This matrix generator defines the relationship between characteristics and different ideologies and types of towns/villages in the regions. This generator is a geo-social matrix that addresses the spatial needs of various ideologies, such as settlement sizes, proximity to employment, proximity to services, and closeness to nature. This planning methodology enables the multi-resolution observation of the process. The methodology allows a user to zoom in on different types of communities and examine the conditions for their flourish, identify conflicts, regional opportunities, and ways to resolve/mitigate controversies between communities. The methodology allows for zooming out on overall connections and relations between communities and addressing mutual influences. This multi-resolution observation provides multi-level solutions of spatial conflicts and identifies spatial opportunities. Such a detailed approach of the characters of each identity can be operated and managed by users through GIS and the ability to manage and control hundreds, thousands, or even more features in an attribute matrix table. The attribute matrix table allows a multi-parametric approach to the different identities with no need to reduce parameters. In the case presented here, different identities were characterized by 17 different characteristics concerning a variety of spaces, such as personal, cultural, social, and spatial. These data have not been analyzed in a process that lowers the number of variables, such as cluster or factor analysis. On the contrary, processing the data increased the variance and the analysis enhanced and enriched the data. We would like to point out that the rapid technological development in the last decade has enabled planners to develop this form of planning process for special development and contributed to the studio working plan. 
The analysis of the spatial distribution of identities identified spatial conflicts and opportunities, presenting a complete picture of the region and of the policies derived for the different identities. This multi-parametric analysis produced two alternative plans with very different spatial structures but very similar ways of addressing the different identities. Both alternative plans represent a wide variety of town and village types, with complex spatial patterns that support a variety of ideologies, and mapping the spatial distribution of these types allows planners to connect and link communities with similar or complementary characteristics. However, this approach has several limitations. First, the process requires very good data on the planning region, social assets, and social trends, and such data are not always available. Second, the process assumes a high degree of predictability in future development, while forecasting complex social trends remains too difficult to do reliably. In addition, open and available data for estimating capital resources and characterizing identities, both of which are important to the planning process, are limited. Translating these relationships into planning also requires an open and flexible planning system. The identification of relationships, opportunities, conflicts, and complementarities therefore offers a regional planning dimension of connectors, bridges, and links, allowing specific planning solutions to be tailored as appropriate. This regional dimension is based on the complex layout of towns and villages, and it deals with questions of economies of scale and efficiency on the one hand, and with managing the benefits of spatial diversity, conflict management, and the spatial opportunities arising from that diversity on the other.
REFERENCES:
1. ALTSHULER A (1966)
2. BELLAH R (2007)
3.
4.
5. BURAYIDI M (2000)
6. CHADWICK G (2013)
7. CORBETT J (2005)
8. DAVIDOFF P (1965)
9. DEVINEWRIGHT P (2009)
10.
11. FRIEDMANN J (2002)
12. GOLDBERG D (1994)
13. HAYDEN D (1994)
14. HAGUE C (2005)
15. HAX A (1996)
16. HEALEY P (1997)
17. HO W (2010)
18. KITSON M (2004)
19. NILSSON K (2007)
20. PAROLEK D (2008)
21. ROWE G (2000)
22. SANDERCOCK L (1998)
23. TALEN E (2000)
24. WALTERS D (2004)
|
10.1016_j.csite.2025.106335.txt
|
TITLE: Experimental assessment of the interaction between indoor air quality and thermal comfort in naturally ventilated secondary classrooms in southern Spain
AUTHORS:
- Escandón, R.
- Calama-González, C.M.
- Suárez, R.
ABSTRACT:
Current European policies focus on achieving climate neutrality by 2050. However, the COVID-19 crisis has disrupted social conditions, reigniting the debate on buildings with high occupancy and static users for long periods, such as schools, given their inadequate health and comfort conditions. In the Mediterranean climate, most school buildings lack suitable ventilation systems, due to either their age or a reluctance to use mechanical ventilation systems.
This study provides a quantitative analysis of current behavioural and environmental factors affecting pollutant exposure, covering the gap in the existing literature on simultaneous assessment on indoor air quality conditions (CO2, PM2.5, PM10), and hygrothermal comfort (temperature and relative humidity) in a post-COVID scenario in existing secondary school buildings in southern Spain. For this purpose, a continuous monitoring of indoor environmental conditions in cooling, mild, and heating seasons is proposed to assess the influence of natural ventilation conditions on indoor air quality and thermal comfort, instead of the short-term monitoring focused on specific periods frequently found in previous studies. The results show a widespread use of natural overventilation through windows, especially in summer (more than 50 % of the occupied hours), to guarantee indoor air quality conditions (with CO2 below 900 ppm during almost 100 % of the occupied hours). However, in general, this involves clearly compromising thermal conditions (with seasonal average values above 25 °C and 100 % of the occupied hours in discomfort during the hottest weeks) and a moderate loss of cognitive performance during more than 97 % of the summer occupied hours.
BODY:
1 Introduction In addition to complying with current regulatory requirements, the energy upgrading needed for school building stock is also facing two new challenges in the context of Mediterranean climate and way of life. The first of these is social and is linked to indoor air quality and occupants' health, evidenced during the COVID-19 pandemic given the spread of infectious particles in high-occupancy indoor environments [ 1 ], while the second challenge is environmental, with attempts being made to achieve climate neutrality to face proven climate change [ 2 ]. Numerous studies in schools examine the correlation of diverse indoor environmental indicators with health problems [ 3 ] (such as influenza or the COVID-19 pandemic), suggesting that, in many cases, the usual natural ventilation systems in classrooms in the Mediterranean area [ 4 ] are insufficient to reduce the risk of pathogen transmission [ 5 ] and are indirectly impacting the students’ learning capacity [ 6 ]. The World Health Organization (WHO) does not classify CO 2 as a pollutant, but its concentration rate is generally used as the main indicator of indoor air quality (IAQ) in schools [ 7 ]. However, minimizing CO 2 concentrations is not sufficient to maintain adequate IAQ conditions [ 8 ], and not all the concentration levels of different pollutants are simultaneously reduced by increasing the natural ventilation rate. The effects of changes in the flow rate and ventilation periods differ between pollutants [ 9 ]. While longer natural ventilation periods favour CO 2 dissipation, they can also lead to an increase in particulate matter (PM) when the outdoor intake air (ODA) is of poor quality [ 10 ]. Therefore, although health risks in schools are mainly assessed in relation to exposure to high concentrations of CO 2 [ 11 ], other pollutants such as PM [ 12 ], and the thermal conditions of the space [ 13 ], should be controlled. In addition to providing spaces for learning, the classroom layout should favour health and social conditions [ 14 ], benefiting global Indoor Environmental Quality (IEQ) [ 15 ]. Any activity associated with breathing forms respiratory particles that remain in the air [ 16 ], so that achieving an adequate ventilation rate is a priority. Increasing the ventilation rate, besides decreasing CO 2 concentration, reduces the spread of disease [ 17 ], improves academic performance [ 18 ], and aids cognitive enhancement [ 19 ]. Therefore, ventilation measures must be adopted in classrooms to ensure both a healthy environment and appropriate hygrothermal behaviour. But the limits for CO 2 concentration considered acceptable in schools are variable: 2000 ppm by the Committee for Indoor Guidelines Value of the German Federal Environment Agency [ 20 ]; 1000 ppm recommended by ASHRAE Standard 62.1 [ 21 ] (rate of 7 l/s per person); or the values below 600 ppm set in EN 16798–1 [ 22 ] for a category IDA 2 (12.5 l/s per person). Furthermore, no standard proposes a combined IAQ and thermal comfort approach [ 23 ]. In the Mediterranean area, under an erroneous consideration of being a ‘benign climate’, during the pre-pandemic period classrooms were often naturally ventilated, relying on the personal perception of students and teachers. This resulted in unknown and uncontrolled ventilation rates, in many cases below standards, as well as average CO 2 concentrations above the recommended limits. 
Under these conditions there was frequently no adequate correlation with hygrothermal comfort conditions [ 24 ] for a large part of the school period, due to a series of factors, including the influence of seasonal changes [ 25 ]. In the winter period there was widespread inadequate air quality, with indoor temperatures conditioned by the ventilation rate and the use of heating systems. Heracleous and Michael [ 26 ] analysed the impact of natural ventilation on both thermal comfort and air quality, establishing a correlation with CO 2 levels in 114 secondary school buildings in Cyprus. Average concentrations of 1604 ppm, higher than the normative values, and average indoor temperatures of 19.3 °C, were obtained. In spring, favourable outdoor conditions often ensure indoor temperatures in comfortable ranges, although CO 2 concentrations remain high. In Greece, Dorizas et al. [ 27 ] evaluated ventilation rates and indoor air pollutants in 9 naturally ventilated schools. A positive correlation was also found for CO 2 concentrations, with average values of 1482 ppm, and the number of students. PM concentrations were significantly affected by ventilation rates, the presence of students and indoor pollution sources. In southern Spain, Gil-Báez et al. [ 28 ] monitored 9 schools in spring, observing indoor temperatures of 20–25 °C but average daily CO 2 concentration of about 1500 ppm. In summer, high outdoor temperatures condition the ventilation of classrooms. Santamouris et al. [ 29 ] analysed CO 2 concentrations in 27 schools in Athens, where the ventilation flow rate was below 8 l/s and the average concentration was 1400 ppm. For outdoor temperatures of between 20 and 28 °C, there was a tendency to limit the opening of windows and reduce the flow rate, protecting the occupants of the classrooms from the ambient heat. These widespread poor conditions in classrooms lead different authors to establish recommended CO 2 concentrations to promote healthy ventilation conditions. However, they fail to address how these influence thermal conditions. In southern Spain, Krawczyk et al. [ 30 ] propose a rate of 2.5–5 ACH to ensure IDA 2 category levels and CO 2 concentrations below 1000 ppm, which can result in 6–9 ACH at maximum occupancy. According to SINPHONIE guidelines [ 31 ] CO 2 concentrations should not exceed 1500 ppm. During the COVID-19 pandemic, and following significant IAQ problems in classrooms, schools were highlighted as a sensitive area for possible contagion and special health care [ 32 ]. As a result, standard ventilation rates were considered insufficient, and overventilation at rates of 5–6 ACH was recommended in the Ventilation Guide of Harvard [ 33 ], regardless of any possible adverse weather conditions. The priority in classrooms was to ensure health rather than comfort. Ventilation was therefore considered as a preventive measure to reduce the risk of virus transmission [ 34 ], mainly using natural ventilation through windows, with clear beneficial effects on indoor air quality [ 35 ], although not on thermal comfort, which depended on outdoor weather conditions. Particularly in winter, in 13 examination classrooms in Extremadura (Spain), Miranda et al. [ 36 ] recorded average CO 2 concentration levels between 450 and 670 ppm. These adequate ventilation conditions affected the thermal comfort of the occupants, with a dissatisfaction rate of between 25 and 72 % when outdoor temperatures dropped below 6 °C. Alonso et al. 
[ 37 ] monitored two pre-school classrooms located in Sevilla (Spain), with windows open and heating on during all teaching hours. CO 2 concentrations were below 1000 ppm, and the average indoor temperature was 15 °C, representing a worsening of comfort conditions compared to the pre-pandemic period. In autumn, with moderate outdoor temperatures, Villanueva et al. [ 38 ] evaluated ventilation conditions (CO 2 ) and suspended particulate matter (PM 2.5 , PM 10 ) levels in secondary school classrooms located in Ciudad Real (Spain). CO 2 concentrations were in the range of 800–1000 ppm, while indoor temperature was between 21.6 and 26.7 °C. Aguilar et al. [ 39 ] monitored a classroom in Granada (Spain) during a summer and winter period with natural ventilation. The results shown low CO 2 levels, but indoor temperature was affected in different ventilation strategies, with results close to outdoor conditions. After the COVID-19 pandemic, and the return to normality in classrooms, the effectiveness of ventilation systems should be reconsidered in order to guarantee a ventilation rate but also adequate conditions of thermal comfort and energy efficiency [ 40 ]. However, currently, few studies provide a combined analysis of air quality and thermal comfort conditions in naturally ventilated Mediterranean classrooms. Of particular note is the study by Miao et al. [ 41 ] monitoring 16 schools in Cataluña (Spain). The results showed that poor indoor air quality was mainly due to closing windows and doors in winter, while thermal discomfort occurred in summer due to high indoor temperature. The findings suggest that a proper ventilation protocol is the key to balancing indoor air quality and thermal comfort. Romero et al. [ 42 ], after the monitoring of university classes in Extremadura (southern Spain) at the end of June, detected average CO 2 concentrations around 550 ppm with indoor temperatures between 22 and 27 °C. Similarly, Torriani et al. [ 43 ] explored the relationship between thermal comfort and IAQ in 26 classrooms in Pisa (Italy). The study shows that occupants' perception of IAQ is inversely proportional to operating temperature and CO 2 concentration. In summary, the review of the literature on IEQ in buildings shows that: - Although there is a wealth of studies on indoor air quality and thermal comfort in the Mediterranean area, there is a gap in the literature when looking for studies that analyse both aspects in a combined way, especially in post-covid times. - Recommended ventilation rates may not guarantee air quality and thermal comfort, but no standard proposes a combined approach for IAQ and thermal comfort, which would allow for more informed trade-off decisions considering IAQ, thermal comfort, and energy targets. - Most studies are based on measurements in specific periods, and not on a continuous yearly basis. The association between IAQ and thermal comfort is clear, as the outdoor air introduced into the classroom affects the indoor thermal conditions [ 44 ]. Therefore, the novelty and relevance of this study lies in filling the gap detected in the existing literature, addressing the following questions in classrooms in a Mediterranean climate: - What strategies are contemplated to ensure indoor environmental quality and maintain a low risk of contagion after the pandemic period? - Are the usual solutions of natural ventilation through windows adequate to simultaneously guarantee IAQ and comfort? 
- What are the causes of their current environmental behaviour in the different seasons? Accordingly, the main objective of this work is the quantitative diagnosis and comprehensive assessment of current behavioural and environmental factors relating to exposure to pollutants, IAQ, and thermal comfort conditions in representative existing secondary school buildings in the Mediterranean climate. This is carried out by long-term monitoring of the main indoor environmental variables (temperature and relative humidity) and high-priority pollutants (CO 2 , PM 2.5 and PM 10 ). In addition, post-COVID natural ventilation strategies are analysed, assessing their influence on IAQ, thermal comfort, and cognitive performance of students. The paper is structured as follows: Section 2 presents the case studies, materials and methods followed to carry out the monitoring campaigns and results assessment; Section 3 reports the results and discussion of the global statistical analysis of the three main seasons, the specific evaluation of the hottest and coldest weeks, the ventilation rates and cognitive performance loss assessment; and Section 4 provides for the conclusions of the work and future research steps. 2 Methodology This work is developed within the research project “Retrofit ventilation strategies for healthy and comfortable schools within a nearly zero-energy building horizon” (COHEVES). The first phase of this project focused on the cataloguing and documentation of secondary schools in southern Spain in order to select representative case studies [ 45 ]. Based on this, in situ measurement campaigns were carried out in three secondary schools (two classrooms per school), located in representative climatic zones of southern Spain, during a complete school year. After being filtered and treated, the measured data have been used to characterize the environmental behaviour of these secondary schools, evaluating hygrothermal comfort conditions and indoor air quality during occupied and unoccupied periods. Fig. 1 shows a graphic summary of the methodology used in this work. 2.1 Selected case studies In order to carry out the monitoring campaigns, three case studies were selected from the most common building typology in secondary schools in southern Spain: the class - corridor - class structure. In each of these schools, two classrooms with opposing orientations and located on intermediate floors were measured. Fig. 2 shows the interior view of a typical classroom in each of the schools (façade and interior partition with corridor). Table 1 summarizes the main characteristics of the 6 case study classrooms. As can be seen, the three schools are located in different Mediterranean climatic zones considered representative of the climate of southern Spain. This climatic classification is taken from the Spanish Technical Building Code (CTE [ 46 ]), which defines winter climate severity with letters from ‘A’ to ‘E’ (mild to cold) and summer climate severity with numbers from 1 to 4 (mild to hot). Thus, the study sample covers the 3 most representative winter zones in southern Spain (A, B, and C, in increasing order of severity), and the 2 most repeated summer zones (3 and 4, in increasing order of severity). 
Furthermore, the case studies selected cover the two main periods of construction of secondary school stocks: before 1979, when the first Spanish regulation on thermal conditions in buildings (NBE CT 79 [ 47 ]) came into force; and between 1979 and 2006, when the current Spanish Technical Building Code [ 46 ] was implemented. The classrooms in cases 1 and 3 have a similar volume and occupancy, while those in case 2 are somewhat smaller but with lower occupancy, giving a very similar ratio of number of people per volume in all three cases. This is in keeping with the rest of the secondary school stock in southern Spain. The students occupying the case study classrooms are aged between 15 and 19. Table 1 also compiles other variables with a great influence on the ventilation and environmental behaviour of the classrooms, such as the number of doors (which favour cross ventilation in this typology), window surface, solar protection, and HVAC systems. All classrooms are naturally ventilated by users manually opening windows, and have radiators for heating and fans to circulate the air in summer (except case 1), but no active cooling systems. Classrooms are generally used from 8:30 to 15:00 h, with a break between 11:30 and 12:00 h (when the classroom is empty). 2.2 In-situ measurement campaigns In this work, long-term in-situ measurement campaigns were carried out continuously over a whole school year in the case study classrooms. The main hygrothermal parameters (temperature and relative humidity) and air quality parameters (CO 2 , PM 2.5 and PM 10 ) were measured using multifunction sensors ( Fig. 3 ). Sensonet Multisensor SW20 dataloggers, whose main characteristics are described in Table 2 , were used. In addition to air temperature, these sensors measure globe temperature, although the globe sensor has failed in some classrooms. Therefore, a comparison of air and globe temperature measurements recorded in different classrooms throughout the year was carried out, showing minimal differences, and prompting the decision to use the air temperature measurements (without data losses) for analysis. The uncertainties associated with the measuring equipment are within acceptable parameters according to EN ISO 7726 [ 48 ] and EN ISO 16000–26 [ 49 ]. Once calibrated, the sensors were placed in the classrooms, in the centre of the interior partition with the corridor (to avoid draughts and direct solar radiation), and at a height of 1.8 m to avoid interfering with classroom operation ( Fig. 3 ). Table 3 presents the complete measurement periods for individual cases, and the weeks with maximum and minimum indoor temperatures have also been highlighted, since they will be analysed in detail at a later stage. For the subsequent analysis of the measured data, limit or reference values have been established for the different parameters ( Table 4 ). These thresholds refer to current regulations or guidelines in force in Spain. For example, the threshold set for CO 2 level follows the design conditions established by the Regulation on Thermal Installations in Buildings (RITE [ 50 ]), which in turn is governed by EN 16798–3:2017 [ 51 ]. This regulation establishes four air quality categories: IDA 1, optimum air quality; IDA 2, good air quality; IDA 3, medium air quality; and IDA 4, poor air quality. In classrooms, IDA 2 air quality is required, where a CO 2 concentration of less than 500 ppm above the outdoor air concentration (estimated at 400 ppm) must be maintained. 
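As an illustration of how monitored records can be checked against such reference values, the short sketch below flags exceedances for a single measurement record. Only the CO2 design value (500 ppm above the 400 ppm assumed outdoor concentration, i.e. 900 ppm) is taken from the text; the particulate-matter and temperature thresholds are placeholders standing in for the Table 4 values, not the values themselves.

```python
# Illustrative sketch of flagging a measurement record against reference limits.
# Only the CO2 value (500 ppm above an assumed 400 ppm outdoor level = 900 ppm)
# comes from the text; the other thresholds are placeholders, not Table 4 values.

LIMITS = {
    "co2_ppm": 900.0,        # IDA 2 design value derived in the text
    "pm25_ugm3": 15.0,       # placeholder daily limit
    "pm10_ugm3": 45.0,       # placeholder daily limit
    "temp_c": (21.0, 25.0),  # placeholder comfort band
}

def out_of_limits(record):
    """Return the names of the variables in `record` that fall outside the limits."""
    flags = []
    for key in ("co2_ppm", "pm25_ugm3", "pm10_ugm3"):
        if record[key] > LIMITS[key]:
            flags.append(key)
    low, high = LIMITS["temp_c"]
    if not low <= record["temp_c"] <= high:
        flags.append("temp_c")
    return flags

print(out_of_limits({"co2_ppm": 1020, "pm25_ugm3": 6, "pm10_ugm3": 8, "temp_c": 27.3}))
# -> ['co2_ppm', 'temp_c']
```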
As the Spanish regulations do not define the daily limits for PM2.5 and PM10, the indications provided by the WHO in its Global air quality guidelines [ 52 ] have been followed. The hygrothermal comfort band for non-residential buildings established in RITE is used as reference in this work ( Table 4 ). Even though the thermal conditions of educational buildings are not usually stable, due to changes in occupancy during each session, previous studies support the use of static comfort assessment models, since it takes very little time for students to reach a stable thermal state [ 53 ]. 2.3 Results assessment Once in-situ measurement data are collected and processed, a full quantified assessment of the environmental behaviour of the case studies can be carried out, focusing primarily on indoor air quality and comfort. For this purpose, a statistical analysis of the compiled data was conducted, including a comparison with the limit or reference values set out in Table 4 . Furthermore, ventilation rates were estimated using the methodology detailed in section 2.3.1 in order to further analyse the influence of natural ventilation strategies on IAQ and comfort conditions. Finally, cognitive performance loss was calculated according to the methodology described in section 2.3.2 to assess the influence of thermal conditions on students' academic performance. 2.3.1 Air change rates estimation The 'steady state' method described in Ref. [ 54 ] was applied to estimate the approximate natural ventilation or air exchange rate through the opening of windows in the classrooms under study. This method can be used when the CO2 level has reached a stationary concentration. In this work, two values were calculated for each day (one during the first part of the morning, before the break, and one during the second part), where a stable CO2 level was determined when occupancy conditions remained fixed for at least three complete air changes. In order to calculate the air change rate (As), Equation (1) was applied: (1) As [ACH] = (6 × 10^4 × n × Gp) / (V × (Ci − Co)), where n is the number of occupants, Gp the average CO2 generation rate per person (l/min) according to Batterman [ 55 ], V the volume of the classroom (m3), Ci the steady-state indoor CO2 concentration (ppm), and Co the outdoor CO2 concentration (ppm). 2.3.2 Cognitive performance loss In order to evaluate and quantify the impact of the lack of comfort on the cognitive performance of students in the schools selected as case studies, the methodology defined by Wargocki et al. [ 56 ] was applied to the data measured in this study. The authors used a meta-analysis of 18 previous studies to establish a mathematical relationship between classroom temperature and student performance (Equation (2) ), aiming to predict the effects of temperature change on the speed of learning and development of school tasks. This relationship is only valid for temperate climates, applicable in an indoor temperature range from 20 to 30 °C: (2) RPt = 0.2269 × t^2 − 13.441 × t + 277.84, where RPt is the cognitive performance and t the indoor temperature. To analyse the results obtained, the percentage of cognitive performance loss (CPLt) was calculated according to the equation established by Dong et al. [ 57 ] (Equation (3) ): (3) CPLt [%] = 100 − RPt. 
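A minimal sketch of how Equations (1)–(3) can be applied is given below. The functions follow the equations exactly as written above; the occupancy, CO2 generation rate, classroom volume, and concentration values in the example are illustrative assumptions rather than measurements or parameters reported in this study.

```python
# Minimal sketch (not the authors' code) implementing Equations (1)-(3) above.
# The example values at the bottom are illustrative assumptions, not study data.

def air_change_rate(n_occupants, gen_rate_l_min, volume_m3, co2_indoor_ppm, co2_outdoor_ppm):
    """Steady-state air change rate As [ACH], Equation (1)."""
    return (6e4 * n_occupants * gen_rate_l_min) / (volume_m3 * (co2_indoor_ppm - co2_outdoor_ppm))

def cognitive_performance(t_indoor_c):
    """Relative performance RPt, Equation (2); valid for indoor temperatures of 20-30 degC."""
    return 0.2269 * t_indoor_c ** 2 - 13.441 * t_indoor_c + 277.84

def cognitive_performance_loss(t_indoor_c):
    """Percentage cognitive performance loss CPLt, Equation (3)."""
    return 100.0 - cognitive_performance(t_indoor_c)

# Example: 25 occupants, an assumed CO2 generation rate of 0.30 l/min per person
# (an order-of-magnitude figure for seated adolescents, not the value used in the
# paper), a 170 m3 classroom, and a steady indoor level of 800 ppm vs 400 ppm outdoors.
ach = air_change_rate(25, 0.30, 170.0, 800.0, 400.0)
print(f"Estimated ventilation rate: {ach:.1f} ACH")
print(f"Cognitive performance loss at 27.0 degC: {cognitive_performance_loss(27.0):.1f} %")
```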
3 Results and discussion 3.1 Global statistical analysis The first step consisted of the statistical analysis of the indoor variables measured during the occupied hours of the 2022–2023 academic year, organized into three periods: cooling ( Table 5 ), mild ( Table 6 ), and heating seasons ( Table 7 ). In these tables, values within the established limits (shown in Table 4 ) are marked in green, while those outside the limits appear in red. This is intended to provide an overview of the annual behaviour of the three monitored schools. In the summer period ( Table 5 ), it can be observed that in case S1 (located in Málaga) the indoor air temperatures in the north-facing classroom are between 21 and 28 °C and in the south-facing classroom between 23 and 28.5 °C. In both cases there is a standard deviation of only 1 °C. In case S2 (located in Sevilla, the most severe climatic zone in summer), the minimum temperatures are similar, but the maximum temperatures rise above 32 °C in both the north-facing and south-facing classrooms. In this case the standard deviation remains at around 1.5 °C. In case S3 (located in Puente Genil), the minimum indoor air temperature values are similar to case S1, and although the maximum temperatures are slightly higher (30 °C in the east-facing classroom and 29.6 °C in the west-facing classroom) they remain below those of case S2. The standard deviations are similar to those calculated in cases S1 and S2. Regarding relative humidity, in ascending order, mean values of 37 % were measured in both classrooms in case S2 (with standard deviations of 8 %), 38–40 % in case S3 (with standard deviations of about 7 %), and 47 % in case S1 (with standard deviations of 7 %). For the parameters relating to IAQ, the average CO2 levels detected were between 600 ppm (cases S1.N, S1.S, and S2.N) and 670 ppm (S2.S and S3.E), values significantly below the maximum established in current regulations. The maximum values reached were between 1300 ppm (case S1.S) and 3000 ppm (measured at some point in time in S3.E). The average measured values of PM2.5 vary between 4 μg/m3 (case S3.E) and 17 μg/m3 (S1.S), and similar average values have been measured for PM10 (4–19 μg/m3). The conditions measured indicate good average air quality. In the mild periods ( Table 6 ), indoor air temperatures become very similar across the different case studies, with mean values of 22–23 °C and standard deviations of around 1.5 °C in all cases. This is also the case for relative humidity, with mean values of 41–43 % and standard deviations of 8–10 % in all cases. The average CO2 levels increase slightly with respect to the summer period, with values between 670 ppm (S1.N) and 920 ppm (S2.S), a value that already reaches the established normative limits. In this classroom, occasional maximum values of almost 4800 ppm are recorded, indicating that ventilation in mid-seasons is not as intensive in all the classrooms measured. This increase is not observed in the average values measured for PM2.5, which in this period are between 4.5 μg/m3 (case S3.E) and 16.6 μg/m3 (S2.N), nor in those for PM10 (4.8–18 μg/m3). During winter ( Table 7 ), in case 1, the indoor air temperatures in the north-facing classroom range between 15 and 21 °C and in the south-facing one between 16 and 19 °C, with a standard deviation of close to 1 °C. 
In case 2 (with a slightly harsher winter climate), the standard deviation and the minimum temperature in the north-facing classroom are similar to those in case 1 (15 °C), while the minimum temperature in the south-facing classroom is much higher (20 °C), and the maximum temperatures rise to 24 °C in both the north- and south-facing classrooms. In case 3 (the most severe winter climate), the minimum indoor temperature values are the lowest, below 13 °C in both classrooms, but the maximums are slightly higher than in case 2 (24.5 °C in both classrooms). The standard deviations are also slightly higher than those detected in cases 1 and 2, approaching 2 °C. Almost all cases (43–46 %) display similar mean values for relative humidity, except S2.S which rises to 57 %. Standard deviations range between 5 and 12 %. As outdoor conditions become colder, average CO 2 levels rise slightly in all cases. The average values are around 850–900 ppm in all classrooms, approaching the normative limits. The maximum values measured exceed 2500 ppm in all cases, even reaching 5000 ppm in S3.E, indicating insufficient ventilation in all classrooms on certain occasions. In this period, the average measured values of pollutants also rise, albeit very slightly, with PM 2.5 ranging from 7 μg/m 3 (case S3.E) to 17 μg/m 3 (S2.N) and PM 10 from 7 to 19 μg/m 3 , although the values remain adequate. A comparison of the results obtained with other similar studies in the Mediterranean context, shows that in summer Miao et al. [ 41 ] detect a similar average CO 2 value (593 ppm) in Cataluña (northern Spain), but a higher maximum (4015 ppm) than those detected in this analysis in southern Spain. Indoor temperatures, both average (28.22 °C) and maximum (35.33 °C), are also higher in Cataluña, probably as a result of less intensive ventilation than that presented in this study. However, in mild seasons, the maximum CO 2 value measured by Miao et al. (2446 ppm) is similar to that measured in case 1 in southern Spain, and much lower than those detected in cases 2 and 3 of this study, with a very similar average temperature (22.7 °C) and a somewhat higher maximum temperature (29.7 °C). Similar average values of CO 2 (557 ppm) and indoor temperatures (23 °C) are measured at the end of June in Extremadura (Spain) [ 42 ]. The problem of summer overheating in schools is recurrent also in other climates, aggravated by high occupancy and inadequately controlled natural ventilation. In climates with more beneficial outdoor temperatures, such as England, Mohamed et al. [ 58 ] report indoor temperatures up to 27 °C, with CO 2 levels above 1000 ppm. In hot and humid climates, Haddad et al. [ 59 ] found average indoor temperatures in Iran of almost 27 °C and CO 2 levels above 1400 ppm, and Cai et al. [ 60 ] in China registered the same average indoor temperature with CO 2 levels slightly higher (above 1500 ppm). In winter in Pisa (Italy), Torriani et al. [ 43 ] measured an average CO 2 level of 1490 ppm, and a maximum value of 3899 ppm, with an average indoor temperature of 21.5 °C and a maximum of 27.4 °C. These CO 2 values are in line with those detected in Cataluña (northern Spain) by Miao et al. [ 41 ], with an average level of 1194 ppm and a slightly higher maximum of 4950 ppm. A similar average temperature (21.4 °C) is also obtained in Cataluña, while the maximum reached (36.7 °C) is much higher. 
The results of the present work show a generally lower average CO 2 level (albeit with similar maximum), but much lower temperatures at the same time. This could be the result of both less controlled use of natural ventilation and less intensive use of heating systems. 3.2 Assessment of the hottest and coldest weeks In order to carry out a more detailed analysis of the behaviour of the case studies, an evaluation was carried out on the indoor variables measured in the six classrooms during the weeks of minimum and maximum indoor air temperatures in each case, assessing both occupied and unoccupied periods. The reference limit values are indicated in each figure, following the criteria set out in the methodology section ( Table 4 ). Table 8 provides a summary of maximum and minimum outdoor air temperature values measured during the weeks under study and obtained from the national meteorological stations closest to the case studies. 3.2.1 Summer week In general, the values measured of the parameters related to IAQ during the summer week are adequate almost 100 % of the hours ( Fig. 4 ). Specifically, the average indoor CO 2 values during occupied hours vary between 500 ppm (case S2) and 600 ppm (S3), with maximum values only exceeding 900 ppm (design value according to current regulations) occasionally in cases S2 and S3. Values do not exceed 1100 ppm at any time. During unoccupied hours, the measured values remain even lower, below 600 ppm almost 100 % of the hours. As regards pollutants, the results show values always below the maximum threshold established by the WHO [ 52 ], with particularly low values in case S3.E (with an average value of 5 μg/m 3 for both PM 2.5 and PM 10 ). Particles measured during unoccupied hours, although with a wider range of values, maintain mean values similar to or slightly below those detected during occupied hours. Regarding indoor temperature, during the summer week ( Fig. 5 a) all cases are in discomfort during 100 % of the occupied hours. The highest temperatures are observed in case S2, with average values above 30 °C in the south-facing classroom. This is the case with the highest outdoor air temperatures, reaching 37 °C in that week. The average indoor temperatures of case S3 exceed 28 °C, with outdoor temperatures of between 17 and 32 °C, and those of case S1 are around 27 °C, with outdoor values between 21 and 29 °C. In general, temperatures during unoccupied hours are slightly lower than those during occupied hours, although this is not the case in S1, where during unoccupied hours temperatures are similar and even slightly higher. This could be the result of natural ventilation having been stopped during unoccupied hours, and in this case the entry of fresh air could have favoured heat dissipation, since outdoor temperatures were not so extreme. When the indoor relative humidity measured in the classrooms is analysed with respect to the limit values established in national regulations ( Fig. 5 b), it is observed that during the summer week, case S1 is in adequate conditions most of the time, but this is not the case for S2 and S3. Case S2 displays the worst performance, with average indoor relative humidity values of 30 %, far below the limits established as adequate. The average values in the unoccupied periods are very similar to those of the occupied periods, although a wider range of measured values is detected. 3.2.2 Winter week During the winter week, the IAQ parameters measured indoors ( Fig. 
6 ) worsen with respect to those measured in summer. During unoccupied periods CO 2 level values remain similar to those measured in summer, but during occupied periods they rise in almost all cases (except S3.E, where a high rate of natural ventilation is maintained despite the cold, probably countered by the use of heating systems). In cases S1 and S2, the average values are around 900 ppm (established limit value), with maximum values of between 1600 ppm (S2.N) and 2400 ppm (S2.S). In case S3.W, the average value remains around 500 ppm, with the highest percentage of hours below the limit, although occasional maximums of almost 2000 ppm are reached. For the polluting particles, S1 maintains values very similar to those measured in the summer period, while in S2 and S3 there is a considerably wider range of measured values and slightly higher average values. However, these remain below the limits during almost 100 % of the hours (the limit is only exceeded at very specific moments in the PM 2.5 values measured in S2.S and S3.W). Focused on thermal comfort, the analysis during the winter week ( Fig. 7 a) also shows a high percentage of occupied hours outside the comfort band, as in summer. Case S1 remains below the comfort limit during 100 % of the hours, with average values of around 17 °C and very little temperature variation throughout the day. In S2, the south-facing classroom is in comfort conditions 50 % of the occupied hours, with an average value of 21 °C. The north-facing classroom has slightly lower temperatures, but also close to comfort conditions. The greatest variations in indoor temperature are detected in S3, with average values of 17 °C (east-facing classroom) and 19 °C (west-facing classroom), but minimum values of 13 °C (east-facing classroom). Even though case S3 is the one with the highest climatic severity in winter, better indoor conditions during occupied hours are detected in classroom S3.W than in S1 (with the lowest climatic severity according to Spanish standards). This could be due to a more intensive use of heating systems in case 3 than in the other cases. Regarding outdoor temperatures, minimum temperatures are similar in cases S1 and S2, and lower in case S3, while maximum temperatures are similar in cases S1 and S3 (except for the week measured in case S3.E, with maximum temperatures below 12 °C), and higher in S2. In terms of relative humidity ( Fig. 7 b), most of the time S1 is in adequate conditions, as seen in summer, while S2 and S3.W are slightly above boundary conditions most of the time (while occasional moments of relative humidity of up to 70 % are also detected). In case S3.E, conditions below the established limits are detected 100 % of the time (with an average value of 32 % during occupied hours). This could be due to the fact that in the coldest week of this case study, outdoor conditions were particularly harsh, with minimum outdoor temperature values below 0 °C, so that there was probably a very intensive use of the heating system. 3.3 Evaluation of estimated ventilation rates According to the methodology based on CO 2 measurements described in section 2.3.1 , the predominant ventilation rate throughout the two parts of the school day (separated by an intermediate break) was estimated. Fig. 8 shows the results obtained from the values measured during the 2022–2023 school year, organized into cooling (a), mild (b), and heating season (c). 
Different ventilation levels were defined, and the ventilation rates obtained have been grouped as follows: - Deficient (red): ventilation rates from 0 to 3 ACH which, given the volume of the classrooms case study, would correspond to a value of IDA 4 according to Spanish regulations [ 50 ] (low quality air, defined with a ventilation rate of 5 l/s per person). - Low (orange): ventilation rates from 3 to 9 ACH, corresponding to IDA 3 [ 50 ] (medium air quality, 8 l/s per person). - Appropriate (green): ventilation rates from 9 to 13 ACH, corresponding to the range from IDA 2 [ 50 ] (good air quality, 12.5 l/s per person), as required in school classrooms, to IDA 1 (air of optimum quality, 20 l/s per person). - Overventilation (blue): ventilation rates higher than 13 ACH. The results during the cooling period ( Fig. 8 a), show that in cases 2, 3 and S1.N excessive natural ventilation rates are applied for more than 50 % of the occupied hours (up to 68 % of the hours in case S2.N). The natural ventilation rate is insufficient (low or deficient) between 18 % (case S1.N) and 41 % of occupied hours (S3.W), but very few hours with deficient values are detected. The most optimized use of natural ventilation is detected in S1.S, since the most frequent ventilation rate (occurring in 45 % of the hours) is considered appropriate. In the mild period ( Fig. 8 b) ventilation rates are generally reduced, and are low or deficient in more than 50 % of occupied hours in almost all cases (up to 65 % in case S1.S). S1.N is the only case where appropriate or overventilation conditions are maintained during just over 60 % of the hours (36 % and 24 % respectively). The least optimized ventilation rates are found in case 3, since are appropriate during only 5–7 % of the hours while excessive during 39–42 % of the hours, and low or deficient during 50–56 % of the hours. During the winter season ( Fig. 8 c), in cases 1 and 2 ventilation rates are generally reduced compared to mild period, with low or deficient values of between 61 % (case S2.S) and 83 % of occupied hours (S1.S). However, in case 3, appropriate and overventilation conditions remain very similar to those of the mild period, although the number of hours with deficient values increases by 10–16 %. It is worth noting that in this case overventilation is detected during 41–44 % of the hours, despite the negative effect in terms of indoor temperatures and/or energy consumption for heating in a location with a degree of climatic severity in winter. 3.4 Assessment of cognitive performance loss In section 2.3.2 , the methodology for the assessment of the cognitive performance loss ( CPLt ) applied in this work is described in detail. Results have been grouped according to the classification established by Dong et al. [ 57 ]: ‘No loss’ when CPLt is equal to 0 %; ‘No significant loss’ when CPLt is between 0 and 5 %; ‘Moderate loss’ when CPLt is between 5 and 20 %; and ‘Severe loss’ when CPLt is above 20 %. Firstly, an overall evaluation of the percentage of occupied hours with different levels of cognitive performance loss during the entire measurement period (school year 2022–2023) has been carried out. Fig. 9 shows the results obtained in the six classrooms monitored in this study. Most of the time, the cognitive performance loss is moderate, with values of between 71 % (case S2.S) and 83 % of the hours (case S3.W). In all cases, severe loss values are negligible or zero. 
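The ventilation-rate bands and the cognitive-performance-loss classes referred to above translate directly into a simple classification helper; the sketch below is illustrative only, using the thresholds quoted in the text, and the hourly values fed to it are invented.

```python
# Illustrative helper applying the band definitions quoted above; the hourly
# ACH values in the example are invented.

def ventilation_band(ach):
    """Classify an air change rate into the four levels defined in the text."""
    if ach < 3:
        return "Deficient"        # roughly IDA 4
    if ach < 9:
        return "Low"              # roughly IDA 3
    if ach <= 13:
        return "Appropriate"      # IDA 2 up to IDA 1
    return "Overventilation"

def cpl_band(cpl_percent):
    """Classify cognitive performance loss (%) following Dong et al. [57]."""
    if cpl_percent <= 0:
        return "No loss"
    if cpl_percent <= 5:
        return "No significant loss"
    if cpl_percent <= 20:
        return "Moderate loss"
    return "Severe loss"

hourly_ach = [2.1, 4.8, 10.5, 14.2, 15.0, 7.3]   # illustrative occupied-hour estimates
counts = {}
for value in hourly_ach:
    band = ventilation_band(value)
    counts[band] = counts.get(band, 0) + 1
shares = {band: 100 * n / len(hourly_ach) for band, n in counts.items()}
print(shares)                 # share of occupied hours per ventilation band
print(cpl_band(12.5))         # -> 'Moderate loss'
```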
However, as the existing literature mostly focuses these studies on the cognitive performance loss due to overheating in summer periods, a second analysis has been carried out focusing on the data monitored during the cooling period ( Fig. 10 ). According to the results, when focusing on the warmest periods, the hours of moderate cognitive performance loss increase in all cases, with values of between 97 % (case S2.N) and 100 % of the hours (case S3.W). This means that in all cases there is a moderate loss of cognitive performance most of the time. In case S3.W, with the highest CPLt , the highest percentage of hours with low or insufficient natural ventilation rates was also detected in the cooling period. This is in keeping with the conclusion reached by Dong et al. [ 57 ]: ‘Increasing ventilation rates is an effective means of reducing cognitive performance loss, although its effectiveness decreases with increasing outdoor temperatures’. 4 Conclusions Since the pandemic period, the complexity and vulnerability of decision-making in relation to natural ventilation mechanisms and levels to ensure adequate IAQ and comfort conditions in classrooms has become particularly evident. In an attempt to complete the scanty existing literature on this subject, this work has quantified the air quality and hygrothermal comfort conditions in six representative secondary school classrooms in southern Spain with natural ventilation through windows, by collecting and analysing long-term monitoring data on the main environmental and pollutant variables during the 2022/2023 academic year. Up until the COVID-19 pandemic, the most frequently encountered issue in schools was high CO 2 levels. However, following the end of this crisis period, the natural ventilation patterns of over-ventilating classrooms throughout most of the day and year (especially in summer periods) have generally been maintained in southern Spain. Although this strategy tends to guarantee adequate IAQ levels, the thermal conditions of the classrooms are significantly compromised. An issue of particular concern is that of overheating due to over-ventilation in summer, aggravated by the high occupancy loads characteristic of secondary school classrooms in Spain, which can ultimately lead to issues of loss of cognitive performance among students. More specifically, in the case studies it is concluded that: - In summer, high temperatures have been measured in all cases (100 % of the occupied hours are in discomfort in the hottest week), with average values above 25 °C. The most unfavourable values are found in the case located in Sevilla (one of the locations with the highest summer climatic severity), with maximum temperatures exceeding 32 °C. - In winter, low temperatures are detected (but with a lower degree of discomfort than in summer), with average values below 21 °C in almost all cases. The lowest minimum temperatures are detected in the case located in Puente Genil (with the highest winter climatic severity), below 13 °C. - CO 2 levels are generally very good in summer (below 900 ppm most of the occupied hours) but worsen slightly due to lower rates of natural ventilation as outdoor temperatures drop. Measured particulate pollutants (PM 2.5 and PM 10 ) show adequate air quality in all periods (although the lowest values are measured in summer). - There is an excess of hours of over-ventilation, both in summer (with more than 50 % of the occupied hours in almost all cases) and in winter (with cases with more than 40 % of the hours). 
However, in winter, ventilation is insufficient in more than 50 % of the occupied hours in almost all measured cases. Over-ventilation in periods when the outdoor climate is not so mild, such as in summer in southern Spain, entails a clear loss of comfort conditions and a moderate loss of cognitive performance. A possible solution in winter could be to adjust the ventilation rate and increase the time that heating is in use (which is short in southern Spain). However, in summer the problem is more complex, given the lack of efficient cooling systems in schools. Given the obtained results, future research steps should necessarily assess and optimize natural ventilation protocols, as well as incorporate energy-efficient active systems (such as adiabatic cooling). The quantitative analysis carried out in this paper is the essential starting point for the development and validation of parametric simulation models to evaluate the global behaviour of the secondary school stock in southern Spain at a regional scale. By means of genetic optimization algorithms, these models will also allow the future proposal of retrofitting strategies to guarantee a compromise between thermal comfort, indoor air quality and energy objectives, and the assessment of their response/resilience to extreme heat and future climatic scenarios. CRediT authorship contribution statement R. Escandón: Writing – review & editing, Writing – original draft, Visualization, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. C.M. Calama-González: Writing – review & editing, Visualization, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. R. Suárez: Writing – review & editing, Writing – original draft, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Conceptualization. Funding The authors wish to acknowledge the financial support provided by Grant (PID2020-117722RB-I00) “ Retrofit ventilation strategies for healthy and comfortable schools within a nearly zero-energy building horizon ” funded by MICIU / AEI /10.13039/501100011033/. Escandón also acknowledges the financing of the VI PPIT-US , through the 2020 Call for Contracts for Access to the Spanish Science, Technology and Innovation System for the Development of the Own R&D&I Program of the University of Seville . Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. WILLIAM M (2021)
2. (2019)
3. FANG Y (2023)
4. FERRARI S (2023)
5. FERRARI S (2022)
6. VILLARREAL Y (2023)
7. CHATZIDIAKOU L (2015)
8. RAMALHO O (2015)
9. STABILE L (2017)
10. COOPER N (2020)
11. JACOBSON T (2019)
12. BURTSCHER H (2012)
13. SALTHAMMER T (2016)
14. HOLGATE S (2020)
15. SCHIBUOLA T (2020)
16. MITTAL R (2020)
17. BUONANNO G (2013)
18. PETERSEN S (2016)
19. WARGOCKI P (2020)
20.
21. (2022)
22. (2019)
23. BABICH F (2023)
24. FISK W (2017)
25. CHATZIDIAKOU L (2012)
26. HERACLEOUS C (2019)
27. DORIZAS P (2015)
28. GILBAEZ M (2021)
29. SANTAMOURIS M (2008)
30. KRAWCZYK D (2016)
31. KEPHALOPOULOS S (2014)
32. STAGE H (2021)
33. ALLEN J (2020)
34. (2020)
35. PARK S (2021)
36. MIRANDA M (2022)
37. ALONSO A (2021)
38. VILLANUEVA F (2021)
39. AGUILAR A (2021)
40. LIPINSKI T (2020)
41. MIAO S (2023)
42. ROMERO P (2025)
43. TORRIANI G (2023)
44. JIA L (2021)
45. LLANOSJIMENEZ J (2022)
46. (2006)
47. (1979)
48. (1998)
49. (2012)
50. (2021)
51. (2017)
52. (2021)
53. RUN K (2025)
54. (2012)
55. BATTERMAN S (2017)
56. WARGOCKI P (2019)
57. DONG J (2023)
58. MOHAMED S (2021)
59. HADDAD S (2017)
60. CAI C (2021)
|
10.1016_j.celrep.2014.08.052.txt
|
TITLE: Enteroendocrine Cells Support Intestinal Stem-Cell-Mediated Homeostasis in Drosophila
AUTHORS:
- Amcheslavsky, Alla
- Song, Wei
- Li, Qi
- Nie, Yingchao
- Bragatto, Ivan
- Ferrandon, Dominique
- Perrimon, Norbert
- Ip, Y. Tony
ABSTRACT:
Intestinal stem cells in the adult Drosophila midgut are regulated by growth factors produced from the surrounding niche cells including enterocytes and visceral muscle. The role of the other major cell type, the secretory enteroendocrine cells, in regulating intestinal stem cells remains unclear. We show here that newly eclosed scute loss-of-function mutant flies are completely devoid of enteroendocrine cells. These enteroendocrine cell-less flies have normal ingestion and fecundity but shorter lifespan. Moreover, in these newly eclosed mutant flies, the diet-stimulated midgut growth that depends on the insulin-like peptide 3 expression in the surrounding muscle is defective. The depletion of Tachykinin-producing enteroendocrine cells or knockdown of Tachykinin leads to a similar although less severe phenotype. These results establish that enteroendocrine cells serve as an important link between diet and visceral muscle expression of an insulin-like growth factor to stimulate intestinal stem cell proliferation and tissue growth.
BODY:
Introduction The gastrointestinal (GI) tract is a complex organ essential for nutrient absorption and whole-body metabolism ( Miguel-Aliaga, 2012 ). The Drosophila midgut is an equivalent of the mammalian stomach and small intestine. The midgut epithelium has no crypt-villus structure but instead is a monolayer of absorptive enterocytes (ECs), with interspersed intestinal stem cells (ISCs), enteroblasts (EBs), and enteroendocrine cells (EEs) located closer to the basement membrane ( Micchelli and Perrimon, 2006; Ohlstein and Spradling, 2006 ). All cells in the midgut likely constitute together the niche that regulates ISC proliferation and EB differentiation for tissue homeostasis. The visceral muscle secretes Wingless, insulin-like peptides, epidermal growth factor receptor (EGFR) ligands, and Decapentaplegic (Dpp)/bone morphogenetic protein ( Guo et al., 2013; Jiang et al., 2011; Lin et al., 2008; O’Brien et al., 2011 ). The mature ECs are a major source of stress-induced Dpp, EGFR ligands, and the JAK-STAT pathway ligands Unpaired (Upd) 1–3 ( Biteau and Jasper, 2011; Buchon et al., 2010; Guo et al., 2013; Jiang et al., 2009, 2011; Li et al., 2013a; Osman et al., 2012; Tian and Jiang, 2014; Xu et al., 2011 ). The differentiating EBs also produce Upds, Wingless, and EGFR ligands ( Cordero et al., 2012; Jiang et al., 2011; Zhou et al., 2013 ). The surrounding trachea secretes Dpp, while the innervating neurons can also regulate intestinal physiology ( Cognigni et al., 2011; Li et al., 2013b ). EEs constitute a major cell type in the Drosophila midgut epithelium. While the mammalian secretory lineage is differentiated into Paneth cells, goblet cells, enteroendocrine cells, and tuft cells ( Gerbe et al., 2012 ), the entire population of secretory cells in the Drosophila midgut is collectively called EEs and marked by the homeodomain protein Prospero (Pros) ( Micchelli and Perrimon, 2006 ). Nonetheless, different subsets of hormones are produced from different subtypes of midgut EEs ( Ohlstein and Spradling, 2006 ). In the mouse intestine, the Lgr5+ ISCs directly contact Paneth cells, and isolated ISC-Paneth cell doublets have higher efficiency to form organoids ( Sato et al., 2011 ). However, mouse genetic knockout that has Paneth cells removed did not result in the loss of Lgr5+ ISCs ( Durand et al., 2012 ). Only recently have Drosophila midgut EEs been shown to negatively regulate ISC proliferation via EGFR ligand production and to regulate ISC differentiation via the Slit/Robo pathway ( Biteau and Jasper, 2014; Scopelliti et al., 2014 ). Therefore, the function of EEs in regulating stem cell activity largely remains to be investigated. Here, we show that Drosophila midgut EEs serve a niche function by producing hormones such as Tachykinin (Tk) to regulate insulin peptide expression in the surrounding muscle that in turn affects intestinal homeostasis. Results and Discussion scute RNAi and Deletion Result in EE-less Flies Previous evidence shows that adult midgut mutant clones that have all the AS-C genes deleted are defective in EE formation while overexpression of scute ( sc ) or asense ( ase) is sufficient to increase EE formation ( Bardin et al., 2010 ). Moreover, the Notch pathway with a downstream requirement of ase also regulates EE differentiation ( Micchelli and Perrimon, 2006; Takashima et al., 2011; Zeng et al., 2013 ). 
To study the requirement of EEs in midgut homeostasis, we first attempted to delete all EEs by knocking down each of the AS-C transcripts using the ISC/EB driver esg-Gal4. The results show that sc RNAi was the only one that caused the loss of all EEs in the adult midgut ( Figures 1 A and S1 A–S1F). The esg-Gal4 driver is expressed in both larval and adult midguts, but the esg > sc RNAi larvae were normal while the newly eclosed adults had no EEs. Therefore, sc is likely required for all EE formation during metamorphosis when the adult midgut epithelium is reformed from precursors/stem cells ( Jiang and Edgar, 2009; Micchelli et al., 2011 ). The sc^6/sc^10-1 hemizygous mutant adults were also completely devoid of midgut EEs ( Figures 1 B, S1 G, and S1H), while other hemizygous combinations including sc^1, sc^3B, and sc^5 were normal in terms of EE number. Df(1)sc^10-1 is a small deficiency that has both ac and sc uncovered. sc^1 and sc^3B each contain a gypsy insertion in far-upstream regions of sc, while sc^5 and sc^6 are 1.3 and 17.4 kb deletions, respectively, in the sc 3′ regulatory region ( García-Bellido and de Celis, 2009 ). The sc^6/sc^10-1 combination may affect sc expression during midgut metamorphosis and thus the formation of all adult EEs. The atonal homolog 1 (Atoh1) is required for all secretory cell differentiation in mouse ( Durand et al., 2012; VanDussen and Samuelson, 2010 ). However, esg-Gal4-driven atonal ( ato ) RNAi and the amorphic combination ato^1/Df(3R)p13 showed normal EE formation ( Figures 1 A, 1B, S1 F, S1I, and S1J). Nonetheless, we found that older ato^1/Df(3R)p13 flies exhibited a significantly lower increase of EE number ( Figure 1 B), suggesting a role of ato in EE differentiation in adult flies. Changing the Number of EEs Alters Lifespan In sc RNAi guts, the mRNA expression of allatostatin (Ast), allatostatin C (AstC), Tachykinin (Tk), diuretic hormone (DH31), and neuropeptide F (NPF) was almost abolished ( Figure S1 K), consistent with the absence of all EEs. On the other hand, the mRNA expression of the same peptide genes in heads showed no significant change ( Figure S1 L). Even though the EEs and regulatory peptides were absent from the midgut, the flies were viable and showed no apparent morphological defects. There was no significant difference in the number of eggs laid and the number of pupae formed from control and sc RNAi flies ( Figure S1 M), suggesting that the flies probably have sufficient nutrient uptake to support the major physiological task of reproduction. However, when we examined the longevity of these animals, the EE-less flies after sc RNAi showed a significantly shorter lifespan ( Figure S1 N). In addition, when the number of EEs was increased in adult flies by esgGal4;tubGal80^ts (esg^ts >)-driven sc overexpression ( Bardin et al., 2010 ; Figure S3 ), an even shorter lifespan was observed. These results suggest that a balanced number of EEs is essential for the long-term health of the animal. Moreover, there may be important physiological changes in these EE-less flies that are yet to be uncovered, such as the reduced intestinal growth described in detail below. EE-less Flies Have Reduced Intestinal Growth as Observed under Starvation Conditions One of the phenotypic changes we found for the sc RNAi/EE-less flies was that under normal feeding conditions, their midguts had a significantly narrower diameter than that of control midguts ( Figures 1 C and 1D). 
When reared on a poor diet of 1% sucrose, both wild-type (WT) and EE-less flies had thinner midguts. When reared on normal food, WT flies had a substantially bigger midgut diameter, while EE-less flies had grown significantly less ( Figure 1 E). The cross-section area of enterocytes in the EE-less midguts was smaller ( Figure 1 F), suggesting that there is also a growth defect at the individual cell level. A series of experiments showed that ingestion of food dye by the sc RNAi/EE-less flies was not lower than in control flies ( Figure S2 C). The measurement of food intake by optical density (OD) of gut dye contents also showed similar ingestion ( Figure S2 D). The measurement of excretion by counting colored deposits and visual examination of dye clearing from guts showed that there was no significant change in food passage ( Figures S2 E and S2F). The normal fecundity shown in Figure S1 M also suggested that the mutant flies likely had absorbed sufficient nutrient for reproduction. Nonetheless, another phenotype we could detect was a substantial reduction of intestinal digestive enzyme activities including trypsin, chymotrypsin, aminopeptidase, and acetate esterase ( Figures 1 G, 1H, S2 A, and S2B). These enzyme activities exhibit strong reduction after starvation of WT flies. The EE-less flies therefore have a physiological response as if they experience starvation although they are provided with a normal diet. EE-less Midguts Have Reduced ISC Division and Dilp3 Expression A previous report has established that newly eclosed flies respond to nutrient availability by increasing ISC division that leads to a jump start of intestinal growth ( O’Brien et al., 2011 ). When we fed newly eclosed flies on the poor diet of 1% sucrose, both WT and sc RNAi/EE-less guts had a very low number of p-H3-positive cells ( Figure 2 A), which represent mitotic ISCs because ISCs are the only dividing cells in the adult midgut. When fed on the normal diet, the WT guts had significantly higher p-H3 counts, but the sc RNAi/EE-less guts were consistently lower at all the time points. The sc10-1/sc6 hemizygous mutant combination exhibited a similarly lower mitotic activity on the normal diet ( Figure 2 B). When we investigated possible signaling defects in the EE-less flies, we found that in addition to other gut peptide mRNAs, the level of Dilp3 mRNA was also highly decreased in these guts while the head Dilp3 was normal ( Figures 2 C and S1 L). This is somewhat surprising, because Dilp3 is expressed not in the epithelium or EEs but in the surrounding muscle ( O’Brien et al., 2011; Veenstra et al., 2008 ). We used Dilp3 promoter-Gal4-driven upstream activating sequence (UAS)-GFP expression (Dilp3 > GFP) to visualize the expression in muscle ( Figure 2 D). Both control and sc RNAi under this driver showed normal muscle GFP expression ( Figure 2 E), demonstrating that sc does not function within the visceral muscle to regulate Dilp3 expression. We then combined the esg-Gal4 and Dilp3-Gal4, and the control UAS-GFP samples showed the expected expression in both midgut precursors and surrounding muscles ( Figures 2 F–2H). When these combined Gal4 drivers were used to drive sc RNAi, the visceral muscle GFP signal was clearly reduced ( Figures 2 I–2K). These guts also exhibited no Prospero staining and overall fewer cells with small sizes as expected from esg > sc RNAi ( Figures 2 I–2K). The report by O’Brien et al.
(2011) showed an increase of Dilp3 expression from the surrounding muscle in newly eclosed flies under a well-fed diet (see also Figure 2 C). This muscle Dilp3 expression precedes brain expression and is essential for the initial nutrient-stimulated intestinal growth. Our EE-less flies show similar growth and Dilp3 expression defects, suggesting that EEs are a link between nutrient sensing and Dilp3 expression during this early growth phase. Increasing the Number of EEs Promotes ISC Division Partly via Dilp3 Expression WT and AS-C deletion (scB57) mutant clones in adult midguts did not exhibit a difference in their cell numbers ( Bardin et al., 2010 ). Moreover, we performed esgts > sc RNAi in adult flies for 3 days but did not observe a decrease of mitotic count or EE number. Together, these results suggest that sc is not required directly in ISCs for proliferation, and they imply that the ISC division defect observed in the sc mutant/EE-less flies is likely due to the loss of EEs. To investigate this idea further, we used the esgts > system to up- and downshift the expression of sc at various time points and measure the correlation of sc expression, EE number, and ISC mitotic activity. The overexpression of sc after shifting to 29°C for a few days correlated with increased EE number, expression of gut peptides, and increased ISC activity ( Figures S3 A–S3I). Then, we downshifted back to room temperature (23°C) to allow the Gal80ts repressor to function again. The sc mRNA expression was quickly reduced within 2 days and remained low for 4 days ( Figure 3 A). Although we did not have a working antibody to check the Sc protein stability, the expression of a probable downstream gene phyllopod ( Reeves and Posakony, 2005 ) showed the same up- and downregulation ( Figure 3 B), revealing that Sc function returned to normal after the temperature downshift. Meanwhile, the number of Pros+ cells and the p-H3 count remained higher after the downshift ( Figures 3 C and 3D). Therefore, the number of EEs, but not sc mRNA or function, correlates with ISC mitotic activity. We performed another experiment that was independent of sc expression or expression in ISCs. The antiapoptotic protein p35 was driven by the pros-Gal4 driver, which is expressed in a subset of EEs in the middle and posterior midgut ( Figures S4 B–S4E). This resulted in a significant albeit smaller increase in EE number and a concomitant increase in mitotic activity ( Figures S3 J and S3K), which was counted only in the middle and posterior midgut due to some EC expression of this driver in the anterior region ( Figure S4 C). Therefore, the different approaches show a consistent correlation between EE number and ISC division. Dilp3 expression was significantly although modestly increased in flies that had increased EE number after sc overexpression ( Figure 3 E), similar to that observed in fed versus fasted flies ( O’Brien et al., 2011 ). We tested whether Dilp3 was functionally important in this EE-driven mitotic activity. Due to lethality, we could not obtain a fly strain that had esg-Gal4, Dilp3-Gal4, UAS-Dilp3RNAi, tub-Gal80ts, and UAS-sc to perform an experiment comparable to that shown in Figure 2 . So instead, we generated flies that contained a ubiquitous driver with temperature-controlled expression, i.e., tub-Gal80ts/UAS-sc; tub-Gal4/UAS-Dilp3RNAi. These fly guts showed a significantly lower number of p-H3+ cells than that in the tub-Gal80ts/UAS-sc; tub-Gal4/+ control flies ( Figure 3 F).
These results demonstrate that the EE-regulated ISC division is partly dependent on Dilp3 . The expression of an activated insulin receptor by esg-Gal4 could highly increase midgut proliferation, and this effect was dominant over the loss of EEs after sc RNAi ( Figure S4 A), which is consistent with an important function of insulin signaling in the midgut. Tk-Secreting EEs Have a Role in Regulating Dilp3 and ISC Proliferation As stated above, normally hatched flies did not lower their EE number after esgts > sc RNAi, perhaps due to redundant function with other basic helix-loop-helix proteins in adults. The expression of proapoptotic proteins by the prosts-Gal4 also could not reduce the EE number. We thus screened other drivers and identified a Tk promoter Gal4 (Tk-Gal4) that had expression recapitulating the Tk staining pattern representing a subset of EEs ( Figures S4 B and S4F–S4I″). More importantly, when used to express the proapoptotic protein Reaper (Rpr), this driver caused a significant reduction in the EE number ( Figure S4 J), Tk and Dilp3 mRNA ( Figures 4 A and 4B ), and mitotic count ( Figure 4 C). The Tk-Gal4-driven expression of another proapoptotic protein, Hid, caused a less efficient killing of EEs ( Figure S4 J) and subsequently no reduction of the p-H3 count ( Figure 4 C). The knockdown of Tk itself by Tk-Gal4 also caused a significant reduction of the p-H3 count ( Figure 4 D). A previous report revealed the expression, by antibody staining, of a Tk receptor (TkR86C) in visceral muscles ( Poels et al., 2009 ), and our knockdown of TkR86C in the visceral muscle by Dilp3-Gal4 or Mef2-Gal4 showed a modest but significant decrease in ISC proliferation ( Figures 4 E and 4F). There was a concomitant reduction of Dilp3 mRNA in the guts in all these experiments ( Figures S4 K–S4M), while the head Dilp3 mRNA showed no significant change in all these experiments. As a comparison, TkR99D or NPFR RNAi did not show the same consistent defect. In conclusion, we show that among the AS-C genes, sc is the one essential for the formation of all adult midgut EEs and is probably required during metamorphosis when the midgut is reformed. In newly eclosed flies, EEs serve as a link between diet-stimulated Dilp3 expression in the visceral muscle and ISC proliferation. Depletion of Tk-expressing EEs caused similar Dilp3 expression and ISC proliferation defects, although the defects appeared to be less severe than those in the sc RNAi/EE-less guts. The results together suggest that Tk-expressing EEs are part of the EE population required for this regulatory circuit. The approach we report here has established the Drosophila midgut as a model to dissect the function of EEs in intestinal homeostasis and whole-animal physiology. Experimental Procedures Drosophila Stocks and Tissue Staining All Drosophila stocks were maintained at room temperature on yeast extract/cornmeal/molasses/agar food medium. UAS-mCD8GFP and w1118 were used for crossing with Gal4 and mutant lines as controls. The fly stocks acRNAi (29586), aseRNAi (31895), lscRNAi (27058), scRNAi (26206), atoRNAi (26316), TkRNAi (25800), NPFRNAi (27237), sc1, sc3B, sc5, sc6, ato1, Df(1)sc10-1, Df(3R)p13, and UAS-sc were obtained from the Bloomington Stock Center. TkR86CRNAi (13392), TkR99DRNAi (43329), and NPFRRNAi (107663) were obtained from the VDRC. esg-Gal4, Dilp3RNAi (33681), Dilp3-Gal4, Mef2-Gal4, and pros-Gal4 have been described previously ( Micchelli and Perrimon, 2006; O’Brien et al., 2011; Sen et al., 2004 ).
The Tk-Gal4 line was among a set of Tk promoter Gal4 lines screened for expression in the adult midgut, and it contains an approximately 1 kb fragment 2.5 kb upstream of the Tk transcription start ( Song et al., 2014 , in this issue of Cell Reports ). Female flies were used for routine gut dissection because of their larger size. Immunofluorescence staining, antibodies used, microscope image acquisition, and processing were as described previously ( Amcheslavsky et al., 2009, 2011 ). Feeding, Fecundity, and Enzyme Assays For feeding experiments, newly hatched or appropriately aged flies were kept in regular food vials or in plastic vials with a filter paper soaked with 1% sucrose in water and transferred to fresh vials every day. For dye-ingestion experiments, 20 flies were transferred to a plastic vial with a filter paper soaked with 5% sucrose and 0.5% bromophenol blue sodium salt (B5525, Sigma). At the indicated time, flies that showed a visible blue abdomen were counted or used for gut-extract preparation and OD measurement. For defecation experiments, flies were placed in new vials with sucrose/bromophenol blue, and colored excreta on the vial wall were counted at the 4 and 24 hr time points. For gut-clearance assays, flies were first fed with bromophenol blue, and ten flies that had a blue abdomen were transferred to a new vial containing 5% sucrose only. At 2 and 24 hr afterward, flies were counted based on whether they still had a blue abdomen. For fecundity assays, newly hatched male and virgin female flies were aged for 5 days on a normal diet. A group of ten females and five males were put together in a new food vial and transferred to a fresh food vial every day. The number of eggs was counted in each vial for 10 days. Vials were kept to allow larvae and pupae to develop, and the number of pupae was counted for every vial. For digestive enzyme assays, midguts from fertilized females (7–10 days old) were homogenized in 50 μl PBS at 5,000 rpm for 15 s (Precellys 24, Bertin Technologies) and centrifuged (10,000 × g for 10 min). Substrates for the trypsin enzymatic assay (C8022) were purchased from Sigma-Aldrich, and the reaction was set up following the manufacturer’s instructions. The increase in absorbance (405 nm) or fluorescence (355 nm/460 nm) after substrate cleavage was monitored by a microplate reader (Mithras LB 940, Berthold Technologies). Each genotype corresponded to five to six samples of ten midguts each. Real-Time qPCR Total RNA was isolated from ten dissected female guts and used to prepare cDNA for quantitative PCR (qPCR) using a Bio-Rad iQ5 System ( Amcheslavsky et al., 2011 ). qPCR was performed in duplicate from each of at least three independent biological samples. The ribosomal protein 49 ( rp49 ) gene expression was used as the internal control for normalization of cycle number. The primer sequences are listed in the Supplemental Experimental Procedures . All error bars represent SEM, and p values are from the Student’s t test. Author Contributions A.A., Q.L., Y.N., and Y.T.I. designed, carried out, and analyzed the experiments. W.S. and N.P. performed the experiments that identified the Tk-Gal4 gut driver, expression pattern, and cell-killing conditions. I.B. and D.F. designed and performed the gut digestive enzyme and feeding assays. A.A. and Y.T.I. wrote the manuscript. All authors amended the manuscript. Acknowledgments We acknowledge the Vienna Drosophila RNAi Center and the Bloomington Drosophila Stock Center for providing fly stocks.
We thank Lucy O’Brien for the kind provision of fly stocks and reagents. Y.T.I. is supported by an NIH grant (DK83450) and is a member of the UMass DERC (DK32520), the UMass Center for Clinical and Translational Science (UL1TR000161), and the Guangdong Innovative Research Team Program (201001Y0104789252). Work in the N.P. laboratory is supported by HHMI and the NIH. Work in the D.F. laboratory was funded by CNRS and ANR Drosogut. I.B. was supported by the São Paulo regional government. Supplemental Information Supplemental Information includes Supplemental Experimental Procedures and four figures and can be found with this article online at http://dx.doi.org/10.1016/j.celrep.2014.08.052 . Supplemental Information Document S1. Supplemental Experimental Procedures and Figures S1–S4 Document S2. Article plus Supplemental Information
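As an aid to readers reproducing the qPCR analysis described in the Experimental Procedures above, the short Python sketch below illustrates one way to normalize Ct values to the rp49 internal control and compare genotypes with a Student's t test. It is not the authors' code: the 2^-ddCt formulation and every Ct value below are illustrative assumptions, not data from this study.

```python
# Minimal sketch (not the authors' code) of rp49-normalized relative expression
# with SEM error bars and a Student's t test, as described in the qPCR section.
# The 2^-ddCt formulation and all Ct numbers are illustrative assumptions.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_rp49, ct_target_ctrl, ct_rp49_ctrl):
    """Return per-replicate 2^-ddCt values for one gene (e.g., Dilp3)."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_rp49)              # normalize to rp49
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_rp49_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                               # fold change vs. control

# Hypothetical Ct values from three biological replicates per genotype
ctrl = relative_expression([24.1, 24.3, 23.9], [18.0, 18.2, 17.9],
                           [24.1, 24.3, 23.9], [18.0, 18.2, 17.9])
ee_less = relative_expression([27.8, 28.1, 27.5], [18.1, 18.0, 18.2],
                              [24.1, 24.3, 23.9], [18.0, 18.2, 17.9])

sem = ee_less.std(ddof=1) / np.sqrt(ee_less.size)                   # error bars = SEM
t, p = stats.ttest_ind(ctrl, ee_less)                               # Student's t test
print(f"fold change {ee_less.mean():.2f} +/- {sem:.2f} (SEM), p = {p:.3g}")
```

With at least three biological replicates per genotype, as stated above, the same routine could be applied to each gut peptide or Dilp3 measurement.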
REFERENCES:
1. AMCHESLAVSKY A (2009)
2. AMCHESLAVSKY A (2011)
3. BARDIN A (2010)
4. BITEAU B (2011)
5. BITEAU B (2014)
6. BUCHON N (2010)
7. COGNIGNI P (2011)
8. CORDERO J (2012)
9. DURAND A (2012)
10. GARCIA-BELLIDO A (2009)
11. GERBE F (2012)
12. GUO Z (2013)
13. JIANG H (2009)
14. JIANG H (2009)
15. JIANG H (2011)
16. LI H (2013)
17. LI Z (2013)
18. LIN G (2008)
19. MICCHELLI C (2006)
20. MICCHELLI C (2011)
21. MIGUEL-ALIAGA I (2012)
22. O'BRIEN L (2011)
23. OHLSTEIN B (2006)
24. OSMAN D (2012)
25. POELS J (2009)
26. REEVES N (2005)
27. SATO T (2011)
28. SCOPELLITI A (2014)
29. SEN A (2004)
30. SONG W (2014)
31. TAKASHIMA S (2011)
32. TIAN A (2014)
33. VANDUSSEN K (2010)
34. VEENSTRA J (2008)
35. XU N (2011)
36. ZENG X (2013)
37. ZHOU F (2013)
|
10.1016_j.carpta.2023.100283.txt
|
TITLE: Carboxymethyl chitin and chitosan derivatives: Synthesis, characterization and antibacterial activity
AUTHORS:
- Islam, Md. Monarul
- Islam, Rashedul
- Mahmudul Hassan, S M
- Karim, Md.Rezaul
- Rahman, Mohammad Mahbubur
- Rahman, Shofiur
- Nur Hossain, Md.
- Islam, Dipa
- Aftab Ali Shaikh, Md.
- Georghiou, Paris E.
ABSTRACT:
Water-soluble carboxymethyl chitin (CMCT) 1a–b, and chitosan (CMCS) 2a–b derivatives were synthesized and evaluated for antibacterial activity. The synthesized compounds were characterized by Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD). Thermal properties of the synthesized compounds were studied by thermogravimetric analysis (TGA) and their surface morphologies examined by scanning electron microscopy (SEM). Antibacterial activity of the chitosan (CS) 2 and the synthesized derivatives were tested against both gram-negative (Shigella flexneri, Enterococcus faecalis, Pseudomonas aeruginosa, Klebsiella pneumoniae, Vibrio paraheamolyticus) and gram-positive (Staphylococcus aureus, Bacillus subtilis, Bacillus cereus) bacterial strains. CS 2 shows antimicrobial activity against all tested pathogenic bacteria, whereas CMCS 2a shows antimicrobial activity against Shigella flexneri, Bacillus cereus, and Bacillus subtilis. CMCS 2b shows antimicrobial activity against Pseudomonas aeruginosa, Klebsiella pneumoniae and Staphylococcus aureus. CMCT 1a only shows antimicrobial activity against Bacillus subtilis and 1b shows antimicrobial activity against Vibrio paraheamolyticus and Bacillus cereus.
BODY:
1 Introduction In recent years, there is much research underway globally which is concerned with producing new materials in a sustainable manner, including those derived from natural resources such as sustainable biopolymers which are biodegradable, are environmentally-friendly and are renewable with lower energy consumption. Chitin (CT) 1 and its N -deacetylated derivative chitosan (CS) 2 at the C-2 are such promising natural biopolymers ( Fig. 1 ) which have been widely used in pharmaceutical and biomedical applications ( Silva et al., 2017 ). The major industrial source of biomass for the large-scale production of CT and CS are mainly from the shell waste of prawn shrimp and crab ( Fu et al., 2013 ; Ramamoorthy et al., 2018 ). CT 1 is structurally identical to cellulose, but it has acetamide groups (–NHCOCH 3 ) at the C–2 position. On the other hand, CS 2 is a deacetylated derivative of CT 1 , which is a cationic amino-polysaccharide with a linear chain consisting of β -(1,4)-linked 2-amino-2-deoxy-β- d -glucopyranose moieties and completely and also partially-deacetylated 2-acetamino-2-deoxy-β- d -glucopyranoses ( Fig. 1 ) ( Vasiliu et al., 2009 ). CT and CS are bio-renewable, environmentally friendly, show no toxic effects on human cells and as a result these properties have made them scientifically attractive to chemists ( Kumar et al., 2004 ). The biological activities of chitin and chitosan derivatives have made them promising biopolymeric agents for drug delivery applications which can control the specific release of the drug over a long period of time ( Prabaharan, 2008 ). Notably, CT and CS are nontoxic biopolymers having antimicrobial activity, wound-healing properties, and can decrease the cholesterol levels in humans ( Denkbaş & Ottenbrite, 2006 ; Harish Prashanth & Tharanathan, 2007 ; Islam et al., 2011b ; Islam et al., 2011c ; Roller & Covill, 1999 ). Due to the poor solubility of chitin and chitosan in water or in organic solvents, however, their utilization for specific applications in industry and medicine are very limited. The chemical modification of the amino or hydroxyl groups in the side chains of CT and CS afford new derivatives for promising biological activities and physiochemical properties. By utilizing the acetamido, amino and hydroxyl groups of chitin and chitosan as modification sites, many derivatives have been obtained via substitution, chain modification, and depolymerization ( Harish Prashanth & Tharanathan, 2007 ). These modified chitin and chitosan derivatives may enhance their solubility in water and organic solvents Scheme 1 , 2 , 3 . CS 2 has three types of reactive functional groups, namely, a primary amino group at C–2 and two hydroxy (primary and secondary) groups at the C–3 and C–6 positions which allow for chemical modification ( Fig. 1 ) ( Aranaz et al., 2010 ). The chemical modification of chitin and chitosan to produce derivatives such as N,N,N -trimethylchitosan, N,O -acetylchitosan, N -acetylated-chitosan and N -carboxymethylchitosan reported by many researchers, have been synthesized to improve the solubility of chitosan derivatives ( Xu S. 
et al., 2020 ). Among other modifications or derivatives, chitosan gel formation via a chitosan-epichlorohydrin adduct has been shown to have significant cell adhesion properties; cross-linking chitosan with glutaraldehyde can improve the chemical and mechanical resistance of chitosan; and N-[(2-hydroxy-3-trimethylammonium)propyl]chitosan chloride has been synthesized to produce good antibacterial activity ( Fangkangwanwong et al., 2006 ; Kim & Je, 2015 ; Kumirska et al., 2011 ; Lim & Hudson, 2004 ; Lu et al., 2004 ; Muzzarelli et al., 1988 ; Muzzarelli et al., 1994 ; Sashiwa et al., 2002 ; Vieira & Beppu, 2005 ). Jayakumar et al. have reported that alkyl or carboxymethyl functional groups introduced at the side chains of chitin and chitosan can significantly increase their solubility as well as their potential application in different fields without affecting their original structure ( Jayakumar et al., 2010 ). The shrimp industry is one of Bangladesh's leading exports and it contributes to a significant part of its economy. It is the second largest export industry after garment production. However, the resulting rapid expansion of shrimp farming has created a series of negative environmental impacts, including ecological imbalance, environmental pollution and some cases of disease outbreaks. Thus, shrimp farming in Bangladesh is facing management-related difficulties which lead to greater concerns about its sustainability. During the processing of shrimp, the meat is mostly separated and used, while the shell and head portions generate a large amount of waste, presenting financial and environmental challenges to the waste management practices of the shrimp processors. Instead of dumping the shrimp waste into landfills or into the sea, the waste could be chemically converted to chitin and its derivative chitosan. Therefore, for both economic and environmental reasons it is necessary and desirable to develop a low-cost, suitable technology to convert the waste biomaterials into valuable products such as chitin and its chitosan derivatives. In our laboratory, we are now focusing on the synthesis of such valuable derivatives of chitin and chitosan, determining their biological properties and developing composites for water purification and other potential applications ( M. Islam, Masum, Rahman, & Shaikh, 2011d; M.M. Islam et al., 2011a; M.M. Islam, Masum, & Mahbub, 2011b; M.M. Islam, Masum, R, & Haque, 2011c; Kabiraz et al., 2016; Siraj et al., 2012 ). The main objective of the present study was the preparation of CS 2 via CT 1 from indigenous sources, the formation of their water-soluble derivatives such as carboxymethyl chitin (CMCT) and carboxymethyl chitosan (CMCS), and the study of their physicochemical properties and antibacterial activity. 2 Experimental 2.1 Materials and methods Prawn (scientific name Penaeus monodon ; in Bengali: Bagda ) head peels were collected from the Satkhira shrimp firm, located in the southern part of Bangladesh. Prawn head peels were scraped free of loose tissue, washed with cold water and dried in the sunlight (temperature 30–33 °C) in the open for 2 days (48 h). All reagent-grade chemicals such as HCl (37%, Merck), NaOH (98%, Daejung), monochloroacetic acid (>99%, Merck), isopropyl alcohol (99.7%, Active Fine Chemicals) and methanol (99.8%, Active Fine Chemicals) were purchased from the local market and used without further purification.
2.2 Preparation of chitin 1 (CT) Dried prawn head peel (500.0 g) was mixed with 5% hydrochloric acid (100 mL) in an Erlenmeyer flask and the mixture was stirred for 24 h at room temperature. The reaction mixture was then filtered and the residual head peel was washed with distilled water and dried in the sun. The de-mineralized head peel was mixed with aqueous 5% sodium hydroxide solution (50 mL) and stirred at 70 °C for 3 h. The deproteinized peel was washed with distilled water and dried in the sun (30–33 °C) for 2 days (48 h). The dried product is chitin 1 (115.0 g). IR (KBr), ν max (cm −1 ): 3430 (O–H stretch), 3262 (N–H stretch), 2918 (C–H stretch), 1738 (C=O bend, amide I), 1655 (C=O bend, amide I), 1619 (C=O bend, amide I), 1569 (N–H bend, amide II), 1416 (C–H stretch), 1376 (C–H stretch), 1312 (C–N stretch, amide III), 1115 (bridge O stretch) and 1063, 1009 (C–O stretch). 2.3 Preparation of chitosan 2 (CS) Chitin 1 (115.0 g) was mixed with aqueous 70% sodium hydroxide (50 mL) and stirred at 80 °C for 4 h. The mixture was then cooled and filtered. The residue was washed with distilled water and dried in the sun. The dried product is chitosan 2 (90.0 g). IR (KBr), ν max (cm −1 ): 3359 (O–H stretch), 2879 (C–H stretch), 1578 (N–H bend), 1420 (C–H stretch), 1375 (C–H stretch), 1312 (C–N stretch, amide III), 1146 (bridge O stretch), and 1056, 1030 (C–O stretch). 2.4 Preparation of O -carboxymethyl chitin 1a ( O CMCT) Chitin 1 (5.0 g) was mixed thoroughly with aqueous 40% w/v NaOH (40 mL) and kept overnight (12 h) at -20 °C. Isopropanol (100 mL) was added into the thawed CT 1 slurry and monochloroacetic acid (28.8 g) in 100 mL IPA was added dropwise to the reaction mixture under constant stirring using a magnetic stirrer, and then the reaction mixture was stirred at 40 °C for a further 6 h. The reaction mixture was cooled to room temperature (25 °C) and neutralized with 10% hydrochloric acid. The mixture was filtered, and the solid was collected and washed using 80% (v/v) methanol/water. The product was dried in an oven at 50 °C for 8 h to give 5.5 g of O -carboxymethyl chitin 1a ( O CMCT). IR (KBr), ν max (cm −1 ): 3357 (O–H stretch), 3282 (N–H stretch), 2895 (C–H stretch), 1736 (C=O bend), 1571 (C=O, COOH, anti-symmetric stretching), 1470 (N–H bend), 1416 (C=O, COOH, symmetric stretching), 1360 (C–H stretch), 1311 (C–N stretch), 1135, 1119 (bridge O stretch), and 1055, 923 (C–O stretch). 2.5 Preparation of O -carboxymethyl chitosan 2a ( O CMCS) Chitosan 2 (5.0 g) was mixed thoroughly with aqueous 40% NaOH (w/w, 40 mL) and kept overnight (12 h) at -20 °C. Isopropanol (100 mL) was added into the thawed CS 2 slurry and monochloroacetic acid (28.8 g) in 100 mL IPA was added dropwise to the reaction mixture under stirring with a magnetic stirrer, and the reaction mixture was stirred at 40 °C for a further 6 h. The reaction mixture was then neutralized with 10% hydrochloric acid. The mixture was filtered, and the solid was collected, washed using 80% (v/v) methanol/water, and the product was dried in the oven at 50 °C for 8 h to give 5.2 g of dried O -carboxymethyl chitosan 2a . IR: ν max (KBr)/cm −1 : 3257 (O–H, N–H stretch), 2865 (C–H stretch), 1726 (C=O bend), 1579 (C=O, COOH, anti-symmetric stretching), 1411 (N–H bend), 1310 (C–N stretch), 1246 (bridge O stretch) and 1125, 1056 (C–O stretch).
2.6 Preparation of O -carboxymethyl chitin 1b ( O CMCT) Chitin 1 (2.0 g) was added to 50 mL isopropyl alcohol in a round-bottom flask (500 mL) and the reaction mixture was stirred using a magnetic stirrer at room temperature (25 °C) for 2 h. Aqueous 60% NaOH solution (w/v, 80 mL) was then added into the reaction mixture, which was then heated at reflux at 65 °C for 8 h. Monochloroacetic acid solution in 100 mL IPA was then added in five equal parts over a period of 10 min. The reaction mixture was heated with stirring at 65 °C for a further 8 h. The reaction mixture was then neutralized using hydrochloric acid (4.0 M). After removal of the insoluble residue by filtration, the resulting carboxymethyl chitin 1b was precipitated by adding methanol. The product was filtered, washed several times with a mixture of methanol/water (v/v, 1:1) and dried in the oven at 50 °C for 8 h to give O CMCT 1b (2.2 g). IR: ν max (KBr)/cm −1 : 3372 (O–H, N–H stretch), 2865 (C–H stretch), 1724 (C=O stretch), 1630 (C=O, COOH, anti-symmetric stretching), 1377 (CH 2 stretch), 1228 (C–N stretch), 1054 (C–O stretch) and 899 (C–C stretch). 2.7 Preparation of N,O -carboxymethyl chitosan 2b ( N,O CMCS) Chitosan 2 (2.0 g) was added to 50 mL isopropyl alcohol in a round-bottom flask (500 mL) and the reaction mixture was stirred at room temperature for 2 h using a magnetic stirrer. Aqueous 60% w/v NaOH (80 mL) was then added into the reaction mixture, which was then heated at 65 °C for 8 h. Aqueous monochloroacetic acid solution in 100 mL IPA was then added in five equal parts over a period of 10 min. The reaction mixture was heated with stirring at 65 °C for a further 8 h. The reaction mixture was then neutralized using hydrochloric acid (4.0 M). After removal of the insoluble residue by filtration, the resulting carboxymethyl chitosan 2b was precipitated by adding methanol. The product was filtered, washed several times with a mixture of methanol/water (v/v, 1:1) and dried in an oven at 50 °C for 8 h to give 2.4 g of N,O CMCS 2b . IR: ν max (KBr)/cm −1 : 3410 (O–H, N–H stretch), 1737 (C=O, COOH stretch), 1436, 1375 (CH 2 stretch), 1417 (COO – stretch), 1267, 1216 (C–N stretch), 1133 (bridge O stretch), 1066, 1033 (C–O stretch), and 899 (C–C stretch). 2.8 Characterization of chitin 1, chitosan 2 and their respective derivatives, 1a–b and 2a–b 2.8.1 Fourier transform infrared spectroscopy (FT-IR) Fourier transform infrared spectroscopy (FT-IR) was performed using a Perkin Elmer Universal ATR spectrophotometer (UATR-FT-IR, USA) equipped with a ZnSe crystal. Transmittance was measured as a function of wavenumber between 4000 and 650 cm −1 with a resolution of 4 cm −1 and 12 scans. The degree of deacetylation (DD) of chitosan 2 was calculated from Eq. (1) ( Kaya et al., 2015 ; Kaya & Baran, 2015 ): (1) DD % = 100 − [ ( A 1655 / A 3450 ) × 100 / 1.33 ] where A 1655 is the amide-I C=O absorbance at 1655 cm −1 , which corresponds to the N -acetyl group content in the sample, A 3450 is the absorbance at 3450 cm −1 of the hydroxyl band and is taken as an internal standard for correcting for film thickness, and the factor 1.33 denotes the value of the ratio A 1655 /A 3450 for fully N -acetylated chitosan. 2.8.2 Acid-base titration A sample of CS 2 (0.10 g) was dissolved in 25 mL hydrochloric acid (0.10 M) and then titrated with a standard aqueous 0.10 M NaOH solution. The titrant was added dropwise until the pH reached 11.5.
The pH values of the mixture were recorded and a curve with two inflection points was obtained. The average degree of deacetylation (DD) of the chitosan samples was determined by using Eq. (2) ( Abdel-Rahman et al., 2015 ; Kaya, Baran et al., 2014 ): (2) DD ( % ) = C NaOH × ( V 2 − V 1 ) × 161 / m where C NaOH is the concentration of the NaOH solution, (V 2 − V 1 ) is the difference in titrant volume between the two inflection points, 161 is the molecular mass of the chitosan monomer unit, and m is the mass of the chitosan sample. 2.8.3 Thermogravimetric analysis (TGA) Approximately 15–20 mg of each sample was weighed into an aluminum pan and subjected to thermogravimetric analysis (TGA model 6300, Japan). Samples were heated in a nitrogen atmosphere (50 mL/min) from room temperature to 600 °C at a rate of 20 °C/min. A representative TGA result showing weight-change data for the chitin and chitosan samples plotted against temperature can be seen in Fig. 3 . 2.8.4 Determination of degree of substitution for 1a–b and 2a–b The degree of substitution (DS) of 1a – b and 2a – b was determined by using a titrimetric method as reported ( Ge & Luo, 2005 ). In brief, 0.10 g samples of each derivative ( 1a – b and 2a – b ) were dissolved in distilled water (20 mL). The pH of the solutions was adjusted to pH < 2 by adding standard 0.10 M hydrochloric acid. The solutions were then titrated with 0.050 M standard aqueous NaOH, the pH values being recorded simultaneously under continuous stirring. The degree of substitution (DS) of 1a and 1b was determined using Eq. (3) , and the DS of 2a and 2b was determined using Eq. (4) : (3) DS = 203 A / ( m − 58 A ) (4) DS = 161 A / ( m − 58 A ) where A = V NaOH × C NaOH , V NaOH is the difference in the volumes (L) of NaOH solution recorded between the two inflection points, C NaOH is the molarity of the aqueous NaOH (0.050 mol/L) and m is the mass of the sample ( 1a, 1b, 2a or 2b ) used; 203, 161 and 58 are the molecular weights of N-acetylglucosamine (the CT skeleton unit), glucosamine (the CS skeleton unit) and a carboxymethyl group, respectively. 2.8.5 Scanning electron microscopy (SEM) SEM images of all samples were taken using a field emission scanning electron microscope (JSM-7610F, Japan) under the following conditions: 10 kV; working distance of 3.4–7.3 mm; display mode: secondary electrons; high vacuum; room temperature. Images were acquired at magnifications of 55× to 90,000×. 2.8.6 X-ray diffraction (XRD) The X-ray diffraction patterns of water-soluble carboxymethyl chitin (CMCT) 1a – b and carboxymethyl chitosan (CMCS) 2a – b were collected on an EMMA GBC (Australia) X-ray diffractometer. The scan was completed at room temperature from 10° to 60° (2θ) in 0.02° steps, at a scan rate of 4° per minute. 2.8.7 Solubility test in distilled water The solubility of the synthesized derivatives in water was determined by naked-eye observation at 25 °C. At first, 20 mg amounts of the synthesized derivatives were added to 100 mL distilled water and mixed using a vortex mixer (Model: VM-1000, Brand: Digisystem, Origin: Taiwan). When a clear solution was observed, additional 10 mg amounts of the product were added to the clear solution until precipitation was observed. The total amount of product added was taken as the solubility limit.
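To make the arithmetic of Eqs. (1) and (2) above concrete, the following minimal sketch evaluates both expressions with made-up placeholder inputs (hypothetical absorbances, inflection-point volumes and sample mass, not the measured data of this study); expressing the Eq. (2) result as a percentage is likewise an assumption about the intended units.

```python
# Illustrative sketch of Eqs. (1) and (2) above; every numerical input is a
# made-up placeholder, not a measurement from this study.
def dd_ftir(a_1655, a_3450):
    """Eq. (1): DD (%) = 100 - [(A1655 / A3450) * 100 / 1.33]."""
    return 100.0 - (a_1655 / a_3450) * 100.0 / 1.33

def dd_titration(v1_mL, v2_mL, c_naoh_mol_per_L=0.10, m_sample_g=0.10):
    """Eq. (2): DD = C_NaOH * (V2 - V1) * 161 / m, with V1 and V2 the titrant
    volumes (converted to litres) at the two inflection points.  The equation
    yields a mass fraction, so it is multiplied by 100 to quote DD in percent
    (an assumption about the intended units)."""
    dv_L = (v2_mL - v1_mL) / 1000.0
    return c_naoh_mol_per_L * dv_L * 161.0 / m_sample_g * 100.0

# Hypothetical absorbances and inflection-point volumes
print(f"DD from FT-IR:     {dd_ftir(0.35, 1.30):.1f} %")   # prints about 79.8 %
print(f"DD from titration: {dd_titration(5.0, 9.8):.1f} %") # prints about 77.3 %
```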
2.9 Antimicrobial activity determination of chitosan 2, 1a–b and 2a–b 2.9.1 Preparation of solutions for antimicrobial testing For the preparation of a 2.4% (w/v) CS 2 solution, CS 2 was dispersed in aqueous 1.0% (v/v) acetic acid. The other prepared derivatives 1a, 1b, 2a and 2b were dispersed in distilled water. After stirring overnight, all solutions were autoclaved at 120 °C for 15 min (the thermostability of the compounds under these conditions had been previously checked). 2.9.2 Microorganisms The antibacterial activities of CS 2 and the other prepared derivatives were tested against eight bacterial strains. The five gram-negative bacteria used were Shigella flexneri ATCC 12,022, Enterococcus faecalis ATCC 29,212, Pseudomonas aeruginosa ATCC 27,853, Klebsiella pneumoniae ATCC 13,883 and Vibrio paraheamolyticus ATCC 17,802, and the three gram-positive bacteria were Staphylococcus aureus ATCC 9144, Bacillus subtilis ATCC 11,774 and Bacillus cereus ATCC 10,876. 2.9.3 Determination of antibacterial activity The bacterial inoculums were prepared using the Clinical and Laboratory Standards Institute (CLSI) guidelines. The bacterial cultures were emulsified in normal saline and their turbidities were matched with 0.5 McFarland turbidity standards. The agar cup method was followed to investigate the antibacterial activity of the extracts. The TSB (0.1 mL) broth culture of the test organisms was firmly seeded over Mueller-Hinton Agar (MHA) plates ( Barry, 1980 ). The chitosan solution was added to the different wells in the plate by using a micropipette and the plates were kept at low temperature (4 °C) for 2–4 h and then incubated at 37 °C for 24 h. After the incubation period, the formation of zones around the wells confirmed the antibacterial activity of the respective compounds. 3 Result and discussion The distinction between CT 1 and CS 2 is somewhat unclear, but it has been estimated that chitosan 2 consists of >50% deacetylated CT 1 , and whereas CS 2 is soluble in aqueous 1% acetic acid, chitin 1 is insoluble ( Peter MG, 1995 ). Due to its relative simplicity, FT-IR spectroscopy is one of the most important techniques for the characterization of CT 1 and CS 2 ( Ng et al., 2006 ). The IR spectra of α - and β -chitin display a series of narrow absorption bands, typical of crystalline polysaccharide samples. The C=O stretching region of the amide moiety, between 1700 and 1500 cm −1 , yields different signatures for α - and β -chitin. For α-chitin, the amide I band is split into two components at 1655 and 1619 cm −1 (due to the influence of hydrogen bonding or the presence of an enol form of the amide moiety ( Focher et al., 1992 )), whereas for β -chitin it is at 1619 cm −1 ( Fig. 2 : top ). The amide II band is observed at 1569 cm −1 for β -chitin. Infrared spectra of β -chitin also reveal two additional bands for CHx deformations at about 1416 and 1376 cm −1 and a greater number of narrower bands in the C–O–C and C–O stretching vibration region (1150–950 cm −1 ). The FT-IR spectrum of the CT 1 ( Fig. 2 : top ) isolated from prawn head confirmed that this chitin resembles β-chitin more closely than α-chitin ( Kumirska et al., 2010 ). The production efficiency of CS 2 by the N -deacetylation of CT 1 was evaluated using FT-IR analysis. During the N -deacetylation of CT 1 , the band at 1619 cm −1 gradually decreased, while that at 1578 cm −1 increased ( Fig. 2 : bottom ), indicating the prevalence of NH 2 groups. The band at 1578 cm −1 displayed a greater intensity than the one at 1619 cm −1 and demonstrated the effective deacetylation of CT 1 .
The formation of a new band at 1578 cm −1 and the disappearance of the band at 1619 cm −1 are due to the NH 2 deformation, which predominates over the band at 1736 cm −1 ( Fig. 2 : bottom ). This latter band is associated with the carbonyl (C=O) groups, which tend to decrease as the degree of deacetylation of CS 2 increases. The disappearance of the two bands between 3430 and 3262 cm −1 , as already mentioned, is related to the deacetylation of the NHCOCH 3 group, transforming the amide into the primary amine. The degree of deacetylation (DD) of CS 2 was calculated from both the FT-IR spectra ( Eq. (1) ) and the titrimetric method ( Eq. (2) ) and found to be 80% and 78%, respectively. The average DD of CS 2 is therefore taken to be 79%. The FT-IR absorption bands (cm −1 ) of the product O -CMCT 1a (SI, Fig. S2) were 3357 (O–H stretching) and 3282 (N–H stretching), 2895 (C–H stretching), 1736 (C=O), 1571 (C=O of –COOH antisymmetric stretching), 1416 (C=O of –COOH symmetric stretching), 1311 (C–N stretching), and 1055 (C–O stretching). Compared with the CT 1 spectrum, the new absorption bands of –COOH are strong, and the O–H and N–H bands become narrow and weak, both indicating a high degree of carboxymethylation of the –OH group. Meanwhile, the band at 1571 cm −1 intensifies significantly, thus indicating that carboxymethylation has occurred on the –OH groups of the CT 1 . The FT-IR absorption bands (cm −1 ) of the product O -CMCS 2a (SI, Fig. S3) are 3257 (O–H stretching) and 2865 (C–H stretching), 1579 (C=O of –COOH antisymmetric stretching), 1310 (C–N stretching), and 1056 (C–O stretching). Compared with the CS 2 spectrum, the new absorption bands of –COOH are strong, and the O–H and N–H bands become narrow and weak, both indicating a high degree of carboxymethylation on the –OH group. These phenomena are similar to those observed with compound 1a . The broad peak of O -CMCT 1b (SI, Fig. S4) at 3372 cm −1 is due to the –OH stretching vibrations. The sharp absorption band at 1377 cm −1 corresponds to the CH 2 bending vibration. The band at 899 cm −1 is attributed to the C–C stretching vibration. The peak at 1724 cm −1 in the FT-IR spectrum can be assigned to the C=O vibrational mode. Compared with the CT 1 spectrum, the new –COOH absorption bands are strong, and the O–H and N–H bands become narrow and weak, both indicating a high degree of carboxymethylation of the –OH group at the C-3 position. The broad peak of N,O -CMCS 2b (SI, Fig. S5) at 3410 cm −1 is due to –OH stretching vibrations. The sharp absorption band at 1436 cm −1 corresponds to the CH 2 bending vibration. The peak observed at 1375 cm −1 is due to the CH 2 wagging vibration. The band at 899 cm −1 is attributed to the C–C stretching vibration. The peak at 1737 cm −1 in the FT-IR spectrum can be assigned to the C=O vibrational mode. The absorption band at 680 cm −1 is assigned to the out-of-plane bending vibration of the carboxylate group. Compared with the CS 2 spectrum, the new –COOH absorption bands are strong, and the O–H and N–H bands become narrow and weak, both indicating a high degree of carboxymethylation of the –OH or –NH 2 groups. Meanwhile, the bands at 1737 cm −1 and 1216 cm −1 intensify significantly, thus indicating that the carboxymethylation has occurred on both the amino and the hydroxyl groups of CS 2 ( Kaya, Baran et al., 2014 ). In addition, the bands corresponding to the C=O stretching of NH–C=O and the N–H bending are overlapped with the much stronger C=O of COOH antisymmetric stretching.
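The degree-of-substitution values discussed below were obtained from Eqs. (3) and (4); as a purely illustrative aid (not the authors' calculation), the sketch below shows that arithmetic with hypothetical titration inputs, not the study's data.

```python
# Illustrative sketch of the DS calculation in Eqs. (3) and (4) above; the
# titration volume differences below are hypothetical placeholders.
def degree_of_substitution(dv_naoh_L, m_sample_g, skeleton_mass, c_naoh=0.050):
    """DS = M*A / (m - 58*A) with A = V_NaOH * C_NaOH, where M is 203 for the
    chitin unit (Eq. 3) or 161 for the chitosan unit (Eq. 4), and 58 is the
    molar mass of a carboxymethyl group."""
    a = dv_naoh_L * c_naoh                    # mol of carboxymethyl groups titrated
    return skeleton_mass * a / (m_sample_g - 58.0 * a)

# e.g. a CMCT sample (Eq. 3, M = 203) and a CMCS sample (Eq. 4, M = 161),
# each 0.10 g, with assumed inflection-point volume differences in litres:
print(f"DS (CMCT, Eq. 3): {degree_of_substitution(0.0060, 0.10, 203.0):.2f}")  # about 0.74
print(f"DS (CMCS, Eq. 4): {degree_of_substitution(0.0072, 0.10, 161.0):.2f}")  # about 0.73
```

Any inflection-point volume difference and sample mass can be substituted; the constants 203, 161 and 58 are the molecular masses defined for the equations above.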
The degrees of substitution (DS) of 1a – b and 2a – b calculated by using Eqs. (3) and (4) are 0.66, 0.75, 0.68 and 0.89, respectively. The thermal parameters obtained from the TGA curves ( Fig. 3 ) are listed in Table 1 . The initial weight loss (below 120 °C), observed in all compounds, can be attributed to the loss of moisture, since polysaccharides usually have a strong affinity for water and therefore may be easily hydrated. The second (main) step includes both decomposition and oxidation reactions of the prepared compounds. In the last stage, there is almost complete degradation of the intermediates generated earlier at lower temperatures. It can be seen from the TGA curves in Fig. 3 that the decomposition stage starts at 260 °C for most of the samples. However, in the case of 2b , we observed an exceptional second stage from 150 °C. This may be due to a change in the structure of the material and a change in the mechanism of its thermal degradation process. Scanning electron microscopy was used to investigate the surface morphology of CT 1 , CS 2 and their derivatives 1a – b and 2a – b . The surface morphology of CT 1 ( Fig. 4 : top left ) shows a more compact, denser structure, with layers of crumbling flakes without porosity. CMCT 1a exhibits an irregular, rough and wrinkled surface without any smear layer and with an ice-melting-like appearance ( Fig. 4 : top middle ). CMCT 1b shows a prominently arranged microfibrillar crystalline structure with a porous surface ( Fig. 4 : top right ). The surface morphology of CS 2 ( Fig. 4 : bottom left ) shows a non-homogeneous and non-smooth surface with straps and shrinkage. CMCS 2a has a non-smooth, porous surface ( Fig. 4 : bottom middle ). The surface morphology of 2b showed an irregular, non-smooth surface ( Fig. 4 : bottom right ). From these SEM results, we can conclude that the incorporation of the acetyl group affects the surface morphology and the physicochemical characteristics of the polymer. The degrees of substitution (DS) for 1a, 2a, 1b, and 2b were calculated using Eqs. (3) and (4) . The DS of 1a (0.66) is almost equal to that of 2a (0.68), indicating that under the same conditions only the C–6 primary hydroxyl hydrogen might be substituted by the carboxymethyl group. In the case of 1b (0.75) and 2b (0.89), which were prepared at a higher temperature (65 °C), for 1b both the C–3 and C–6 hydroxyl hydrogens have a chance to be substituted by carboxymethyl groups. For 2b , both the C–2 –NH 2 group and the C–3 and C–6 hydroxyl hydrogens could be substituted by carboxymethyl groups, resulting in the higher value of DS (0.89). X-ray diffraction analysis was applied to detect the crystallinity of the isolated CT 1 and CS 2 . CT 1 ( Fig. 5 : left side ) shows strong reflections at 2θ around 19–20°. CS 2 ( Fig. 5 : right side ) shows reflections at 2θ around 20–21°. The XRD patterns of CT 1 and CS 2 are both in good accordance with the literature data from different sources of CT 1 and CS 2 ( Kaya & Baran, 2015 ; Kumari et al., 2015 ; Kumirska et al., 2010 ; Sagheer et al., 2009 ). The X-ray patterns of all compounds reported herein are shown in the Supporting Information. For carboxymethyl chitin 1a (SI, Fig. S6), peaks can be seen at 2θ = 26.8°, 33.5°, 29.3°, 42.2°, 54.3° and 58.1°. The X-ray pattern of carboxymethyl chitosan 2a is shown in Fig. S7; peaks can be seen at 2θ = 32.2°, 45.8°, 56.2° and 75.6°. Fig. S8 shows the XRD pattern of carboxymethyl chitin 1b , with peaks at 2θ = 31.9°, 45.8°, 57.1° and 77.9° (SI). However, Fig.
S9 reveals no crystalline peaks for 2b . It is clear, therefore, that the carboxymethylation of CT 1 and CS 2 forced important changes in the arrangement of the polymer chains in the solid state. In fact, the patterns of 1a, 1b and 2a, 2b exhibit poorly defined and less intense peaks when compared to those of their respective parents, CT 1 and CS 2 . This may be due to the presence of the carboxymethyl moieties which replace the hydrogen atoms of the hydroxyl and amino groups of the CT 1 and CS 2 . Thus, since the carboxymethyl groups are much larger than the hydrogen atoms, an important excluded volume effect occurs, and a polyelectrolyte effect must also be considered due to the presence of charged groups in the chains of 1a, 1b and 2a, 2b , which lead to the deformation of the strong hydrogen bonds in CT 1 and CS 2 . This result means that the carboxymethyl derivatives 1a – b and 2a – b are more amorphous than chitin 1 and chitosan 2 . Kim et al. observed similar phenomena in the case of water-soluble chitin derivatives and reported that trimethylaminoethyl–chitin (TEAE–chitin) did not show any crystalline peak ( Mohammed et al., 2013 ). 3.1 Solubility test of derivatives 1a, 1b, 2a and 2b in distilled water From the solubility tests conducted, the chitin products 1a and 1b are both ∼0.2% by weight soluble in distilled water. The chitosan product 2a is ∼0.6% and 2b is ∼0.3% soluble by weight in water. The photographs on the left in Fig. 6 (A) show the saturated (homogeneous) solutions of our synthesized derivatives in water at room temperature (25 °C), and those on the right (B) show that a mass loading higher than the solubility limit in water has been reached. 3.2 Antibacterial activity of the solutions of CS 2 and the derivatives 1a, 1b, 2a and 2b The antibacterial activity of CS 2 and the derived products was tested against eight bacterial strains according to the reported procedures, and the results are listed in Table 2 ( Islam, Masum, & Mahbub, 2011b; Islam, Masum, R, & Haque, 2011c ). The five gram-negative bacteria Shigella flexneri ATCC 12,022, Enterococcus faecalis ATCC 29,212, Pseudomonas aeruginosa ATCC 27,853, Klebsiella pneumoniae ATCC 13,883 and Vibrio paraheamolyticus ATCC 17,802, and the three gram-positive bacteria Staphylococcus aureus ATCC 9144, Bacillus subtilis ATCC 11,774 and Bacillus cereus ATCC 10,876, were tested in Muller–Hinton (M–H) broth. This study was conducted to assess the inhibitory effects of CS 2 and derivatives 1a – b and 2a – b , as measured by macro and micro broth dilution techniques, and the results are presented in Table 2 . CS 2 exhibited activity against all eight pathogens and the highest zone of inhibition was observed against Staphylococcus aureus (35 mm). The carboxymethyl derivative of chitin ( O CMCT) 1a only shows antimicrobial activity against Bacillus subtilis , whereas O CMCT 1b shows antimicrobial activity against two pathogens, Vibrio paraheamolyticus and Bacillus cereus . The carboxymethyl derivative of CS, O CMCS 2a , shows antimicrobial activity against three pathogens: Shigella flexneri, Bacillus cereus, and Bacillus subtilis . The prepared derivative of CS, 2b , also shows antimicrobial activity against three pathogens: Pseudomonas aeruginosa, Klebsiella pneumoniae, and Staphylococcus aureus . In the case of the CS 2 derivatives, due to their having free amino groups, antibacterial activity against three pathogens was observed.
Similar phenomena have also been reported for the decrease of the antibacterial activity of carboxymethyl chitosan relative to CS ( Kim et al., 1997 ). The antimicrobial activity of CS is associated with the degree of deacetylation (DD); consequently, the antimicrobial activity of CS depends on its positive charge number ( Kaya, Cakmak et al., 2014 ; Leuba & Stossel, 1986 ; Young et al., 1982 ), and it is reported that CS with a higher DD, and thus a higher positive charge, would be expected to have greater antimicrobial activity ( Tsai & Su, 1991 ). Park et al. studied the susceptibility of both gram-negative and gram-positive bacteria to CS with different DDs and observed that CS with 75% DD was more effective than 90% or 50% deacetylated chitosan ( Park et al., 2004 ). Mohamed et al. reported the antimicrobial activity of carboxymethyl chitosan against Bacillus subtilis, Staphylococcus aureus , and Escherichia coli ( Mohamed & Abd El-Ghany, 2012 ). However, in the present study CS 2 showed greater antibacterial activity than the carboxymethyl derivatives ( 1a–b and 2a–b ). Its application, however, may only be effective in an acidic medium due to its low solubility in neutral and basic media. Therefore, chemical modifications of CS 2 are required to enhance its solubility and broaden the field of its applications. 4 Conclusion The carboxymethyl chitin (CMCT) 1a – b and carboxymethyl chitosan (CMCS) 2a – b derivatives were successfully synthesized from prawn head shell via CT 1 using relatively mild chemical methods with minor modifications. The synthesized compounds were characterized by Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD). The thermal properties of the synthesized compounds were studied by thermogravimetric analysis (TGA) and their surface morphologies were examined by scanning electron microscopy (SEM). The synthesized carboxymethyl derivatives of chitin and chitosan showed potential antimicrobial activity against the tested microorganisms. From these results it could be concluded that the chemical methods reported here have suitable potential for the extraction of chitosan via chitin from prawn head shell wastes. Furthermore, this approach could help to develop an environmentally-friendly waste management system to minimize environmental pollution from prawn head shell wastes and has the potential to earn a significant amount of foreign currency for Bangladesh by exporting chitin and chitosan, or to save the currency spent by reducing their imports. Synthesis of various derivatives of chitin and chitosan under varying conditions is in progress in our laboratory to improve their solubility in different solvents and investigate their resulting biological properties. CRediT authorship contribution statement Md. Monarul Islam: Investigation, Validation, Formal analysis, Funding acquisition, Conceptualization, Data curation, Methodology, Project administration, Writing – original draft. Rashedul Islam: Formal analysis, Investigation, Data curation. S M Mahmudul Hassan: Formal analysis, Investigation. Md.Rezaul Karim: Formal analysis, Investigation. Mohammad Mahbubur Rahman: Formal analysis, Investigation. Shofiur Rahman: Formal analysis, Investigation. Md. Nur Hossain: . Dipa Islam: Formal analysis. Md. Aftab Ali Shaikh: Supervision, Writing – review & editing. Paris E. Georghiou: Writing – review & editing. Declaration of Competing Interest The authors report no declarations of interest.
Acknowledgments This work was supported by Bangladesh Council of Scientific and Industrial Research (BCSIR) and partially funded by a special allocation program of the Ministry of Science and Technology (MOST), Bangladesh . Authors are thankful to Muhammad Shahriar Bashar (Principal Scientific Officer), IFRD, BCSIR for providing XRD and Md. Abdul Gafur (Principal Scientific Officer) PP&PDC for TGA analyses. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.carpta.2023.100283 . Appendix Supplementary materials Image, application 1
REFERENCES:
1. ABDEL-RAHMAN R (2015)
2. ARANAZ I (2010)
3. BARRY A (1980)
4. DENKBAS E (2006)
5. FANGKANGWANWONG J (2006)
6. FOCHER B (1992)
7. FU X (2013)
8. GE H (2005)
9. HARISH PRASHANTH K (2007)
10. ISLAM M (2011)
11. ISLAM M (2011)
12. ISLAM M (2011)
13. ISLAM M (2011)
14. JAYAKUMAR R (2010)
15. KABIRAZ M (2016)
16. KAYA M (2015)
17. KAYA M (2015)
18. KAYA M (2014)
19. KAYA M (2014)
20. KIM C (1997)
21. KIM D (2015)
22. KUMAR M (2004)
23. KUMARI S (2015)
24. KUMIRSKA J (2010)
25. KUMIRSKA J (2011)
26. LEUBA L (1986)
27. LIM S (2004)
28. LU S (2004)
29. MOHAMED N (2012)
30. MOHAMMED M (2013)
31. MUZZARELLI R (1994)
32. MUZZARELLI R (1988)
33. NG C (2006)
34. PARK P (2004)
35. PETER M (1995)
36. PRABAHARAN M (2008)
37. ROLLER S (1999)
38. RAMAMOORTHY D (2018)
39. SAGHEER F (2009)
40. SASHIWA H (2002)
41. SILVA S (2017)
42. SIRAJ S (2012)
43. TSAI G (1991)
44. VASILIU S (2009)
45. VIEIRA R (2005)
46. XU S (2020)
47. YOUNG D (1982)
|
10.1016_j.mmcr.2023.06.001.txt
|
TITLE: Seborrheic dermatitis-like adult tinea capitis due to Trichophyton rubrum in an elderly man
AUTHORS:
- Xie, Wenting
- Chen, Yuping
- Liu, Weida
- Li, Xiaofang
- Liang, Guanzhao
ABSTRACT:
Adult tinea capitis is often neglected and misdiagnosed, especially in men. We herein reported an older man with seborrheic dermatitis-like tinea capitis caused by Trichophyton rubrum to raise awareness of the disease. Scale and alopecia were the critical diagnostic clues in this patient. Given the previous presence of tinea pedis and onychomycosis, relevant mycological examinations were promptly performed, and antifungal therapy, as well as patient education, were effectively administered.
BODY:
1 Introduction Tinea capitis, a public health concern, predominantly occurs in prepubescent children and occasionally in adults, where it most commonly affects postmenopausal women, probably due to decreased estrogen levels and triglycerides in the sebaceous glands after menopause [ 1 ]. Trichophyton rubrum ( T. rubrum ) is the most common causative agent of fungal infections of the skin and nails but rarely causes scalp and hair infections [ 2 ]. In recent years, however, many investigations have reported that the prevalence of adult tinea capitis (ATC) is increasing, and a considerable proportion of cases are caused by T. rubrum [ 3 , 4 ]. The clinical presentations of ATC are atypical and variable, depending on the type of pathogenic microorganism, the pattern of hair invasion, and the host inflammatory response, which often lead to misdiagnosis and improper management. We herein report an older man with seborrheic dermatitis-like tinea capitis caused by T. rubrum to raise awareness of this scalp disease in adult males. 2 Case presentation A 77-year-old man visited the dermatology hospital with a more than six-month history of furfur (fine scaling) and an itchy scalp (day 0). The patient had developed scattered scaly patches on the scalp with obvious pruritus six months earlier. He had been diagnosed with "seborrheic dermatitis" at another hospital and treated with selenium sulfide lotion and systemic Chinese medicine. However, the lesions gradually expanded with growing hair loss. On physical examination, diffuse gray-yellow scaly erythema accompanied by scattered blood crusts, as well as sporadic hair loss, was observed on his parietal scalp ( Fig. 1 ). In addition, he had been suffering from tinea pedis and onychomycosis for years without treatment, presenting as desquamation of the toes and discoloration and thickening of most nail plates (S.1). Scaling and alopecia were the critical diagnostic clues in this patient, and considering his history of superficial fungal infections, relevant mycological examinations were promptly carried out. Direct microscopic study of the lesion showed long, septate hyphae (day 0) ( Fig. 2 A). Fungal culture on potato dextrose agar yielded white, velvety colonies with red pigmentation (day +10) ( Fig. 2 B). After solid culture, the fungal protein was extracted by the ethanol-formic acid extraction method; 1 μL of the protein extract was placed on a polished steel target plate, 1 μL of matrix solution was added, and the plate was then analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) on an EXS3600 instrument (Zybio Inc., Chongqing, China). As a result, T. rubrum was identified by the MALDI-TOF MS test (day +5) and further confirmed by molecular sequencing (day +6). Trichoscopy showed black dots (purple arrows), broken hairs (blue arrows), comma hairs (red arrows), and white scales on the hair and scalp (yellow arrows) ( Fig. 2 C and D) (day 0). Therefore, seborrheic dermatitis-like adult tinea capitis caused by T. rubrum was diagnosed. The patient was actively advised to shave his hair, clean possible fomites, and screen family members. Nevertheless, he did not shave his hair because of aesthetic concerns. At the same time, given the age of the patient and the type of pathogen, he was treated with oral terbinafine (250 mg/day), topical ketoconazole cream (twice/day), and ketoconazole shampoo (three times/week).
After a month of therapy, the patient achieved clinical and mycological cure and showed a smooth scalp with scattered hypopigmented spots and no further hair loss ( Fig. 3 ). The patient had no recurrence after six months of combined online and in-person follow-up. 3 Discussion The epidemiological characteristics of ATC vary by geographic region and change over time, with prevalence ranging from 2.6% to 13.6%, higher than in previous decades [ 3–5 ]. The increasing prevalence of ATC may be related to population aging and to immune system changes caused by systemic diseases such as diabetes, malignancy, and acquired immunodeficiency diseases, or by prolonged use of glucocorticoids and immunosuppressants. Interestingly, a retrospective epidemiological survey of ATC in Mainland China found that only 2.2% of patients had comorbid immunosuppressive diseases, a reminder to consider the possibility of ATC among "healthy" adults with scalp disease, especially those who have had dermatophytosis [ 3 ]. A multicenter prospective study in China found that the most common causative fungus for tinea capitis was the zoophilic Microsporum canis ( M. canis ) [ 6 ]. However, compared with children, patients with ATC more often have a concurrent superficial dermatophytosis, and the proportion of anthropophilic dermatophytes is higher, including Trichophyton violaceum ( T. violaceum ), Trichophyton tonsurans ( T. tonsurans ), and T. rubrum [ 6 ]. The lesions of ATC are often atypical and varied, mainly encompassing diffuse erythematous scale, black dots, abscesses, and alopecia. Generally, the inflammatory reaction is milder with anthropophilic dermatophytes and more intense with zoophilic or geophilic species. Most anthropophilic species produce endothrix infections, in which the pathogens invade the hair shaft without damaging the cuticle [ 1 ]. This pattern of infection is clinically characterized by black dots and scaly erythema of the scalp, consistent with our patient's clinical and trichoscopic features and making this case more representative. As a scalp disease, ATC must primarily be differentiated from seborrheic dermatitis and scalp psoriasis. Seborrheic dermatitis usually manifests as greasy scaling, itching, and erythema, and often occurs on the scalp, face, back, and chest in adults. The density of Malassezia spp. has long been linked to the severity of the disease [ 7 ]. Topical antifungals are the primary treatment, acting by reducing the yeast burden [ 8 ]. Silver-white scales, well-circumscribed erythema, and hair casts characterize scalp psoriasis. Besides the scalp, psoriasis-like skin lesions are often found on the knees, elbows, and body folds [ 9 ]. Trichoscopy usually shows typical vascular patterns, such as bushy red dots and loops [ 10 ], and mycological tests are generally negative. In addition to these two conditions, ATC sometimes needs to be distinguished from bacterial folliculitis, alopecia areata, and even lupus erythematosus, cutaneous tuberculosis, and secondary syphilis. Generally, the diagnosis of ATC is made by direct microscopic detection of septate hyphae and by fungal culture of scalp hair samples, but fungal incubation usually takes 2–3 weeks [ 11 ]. 
MALDI-TOF MS, a potential alternative to molecular sequencing for microbial identification whose performance can be enhanced by expanding its reference database, has been proven to identify bacterial, yeast, and mold species quickly, accurately, and economically, and it is gradually being applied to the identification of dermatophytes. Identification of dermatophytes by MALDI-TOF MS can be carried out at the initial stage of incubation, which favors early recognition of the species and selection of appropriate antifungal drugs [ 12 ]. Trichoscopy, a fast, painless, and highly sensitive tool, can reveal trichoscopic signs of ATC, including comma hairs, broken hairs, corkscrew hairs, zigzag hairs, and black dots, which may be characteristic of particular organisms and are not visible to the naked eye [ 13 ]. Histopathological examinations are rarely conducted and are used only in extremely atypical cases. The treatment principles for ATC are similar to those for children: both systemic and adjuvant topical antifungals are required. Among oral drugs, terbinafine is usually the preferred systemic agent for infections with Trichophyton spp., while itraconazole or griseofulvin is preferred for infections with Microsporum spp. [ 11 ]. In elderly patients on systemic medications, it is essential to consider underlying diseases, drug interactions, and compliance in order to reduce the occurrence of adverse drug events. Topical antifungal agents are necessary to reduce the transmission of spores in the initial phases of therapy [ 1 ]. Depending on the drug used and the patient's response, treatment duration generally ranges from 3 to 6 weeks, and mycological cure (negative microscopy and culture results) is the end point of treatment [ 1 ]. Family screening and cleaning of fomites are vital for preventing recurrence of infection with anthropophilic species, while identifying and treating animal sources is also necessary for zoophilic infections. In summary, although ATC is more common in postmenopausal women, we have reported an initially misdiagnosed case of seborrheic dermatitis-like ATC caused by T. rubrum, attributed to self-inoculation from superficial dermatomycosis, in an elderly man presenting with scaly erythema and non-cicatricial alopecia of the scalp. This case should remind dermatologists and general practitioners that ATC is not a rare disease and that close contact with family members suffering from superficial fungal infections, as well as autoinoculation, may be the primary modes of transmission. Increased awareness and vigilance regarding the different clinical forms of ATC, and timely mycological examination of suspected patients, can help with early diagnosis and appropriate management. Conflict of interest There are none. Acknowledgments We thank the patient for participating in this study and for his consent to publication. This work was funded by the National Key Research and Development Program of China (Grant No. 
2022YFC2504804, 2022YFC2504800). Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.mmcr.2023.06.001 .
REFERENCES:
1. MAYSER P (2020)
2. NENOFF P (2014)
3. LIANG G (2020)
4. PARK S (2019)
5. LOVA-NAVARRO M (2016)
6. CHEN X (2022)
7. ADALSTEINSSON J (2020)
8. SOWELL J (2022)
9. LANGLEY R (2005)
10. BRUNI F (2021)
11. FULLER L (2014)
12. DE RESPINIS S (2014)
13. DHAILLE F (2019)
|
10.1016_j.toxcx.2021.100086.txt
|
TITLE: Access to antivenoms in the developing world: A multidisciplinary analysis
AUTHORS:
- Potet, Julien
- Beran, David
- Ray, Nicolas
- Alcoba, Gabriel
- Habib, Abdulrazaq Garba
- Iliyasu, Garba
- Waldmann, Benjamin
- Ralph, Ravikar
- Faiz, Mohammad Abul
- Monteiro, Wuelton Marcelo
- de Almeida Gonçalves Sachett, Jacqueline
- di Fabio, Jose Luis
- Cortés, María de los Ángeles
- Brown, Nicholas I.
- Williams, David J.
ABSTRACT:
Access to safe, effective, quality-assured antivenom products that are tailored to endemic venomous snake species is a crucial component of recent coordinated efforts to reduce the global burden of snakebite envenoming. Multiple access barriers may affect the journey of antivenoms from manufacturers to the bedsides of patients. Our review describes the antivenom ecosystem at different levels and identifies solutions to overcome these challenges.
At the global level, there is insufficient manufacturing output to meet clinical needs, notably for antivenoms intended for use in regions with a scarcity of producers. At the national level, variable funding and deficient regulation of certain antivenom markets can lead to the procurement of substandard antivenom. This is particularly true when producers fail to seek registration of their products in the countries where they should be used, or where weak assessment frameworks allow registration without local clinical evaluation. Out-of-pocket expenses by snakebite victims are often the main source of financing for antivenoms, which results in the underuse or under-dosing of antivenoms and a preference for low-cost products regardless of efficacy. In resource-constrained rural areas, where the majority of victims are bitten, the supply of antivenom in peripheral health facilities is often unreliable. Misconceptions about the treatment of snakebite envenoming are common, further reducing demand for antivenom and exacerbating delays in reaching facilities equipped for antivenom use.
Multifaceted interventions are needed to improve antivenom access in resource-limited settings. Particular attention should be paid to the comprehensive list of actions proposed within the WHO Strategy for Prevention and Control of Snakebite Envenoming.
BODY:
1 Introduction The goal of achieving "access to safe, effective, quality and affordable essential medicines and vaccines for all" is embedded in UN Sustainable Development Goal (SDG) 3.8 and is a central component of Universal Health Coverage (UHC) ( United Nations, 2017 ). For hundreds of thousands of snakebite victims around the world, this basic human right is unattainable. Lack of access to essential medicines in resource-limited settings is multifaceted. Medicines may be unavailable (e.g., due to shortages, stock-outs or discontinued production), unsuitable (e.g., lacking specificity or suitability to programmatic requirements in a given setting), unaffordable (e.g., higher in price than the capacity or willingness to pay), and/or of low quality (e.g., lacking in potency or failing to meet appropriate standards) ( Pécoul et al., 1999 ; Wirtz et al., 2017 ). Effective strategies to enhance access to medicines in resource-limited settings need to be tailored to the specific product characteristics, the clinical context and the target population. Snakebite envenoming (SBE) can cause life-threatening medical emergencies. The effects can include severe bleeding, paralysis, kidney injury, and damage to muscle and other local tissues that can result in permanent disability, amputation of limbs, or death. It is estimated that worldwide, SBE may kill up to 137,880 persons annually ( Gutiérrez et al., 2017 ). Antivenoms have been recognized by the World Health Organization (WHO) as essential medicines since WHO released its first list of essential medicines in 1977 ( WHO Expert Committee, 1977 ). The administration of antivenom has been the foundation of treatment of SBE for nearly 125 years. Well-designed antivenoms intended to neutralize the venoms of specific species of snakes (from national or regional populations) can be highly effective in reducing mortality and morbidity, in tandem with appropriate ancillary medical care. Current antivenoms are biological preparations of animal plasma-derived antibodies that differ from one another with regard to many characteristics, primarily the specific snake species they are intended to be used for [see Box 1 ]. Although precise data are scant, there is consensus that only a small fraction of the estimated 2.7 million people envenomed after a snake bite each year have access to antivenom therapy. WHO includes SBE in its Category A list of highest-priority neglected tropical diseases (NTDs) and has launched an ambitious global Strategy for the Prevention and Control of Snakebite Envenoming, which aims to reduce deaths and disabilities by 50% and deliver 3 million effective treatments per year by 2030 ( Williams et al., 2019 ; WHO, 2019 ). In light of WHO's plan to increase the accessibility, affordability, effectiveness and safety of antivenoms, this paper examines some of the most relevant barriers to these goals in resource-limited settings of Latin America, sub-Saharan Africa, South Asia and South-East Asia, and discusses actions to overcome them. We have drawn in part on an adapted framework on access to insulin ( Beran et al., 2021 ), considering how all three stages (upstream, midstream and downstream) of the antivenom journey from the manufacturing site to the patient's bedside need to be taken into account. Specific access challenges are associated with each stage [See Fig. 1 ] and we present these in a multidisciplinary manner, taking into account a range of perspectives. 
Our analysis is restricted to antivenom access, but we also recognize the importance of access to good ancillary care alongside other components of SBE management. 2 Upstream: R&D and innovation/manufacturing 2.1 Global antivenom manufacturing landscape Historically, SBE has been considered a local issue by the majority of health authorities in affected countries. This has driven parallel manufacturing, often by public institutions, of antivenoms that were specific to local endemic species and requirements. In practice, antivenoms were developed to meet needs at national or sub-national levels, and occasionally at sub-regional level for a group of neighbouring countries. Some institutions (e.g. France's Institut Pasteur or Australia's Commonwealth Serum Laboratories, now CSL Limited) produced antivenoms with wider geographic coverage for political and strategic reasons, but commercial interest in antivenoms has generally been low. Approximately 50 antivenom manufacturers are currently listed as active by WHO ( WHO Website, a ). The scale of antivenom production by each of these organizations is unclear. Manufacturer surveys about the number of vials produced annually capture only a proportion of the true number, due to commercial-in-confidence limitations. Translation of these data into the number of effective treatments available is somewhat speculative, since few products have undergone well-designed dose-finding studies, and it is impossible to extrapolate the dose of one product to the dose that may be needed for another [See Box 2 ]. A global survey in 2020 obtained data from 22 manufacturers representing 65 distinct antivenom products currently in use ( Global Snakebite Initiative, 2020 ). Just under 6 million antivenom vials or ampoules intended for markets in low- and middle-income countries (LMICs) were produced by these companies. Based on self-reported, manufacturer-recommended starting doses, this equates to approximately 1 million doses, probably an overestimate, since many producers claim, in the absence of evidence, that their antivenom requires low dosing. In any case, this is far below the global need to treat 2.7 million cases of SBE every year. The antivenom supply crisis is, however, not uniform. It is particularly serious in sub-Saharan Africa, where previous surveys from 2007 to 2010 found that the availability of efficacious antivenom could be as low as 2.5% of the projected need ( Brown, 2012 ). Another survey in 2011 cautiously noted that annual production of antivenoms intended for use in India may have reached 2 million vials ( Whitaker and Whitaker, 2012 ), which, based on current manufacturers' dose recommendations (5–20 vials depending on severity), equates to 100,000–400,000 initial treatments. Because the number of vials that constitutes an effective clinical dose is unclear [See Box 2 ], this likely translates into even fewer complete treatments. With between 1.11 and 1.77 million snakebite cases per year in India alone ( Suraweera et al., 2020 ), and many more cases in the whole region of South Asia where these antivenoms are used, the supply of antivenoms clearly falls woefully short of current needs. Antivenom manufacturing is an exceptionally heterogeneous industry. The majority of producers, particularly in Latin America and South East Asia, are not-for-profit, low-volume public sector manufacturers that meet domestic or sub-regional needs. 
A small number of public and private sector manufacturers have larger capacity for distribution into both domestic and foreign markets ( Gutiérrez, 2019 ). While small national public sector manufacturers may be a legacy of the past, they are also strategic investments, as the lack of national production capacity leaves endemic countries vulnerable through dependence on foreign supply [See Fig. 2 ]. One such example is Nepal, where an acute shortage of supply occurred in 2012 after an Indian court banned the export of antivenom due to shortages in India itself ( Shrestha et al., 2017 ). Similarly, Papua New Guinea's (PNG) historical reliance on very expensive Australian antivenoms resulted in chronic shortages ( McGain et al., 2004 ). Since 2018 the Australian manufacturer has donated 600 vials a year to the PNG government under a partnership with the Australian government. This approach still leaves PNG vulnerable to loss of product access, and it in part contributed to the abandonment of efforts to introduce new low-cost products and pathways to future local manufacturing. Until recently, Myanmar had to import expensive Indian and Thai antivenoms of limited effectiveness against its medically most important species, owing to the limited production capacity of the country's public manufacturer ( Williams et al., 2011 ; World Health Organization Regional Office for South-East Asia, 2002 ). However, following a collaboration with Australia to upgrade production, the national antivenom company in Myanmar has been able to meet the country's needs since 2017. The small number of private antivenom manufacturers and the general lack of interest from large multinational pharmaceutical producers are indicative of the commercial realities of manufacturing nationally or regionally specific products for unpredictable and unreliable markets. The decision made by Sanofi in 2010 to cease antivenom production ( Chippaux and Habib, 2015 ) highlights how commercial imperatives trump corporate social responsibility when it comes to markets in LMICs. One of the key weaknesses in the global market is the lack of diversified manufacturing of products that meet specific needs. This fragility can lead to severe consequences when a sole producer leaves the market, or where geopolitical and other constraints render access to products difficult. In Yemen, Médecins Sans Frontières (MSF), which admits more than 600 snakebite victims to its hospitals every year, has been unable on several occasions to access the only regionally relevant polyvalent antivenom, a product manufactured by the National Antivenom and Vaccine Production Center (NAVPC), a public sector laboratory in Saudi Arabia. Logistics and bureaucratic constraints are largely to blame. Similarly, the sole manufacturer of boomslang ( Dispholidus typus ) antivenom, South African Antivenom Producers (SAVP) based in Johannesburg, South Africa, produces only small quantities of this expensive antivenom, for which demand is relatively small. Consequently, production gaps, stock shortages and logistics challenges mean that the product is unavailable to those who need it in other countries where the species is endemic ( Gutiérrez, 2019 ). Efforts are needed to maintain appropriate supply and to future-proof markets with multiple sources of efficacious and safe products for every region ( Gutiérrez, 2019 ). 
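To make the vial-to-treatment arithmetic used in the survey estimates above concrete, the following minimal sketch (in Python) converts annual vial output and a manufacturer's recommended dose range into bounds on initial treatments; the figures are those quoted in the text for India, while the function and variable names are illustrative only and not part of any cited methodology.

def treatments_from_vials(annual_vials, min_vials_per_dose, max_vials_per_dose):
    # More vials per starting dose means fewer complete treatments, so the
    # pessimistic bound divides by the maximum recommended dose and the
    # optimistic bound by the minimum; floor division counts only complete doses.
    return annual_vials // max_vials_per_dose, annual_vials // min_vials_per_dose

# India, per the figures quoted above: ~2 million vials/year at 5-20 vials
# per starting dose, giving 100,000-400,000 initial treatments per year.
low, high = treatments_from_vials(2_000_000, 5, 20)
print(f"{low:,} to {high:,} initial treatments/year")

Set against the 1.11–1.77 million snakebite cases estimated for India alone, even the optimistic bound illustrates the scale of the shortfall, and because these are starting doses, counts of complete treatments will be lower still.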
Within this ecosystem, it is not surprising that chronic shortages of antivenom are reported in regions with high burdens of SBE and a small number of producers [see Fig. 2 ]. After calls were made at the end of the 20th century to address the antivenom supply crisis in sub-Saharan Africa ( Theakston and Warrell, 2000 ), greater commercial interest emerged, as entrepreneurial manufacturers in India, Latin America and elsewhere responded by broadening their market ambitions. Over the past 20 years, some manufacturers have moved into new markets, mostly with good intentions and a desire to improve treatment options for affected communities. Some of these new products have proven to be safe and effective treatments, while others have not ( Visser et al., 2008 ; Abubakar et al., 2010 ). New antivenoms are generally produced in response to specific priorities, with the result that products are not developed for species that cause, or are perceived to be responsible for, a low burden of snakebite injuries and deaths. South Asia's case is striking: all marketed antivenoms in the sub-region have been raised against the same "Big Four" snake species ( Bungarus caeruleus , Daboia russelii , Echis carinatus and Naja naja ) for more than 70 years, yet there are many more medically important species for which specific antivenoms have never been developed. For example, no specific antivenom for Naja kaouthia is available in the South Asian market, although the species accounts for most cobra bites in Bangladesh. Similarly, specific products to neutralize venoms of hump-nosed pit vipers ( Hypnale hypnale ), the commonest cause of snakebite envenoming in Sri Lanka, do not exist. Available products are usually ineffective in such instances despite the administration of large doses, and they carry a high concomitant risk of allergic reactions ( Ralph et al., 2019 ). The development of an investigational product able to neutralize the venom of Hypnale hypnale is therefore an encouraging prospect ( Petras et al., 2011 ; Villalta et al., 2016 ). Finally, no specific antivenom exists for Bungarus walli and B. niger , now recognized as medically important krait species in Bangladesh and Nepal. The antivenoms currently marketed in these countries are prepared in India and are raised against B. caeruleus , and no preclinical or clinical evidence of paraspecific neutralization against B. walli and B. niger exists for these products ( Shrestha et al., 2017 ). It should, however, be noted that current antivenoms are generally considered poorly effective against krait venoms ( Bungarus spp.) ( Alirol et al., 2010 ), so it is uncertain whether an antivenom raised specifically against B. walli and B. niger would be any more effective than the antivenoms currently available in Nepal and Bangladesh. In addition, only one major producer of venom has been licensed to supply antivenom manufacturers in India, and these venoms, sourced from snake specimens only from Tamil Nadu in Southern India, have not historically been produced to the standards that apply to pharmaceutical starting materials. This may be one of the reasons why current antivenoms are believed to be significantly less effective in Northern India ( Ralph et al., 2019 ). Similar issues are also found outside South Asia. The tri-specific antivenom made and marketed in Indonesia by a state-owned enterprise is another example of mismatch. 
It neutralizes venoms from Indonesian populations of Calloselasma rhodostoma , Naja sputatrix and Bungarus fasciatus , but it excludes coverage against the venom of B. candidus , which is actually responsible for far more bites than B. fasciatus ( Tan et al., 2016 ; Williams et al., 2011 ). In order to improve antivenom access at a global level, the reliability, stability and security of antivenom supply lines, the elimination of monopolies and the development of a competitive, dynamic market are all needed. A sustainable future market requires product standardization to specific market needs, and consortia or networks of Good Manufacturing Practice (GMP)-compliant manufacturers who share research and development, quality control and other common costs while pursuing process optimization and rationalization goals aimed at sustainable, low-cost, high-volume, commercially viable antivenom output ( Williams et al., 2011 ). Achieving an optimized antivenom producer ecosystem requires a mixed approach, depending on technical capacities and political will in the different endemic regions. Consolidation of production effort would likely generate economies of scale and facilitate a range of improvements, although there will continue to be single-manufacturer business models, especially for very low-volume products. Collaboration between producers can lead to rapid technological improvements, product diversification and increased production: the well-coordinated network of public manufacturers in Latin America offers multiple sources of pan-specific antivenoms adapted to all sub-regions of Latin America, enables high-volume antivenom production and economies of scale, and incentivizes the sharing of reference venoms from different geographical origins ( Gutiérrez, 2019 ). Likewise, the international collaborations undertaken by public institutions and private companies in Australia, Brazil, Costa Rica, India and the United Kingdom for the production and development of antivenoms for use in other regions of the world have been critical to addressing unmet antivenom needs ( Di Fabio et al., 2021 ). 2.2 Antivenom quality A recent review observed that regulatory affairs-related antivenom issues are rarely discussed in the peer-reviewed literature ( Di Fabio et al., 2021 ). To address this regulatory gap, WHO published Guidelines for the Production, Control and Regulation of Snake Antivenom Immunoglobulins, designed to assist both manufacturers and regulatory authorities, in 2010, followed by a revision in 2017 ( WHO, 2017 ). The WHO guidelines attempt to establish a minimum set of design, production, quality control and regulatory parameters to support standardization and harmonization efforts. As shortcomings remain, a third edition is planned for 2022. For example, data on venom potency and venom yield, which could substantially inform the rational design of products with adequate neutralizing potency, are not included. There is also a strong case to be made for closer scrutiny of antivenom production by the WHO Expert Committee on Biological Standardization (ECBS) and by networks of regulatory authorities. Many regulators lack the technical capacity and human resources to adequately regulate and control the safety, effectiveness and quality of the antivenoms that are imported or exported. Future editions of the guidelines will focus more on improving these capabilities. The underlying principle for the production of all medical products is adherence to GMP and other international standards (e.g. 
Good Laboratory Practice, GLP), and antivenoms are no exception. Manufacturers with poor GMP compliance are vulnerable to serious lapses that can result in the release of inferior, defective, ineffective or even dangerous products. Pyrogenic reactions, hypersensitivity reactions and other adverse events due to the possible presence of endotoxins, protein aggregates and other impurities are often a consequence of poor GMP, notably during plasma fractionation and subsequent manufacturing processes ( Morais and Massaldi, 2009 ). Lack of standardized quality control methodology can lead to variable results, even among products compliant with the same specifications, and the lack of reference venoms established and validated by competent authorities continues to hamper reproducibility and quality assurance improvements ( León et al., 2018 ). Many products are marketed without prior independent preclinical efficacy testing ( Ainsworth et al., 2020 ) or data on safety and clinical effectiveness from well-designed, pragmatic clinical trials ( Alirol et al., 2015 ). Evidence of poor preclinical efficacy ( Calvete et al., 2016 ; Harrison et al., 2017 ) of products with high market penetration ( Potet et al., 2019 ; Brown, 2012 ) and the abandonment of other products with acceptable profiles, such as Sanofi's FAV-Afrique in 2010, have led to efforts to reshape the market and improve compliance. In 2015, WHO launched a pilot programme to develop a risk-benefit assessment procedure for antivenoms, focused on products for sub-Saharan Africa. The first product recommendation was issued in August 2018 and four other products are completing the process in 2021 ( WHO Website, b ). The comprehensive nature of the assessment process has uncovered one unscrupulous antivenom producer that presented fabricated clinical trial data in documents submitted to WHO ( Imani, 2019 ). Clearly, considerable work remains to be done to improve the quality and safety of antivenom products, especially in weakly regulated markets such as sub-Saharan Africa. WHO's risk-benefit assessment programme will continue to evaluate products for sub-Saharan Africa, South-East Asia and the Western Pacific regions, and WHO has a number of activities aimed at strengthening the capacity of regulatory agencies to evaluate and improve regulation of antivenoms. Under the WHO strategy for snakebite envenoming, a formal prequalification procedure to support procurement of quality-assured products will be developed within the next 3–4 years ( Williams et al., 2019 ). These measures are expected to increase the supply and sustainability of well-designed, safe and effective antivenoms. Achieving them will require external support and technical assistance for manufacturers who need to invest in upgrading infrastructure and manufacturing technologies in order to meet the requirements for participation in this new environment. Support from governments, funding agencies, philanthropic foundations and development banks should be considered within the context of the SDGs related to health as well as national innovation and infrastructure policy ( United Nations, 2017 ). 2.3 Innovations for improved antivenom access WHO has prioritized the need for academia and antivenom manufacturers to cooperate and align research priorities on the practical issues that must be overcome to improve the production, quality control and regulation of antivenoms, as many manufacturers lack the resources and capacity to undertake R&D activities on their own. 
Incremental modification of manufacturing processes can greatly improve the safety, effectiveness and accessibility of final products ( León et al., 2018 ). One example is the emergence of lyophilization as a means of producing antivenoms that do not depend on a cold chain, something that is often absent in remote resource-limited settings; by avoiding the need for refrigeration, lyophilization increases the range of locations where antivenoms can be deployed. Research to improve stability can further extend the effective life of some antivenoms, reducing wastage and boosting antivenom availability. A wider range of innovations to traditional antivenoms is now being developed, including new adjuvants, improved immunizing mixtures, quality control tools and assays, and toxicovenomic technology-aided antivenom design, alongside new-generation SBE treatments based on small molecules or cocktails of neutralizing antibodies ( Gutiérrez et al., 2017 ). Likewise, the availability of new rapid tests to identify the offending snake species may have an impact on antivenom designs in the future ( Knudsen et al., 2021 ). The novel antivenoms that arise from these innovations will likely have very different characteristics in terms of geographic coverage, administration routes and costs compared to conventional antivenoms. This will bring new perspectives and challenges regarding access, but ultimately it will lead to increased access to safer, more effective and affordable treatments for many more people around the globe. 3 Midstream: registration and marketing/selection, pricing and reimbursement/procurement and supply 3.1 Antivenom registration Effective regulation of medicines is designed to ensure that products in the marketplace are safe and effective and represent high-value care for specific illnesses or diseases. Registration at country level is essential for national procurement agencies to have confidence that products are safe, effective and truly adapted to the medically most important snake species in the country. Antivenoms are subject to registration by competent regulatory authorities in virtually all jurisdictions, but the degree to which the registration process independently and robustly establishes the validity and reliability of manufacturers' claims is extremely variable. The Collaborative Registration Procedure has been designed by the WHO to aid in precisely this process: to accelerate registration of prequalified essential medicines in multiple countries ( Ahonkhai et al., 2016 ). The planned introduction of prequalification procedures for antivenoms will further facilitate their entry into regional registration programmes. Countries that lack local manufacturing capacity are particularly vulnerable if the antivenoms used are unregistered and lack marketing authorization. Suppliers generally give priority to markets where products are registered, leaving other countries without reliable access. SAIMR-Polyvalent, a polyspecific antivenom produced by SAVP in South Africa and considered by some to be the current "gold standard" for treating envenomings by many African snake species, is registered only in South Africa. Exports of supplies to other countries such as Eswatini and Tanzania are limited ( Harrison et al., 2017 ; Yates et al., 2010 ; Habib et al., 2020 ; Erickson et al., 2020 ). Many manufacturers eschew the financial and administrative implications of registering products in multiple jurisdictions. 
Regional collaboration between regulatory authorities and shared registration programmes that facilitate broader market penetration across regions are needed to address this issue, following approaches that have been applied to other essential medicines. Inclusion of antivenoms in joint assessment initiatives would change the landscape considerably, inviting greater participation by manufacturers ( Arik et al., 2020 ). 3.2 Antivenom prices and financing The cost of antivenom has a major impact on accessibility and affordability. Very few countries regulate price, and a lack of adequate price regulation can result in price-gouging and profiteering by manufacturers or intermediaries involved in the supply chain. For example, the cost of antivenom in the USA can exceed US$10,000 per vial ( Theakston and Warrell, 2000 ; Boyer, 2015 ). While costs are generally lower in LMICs, antivenoms are often prohibitively expensive relative to the incomes of often impoverished snakebite victims. In such settings even treatment costing as little as US$4 per vial might be unaffordable to patients in need ( Theakston and Warrell, 2000 ). In Kenya, a study using the daily wage of the lowest-paid government worker (LPGW) as a benchmark found that even a single vial of antivenom was unaffordable, costing 2.3 days of an LPGW's wages in the public sector, 16.9 days in the private sector and 7.7 days in the mission sector ( Ooms et al., 2021 ), keeping in mind that multiple vials are needed for effective treatment. Indeed, unit price is a completely unreliable indicator of the total cost of providing an effective clinical dose. The actual price of an effective treatment needs to be based upon the number of vials/ampoules that are confirmed in independent studies to be necessary. It varies from product to product based on potency, immunoglobulin concentration, mass of injected venom (per species) and other factors [see Box 2 ]. Attempts have been made to estimate the cost of an effective dose for different products. In sub-Saharan Africa this was highly variable, ranging from US$55 to US$640 ( Brown, 2012 ). A 2020 global survey of 22 antivenom manufacturers found that the average price per starting dose equated to US$463 ( Global Snakebite Initiative, 2020 ). These numbers should be treated with caution, as the reality is that against some of the most medically important species, some antivenoms may be ineffective even at currently recommended doses [See Fig. 3 ]. Some LMICs provide antivenoms either free or highly subsidized to patients in public hospitals, but the resources allocated and volumes supplied are often inadequate. A scheme in Burkina Faso provided only enough antivenom to treat around 4% of patients ( Gampini et al., 2016 ). In many resource-limited settings, particularly in sub-Saharan Africa and South Asia, national insurance schemes do not have high coverage, are often restricted to employees in the public sector and do not extend to farming communities. This leads to enormous out-of-pocket expenses for individuals, sometimes requiring them to sell assets such as land or livestock ( Vaiyapuri et al., 2013 ). Few supplies of antivenom are provided by not-for-profit non-governmental organizations; recent examples are scarce and include Rotary International in north-eastern Nigeria, and Médecins Sans Frontières in parts of the Central African Republic, Ethiopia, South Sudan and Yemen ( den Boer, 2021 ). 
Humanitarian organizations are often best prepared to provide antivenom therapy during humanitarian emergencies, particularly natural disasters and other crises causing population displacement, which carry a high risk of both snakebite epidemics and disruption of health services and medical supply chains ( Ochoa et al., 2020 ; Igawe et al., 2020 ). Despite the challenges around high costs and affordability, antivenoms are among the most cost-effective interventions in developing countries ( Brown and Landon, 2010 ). Analysis of data from 16 west and central African countries shows antivenom therapy to be highly cost-effective ( Habib et al., 2015 ; Hamza et al., 2016 ). But these data need to be expanded. Further health economic modelling and sensitivity analyses are required to better articulate the cost-effectiveness of antivenoms and to ensure their appropriate usage across multiple geographic regions. These studies should investigate the potential gains that could be achieved by improving antivenom potency, production costs, economies of scale and quality-of-life utilities for different treatment options (including ancillary treatments when available). Furthermore, these assessments could also be used to better highlight the downstream medical costs and socio-economic impacts of untreated snakebite on victims and their families, as well as the cost-benefits of improved distribution and storage programs, staff training and improved utilization of antivenom. While domestic public resources to finance antivenom access are limited, the launch of the new WHO Roadmap for neglected tropical diseases in 2021 ( WHO, 2020 ), which reiterates the ambitious objective of cutting snakebite mortality in half by 2030, will hopefully attract additional financial support for preventing and treating SBE from both donor and affected country governments, as well as philanthropic foundations. 3.3 Antivenom procurement and supply Procurement processes can vary substantially depending on whether supply is provided by a national public sector manufacturer operating as a monopoly, by several national public and private sector manufacturers competing against each other, or, where there is no national manufacturer, by imports from overseas. All countries in sub-Saharan Africa, except South Africa, belong to the last category. To ensure that products are fit for purpose, procurement agencies need to carefully define product specifications based on expert consensus. Where products are registered by a national regulatory authority, or where WHO recommendations exist for particular products, this process may be somewhat easier, but when no products are registered, or in the absence of evidence-based recommendations, procurement of appropriate products may be especially challenging. Purchasing and distribution of inappropriate antivenoms that were not specific to the venomous species endemic in the country have been reported in West Africa, PNG ( Warrell, 2008 ) and more recently in Ethiopia ( den Boer, 2021 ), suggesting that antivenoms may be particularly vulnerable to violations of Good Distribution Practices by wholesale distributors. Misunderstanding of what constitutes effective treatment with antivenom [See Box 2 ] creates another challenge for national procurement agencies. Procurement models and supply contracts should be restricted to dealing with quantities in terms of "effective treatments" or "effective doses", rather than "single vial/ampoule doses". 
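As a minimal illustration of why this framing matters, the following sketch (Python) expresses a procurement order and a patient's out-of-pocket burden in effective doses rather than single vials; the eight-vial dose, US$60 vial price and US$5 daily wage are invented placeholder values, not figures from the studies cited above.

def effective_treatments(vials_procured, vials_per_effective_dose):
    # A vial order buys far fewer complete treatments than vials;
    # floor division counts only full effective doses.
    return vials_procured // vials_per_effective_dose

def wage_days_per_treatment(price_per_vial, vials_per_effective_dose, daily_wage):
    # Affordability metric in the spirit of the Kenyan LPGW study above:
    # cost of one full effective dose expressed in days of wages.
    return (price_per_vial * vials_per_effective_dose) / daily_wage

# Hypothetical example: 10,000 vials of a product needing 8 vials per dose
# yields only 1,250 effective treatments, not 10,000.
print(effective_treatments(10_000, 8))
# At a hypothetical US$60 per vial and US$5 daily wage, one effective dose
# costs 96 days of wages, eight times the corresponding single-vial figure.
print(wage_days_per_treatment(60.0, 8, 5.0))

Writing contracts and affordability analyses in such dose-denominated units, rather than vial counts, avoids the systematic overstatement of both supply and affordability that single-vial accounting produces.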
Well-designed clinical dose-finding and safety studies are essential to determine what volume of a product constitutes an "effective clinical dose", and they should be a prerequisite to product registration and marketing authorization. Within this context, LMICs would benefit from multilateral antivenom procurement mechanisms. Examples exist for other products: the Strategic Fund for the acquisition of Essential Medicines and the Revolving Fund of Vaccines of the Pan American Health Organization consolidate procurement on behalf of participating countries in the Americas, and the International Coordinating Group on Vaccine Provision manages stockpiles of vaccines for prompt delivery in outbreak response ( DeRoeck et al., 2006 ; Yen et al., 2015 ). Coordinated demand forecasts and pooled procurement of specific antivenoms at continental level could increase supply security and optimize pricing. Such a mechanism would, however, depend on continuous and sustainable provision of antivenoms. Long-term contracts could entice manufacturers to commit to producing their products, particularly public sector manufacturers that depend on a government budget to support a periodic production programme. Along those lines, WHO has begun work to establish a stockpile of effective antivenoms for sub-Saharan Africa. 4 Downstream: prescribing and dispensing/use 4.1 Local antivenom availability and geographical accessibility Snakebite envenoming is a time-critical medical emergency. A rapid response with access to effective treatment is essential in the first hours after the bite. Delayed treatment is a recognized risk factor for complications and death ( Feitosa et al., 2015a , 2015b ; da Silva Souza et al., 2018 ; Iliyasu et al., 2015 ). Unfortunately, antivenom availability and accessibility remain distant possibilities for large proportions of at-risk populations around the world. Variable policies on use, distribution and clinical environments restrict access by limiting the number and types of health facilities where antivenom can be held and used. Rather than being available at primary health centres, antivenoms are often only available in secondary or tertiary referral centres under medical prescription ( Habib et al., 2020 ). Surveys in Kenya ( Okumu et al., 2019 ; Ooms et al., 2021 ), Uganda and Zambia ( Ooms et al., 2020 ) paint a depressing picture. In Kenya antivenom was available at one-third of healthcare facilities, and stock-outs were reported even in large urban referral hospitals such as that in Kisumu. Only 4.2% and 7.6% of healthcare workers in Uganda and Zambia, respectively, reported available antivenom stock when surveyed. The situation is equally bleak in parts of South-East Asia. A community-based survey on snakebite incidence in Lao PDR highlighted the lack of antivenom in district and provincial hospitals ( Vongphoumy et al., 2015 ). Similarly, in Vietnam, antivenom products can only be accessed from certain prominent tertiary hospitals in the Mekong Delta and are largely unavailable in district and provincial hospitals in some provinces of central Vietnam ( Blessmann et al., 2018 ). In many countries access to antivenom is restricted to facilities staffed by doctors, where resources for the effective management of complications such as adverse drug reactions, airway and breathing emergencies, kidney injury and local tissue injury are available. This high bar for initial treatment can be a barrier to access, especially in rural settings. 
In India, there have been calls to decentralize access to antivenom to every primary health centre in order to drastically improve geographical accessibility; however, this will require strengthening of the health workforce in these facilities, so that an officer is available during night hours and all workers are properly trained in SBE management ( Bawaskar et al., 2020 ). In Ecuador and Tanzania, successful management of SBE was achieved in severely resource-constrained areas by improving access to treatment in nurse-led clinics ( Gaus et al., 2013 ; Yates et al., 2010 ). In Nigeria, decentralization of antivenom supply has been proposed through a "hub-and-spoke" distribution and utilization network model, wherein rural facilities serve as satellites or spokes linked to major hospitals in urban hubs for referrals, support, training and antivenom supplies ( Habib, 2013 ). Lack of communication can also lead to tragedy. During a snakebite outbreak in 2016 in Donga, Nigeria, most victims and their relatives were unfortunately not aware of the free antivenom provided at the referral hospital in the city and did not seek care accordingly ( Igawe et al., 2020 ). Many victims of snakebite must travel long distances to access even primary health care, and the distance to facilities where antivenom is available can be even greater ( Feitosa et al., 2015a , 2015b ; da Silva Souza et al., 2018 ; Schioldann et al., 2018 ). Shortages of qualified health workers able to administer antivenoms and provide ancillary treatments compound the situation in many settings. In the Brazilian Amazon, transport to health facilities may involve several different means of travel, significant time delays and sometimes exposure to dangerous conditions on land, on water and in the air ( Cristino et al., 2021 ). Even when antivenom is available locally, some patients may perceive that the quality of care will be better at more distant facilities, and they travel hundreds of kilometres further away (to Manaus, for example), often suffering poorer outcomes as a result ( Guimarães et al., 2018 ; Cristino et al., 2021 ). Unfortunately, patient waiting times, ineffective triage and workforce shortages can lead to delays in access to treatment even when a patient arrives at a health facility ( Bajpai, 2014 ; Sharma, 2015 ; Simpson, 2007 ; Islam and Biswas, 2014 ). In India, distances to antivenom treatment centres are generally shorter, but snakebite victims in rural areas sometimes travel over 100 km to access basic healthcare ( Singh and Badaya, 2014 ). Free ambulance services established through public-private partnerships sometimes provide antivenom for critically ill patients during transport, but the impact of these initiatives is hindered by a shortage of services in most rural areas, suboptimal response times or non-attendance, and paramedics inadequately trained in standardized resuscitation protocols ( Bharti and Singh, 2015 ; Ralph et al., 2019 ). In Nepal, community education and motorcycle transport have been used to shorten the delay between bite and access to antivenom treatment, successfully reducing case fatality rates from 10.5% to 0.5% ( Sharma et al., 2013 ). Solving the challenges posed by physical geography requires the use of tools that improve our understanding of the factors influencing antivenom accessibility. 
Geographic Information Systems (GIS) in particular have emerged as powerful technological advances for the measurement of geographic access to healthcare over the past three decades ( Neutens, 2015 ; Delamater et al., 2012 ). One particularly well-suited approach for modeling timely physical accessibility to health services uses least-cost paths informed by local travel constraints (e.g., terrain, road network, barriers to movement, and modes and speeds of transport) ( Ray and Ebener, 2008 ). It is currently being used to evaluate access to snakebite-treating facilities in Cameroon and Nepal ( Alcoba et al., 2021 ). When SBE risk is not uniformly distributed in a region of interest, modeling the vulnerability of the population to SBE can be instrumental in helping to plan antivenom distribution and referral networks ( Longbottom et al., 2018 ). In Costa Rica, high-resolution geospatial data, snakebite incidence data, locations of health facilities and ambulance stations, and data on the geographical extent of habitat suitable for Bothrops asper enabled the identification of areas in need of improved access to antivenom ( Hansson et al., 2013 ). Prioritizing the collection of geospatial data on snake ecology and distribution ( Pintor, 2021 ) and developing innovative methods to collect field data may enable improved prediction of snakebite hotspots ( Geneviève et al., 2018 ; Goldstein et al., 2021 ). Central to all these issues is the need to improve health systems and infrastructure, particularly in rural areas, and to ensure that UHC is accessible and affordable for all. 4.2 Rational use of antivenom According to the WHO, rational use of medicines requires that "patients receive medications appropriate to their clinical needs, in doses that meet their own individual requirements, for an adequate period of time, and at the lowest cost to them and their community" ( WHO Website, c ). In this context, a snakebite patient's real need for antivenom must first be considered, especially since not all snakebites lead to envenoming, and not all cases of envenoming are serious enough to warrant antivenom. Snakebites that do not require antivenom treatment, such as dry bites, in which no venom is injected, and bites caused by snakes of no medical importance, may represent up to 60% of all snakebites ( WHO, 2019 ). In these cases, the mistaken administration of antivenom provides no clinical benefit to the patient but may still lead to early or late adverse reactions. Inappropriate clinical judgements about antivenom treatment have been documented in several countries, leading to unnecessary antivenom usage ( Fung et al., 2009 ; da Silva et al., 2019b ). At the same time, more rational use has also been reported after the implementation of new treatment protocols, notably in Bangladesh and India ( Harris et al., 2010 ; Ghosh et al., 2008 ). Furthermore, it must never be forgotten that SBEs are time-critical medical emergencies. The risk of possible overuse of antivenoms needs to be balanced against the imperative that safe, effective and affordable antivenoms be available as close to the patient as possible, in adequate doses that can be administered early. Conversely, inadequate treatment with antivenom results in incomplete neutralization of toxins and poor clinical outcomes. 
The cost of antivenoms was blamed for the under-dosing of patients in Cameroon, while in Myanmar rationing due to product shortages meant that less severe cases were administered lower doses ( Einterz and Bates, 2003 ; Alfred et al., 2019 ). In the Amazon, 52% of patients with severe envenoming caused by Bothrops spp. and 82% of those with severe Lachesis spp. envenoming were under-dosed ( Feitosa et al., 2015a , 2015b ). In this region, increased lethality was significantly associated with lack of antivenom administration (53.5% of fatal cases) and antivenom underuse (63.3% of fatal cases in which antivenom was used) ( da Silva Souza et al., 2018 ). Antivenom under-dosing was more common in indigenous populations than in urban and countryside populations, although antivenom is available free of charge across the country ( Fan and Monteiro, 2018 ; Monteiro et al., 2020 ). Several factors can lead to under-dosing, especially inferior potency and low immunoglobulin content in poorly designed or low-quality products. The variability of products, even from one batch to another, can result in considerable uncertainty when it comes to estimating dose at the bedside [See Box 2 ]. In countries where substandard antivenoms have dominated the market for decades, the confidence of health care workers is eroded, which may lead to antivenom underuse. Health workers in remote settings may be apprehensive about treating snakebites for fear of not being able to manage antivenom-associated adverse reactions should they occur ( Ralph et al., 2019 ). Knowledge about snakebite management, antivenom use and the management of antivenom-associated adverse reactions is often poor ( Michael et al., 2018 ; Taieb et al., 2018 ; Bala et al., 2020 ; Sapkota et al., 2020 ; Ameade et al., 2021 ). Developing new or improved treatment guidelines, supporting training programs for public and private health workers and improving the quality, safety and effectiveness of antivenoms are key steps towards optimising the use of antivenom and achieving consistent, improved outcomes. 4.3 Community perceptions of antivenom The first pillar of the WHO snakebite envenoming strategy is engagement with, and the empowerment of, affected communities. In LMIC settings there is a multitude of cultural, social and economic barriers that contribute to delayed access, and it is important to consider these contextual factors in relation to the patient. Large proportions of patients choose traditional or faith healers ahead of allopathic medicine, with a range of associated outcomes ( Sloan et al., 2007 ; Lam et al., 2016 ; Alcoba et al., 2020 ). Plant-, animal- and mineral-based therapies, blessings and prayers, as well as self-medication with orthodox medicines, are commonly used by patients before they decide to seek out health services ( Pierini et al., 1996 ; da Silva et al., 2019a ). The use of these self-care practices is recorded across the world as a cause of late medical assistance and poor outcomes in SBE ( da Silva et al., 2019a ; Schioldann et al., 2018 ; Mahmood et al., 2019 ; Longkumer et al., 2017 ; Alirol et al., 2010 ). The fact that snakebite victims resort to traditional medicine does not necessarily mean that they mistrust modern medicine. In Kenya, 60% of community members who were interviewed believed that antivenom works for the treatment of snakebite and 91% believed in the effectiveness of medicines in general ( Ooms et al., 2021 ). 
But inadequate knowledge about appropriate first-aid methods is widespread ( da Silva et al., 2019a ; Silva et al., 2020 ; Michael et al., 2011 ), and the high costs of treatment also influence decision-making. In Bangladesh, envenomed snakebite victims requiring antivenom spent more time with traditional healers than victims of non-venomous snakebite ( Harris et al., 2010 ). Perceptions of the seriousness of SBE often vary, with some victims seeking help only after more severe symptoms develop (e.g. onset of unbearable pain, disfiguring oedema, bleeding or decreasing functional mobility) ( Cristino et al., 2021 ). On some occasions, snakebite victims resisted seeking medical assistance, and this resistance was overcome only by pressure from family members. For some traditional populations, the displacement of an indigenous patient to a hospital setting to receive antivenom after a snakebite is a radical event ( Guimarães, 2015 ). Engagement with traditional healers at community level is needed to reduce the occurrence of harmful care practices and to encourage prompter referral to a healthcare facility equipped with antivenom. 5 Conclusion SBE is a neglected disease that is most prevalent in rural areas of LMICs, where health infrastructure is often deficient, and antivenoms are complex, bespoke biotherapeutics supplied through inconsistent and fragmented markets challenged by variable regulatory compliance. There are multiple barriers to antivenom access, but they can be overcome with appropriate measures [See Fig. 4 ]. We hereafter propose a list of concrete recommendations requiring a fully financed, coordinated response [See Box 3 ]. WHO has estimated that programme costs for SBE will be US$136.76 million between 2019 and 2030. This does not include the cost of commodities such as antivenoms, other treatments and medical consumables, or investments by countries themselves. Incorporation of SBE into the national health plans of affected countries, along with appropriate resource allocation and investment across a broad range of activities, is essential. Modernized infrastructure, incorporating new technology, and pragmatic collaboration between academia, manufacturers and government could reduce the costs of antivenom production and drastically improve quality, production capacity, sustainability and clinical effectiveness. North-South and South-South models of technology transfer should be pursued. With almost 140,000 deaths and hundreds of thousands of disabilities caused each year, SBE is a threat to the health and economic growth of LMIC communities in all parts of the world. Concerted action, led by WHO and strongly supported by governments, NGOs, donors and the pharmaceutical community, is imperative. Credit author statement Julien Potet : design of the study, collection of information, preparation of the first draft, editing of the first draft, revision of the final draft, validation of the final draft. David Beran : design of the study, collection of information, editing of the first draft, revision of the final draft, validation of the final draft. Nicolas Ray : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Gabriel Alcoba : collection of information, editing of the first draft, validation of the final draft. 
Abdulrazaq Garba Habib : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Garba Iliyasu : collection of information, editing of the first draft, validation of the final draft. Benjamin Waldmann : collection of information, editing of the first draft, validation of the final draft. Ravikar Ralph : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Mohammad Abul Faiz : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Wuelton Marcelo Monteiro : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Jacqueline de Almeida Gonçalves Sachett : collection of information, editing of the first draft, validation of the final draft. Jose Luis di Fabio : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. María de los Ángeles Cortés : collection of information, preparation of the first draft, editing of the first draft, validation of the final draft. Nicholas I. Brown : collection of information, preparation of the first draft, editing of the first draft, revision of the final draft, validation of the final draft. David J. Williams : design of the study, collection of information, preparation of the first draft, editing of the first draft, revision of the final draft, validation of the final draft. Ethical statement On behalf of the group of co-authors, Julien Potet confirms that the manuscript “Access to antivenoms in the developing world: A multidisciplinary analysis” was prepared following standard ethical guidelines for scientific publications. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements Nicolas Ray acknowledges the support of the Swiss National Science Foundation (SNSF) [project number 315130_176271 , website: http://p3.snf.ch/project-176271 ]. Abdulrazaq Garba Habib and Garba Iliyasu are members of the African Snakebite Research Group (ASRG) project and the Scientific Research Partnership for Neglected Tropical Snakebite (SRPNTS) that are supported by NIHR (UK) and DfID (UK) respectively. Wuelton Monteiro has project funding from the Brazilian Ministry of Health (no. 733781/19–035 ).
REFERENCES:
1. ABUBAKAR I (2010)
2. AHONKHAI V (2016)
3. AINSWORTH S (2020)
4. ALCOBA G (2020)
5. ALCOBA G (2021)
6. ALFRED S (2019)
7. ALIROL E (2015)
8. ALIROL E (2010)
9. AMEADE E (2021)
10. ARIK M (2020)
11. BAJPAI V (2014)
12. BALA A (2020)
13. BAWASKAR H (2020)
14. BERAN D (2021)
15. BHARTI O (2015)
16. BLESSMANN J (2018)
17. BOYER L (2015)
18. BROWN N (2010)
19. BROWN N (2012)
20. CALVETE J (2016)
21. CHIPPAUX J (2015)
22. CRISTINO J (2021)
23. DA SILVA A (2019)
24. DA SILVA A (2019)
25. DA SILVA SOUZA A (2018)
26. DELAMATER P (2012)
27. DEN BOER M (2021)
28. DE ROECK D (2006)
29. DI FABIO J (2021)
30. EINTERZ E (2003)
31. ERICKSON L (2020)
32. FAN H (2018)
33. FEITOSA E (2015)
34. FEITOSA E (2015)
35. FUNG H (2009)
36. GAMPINI S (2016)
37. GAUS D (2013)
38. GENEVIEVE L (2018)
39. GHOSH S (2008)
40. GLOBAL SNAKEBITE INITIATIVE (2020)
41. GOLDSTEIN E (2021)
42. GUIMARAES S (2015)
43. GUIMARAES W (2018)
44. GUTIERREZ J (2012)
45. GUTIERREZ J (2019)
46. GUTIERREZ J (2017)
47. HABIB A (2013)
48. HABIB A (2015)
49. HABIB A (2020)
50. HAMZA M (2016)
51. HANSSON E (2013)
52. HARRIS J (2010)
53. HARRISON R (2017)
54. IGAWE P (2020)
55. ILIYASU G (2015)
56. IMANI (2019)
57. ISLAM A (2014)
58. KNUDSEN C (2021)
59. LAM A (2016)
60. LEON G (2018)
61. LONGBOTTOM J (2018)
62. LONGKUMER T (2017)
63. MAHMOOD M (2019)
64. MCGAIN F (2004)
65. MICHAEL G (2018)
66. MICHAEL G (2011)
67. MONTEIRO W (2020)
68. MORAIS V (2009)
69. NEUTENS T (2015)
70. OCHOA C (2020)
71. OKUMU M (2019)
72. OOMS G (2021)
73. OOMS G (2020)
74. PECOUL B (1999)
75. PETRAS D (2011)
76. PIERINI S (1996)
77. PINTOR A (2021)
78. PLA D (2019)
79. POTET J (2019)
80. RALPH R (2019)
81. RAY N (2008)
82. SAPKOTA S (2020)
83. SCHIOLDANN E (2018)
84. SHARMA D (2015)
85. SHARMA S (2013)
86. SHRESTHA B (2017)
87. SILVA J (2020)
88. SIMPSON I (2007)
89. SINGH S (2014)
90. SLOAN D (2007)
91. SURAWEERA W (2020)
92. TAIEB F (2018)
93. TAN C (2016)
94. THEAKSTON R (2000)
95. UNITED NATIONS (2017)
96. VAIYAPURI S (2013)
97. VILLALTA M (2016)
98. VISSER L (2008)
99. VONGPHOUMY I (2015)
100. WARRELL D (2008)
101. WATSON J (2020)
102. WHITAKER R (2012)
103. WHO (2020)
104. WHO (2019)
105. WHO (2017)
106. (1977)
107. WHO WEBSITE A
108. WHO WEBSITE B
109. WHO WEBSITE C
110. WILLIAMS D (2019)
111. WILLIAMS D (2011)
112. WILLIAMS D (2018)
113. WIRTZ V (2017)
114. (2002)
115. YATES V (2010)
116. YEN C (2015)
|
10.1016_j.jocmr.2024.100248.txt
|
TITLE: Kiosk 10R-TA-03 Acute Sheetlet Mobility Predicts Adverse Left Ventricular Remodelling After STEMI
AUTHORS:
- Rajakulasingam, Ramyah
- Ferreira, Pedro
- Scott, Andrew
- Khalique, Zohya
- Azzu, Alessia
- Dwornik, Maria
- Conway, Miriam
- Falaschetti, Emanuela
- Cheng, Kevin
- Hammersley, Daniel
- Cantor, Emily
- Tindale, Alexander
- Beattie, Catherine
- Banerjee, Arjun
- Wage, Ricardo
- Soundarajan, Rajkumar
- Dalby, Miles
- Nielles-Vallespin, Sonia
- Pennell, Dudley
- Silva, Ranil de
ABSTRACT: No abstract available
BODY:
Background: Dynamic alterations in myocardial microstructure may determine adverse left ventricular (LV) remodelling and altered cardiac function after acute ST-elevation myocardial infarction (STEMI). Biphasic diffusion tensor cardiovascular magnetic resonance (DT-CMR) quantifies dynamic changes in laminar microstructures termed sheetlets (E2A), which reorient from more wall-parallel in diastole (E2A DIA) towards wall-perpendicular in systole (E2A SYS) during myocardial thickening. We determined whether acutely altered regional microstructural dynamics detected using biphasic DT-CMR a) predict adverse LV remodelling; b) associate with myocardial injury and contractile dysfunction.
Methods: Biphasic DT-CMR was performed in STEMI patients at 4 days (n=70) and 4 months (n=66) following reperfusion, and healthy volunteers (HVOLs) (n=22). Adverse LV remodelling was defined as an increase in left ventricular end-diastolic volume ≥20% at 4 months.
Results: In the acute infarct zone, E2A DIA was raised and E2A SYS reduced, resulting in impaired E2A mobility (∆E2A) compared to adjacent and remote zones, and HVOLs (all p < 0.001). With increasing LGE volume, acute E2A SYS and ∆E2A in the infarcted myocardium significantly reduced, whereas E2A SYS and ∆E2A in the remote zone increased. E2A mobility and myocardial strain were significantly correlated. At 4 months, E2A DIA decreased in all regions (all p < 0.001), whereas E2A SYS increased globally (p=0.008), resulting in a significant increase in E2A mobility in all regions (all p < 0.002). By multivariate analysis, acute global E2A mobility was the only independent predictor of adverse LV remodelling (odds ratio 0.77; 95% CI: 0.63-0.94; p=0.010). Using a threshold of < 26.4°, acute global E2A mobility offered 80% sensitivity and 84% specificity, with an area under the curve of 0.86 (p < 0.001) in predicting adverse LV remodelling.
Conclusion: Biphasic DT-CMR detects aberrant microstructural dynamics in acute STEMI in diastole and systole, including a unique pattern of sheetlet dynamics in the infarct zone and their compensatory changes in the remote myocardium of large infarctions. Impaired acute sheetlet mobility was the only independent predictor of adverse LV remodelling at 4 months. Biphasic DT-CMR offers mechanistic insights into contractile dysfunction in STEMI which are not identifiable using conventional imaging techniques.
Author Disclosure: R Rajakulasingam : Nothing to disclose; P Ferreira : N/A; A Scott : Siemens; Z Khalique : N/A; A Azzu : N/A; M Dwornik : N/A; M Conway : N/A; E Falaschetti : N/A; K Cheng : N/A; D Hammersley : N/A; E Cantor : N/A; A Tindale : N/A; C Beattie : N/A; A Banerjee : N/A; R Wage : N/A; R Soundarajan : N/A; M Dalby : N/A; S Nielles-Vallespin : N/A; D Pennell : N/A; R de Silva : N/A
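As a hedged illustration only (not the study's analysis code), the following Python sketch shows how a continuous predictor such as acute global E2A mobility can be evaluated against a binary remodelling outcome with an ROC analysis and a fixed cutoff; it assumes scikit-learn is available, and all array names and values are hypothetical placeholders, with only the 26.4° threshold taken from the abstract.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
remodelling = rng.integers(0, 2, size=70)            # hypothetical outcome labels
e2a_mobility = np.where(remodelling == 1,            # hypothetical predictor values
                        rng.normal(22.0, 4.0, 70),
                        rng.normal(30.0, 4.0, 70))

# Lower mobility predicts remodelling, so negate the score for the ROC.
auc = roc_auc_score(remodelling, -e2a_mobility)

threshold = 26.4                                     # degrees, as in the abstract
pred = (e2a_mobility < threshold).astype(int)
sensitivity = (pred[remodelling == 1] == 1).mean()
specificity = (pred[remodelling == 0] == 0).mean()
print(f"AUC={auc:.2f}  sens={sensitivity:.2f}  spec={specificity:.2f}")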
REFERENCES:
No references available
|
10.1016_j.geogeo.2023.100206.txt
|
TITLE: Precise determination of Sr and Nd isotopic compositions of Chinese Standard Reference samples GSR-1, GSR-2, GSR-3 and GBW07315 by TIMS
AUTHORS:
- Guo, Kun
- Yu, Jimin
- Fan, Di
- Hu, Zhifang
- Liu, Yanli
- Zhang, Xia
- Zhang, Yu
ABSTRACT:
In this contribution, the Sr and Nd isotopic compositions of four Chinese National Standards are investigated to verify whether they meet the requirements of international reference materials. Powdered samples were digested using conventional HF and HNO3 acid dissolution protocols, and chemical purification was performed using Sr Spec, AG 50W-X12 and LN resins. All measurements were carried out on a thermal ionization mass spectrometer (TIMS) at Laoshan Laboratory. We report the Sr and Nd isotopic compositions of the four standards as follows:
GSR-1 (granite): 87Sr/86Sr = 0.738296 ± 0.000019 (2SD, n = 30), 143Nd/144Nd = 0.512210 ± 0.000012 (2SD, n = 17)
GSR-2 (andesite): 87Sr/86Sr = 0.704929 ± 0.000012 (2SD, n = 45), 143Nd/144Nd = 0.512377 ± 0.000014 (2SD, n = 17)
GSR-3 (basalt): 87Sr/86Sr = 0.704093 ± 0.000010 (2SD, n = 52), 143Nd/144Nd = 0.512885 ± 0.000018 (2SD, n = 17)
GBW07315 (marine sediment): 87Sr/86Sr = 0.710261 ± 0.000011 (2SD, n = 15), 143Nd/144Nd = 0.512359 ± 0.000011 (2SD, n = 17)
These results represent an extensive examination of the four Chinese geochemical rock materials. The long-term (half-year) results support their suitability as reference materials for Sr and Nd isotopic compositions.
BODY:
1 Introduction The 87 Rb- 87 Sr and 147 Sm- 144 Nd systems have been widely used in geochemistry, geochronology, and cosmochemistry since the last century ( De Paolo and Wasserburg, 1979 ; Guo et al., 2018 ; Wang et al., 2022 ). With the development of mass spectrometry (MS), especially multi-collector thermal ionization mass spectrometry (TIMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), the accuracy and precision of Sr and Nd isotopic analyses have been greatly improved ( Weis et al., 2006 ; Yang et al., 2010 ; Balaram et al., 2022 ). The use of Sr Spec and LN resins makes the chemical separation of Sr and Nd simple and efficient ( Chu et al., 2014 ; Li et al., 2015 ; Jweda et al., 2016 ). It is well accepted that standard reference materials (SRM) are essential in isotope analyses for quality control of measurement protocols and for inter-laboratory data comparison during chemical separation and MS analysis ( Cheng et al., 2015 ). Generally, standards from the United States Geological Survey (USGS), the National Institute of Standards and Technology (NIST), and the Japanese Geological Survey (JGS) are the most commonly used igneous rock standard materials ( Weis et al., 2006 ). However, the amount of SRM is limited, and problems are emerging as some materials, such as USGS BHVO-1 (Basalt, Hawaiian Volcano Observatory), USGS BCR-1 (Basalt, Columbia River) and the zircon standard 91500, are being exhausted ( Cheng et al., 2015 ). The Chinese National Standards granite GSR-1, andesite GSR-2, basalt GSR-3 and marine sediment GBW07315 are commonly used as standards for major and trace element analyses in Chinese laboratories ( Wong et al., 2009 ; Lee et al., 2010 ; Zhang et al., 2012 ), and the reported data show consistent results among laboratories. Furthermore, these materials are homogeneous and abundant. Therefore, they have great potential to be used as new isotope standards, and a few researchers have assessed their feasibility as standards for Hf, Mg and Fe isotopes ( Cheng et al., 2015 ; Chen et al., 2020 ; Wang et al., 2022 ). In this study, we carry out a systematic investigation to assess the suitability of using these four standards as new SRM for Sr and Nd isotopic analyses. 2 Experimental materials 2.1 Reagents All AR grade acids (HNO 3 , HCl and HF) are purified once or twice using a Savillex™ sub-boiling distillation system. GR grade H 3 PO 4 (≥85 wt% in H 2 O) is purified using cation-exchange chromatographic methods. Deionised water is purified with a Milli-Q (Millipore) system (resistivity: 18.2 MΩ·cm at 25 °C) and used for diluting concentrated acids. 2.2 Sr-Nd standard solutions NIST NBS-987 and JNdi-1 are used as standards for Sr and Nd analyses, respectively, in this study. The international Sr reference material (SrCO 3 , 99.999% purity) is diluted to 1000 ppm with 2 wt% HNO 3 . The international Nd reference material (Nd 2 O 3 , 99.999% purity) is diluted to 1000 ppm with 2 wt% HNO 3 . 2.3 Resin and column Columns (8 mm i.d.) packed with 0.8 ml Sr Spec resin produced by the Eichrom company are used to separate Sr from the sample matrix. Cation-exchange columns packed with 2 ml 200–400 mesh Bio-Rad AG 50W-X12 resin are used to separate the REE from the sample matrix. Columns packed with 1.6 ml Eichrom LN resin are used to separate Nd from the other REEs.
2.4 Rock standard samples Rock powders of four certified reference materials, GSR-1, GSR-2, GSR-3 and GBW07315, are used; their geochemical compositions are given in Table 1 . BCR-2 is processed as a reference material to control all procedures. GSR-1 is a gray medium-grained biotite granite from Chenzhou, Hunan, China, that formed in a contact zone with carbonate rocks with abundant mineralization of tungsten, tin, and molybdenum ( Xie et al., 1985 ; Cheng et al., 2015 ). GSR-2 is a hornblende andesite sample obtained in the vicinity of the Meishan iron mine, Nanjing; feldspar is the principal mineral in GSR-2 ( Xie et al., 1985 ). GSR-3 is an olivine basalt obtained from Zhangjiakou, Hebei, and its principal minerals are olivine, plagioclase, magnetite and augite ( Xie et al., 1985 ). GBW07315 is a marine sediment taken from the Clarion-Clipperton zone in the Eastern Pacific Ocean Basin by the "Haiyang 4" during the DY85-1 cruise in 1991 ( Wang et al., 1998 ). This sample mainly consists of siliceous ooze, with main minerals including illite, kaolinite, feldspar, biogenic calcite, quartz and halite. 2.5 W and Re filaments The efficiency of ionization is related to the work function and temperature of the filament and to the first ionization potential of the ionized element. Hence, filaments with high work functions and melting points are preferred; tungsten and rhenium meet these requirements. A tungsten filament, produced by the H. Cross Company (0.03 mm thick, 0.7 mm wide, 99.95% pure), is used for Sr isotope analyses. A rhenium filament, produced by the H. Cross Company (0.03 mm thick, 0.7 mm wide, 99.995% pure), is used for Nd isotope analyses. A single W filament and double Re filaments are used for Sr and Nd isotopic analyses, respectively. 2.6 Activators Elements with high ionization potentials are difficult to ionize using regular sample loading methods. Based on our experience, the efficiency of ionization can be enhanced by mixing a small quantity of a pure material with the sample on the filament; this pure material is generally known as an activator. Sr isotope analysis uses TaF 5 as an activator. About 0.2 g TaCl 5 is added into a Teflon-PFA vial, and 3 ml deionised water is subsequently added to induce hydrolysis of the TaCl 5 . Then, 0.1 ml concentrated H 3 PO 4 , boiled concentrated HF and HNO 3 , and 5 ml deionised water are added into the vial. After screwing on the lid, the vial is placed on a hot plate at 120 °C to ensure equilibrium between the hydrolysate and the liquid. Nd isotope analysis uses 10 wt% H 3 PO 4 , diluted with deionised water, as an activator. 3 Chemical separation All chemical pretreatment work was carried out in a Class 100 clean laboratory environment at the Marine Carbon Cycle Research Center, Laoshan Laboratory. 3.1 Sample digestion About 50 mg of sample powder of each rock standard is weighed and transferred into a 25 ml Teflon vial. Each standard is digested with 1 ml of 22 N HF on a hot-plate (120 °C), and the excess HF and volatile SiF 4 are then expelled by evaporating the sample to dryness. After adding 1.5 ml 22 N HF and 1 ml 12 N HNO 3 , the capped Teflon vials are placed in steel jackets in an oven at 190 °C. After three days, the vials are taken from the steel jackets and evaporated to dryness on the hot-plate (160 °C). In the following step, the samples are dissolved in 1 ml 12 N HNO 3 at 160 °C and evaporated to dryness, and this step is repeated thrice.
Finally, the samples are dissolved in 1 ml 3.5 N HNO 3 at 120 °C for two hours and transferred to centrifuge tubes for chemical separation. The total chemical procedure blanks for Sr and Nd are lower than 120 pg and 50 pg, respectively. 3.2 Column chemistry Single-step and two-step column separation schemes are used for Sr and Nd separation, respectively, as shown in Fig. 1 and Table 2 . The columns are washed with 10 ml deionised water three times before loading the resins. The Sr Spec resin is loaded into the column and washed successively with 10 ml 6 N HCl, 10 ml deionised water and 10 ml 3.5 N HNO 3 . The elution profile for BCR-2 is shown in Fig. 2 . The dissolved sample in 1 ml 3.5 N HNO 3 is loaded onto the Sr Spec resin column. Matrix elements, such as Mg, Al, K, Na, Ca, Ti, Cu and Rb, are then eluted by washing with 10 ml 3.5 N HNO 3 . If the sample has a relatively high concentration of Rb, the above washing is repeated with additional 3.5 N HNO 3 . Subsequently, Sr is eluted with 5 ml 0.05 N HNO 3 and collected in PFA screw-cap beakers ( Fig. 2 ). Finally, the solutions are evaporated to dryness on a hot-plate (110 °C), ready for loading onto filaments. A two-step column separation is used for Nd: the first resin column separates the REEs from the matrix, and the second separates Nd from the other REEs. The elution profiles for BCR-2 are shown in Fig. 3 . The AG 50W-X12 resin is loaded into the column and washed successively with 10 ml 6 N HCl, 10 ml 3.5 N HNO 3 and 10 ml deionised water. The dissolved sample in 1 ml 3.5 N HNO 3 is loaded onto the resin column, and matrix elements are eluted by washing with 10 ml 3.5 N HNO 3 . The REEs are then eluted with 9 ml 7 N HNO 3 , collected in PFA beakers, and evaporated to dryness on a hot-plate (110 °C). The residues are dissolved in 1 ml 0.25 N HCl at 120 °C for two hours, ready for the second column separation step. The LN resin is loaded into the column and washed with 10 ml 6 N HCl, 10 ml deionised water and 5 ml 0.25 N HCl. The REE solution in 1 ml 0.25 N HCl is loaded onto the resin column, and the other REEs are eluted by washing with 4 ml 0.25 N HCl. Finally, Nd is eluted with 7 ml 0.25 N HCl, collected in PFA beakers, and evaporated to dryness on a hot-plate (110 °C), ready for loading onto filaments. Although some Ce and Pr remain in the Nd solution, the Nd isotopic analysis is not affected by their presence. 4 Analytical procedure 4.1 Loading samples on filaments All filaments are degassed in a degassing machine prior to use. A single W filament is used for Sr isotopic determinations. Sr samples are dissolved in 1 μl 2.5 N HNO 3 and then loaded onto the degassed W filaments. The droplet on the filament is evaporated to dryness by applying a 0.8 A electric current. 1 μl of the TaF 5 activator solution is then loaded and evaporated to dryness at 0.8 A. The electric current is afterwards increased to ∼2.0 A, turning the filament dark red; after 3 s, the current is turned off. Double Re filaments are used for Nd isotopic determinations. Nd samples are dissolved in 1 μl 2.5 N HCl and loaded onto the degassed Re filaments. The solution is evaporated to dryness at 0.8 A. Then 1 μl of the 10 wt% H 3 PO 4 activator solution is loaded and evaporated to dryness at 0.8 A. The electric current is increased to ∼2.0 A until the filament becomes dark red; after 3 s, the current is turned off.
4.2 Thermal ionization mass spectrometry Sr and Nd isotopic compositions are measured on a Triton XT TIMS (Thermo Scientific, USA) located in a Class 10,000 clean laboratory at the Marine Carbon Cycle Research Center, Laoshan Laboratory. The temperature and humidity of the laboratory are held at 22 °C and 45% year-round by a stable air conditioning system. A gain calibration of the amplifiers is performed before the measurements. Sr isotopic compositions are measured in static mode with virtual amplifiers. To eliminate operator-induced errors, an automatic filament heater program is used for Sr analysis, with the detailed steps shown in Table 3 . The source lens parameters are initially set, based on experience, to receive maximum signals; once the Faraday cup receives an 88 Sr signal of more than 8 mV, auto-focusing is started to adjust the source lens parameters to their optimal settings. The purpose of the first 5-min wait in the program is to evaporate as much Rb as possible at 1100 °C, while Sr is not yet ionized; the purpose of the second wait is to stabilize the Sr signal. When the ion source vacuum is better than 1 × 10 −7 mbar (generally, the ion source vacuum is better than 5 × 10 −8 mbar after the addition of liquid nitrogen), the measurement of Sr ion beams can be started. When a stable 88 Sr ion beam reaches 10 V and 85 Rb/ 86 Sr is below 0.001, data acquisition starts; at this point, the filament temperature can be up to 1500 °C. The cup configuration for the Sr isotopic measurements is set up as shown in Table 4 . The measurement run consists of 10 blocks of data with 15 cycles per block. The integration time per cycle is 4.194 s and the idle time is 3 s. Prior to data acquisition of each block, a peak-center routine is run, and then the baseline is measured. A virtual amplifier rotation and a baseline adjustment are run every block, and a peak-center is run after 5 blocks are finished. The 87 Sr signal intensity is corrected for the potential bias caused by the remaining isobaric overlap of 87 Rb on 87 Sr using an 87 Rb/ 85 Rb value of 0.385041 ( Charlier et al., 2006 ) before mass fractionation correction ( Li et al., 2015 ). In this paper, the 85 Rb/ 86 Sr ratios in all standard samples are lower than 0.0002 (Supplementary Data, Tables S1 and S2), implying negligible isobaric interference from 87 Rb. Corrected 87 Sr/ 86 Sr values are normalized to 88 Sr/ 86 Sr = 8.375209 using an exponential law ( Li et al., 2015 and references therein). Each analysis consists of a maximum of 150 cycles divided into 10 blocks. The NBS-987 standard is analyzed during the sample measurement period to monitor the instrument status. Two equations are used to correct for isotope mass fractionation: (1) $\varepsilon_{\exp} = \frac{\ln(Y_{true}/Y_{obs})}{\ln(m_1/m_2)}$ (2) $\frac{R_{true}}{R_{obs}} = \left(\frac{m_3}{m_4}\right)^{\varepsilon_{\exp}}$ where $\varepsilon_{\exp}$ is the exponential mass bias factor; $Y_{true}$ and $Y_{obs}$ are the true and observed values of the normalizing ratio; $m_1$ and $m_2$ are the atomic masses of the isotopes in the normalizing ratio; $R_{true}$ and $R_{obs}$ are the true and observed values of the target isotopic ratio; and $m_3$ and $m_4$ are the atomic masses of the isotopes in the target ratio.
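To make the working of Eqs. (1) and (2) concrete, the following minimal Python sketch (not the instrument software) applies them to Sr, together with the 87Rb interference subtraction described in Section 5.1; the raw ratios in the final line are hypothetical, and only the constants (88Sr/86Sr = 8.375209 and 87Rb/85Rb = 0.385041) come from the text.

import math

# Atomic masses (u) of the relevant isotopes (standard values).
M85, M86, M87, M88 = 84.9118, 85.9093, 86.9089, 87.9056

SR88_86_TRUE = 8.375209   # normalizing ratio used in the text
RB87_85 = 0.385041        # 87Rb/85Rb (Charlier et al., 2006)

def corrected_sr87_86(r87_86_obs, r88_86_obs, r85_86_obs=0.0):
    """Strip the 87Rb isobar from mass 87, then apply Eqs. (1)-(2)."""
    eps = math.log(SR88_86_TRUE / r88_86_obs) / math.log(M88 / M86)  # Eq. (1)
    r87_86_sr = r87_86_obs - r85_86_obs * RB87_85   # remove 87Rb contribution
    return r87_86_sr * (M87 / M86) ** eps           # Eq. (2)

# Hypothetical raw ratios; the 85Rb/86Sr value respects the <0.001 criterion.
print(corrected_sr87_86(r87_86_obs=0.71180, r88_86_obs=8.41, r85_86_obs=0.0001))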
Nd isotopic compositions are also measured in static mode with virtual amplifiers. The automatic filament heater program for Nd analysis is used, with the detailed steps shown in Table 5 . When the ion source vacuum is better than 1 × 10 −7 mbar, the measurement of Nd ion beams can be started. When a stable 144 Nd ion beam reaches 2 V, data acquisition starts; at this point, the temperature of the ionization filament is about 1800 °C, and the electric current of the evaporation filament is about 1600 mA. The cup configuration for the Nd isotopic measurements is set up as in Table 4 . The measurement run consists of 14 blocks of data with 10 cycles per block. The integration time per cycle is 4.194 s and the idle time is 3 s. Prior to data acquisition of each block, a peak-center routine is run, and then the baseline is measured. A virtual amplifier rotation and a baseline adjustment are run every block, and a peak-center is run after 7 blocks are finished. The 144 Nd signal intensity is corrected for the potential bias caused by the remaining isobaric overlap of 144 Sm on 144 Nd using a 147 Sm/ 144 Sm value of 4.838710 before mass fractionation correction. Corrected 143 Nd/ 144 Nd values are normalized to 146 Nd/ 144 Nd = 0.7219 using an exponential law ( Guo et al., 2018 ; Luu et al., 2022 ). Each analysis consists of a maximum of 140 cycles divided into 14 blocks. JNdi-1 is analyzed during the sample measurement period to monitor the instrument status. 5 Results and discussion 5.1 The correction of Rb and Sm Since 87 Rb shares the same nominal mass as 87 Sr, the presence of Rb can affect the accuracy of 87 Sr/ 86 Sr measurements; thus, complete chemical separation between them is necessary. Usually, one Faraday cup is assigned in the cup configuration to monitor the 85 Rb signal ( Table 4 ). An 87 Rb/ 85 Rb value of 0.385041 is used to correct the 87 Rb contribution to mass 87 ( Charlier et al., 2006 ). Nevertheless, this correction is only accurate when the 85 Rb/ 86 Sr ratio is kept below 0.001, because 87 Rb/ 85 Rb itself undergoes mass fractionation and is not always exactly 0.385041. Sr isotope analyses on TIMS have a natural advantage in that Rb and Sr are ionized at different temperatures: Rb at ∼1100 °C and Sr at ∼1450 °C. Hence, most Rb can be burned away at ∼1100 °C, as judged by the 85 Rb/ 86 Sr ratio dropping below 0.001, before data acquisition is started; in this case, the influence of 87 Rb can be neglected. The presence of Sm can affect the accuracy of 143 Nd/ 144 Nd measurements, because 144 Sm and 144 Nd share the same nominal mass. One Faraday cup is assigned in the cup configuration to monitor the 147 Sm signal ( Table 4 ). A 147 Sm/ 144 Sm value of 4.838710 is used to correct for the 144 Sm contribution to 144 Nd under the condition 147 Sm/ 144 Nd < 0.003. Usually, Nd is ionized earlier (at an evaporation filament current of about 1600 mA) than Sm (above 2400 mA) on TIMS. However, if the loaded sample contains a significant amount of Sm, say about one-tenth of the Nd, Nd ionization is delayed beyond an evaporation filament current of 1600 mA. Hence, complete chemical separation between them is desired.
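Analogously, a hedged Python sketch of the Nd-side correction discussed above: the 144Sm contribution is first stripped from the mass-144 beam using the monitored 147Sm signal, and the 143Nd/144Nd ratio is then normalized to 146Nd/144Nd = 0.7219 with the same exponential law; the beam intensities are hypothetical.

import math

M143, M144, M146 = 142.9098, 143.9101, 145.9131   # atomic masses (u)
ND146_144_TRUE = 0.7219                            # normalizing ratio
SM147_144 = 4.838710                               # 147Sm/144Sm from the text

def corrected_nd143_144(i143, i144, i146, i147sm):
    """Remove the 144Sm overlap from the mass-144 beam, then correct for bias."""
    i144_nd = i144 - i147sm / SM147_144            # pure 144Nd intensity
    eps = math.log(ND146_144_TRUE / (i146 / i144_nd)) / math.log(M146 / M144)
    return (i143 / i144_nd) * (M143 / M144) ** eps

# Hypothetical beam intensities (V); 147Sm/144Nd ~ 5e-5, well below 0.003.
print(corrected_nd143_144(i143=0.974, i144=2.0, i146=1.447, i147sm=0.0001))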
A high-resistance 10 13 ohm amplifier is used to help minimize the influence of Rb and Sm: one Faraday cup with the 10 13 ohm amplifier, which does not rotate with the other Faraday cups, monitors the 85 Rb and 147 Sm signals. The results show little change between the 10 11 and 10 13 ohm amplifiers, but the 10 13 ohm amplifier enhances the accuracy of the isobaric-overlap correction, because the 85 Rb and 147 Sm intensities are usually small and their peak shapes are serrated with the 10 11 ohm amplifier but stable with the 10 13 ohm amplifier. 5.2 Results of SRM The 87 Sr/ 86 Sr and 143 Nd/ 144 Nd values obtained for the standard solutions and the USGS standard BCR-2 are shown in Table 6 and Figs. 4 and 5 . The Sr standard NBS-987 yielded 87 Sr/ 86 Sr = 0.710245 ± 0.000013 (2SD, n = 54), and the Nd standard JNdi-1 yielded 143 Nd/ 144 Nd = 0.512102 ± 0.000009 (2SD, n = 63). The standard BCR-2 yielded 87 Sr/ 86 Sr = 0.705020 ± 0.000018 (2SD, n = 46) and 143 Nd/ 144 Nd = 0.512619 ± 0.000008 (2SD, n = 7). The 87 Sr/ 86 Sr and 143 Nd/ 144 Nd values obtained for the SRM in this study are also shown in Table 6 and Figs. 4 and 5 . The 87 Sr/ 86 Sr ratios of GSR-1, GSR-2, GSR-3 and GBW07315 are 0.738296 ± 0.000019 (2SD, n = 30), 0.704929 ± 0.000012 (2SD, n = 45), 0.704093 ± 0.000010 (2SD, n = 52) and 0.710261 ± 0.000011 (2SD, n = 15), respectively. Only a few Sr isotopic data have previously been reported for GSR-1, GSR-2 and GSR-3. Dai et al. (2017) reported a single 87 Sr/ 86 Sr value for GSR-1 of 0.738262 ± 0.000035 (2SE), which differs from our result by less than 30 ppm. Cheng et al. (2015) suggested that GSR-1 is inhomogeneous in terms of Hf isotopes; the presence of inherited, ancient zircons, supported by a model age of the samples of around 3000 Ma, may influence the Hf isotopic homogeneity. However, we consider a variability of 30 ppm acceptable for most 87 Sr/ 86 Sr applications. The 87 Sr/ 86 Sr ratio reported by Dai et al. (2017) is close to our minimum value. Yang et al. (2020) reported an 87 Sr/ 86 Sr ratio for GSR-2 of 0.704914 ± 0.000015 (2SE) using TIMS, identical to our value within error. Richardson et al. (1996) and Wu et al. (2021) reported 87 Sr/ 86 Sr ratios for GSR-3 of 0.704090 ± 0.000020 (2SE) and 0.704090 ± 0.000016 (2SE), respectively, both obtained by TIMS, which again agree well with our results. The consistency between the previous data and ours supports that the GSR-1, GSR-2 and GSR-3 standards are sufficiently homogeneous in terms of Sr isotopic composition. No Sr isotopic data for GBW07315 have been reported previously. The 143 Nd/ 144 Nd ratios of GSR-1, GSR-2, GSR-3 and GBW07315 are 0.512210 ± 0.000012 (2SD, n = 17), 0.512377 ± 0.000014 (2SD, n = 17), 0.512885 ± 0.000018 (2SD, n = 17) and 0.512359 ± 0.000011 (2SD, n = 17), respectively. Dai et al. (2017) reported a 143 Nd/ 144 Nd ratio for GSR-1 of 0.512199 ± 0.000006 (2SE), obtained by MC-ICP-MS. Yang et al. (2020) reported a 143 Nd/ 144 Nd ratio for GSR-2 of 0.512382 ± 0.000010 (2SE), obtained by TIMS. Wu et al. (2021) reported a 143 Nd/ 144 Nd ratio for GSR-3 of 0.5129 ± 0.000012 (2SE), also obtained by TIMS. The 143 Nd/ 144 Nd ratios of the three GSR standards analysed in this study are consistent with previously reported values within error, which supports the good homogeneity of their Nd isotopic compositions. No Nd isotopic data for GBW07315 have been reported previously. 5.3 The precision and accuracy of SRM The precision and accuracy of an analytical method are generally assessed by conducting replicate measurements of pure standard solutions and geological standard rocks ( Bai et al., 2022 ).
Following the purification and analytical methods of this study, repeated measurements of the 87 Sr/ 86 Sr ratio of the standards over about half a year yield 2SD of 13 ppm ( n = 54) for NBS-987, 18 ppm ( n = 46) for BCR-2, 19 ppm ( n = 30) for GSR-1, 12 ppm ( n = 45) for GSR-2, 10 ppm ( n = 52) for GSR-3 and 11 ppm ( n = 15) for GBW07315. These results suggest that the reproducibility of the 87 Sr/ 86 Sr ratio is better than 20 ppm. The 2SD values of NBS-987 and BCR-2 are consistent with the data reported in the references, and our analytical precisions for GSR-1, GSR-2 and GSR-3 are better than the previously reported data. Repeated 143 Nd/ 144 Nd measurements of the standards yield 2SD of 9 ppm ( n = 63) for JNdi-1, 8 ppm ( n = 7) for BCR-2, 12 ppm ( n = 17) for GSR-1, 14 ppm ( n = 17) for GSR-2, 18 ppm ( n = 17) for GSR-3 and 5 ppm ( n = 17) for GBW07315. These results suggest that the reproducibility of the 143 Nd/ 144 Nd ratio is also better than 20 ppm. The accuracy of the Sr and Nd isotopic analyses for the GSR-1, GSR-2, GSR-3 and GBW07315 standard samples is tested using NBS-987, JNdi-1 and BCR-2. The Sr standard NBS-987 yielded 87 Sr/ 86 Sr = 0.710245 ± 0.000013 (2SD, n = 54), which is in good agreement with previously reported values within error ( Deniel and Pin, 2001 ; Weis et al., 2006 ; Yang et al., 2010 ; Li et al., 2012 ; Wu et al., 2021 ; Luu et al., 2022 ). The Nd standard JNdi-1 yielded 143 Nd/ 144 Nd = 0.512102 ± 0.000009 (2SD, n = 63), which is slightly lower than the reference value of 0.512115 ± 0.000007 ( Tanaka et al., 2000 ) but identical to the high-precision data recently reported by Pin and Gannoun (2019a , 2019b ) of 0.512101 ± 0.000003 and 0.512104 ± 0.000003. The values of these two standards, used to monitor the instrument status, indicate that the TIMS used in this study was in optimal working condition during the whole testing process. The standard BCR-2, which is used to monitor the chemical procedure and instrument status, yielded 87 Sr/ 86 Sr = 0.705020 ± 0.000018 (2SD, n = 46) and 143 Nd/ 144 Nd = 0.512619 ± 0.000008 (2SD, n = 7). The Sr and Nd isotopic results for BCR-2 in this study are consistent within error with previously reported values ( Chauvel and Blichert-Toft, 2001 ; Raczek et al., 2003 ; Weis et al., 2006 ; Yang et al., 2010 ; Makishima et al., 2008 ; Carpentier et al., 2014 ; Zhang and Hu, 2020 ; Di et al., 2021 ; Luu et al., 2022 ), indicating that our chemical procedure is effective. Taken together, these data demonstrate the reliability of our measurements.
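The reproducibilities quoted above follow directly from the repeated measurements; a minimal Python sketch of the conversion to ppm (2 x standard deviation / mean x 10^6), using synthetic replicates in place of the real data (the scatter is chosen only so that the 2SD comes out near the 13 ppm quoted for NBS-987):

import numpy as np

def two_sd_ppm(ratios):
    """External reproducibility as 2 x relative standard deviation, in ppm."""
    r = np.asarray(ratios, dtype=float)
    return 2.0 * r.std(ddof=1) / r.mean() * 1e6

# Synthetic replicates standing in for 54 NBS-987 runs (values hypothetical).
rng = np.random.default_rng(1)
runs = 0.710245 + rng.normal(0.0, 4.6e-6, 54)
print(f"{runs.mean():.6f} +/- {two_sd_ppm(runs):.0f} ppm (2SD, n={runs.size})")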
5.4 Comparison with other SRM Many SRM are used to monitor the analysis of Sr and Nd isotopes in igneous rocks, such as BCR-1, BHVO-1, AGV-1, RGM-1, G-2 and GSP-2 from the USGS, and JA-1, JB-2, JB-3 and JP-1 from the JGS. However, an emerging problem with these popular SRM is their exhaustion (e.g., BHVO-1, BCR-1) after many years of use ( Cheng et al., 2015 ). By contrast, much smaller amounts of material have been consumed for GSR-1, GSR-2, GSR-3 and GBW07315. These standards are widely used for trace element analyses, which show very consistent results ( Yang et al., 2018 ); this lays a favorable foundation for assessing them for Sr and Nd isotope analyses. Besides, these materials show high reproducibility in Fe, Hf and Mg isotopic analyses, a sign of their stability ( Cheng et al., 2015 ; Chen et al., 2020 ; Wang et al., 2022 ). Meanwhile, GSR-1, with 87 Sr/ 86 Sr = 0.738296 ± 0.000019 (2SD, n = 30) falling between the values of the commonly used SRM and GSP-2 (about 0.76500), can be particularly useful for samples with relatively high 87 Sr/ 86 Sr ratios. We recommend that GBW07315 be used as a new marine sediment standard for Sr and Nd isotope analyses. 6 Conclusions In this study, Sr and Nd isotopic ratios are determined for four Chinese national standard materials using high-precision TIMS. Our investigation indicates that these reference materials are homogeneous in terms of Sr-Nd isotopic compositions and are therefore suitable to be used as reference materials for Sr and Nd isotope analyses. Additionally, our chemical separation works effectively for high-Rb samples. The 10 13 ohm high-resistance amplifier can be used to correct the potential influence from Rb and Sm, but it did not significantly improve precision or accuracy. Considering its relatively high Rb/Sr ratio (4.40), GSR-1 can potentially be used as a primary or secondary reference material for Sr isotopic measurements of samples with high Rb concentrations. Moreover, the Sr and Nd isotopic compositions of GBW07315 are reported for the first time. Overall, our results support using the four Chinese national standards as primary or secondary reference materials for Sr and Nd isotopic analyses as well as for inter-laboratory comparisons. Credit author statement Kun Guo: Writing - original draft, Methodology, Sample analysis, Investigation; Jimin Yu: Writing - review & editing, Language polishing; Di Fan: Language polishing; Zhifang Hu: Writing - review & editing; Yanli Liu: Column chemistry method; Xia Zhang: Writing - review & editing; Yu Zhang: Funding acquisition. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work is supported by the Natural Science Foundation of China (No. 42276085 ), a soft science project of the Shandong Province key research and development plan (2019RZA02002) and Laoshan Laboratory. Three anonymous reviewers and the editor are sincerely thanked for their work and constructive comments, which helped greatly improve the quality of the manuscript. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.geogeo.2023.100206 .
REFERENCES:
1. BAI J (2022)
2. BALARAM V (2022)
3. BOWER N (1993)
4. CARPENTIER M (2014)
5. CHARLIER B (2006)
6. CHAUVEL C (2001)
7. CHENG T (2015)
8. CHEN L (2020)
9. CHU Z (2014)
10. DAI F (2017)
11. DE PAOLO D (1979)
12. DENIEL C (2001)
13. DI Y (2021)
14. GUO K (2018)
15. JOCHUM K (2008)
16. JWEDA J (2016)
17. LEE B (2010)
18. LI C (2015)
19. LI C (2012)
20. LUU T (2022)
21. MAKISHIMA A (2008)
22. PIN C (2019)
23. PIN C (2019)
24. RACZEK I (2003)
25. RICHARDSON J (1996)
26. TANAKA T (2000)
27. WANG C (2022)
28. WANG Y (1998)
29. WEIS D (2006)
30. WONG J (2009)
31. WU S (2021)
32. XIE X (1985)
33. YANG J (2018)
34. YANG Y (2020)
35. YANG Y (2010)
36. ZHANG G (2012)
37. ZHANG W (2020)
|
10.1016_j.net.2021.06.005.txt
|
TITLE: The influence of BaO on the mechanical and gamma / fast neutron shielding properties of lead phosphate glasses
AUTHORS:
- Mahmoud, K.A.
- El-Agawany, F.I.
- Tashlykov, O.L.
- Ahmed, Emad M.
- Rammah, Y.S.
ABSTRACT:
The mechanical features were evaluated theoretically using the Makishima-Mackenzie model for glasses xBaO-(50-x)PbO-50P2O5, where x = 0, 5, 10, 15, 20, 30, 40, and 50 mol%. Accordingly, the elastic characteristics, namely the Young's, bulk, shear, and longitudinal moduli, were calculated. The obtained results showed an increase in the calculated elastic moduli with the replacement of PbO by BaO. Moreover, the Poisson ratio, micro-hardness, and softening temperature were calculated for the investigated glasses. Besides, the gamma and neutron shielding abilities were evaluated for the barium-doped lead phosphate glasses. The Monte Carlo code (MCNP-5) and the Phy-X/PSD program were applied to estimate the mass attenuation coefficient (MAC) of the studied glasses. The decrease in the PbO ratio has a negative effect on the MAC: the highest MAC decreased from 65.896 cm2/g to 32.711 cm2/g at 0.015 MeV for BPP0 and BPP7, respectively. The calculated values of EBF and EABF showed that the replacement of PbO with BaO in the studied BPP glasses helps to reduce the number of photons accumulated inside the glasses.
BODY:
1 Introduction Materials used for nuclear safety must have particular characteristics, such as suitable transparency, corrosion resistance, the ability to withstand high stress, and non-toxicity [ 1 ]. Therefore, great attention has been directed to the use of glass materials for radiation shielding due to their superior functionality compared to traditional shielding materials such as concrete and polymers [ 2–4 ]. Recently, glass compositions have been utilized in a wide range of proton, alpha, and ionizing radiation absorption applications in X-ray examination rooms in hospitals and X-ray screening rooms in airports [ 5 ]. Consequently, more materials are being synthesized and their shielding potentials investigated. The unique physical, thermal, mechanical, and optical properties of glasses place them in a pivotal position among materials that could be significant players in the quest for novel shielding materials [ 6–9 ]. The reasons mentioned above explain why many glasses have been suggested for their radiation shielding capability, with exciting results. These include, but are not limited to, silicate-, borosilicate-, and tellurite-based glass systems [ 10–16 ]. Phosphate-based glasses are characterized by noticeable features that make them appropriate for several applications such as laser host matrices [ 17 , 18 ], optical devices [ 17–19 ], and glass-to-metal seals [ 18 ]. The addition of certain metal oxides such as Al 2 O 3 , Fe 2 O 3 , ZnO, Bi 2 O 3 , and PbO to P 2 O 5 glasses improves their chemical durability [ 18 , 20 , 21 ]. These additives lead to an increase in the average bond strength and cross-link density within the phosphate chains. Therefore, phosphate glasses are very favorable candidates for long-term storage of high-level nuclear wastes [ 20 , 22 , 23 ]. Among the various phosphate glass systems, the alkali-free PbO–P 2 O 5 glass systems are known to be more stable against devitrification and more moisture resistant. In contrast to the conventional alkali/alkaline earth oxide modifiers, PbO has the ability to form stable glasses due to its dual role: one as the modifier [ 24 , 25 ] if Pb–O is ionic, and the other as the glass former if Pb–O is covalent. According to previous reports, lead oxide (PbO) acts as a network modifier depending on its content [ 26 ], whereas barium oxide (BaO) is considered a network modifier oxide in the glass structure [ 27 , 28 ]. Doweidar et al. [ 28 , 29 ] reported that in B 2 O 3 –PbO glass systems, PbO acts as a modifier oxide up to 50 mol%; above that content, it can form PbO 4 tetrahedra through sharing corners with PO 4 units. This trend leads to an increase in the cross-linking of the network via the formation of P–O–Pb linkages. On the other hand, Doweidar et al. [ 30 ] reported that PbO 4 tetrahedra are formed when the PbO content exceeds 20 mol% in the glass composition. The role of PbO insertion on the structure and the physical, optical, and radiation shielding properties has been investigated by several authors [ 27–35 ]. In radiation shielding, the emphasis is mostly placed on photons (gamma- and X-rays) and neutrons because of their penetrating ability compared to other forms of ionizing radiation. Validation of the competency of any glass system to function as a shield must, therefore, include the study of the shielding parameters for these radiations [ 31–36 ]. The objective of this work is to evaluate the mechanical properties of BaO-doped lead phosphate glasses.
Furthermore, we assess and analyze the ability of the barium-doped lead phosphate glasses to attenuate incoming photons and fast neutrons. 2 Materials and methods 2.1 Mechanical properties To achieve the objective of this study, eight barium-doped lead phosphate glasses with the formula xBaO+(50-x)PbO+50P 2 O 5 , where x = 0–50 mol%, were chosen from Ref. [ 36 ], as listed in Table 1 . The density decreases from 4.70 to 3.65 g cm −3 as the BaO insertion ratio increases from 0 to 50 mol%. The main reason for the reduced glass density is the replacement of PbO (MW = 223.199 g/mol) by the lighter BaO (MW = 153.326 g/mol). According to the Makishima-Mackenzie model, the elastic properties of the BaO–PbO–P 2 O 5 glasses were evaluated at room temperature (298 K). The elastic moduli are computed theoretically from the packing factor (V i ) and the dissociation energy per unit volume (G i , kJ/cm3) of each compound constituting the glass network [ 37 , 38 ], as described by Eqs. (1) and (2) . Based on the calculated values of the dissociation energy (G t ) and the packing density (V t ), elastic characteristics such as the Young's (Y), bulk (B), shear (S), and longitudinal (L) moduli are calculated according to Eqs. (3)–(6) . The Poisson ratio (μ) is a measure of the expansion generated in the material in the direction perpendicular to the compression direction; it is described by Eq. (7) . Moreover, the micro-hardness (H, GPa) describes the hardness of the material on the micro scale when exposed to low applied loads; it is described by Eq. (8) . The softening temperature (T s ) is given by Eq. (9) , where v s is the shear ultrasonic velocity ( $v_s = \sqrt{S/\rho}$ ). (1) $G_t = \sum_i X_i G_i$ (2) $V_t = \frac{\rho}{M} \sum_i X_i V_i$ (3) $Y = 2 V_t G_t$ (4) $B = 1.2 V_t Y$ (5) $S = \frac{3 Y B}{9 B - Y}$ (6) $L = B + \frac{4}{3} S$ (7) $\mu = \frac{Y}{2 S} - 1$ (8) $H = \frac{(1 - 2\mu) Y}{6 (1 + \mu)}$ (9) $T_s = \frac{v_s^{2} M}{0.5074\,\rho}$ Here G i , V i , and X i represent the dissociation energy, packing factor, and molar fraction of the ith constituent compound in the glass network, while ρ and M refer to the density and molecular weight of the studied glass. 2.2 Shielding parameters simulation The average track length of gamma photons was simulated for the barium-doped lead phosphate (BPP) glasses using the Monte Carlo code (MCNP-5) at different gamma-photon energies. Accurate estimation of the average track length for ionizing radiation using MCNP requires an input file containing all the necessary data about the geometry locations and constituents, the source used (SDEF card), and the composition (material card). In the present study, the geometry used for the simulation is specified in Fig. 1 , where all tools are set up according to the experimental facilities. A monoenergetic gamma-ray source is located inside a lead collimator with a height of 15 cm and a diameter of 7 cm, which has a slit with a diameter of 1 cm. The samples are molded in disk form with a diameter of 5 cm and placed at a distance of 10 cm from the radioactive source. Finally, the detector is represented by an F4 tally located 10 cm from the BPP glasses. The simulation was carried out using an NPS card of 10 6 particles. The uncertainty recorded in the MCNP-5 output file varied around ±1% [ 39–41 ].
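As a hedged, toy illustration of the sampling idea behind such a simulation (a drastic simplification of what MCNP-5 does, not its input deck), the narrow-beam transmission of photons through a slab can be estimated in Python by sampling exponential interaction depths; the LAC value used below is hypothetical, though of the order quoted later in the text.

import math
import random

def simulated_transmission(lac_per_cm, thickness_cm, histories=100_000):
    """Toy analog Monte Carlo: sample exponential interaction depths and count
    photons whose first interaction lies beyond the slab (narrow-beam view)."""
    random.seed(1)
    transmitted = sum(
        1 for _ in range(histories)
        if -math.log(random.random()) / lac_per_cm > thickness_cm
    )
    return transmitted / histories

lac, t = 0.465, 1.0                      # hypothetical LAC (1/cm) and thickness
print(simulated_transmission(lac, t))    # ~ exp(-0.465) = 0.628
print(math.exp(-lac * t))                # analytical Beer-Lambert for comparison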
After that, the estimated average track length is used to calculate essential shielding parameters such as the linear attenuation coefficient (LAC) and the mass attenuation coefficient (MAC), while further parameters can be estimated via the Phy-X/PSD program [ 42 ]. 3 Results and discussion The present work deals with the effect of substituting PbO by BaO on the mechanical properties of the investigated glasses. Accordingly, the elastic characteristics such as the Young's, bulk, shear, and longitudinal moduli, the Poisson ratio (μ), the micro-hardness (H), and the softening temperature (Ts) are calculated. The calculations are based on the dissociation energy (G i ) and packing factor (V i ) of the metal oxides constituting the investigated glass; these two quantities are the basic units in the Makishima-Mackenzie theory. The computed data of G t , V i , and the packing density (V t ) are listed in Table 2 . The variation of the calculated dissociation energy (G t , kJ cm −3 ) and packing factor (V i , cm 3 mol −1 ) versus the BaO content is illustrated in Fig. 2 , which shows that the two quantities vary in opposite directions. The dissociation energy increases from 26.750 to 33.850 kJ cm −3 , while the packing factor decreases from 23.250 to 21.900 cm 3 mol −1 , as the BaO content increases from 0 to 50 mol%. This is due to the substitution of PbO (G i = 25.3 kJ cm −3 and V i = 11.7 cm 3 mol −1 ) by BaO (G i = 39.5 kJ cm −3 and V i = 9 cm 3 mol −1 ) [ 43 ]. The elastic properties are calculated according to Eqs. (1)–(9) and listed in Table 2 , which shows that Young's modulus is directly proportional to the dissociation energy of the investigated glasses. Accordingly, Young's modulus increases with increasing BaO content. The variation of Young's modulus versus the BaO insertion ratio is illustrated in Fig. 3 : Young's modulus increases from 32.021 to 36.655 GPa. Eqs. (4)–(6) show that the bulk, shear, and longitudinal moduli are directly proportional to Young's modulus; therefore, these moduli also increase with increasing BaO content. This increase is due to the increase of the glasses' G t from 26.750 to 33.850 kJ cm −3 and the decrease of the packing density from 0.599 to 0.541 upon replacement of PbO by BaO. The increase in the mechanical moduli is also related to the replacement of Pb–O bonds (G i = 25.3 kJ cm −3 and V i = 11.7 cm 3 mol −1 ) by Ba–O bonds (G i = 39.5 kJ cm −3 and V i = 9 cm 3 mol −1 ). The straight lines in Fig. 3 show the linear fits to the calculated results, with R-square values of 0.991, 0.987, 0.995, and 0.959 for the Young's, bulk, shear, and longitudinal moduli, respectively. The Poisson ratio and the micro-hardness of the investigated glasses show contrasting modes of variation with the BaO ratio (see Fig. 4 ): the Poisson ratio decreases from 0.268 to 0.243, while the micro-hardness increases from 1.953 to 2.521 GPa, as the BaO ratio increases from 0 to 50 mol%. The predicted results were compared to experimental results for two previously fabricated glasses similar to our BPP1 and BPP7 glasses, as listed in Table 2 .
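To make the Makishima-Mackenzie workflow concrete, a small Python sketch of Eqs. (1)–(9) follows. The G_i/V_i values for PbO and BaO are quoted in the text; the P2O5 pair (28.2 kJ cm-3, 34.8 cm3 mol-1) is back-calculated here from the quoted G_t and packing-factor endpoints, so it is an inference rather than a quoted value. For BPP0 the sketch reproduces the tabulated Y of about 32.02 GPa, mu of about 0.268, H of about 1.95 GPa, and Ts of about 206 deg C.

# Oxide constants: PbO/BaO quoted in the text; P2O5 back-calculated (assumption).
G = {"BaO": 39.5, "PbO": 25.3, "P2O5": 28.2}            # kJ/cm3
V = {"BaO": 9.0, "PbO": 11.7, "P2O5": 34.8}             # cm3/mol
MW = {"BaO": 153.326, "PbO": 223.199, "P2O5": 141.944}  # g/mol

def makishima_mackenzie(x, rho):
    """x: mole fractions, e.g. {'PbO': 0.5, 'P2O5': 0.5}; rho: density, g/cm3."""
    Gt = sum(f * G[o] for o, f in x.items())                   # Eq. (1), kJ/cm3
    M = sum(f * MW[o] for o, f in x.items())                   # g/mol
    Vt = rho / M * sum(f * V[o] for o, f in x.items())         # Eq. (2)
    Y = 2.0 * Vt * Gt                                          # Eq. (3), GPa
    B = 1.2 * Vt * Y                                           # Eq. (4)
    S = 3.0 * Y * B / (9.0 * B - Y)                            # Eq. (5)
    L = B + 4.0 * S / 3.0                                      # Eq. (6)
    mu = Y / (2.0 * S) - 1.0                                   # Eq. (7)
    H = (1.0 - 2.0 * mu) * Y / (6.0 * (1.0 + mu))              # Eq. (8)
    Ts = S * M / (0.5074 * rho ** 2)                           # Eq. (9), v_s^2 = S/rho
    return {"Y": Y, "B": B, "S": S, "L": L, "mu": mu, "H": H, "Ts": Ts}

# BPP0 (x = 0 mol% BaO), density from Table 1.
print(makishima_mackenzie({"PbO": 0.5, "P2O5": 0.5}, rho=4.70))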
The predicted mechanical moduli, as well as the Poisson ratio and micro-hardness, for the BPP1 and BPP7 glasses are comparable to the experimental results reported for the 50P2O5–50PbO and 50P2O5–50ZnO glasses in Ref. [ 44 ]. In addition, the softening temperature (T s ) describes the critical temperature above which the glass transition occurs. The variation of T s versus the BaO content is illustrated in Fig. 5 : the calculated values of T s increase gradually, from 205.682 to 321.900 °C, with increasing BaO insertion ratio. This increase in T s is due to the replacement of PbO (melting point = 888 °C) by BaO, which has a higher melting point (1923 °C). To examine the shielding capacity of the investigated glasses, the mass attenuation coefficient (MAC) has been evaluated using two different methods, the Phy-X/PSD program and the MCNP-5 code. Fig. 6 presents the simulated and theoretically calculated MAC versus photon energy in the range 0.015–15 MeV. It is evident that the MAC acquires its highest values at very low photon energies. The MAC values then decrease sharply with increasing photon energy up to 0.1 MeV, where the photoelectric interaction probability decreases with increasing photon energy, varying with Z 4 /E 3.5 . During this interaction, all of the photon energy is consumed to eject one bound electron, and the photon is annihilated inside the glass layers; thus, the attenuation capacity is highest in the photoelectric (PE) interaction region. From 0.1 up to 3 MeV, the MAC varies only slowly with photon energy, due to Compton scattering. Besides, the MAC changes minimally with incoming photon energy in the range from 3 up to 15 MeV because of the pair production interaction in this zone. Regarding the effect of replacing PbO by BaO on the shielding capacity of the glasses, the decrease in the PbO ratio has a negative effect on the MAC values, which decreased from 65.896 cm 2 /g to 32.711 cm 2 /g at 0.015 MeV for BPP0 and BPP7, respectively. At 15 MeV, the MAC values decreased from 0.041 cm 2 /g to 0.032 cm 2 /g for BPP0 and BPP7, respectively. These results confirm that replacing a high ratio of PbO, which contains the heavy element Pb (Z = 82), by another oxide with a lower atomic number, BaO, which contains Ba (Z = 56), weakens the shielding capacity of the glasses under study. Within the overall decrease of the MAC values at low photon energies, increases are noted at one or two positions corresponding to the K-absorption edges of the elements forming the glass compositions (Pb and Ba). In Fig. 6 , for BPP0 only one characteristic, sharp peak is found, at 0.088 MeV, related to Pb, which has the highest weight fraction (0.5674), while no peak related to Ba is found because this sample is free of barium. The peak associated with Pb gradually decreases until it vanishes for BPP7, which is free of lead; on the other hand, an increase in MAC (a tiny peak) is found at 0.0374 MeV, associated with the Ba in the glass composition. This peak grows until reaching its maximum for BPP7, which is characterized by the highest Ba weight fraction of 0.4651.
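For concreteness, a short Python sketch of how the MAC values just quoted convert into the linear attenuation coefficient, the half value layer discussed next (HVL = ln 2 / LAC), and the mean free path, for a glass of known density; using the BPP0 endpoints above reproduces the roughly 0.002 cm and 3.6 cm HVL figures reported below.

import math

def gamma_params(mac_cm2_per_g, density_g_cm3):
    """Convert a mass attenuation coefficient into LAC, HVL and mean free path."""
    lac = mac_cm2_per_g * density_g_cm3        # linear attenuation coeff., 1/cm
    return {"LAC": lac, "HVL": math.log(2) / lac, "MFP": 1.0 / lac}

# BPP0 at 0.015 MeV (MAC and density quoted in the text): HVL ~ 0.002 cm.
print(gamma_params(65.896, 4.70))
# BPP0 at 15 MeV: MAC = 0.041 cm2/g -> HVL ~ 3.6 cm, matching the text.
print(gamma_params(0.041, 4.70))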
The data obtained from the theoretical and simulation methods are presented in Table S1 ; the relative deviations were calculated, and the results match each other well. Finally, the MAC values exhibit the order (MAC) BPP0 > (MAC) BPP1 > (MAC) BPP2 > (MAC) BPP3 > (MAC) BPP4 > (MAC) BPP5 > (MAC) BPP6 > (MAC) BPP7 . These results introduce BPP0 as the superior candidate to shield against gamma radiation among all BPP glasses. The half value layer (HVL) is the material thickness needed to attenuate half of the incident photons; it is plotted in Fig. 7 for the BPP glasses. The HVL is inversely proportional to the LAC of the studied glasses; thus, the smallest HVL values are achieved at low gamma-photon energies, after which the HVL increases gradually with increasing photon energy. The glass coded BPP7 has the highest HVL values, varying in the range 0.006–5.927 cm for gamma-photon energies between 0.015 and 15 MeV. On the other hand, the glass coded BPP0 has the lowest HVL values, varying between 0.002 and 3.595 cm over the same energy range. It is worth noting that the BPP glasses were compared with two commercial materials, namely RS-253-G18 and RS 360. The results reveal that most of the selected BPP glasses have a better gamma-ray shielding capacity than RS-253-G18, while all BPP glasses have a better shielding capacity than RS 360. The dependence of the effective atomic number (Z eff ) of the BPP0–BPP7 glasses on photon energy is depicted in Fig. 8 . The highest Z eff is obtained for BPP0, varying in the range 67.76–32.25, while the lowest is achieved for glass BPP7, varying from 42.21 to 21.88. The behavior of Z eff with photon energy can be discussed in the same way as the mass attenuation coefficient trend, according to the interaction mechanisms of photons with matter. Furthermore, the sample with the highest MAC (BPP0) also has the highest Z eff . Moreover, the Phy-X/PSD software was applied to calculate the buildup factors EBF and EABF (exposure buildup factor and energy absorption buildup factor, respectively) for the studied BPP glasses at various penetration depths (PD) and gamma photon energies (E) [ 42 ]. The obtained EBF and EABF results for the studied BPP glasses were plotted against the incoming gamma photon energy and the barium concentration, as presented in Fig. 9 . According to these figures, two essential points should be discussed. The first point is the variation of the EBF and EABF values versus the incoming gamma photon energy. The lowest EBF and EABF values are detected at low photon energies, where the photoelectric interaction (PE) is the dominant gamma photon interaction. The incoming photon energy is totally absorbed by a bound electron in the studied glasses; thus, the photons are absorbed in the glasses and prevented from accumulating. In the PE interaction region, two increases in the values of EBF and EABF are detected, around 0.04 and 0.09 MeV. The first increase is due to the X-ray absorption K-edge of Ba, while the second is due to the absorption K-edge of Pb. The amplitudes of the K-absorption peaks for Ba and Pb are directly proportional to the ratios of their compounds (BaO and PbO) in the studied BPP glasses. Above 0.1 MeV, the Compton scattering interaction (CS) begins to dominate, reaching its maximum around 1 MeV. Thus, the calculated values of EBF and EABF progressively increase with increasing incoming gamma photon energy.
The gradual increases in the EBF and EABF values are due to the accumulation of incoming photons inside the studied BPP glasses: during the Compton scattering process, only a part of the incoming photon energy is removed to eject a bound electron. Above 1.5 MeV, the calculated values of the buildup factors are slowly reduced again due to the pair production interaction (PP), in which the energy of the incoming photons is consumed to produce an electron-positron pair. Above 8 MeV, a rapid increase in the EBF and EABF values is detected, especially at high penetration depths (PD > 20). The second point is the variation of the buildup factors with the BaO content. The calculated values of EBF and EABF show that the replacement of PbO by BaO in the studied BPP glasses helps to reduce the number of photons accumulated inside the glasses. A good shielding material should have both a suitable shielding capacity and hardness. Thus, the variation of the glasses' micro-hardness and LAC (shielding capacity) versus the BaO ratio inserted into the glass is illustrated in Fig. 10 . The LAC of the investigated glasses decreases while the micro-hardness increases with increasing BaO insertion ratio. The best shielding capacity is achieved for the glass without BaO (i.e., BPP0), while the best hardness is achieved for the glass with 50 mol% BaO (i.e., BPP7) and the lowest hardness for the glass without BaO (i.e., BPP0). As shown in Fig. 10 , the glass samples offering the best compromise between shielding and mechanical properties are those with BaO contents around 25–30 mol%; these glasses have a linear attenuation coefficient around 0.465 cm −1 at 0.5 MeV and a micro-hardness of about 2.25 GPa. The calculated effective removal cross-section ∑ R (cm 2 g −1 ) of fast neutrons for the studied BPP0–BPP7 glasses is plotted in Fig. 11 . The ∑ R increases gradually with increasing BaO content in the studied BPP glasses. This increase may be due to the replacement of PbO, which has a higher density (9.53 g cm −3 ) and molar mass (223.2 g mol −1 ), with BaO, which has a lower density (5.72 g cm −3 ) and molar mass (153.33 g mol −1 ). The smallest fast-neutron effective removal cross-section (0.0211 cm 2 g −1 ) is obtained for the glass BPP0, while the highest (0.0248 cm 2 g −1 ) is obtained for the glass BPP7. 4 Conclusion In this article, the mechanical properties of the barium-doped lead phosphate glasses were evaluated using the Makishima-Mackenzie model. The elastic moduli, namely the Young's, bulk, shear, and longitudinal moduli, are enhanced by the replacement of PbO by BaO. Also, the micro-hardness and softening temperature increase with an increase in the BaO ratio in the investigated glass. Furthermore, the radiation shielding properties for gamma photons and fast neutrons were analyzed and evaluated. The results revealed that the decrease in the PbO ratio has a negative effect on the MAC values. In terms of HVL, the BPP7 glass sample has the highest value while BPP0 has the lowest. Furthermore, the calculated values of EBF and EABF showed that the replacement of PbO by BaO in the studied BPP glasses helps to reduce the number of photons accumulated inside the glasses.
Finally, the lowest fast neutron effective removal cross-section (∑R = 0.0211 cm2 g−1) is obtained for glass BPP0, while the highest (∑R = 0.0248 cm2 g−1) is obtained for glass BPP7. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment Taif University, Saudi Arabia is kindly acknowledged for supporting our work through project number TURSP-2020/84. Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.net.2021.06.005 .
REFERENCES:
1. ISSA S (2017)
2. GAIKWAD D (2018)
3. RAMMAH Y (2020)
4. SINGH V (2018)
5. RAMMAH Y (2020)
6. BOOTJOMCHAI C (2012)
7. DARWISH A (2016)
8. ELBASHIR B (2018)
9. ISSA S (2018)
10. ISSA S (2018)
11. MIRJI R (2017)
12. SAYYED M (2017)
13.
14. ALI A (2019)
15. RAMMAH Y (2020)
16. RAMMAH Y (2020)
17. MOULIKA G (2018)
18. HWANG M (2016)
19. SAAD M (2019)
20. HSU J (2018)
21. SOUISSI F (2018)
22. PASCUTA P (2010)
23. LAI Y (2011)
24. CORMIER G (1994)
25. LITTLEFLOWER G (2007)
26. DAYANAND C (1996)
27. IVASCU C (2011)
28. DOWEIDAR H (2014)
29. DOWEIDAR H (2018)
30. DOWEIDAR H (1997)
31. RAMMAH Y (2020)
32. RAMMAH Y (2020)
33. PERISANOGLU U (2020)
34. ALBURIAHI M (2020)
35. ALHADEETHI Y (2020)
36. HAMEDMISBAH M (2020)
37. MAKISHIMA A (1973)
38. MAKISHIMA A (1975)
39. KURTULUS R (2021)
40. CHEN Q (2021)
41. RAMMAH Y (2020)
42. SAKAR E (2020)
43. INABA S (1999)
44. SIDEK H (2016)
|
10.1016_j.iotech.2024.101030.txt
|
TITLE: Preclinical data and design of a phase I clinical trial of neoantigen-reactive TILs for advanced epithelial or ICB-resistant solid cancers
AUTHORS:
- Palomero, J.
- Galvao, V.
- Creus, I.
- Lostes, J.
- Aylagas, M.
- Marín-Bayo, A.
- Rotxés, M.
- Sanz, M.
- Lozano-Rabella, M.
- Garcia-Garijo, A.
- Yuste-Estevanez, A.
- Grases, D.
- Díaz-Gómez, J.
- González, J.
- Navarro, J.F.
- Gartner, J.J.
- Braña, I.
- Villalobos, X.
- Bayó-Puxan, N.
- Jiménez, J.
- Palazón, A.N.
- Muñoz, S.
- Villacampa, G.
- Piris-Giménez, A.
- Barba, P.
- Codinach, M.
- Rodríguez, L.
- Querol, S.
- Muñoz-Couselo, E.
- Tabernero, J.
- Martín-Lluesma, S.
- Gros, A.
- Garralda, E.
ABSTRACT:
Background
Adoptive cell therapy (ACT) of ex vivo expanded tumor-infiltrating lymphocytes (TILs) can mediate objective tumor regression in 28%-49% of metastatic melanoma patients. However, the efficacy of TIL therapy in most epithelial cancers remains limited. We present the design of a phase I clinical study that aims to assess the safety and efficacy of NEXTGEN-TIL, a TIL product selected based on ex vivo neoantigen recognition, in patients with advanced epithelial tumors and immune checkpoint blockade (ICB)-resistant solid tumors.
Materials and methods
Pre-rapid expansion protocol (REP) TIL cultures expanded in high-dose interleukin 2 (HD-IL-2) from patients with metastatic solid tumors were screened for recognition of autologous tumor cell lines (TCLs) and/or neoantigens. Six good manufacturing practice (GMP)-grade validations of pre-REP TIL expansion were carried out and TIL cultures from these six intermediate products were selected to carry out the clinical-scale GMP validation of the REP.
Results
TILs expanded in 82% of patient-derived tumor biopsies across different cancer types and these frequently contained tumor- and neoantigen-reactive T cells. During GMP validations, a variable number of TIL cultures expanded, constituting the intermediate products (pre-REP). Three finished products were manufactured using a REP which reached cell doses ranging from 4.3e9 to 1.1e11 and met the established specifications. The NEXTGEN-TIL clinical trial entails a first expansion of TILs from tumor fragments in HD-IL-2 followed by TIL screening for neoantigen recognition and REP of selected neoantigen-reactive TIL cultures. Treatment involves a classical non-myeloablative lymphodepleting chemotherapy followed by NEXTGEN-TIL product administration together with HD-IL-2.
Conclusions
NEXTGEN-TIL exploits ex vivo expanded neoantigen-reactive TILs to potentially improve efficacy in patients with epithelial and ICB-resistant tumors, with a safety profile expected to be similar to that of traditional TIL products.
BODY:
Introduction Immune checkpoint blockade (ICB) has shown unprecedented results in a variety of aggressive and hard-to-treat cancers. However, transient responses and low response rates in many metastatic epithelial cancers remain significant challenges. 1 Thus, there is an unmet need to develop more effective therapies. Adoptive cell therapy (ACT) with autologous tumor-infiltrating lymphocytes (TILs), TIL-based ACT (TIL-ACT), aims to augment the immune system’s ability to specifically recognize and kill cancer cells. 2,3 This therapeutic regimen consists of a non-myeloablative lymphodepleting (NMA-LD) chemotherapy followed by the infusion of ex vivo expanded autologous TILs 4 and administration of high-dose interleukin 2 (HD-IL-2) to sustain lymphocyte activity. TIL-ACT has been extensively explored for the treatment of metastatic melanoma and represents an interesting therapeutic alternative. 5 In a meta-analysis combining data from 13 studies (published 1988-2016) using TIL-ACT in 410 heavily pretreated patients (some with brain metastases), the pooled objective response rate (ORR) estimate was 41% and the complete response rate was 12%. 6 Subsequent studies including ICB- and targeted therapy-refractory populations have shown an ORR ranging between 28% and 49%, 7 leading to its accelerated approval for the treatment of adult patients with unresectable or metastatic melanoma refractory to standard therapy. 8-13 A recent meta-analysis found no difference in ORR or complete response rate between studies with and without prior anti-PD-1/PD-L1 treatment in TIL-ACT efficacy for melanoma patients. This suggests that previous anti-PD-1/PD-L1 treatment does not affect the clinical response or survival benefit from TIL-ACT in advanced cutaneous melanoma, supporting its use as a second-line treatment option. 14 TILs can be successfully expanded from various types of tumors, and promising preliminary antitumor responses have been observed in non-small-cell lung cancer patients 15 and human papillomavirus-positive (HPV+) cancer patients. 16 Therefore, current evidence supports further testing of this therapeutic approach in solid tumors other than melanoma. Nevertheless, the adoptive transfer of ‘unselected TILs’ (without selection of anticancer T cells), commonly used to treat patients with metastatic melanoma, has shown limited activity in other epithelial cancers. 17-19 Accumulating evidence supports that lymphocytes targeting neoantigens play an important role in the antitumor efficacy of cancer immunotherapy, including TIL therapy. TIL infusion products from melanoma patients who experienced complete tumor regression following TIL therapy frequently contain neoantigen-reactive TILs. 20-26 The absolute number of infused tumor-reactive TILs, 25-27 tumor mutation burden, neoantigen burden 28 as well as the frequency of neoantigen-reactive lymphocytes 24 have been positively associated with TIL therapy efficacy in patients with advanced melanoma. This association between the clinical activity of TIL therapy and mutational burden and detection of neoantigen reactivity was also observed in patients refractory to ICB, 26 suggesting that delivering an enriched neoantigen-reactive TIL product or selecting patients for TIL therapy based on the detection of neoantigen reactivity could improve clinical outcome in this setting.
In addition, adoptive transfer of TIL cultures selected for neoantigen recognition has been shown to induce antitumor responses in selected patients with cholangiocarcinoma, colorectal (CRC), breast (BrCa) and cervical cancers. 29 These findings suggest that enriching TILs for neoantigen recognition could prove critical to enhance the efficacy of TIL therapy in patients with epithelial cancers. 30-34 In this clinical study, we propose to manufacture a T-cell product composed of TIL cultures selected for their ability to recognize tumor-specific neoantigens and to use this product to individually treat patients with metastatic, refractory epithelial tumors as well as ICB-resistant solid tumors. Besides, we also propose screening tumors and T cells at baseline and after treatment to determine whether specific phenotypes and functional features may be related to clinical outcomes. Materials and Methods Patient samples and TIL expansion for preclinical testing For assessing the preclinical expansion of TILs from core tumor biopsies and for testing the clinical-scale good manufacturing practice (GMP) validation of TIL expansion, patients were enrolled on a project approved by the institutional review board of the Vall d’Hebron Hospital [PR(AG)252-2016, PR(AG)318-2018, respectively] and signed an informed consent. All patients had metastatic solid tumors with variable tumor burden and had received a wide range of prior therapies including, in some instances, immunotherapy. A fresh tumor biopsy and at least one blood sample were obtained for each patient. Peripheral blood mononuclear cells (PBMCs) were isolated from the blood samples or leukapheresis using a Ficoll gradient centrifugation and were cryopreserved for future use. To expand TILs, core needle (trucut) biopsies from tumors (∼2 mm × 12 mm) were cut into 6-18 2 mm × 2 mm fragments, which were used to expand independent TIL cultures in 24-well plates in 1 : 1 T-cell medium [RPMI-1640 with l -glutamine and AIM-V (Thermo Fisher Scientific, Paisley, UK) supplemented with penicillin 100 U/ml, streptomycin 100 μg/ml (Thermo Fisher Scientific, Paisley, UK), Hepes 12.5 mM (Biowest, Nuaillé, France), 10% human serum (prepared in-house) and IL-2 6000 IU (Novartis, Schiphol, Netherlands)]. TIL expansion was considered successful when the TIL culture reached confluency in 4 wells from a 24-well plate, time at which they were cryopreserved for future use. This phase of expansion constituted the pre-rapid expansion protocol (REP). TILs that did not reach this level of expansion were not considered for subsequent neoantigen screening. Additional tumor biopsies were frozen in optimal cutting temperature compound and were used as a source for DNA and RNA extraction. Cell lines Fresh tumor-derived fragments or 1 × 10e6 tumor-single suspension cells were cultured in RPMI-1640 with 20% Hyclone fetal bovine serum (FBS) (GE Healthcare), penicillin 100 U/ml, streptomycin 100 μg/ml and Hepes 25 mM (Thermo Fisher Scientific), at 37°C in 5% CO 2 . Medium was replaced monthly until cell lines were established. Established tumor cell lines (TCLs) were sequenced and cryopreserved at early passage. NIH 3T3 CD40L cells were obtained from the National Cancer Institute by transduction of NIH 3T3 cells (American Type Culture Collection) with a retrovirus encoding CD40L. NIH 3T3 CD40L cells were maintained in Dulbecco’s modified Eagle’s medium with 10% Hyclone FBS, penicillin 100 U/ml, streptomycin 100 μg/ml and Hepes 25 mM, at 37°C in 5% CO 2 . 
Identification of non-synonymous mutations by tumor whole-exome sequencing and RNA sequencing Genomic DNA and total RNA were purified from optimal cutting temperature compound-embedded tumor sections, and normal DNA was extracted from PBMCs. The percentage of tumor was assessed by immunohistochemistry. DNA concentration was measured using a Qubit™ Fluorometer (Thermo Fisher Scientific), and the quality and size of tumor and normal DNA were assessed using a 4200 TapeStation (Agilent). Whole-exome sequencing (WES) libraries were generated by exome capture of ∼20 000 coding genes using the SureSelect Human All Exon V6 Kit (Agilent Technologies, Santa Clara, California, USA), and paired-end sequencing was carried out on the Illumina NovaSeq 6000 platform. The average sequencing depth ranged from 200× to 300× for each of the individual libraries generated. Alignment of WES reads to the reference human genome build hg19 was carried out using bwa-mem after quality trimming with TrimGalore. 35 Aligned reads were processed following the GATK4 36 best practices (MarkDuplicates and BaseRecalibration). Variant calling was carried out with VarScan2, 37 Strelka2, 38 SomaticSniper 39 and Mutect2. 40 All the somatic non-synonymous mutation (NSM) variants detected were filtered according to the following criteria: minimum coverage of 10 reads, minimum of 4 variant reads, >7% variant allele frequency, and called by two or more callers (single nucleotide variants) or by one caller (insertions and deletions). Filtered variants were merged and annotated using VEP 41 and epitopes were generated from the variants using Varcode. 42 Affinity binding scores were assigned to the epitopes using MHCflurry 43 and the human leukocyte antigen (HLA) typing obtained from OptiType. 44 Epitopes were manually vetted using the Integrative Genomics Viewer, and NSMs were selected as candidate neoantigens for the generation of tandem minigenes (TMGs) based on their presence in the tumor. For VHIO-08, epitopes were prioritized for TMG generation using the affinity binding scores. 45 When possible, a messenger RNA (mRNA) sequencing library was generated from the respective samples using the Illumina TruSeq RNA Library Prep Kit. Alignment of RNA reads to the reference human genome build hg19 was carried out using STAR after quality and adapter trimming with TrimGalore. Aligned reads were processed following the GATK4 best practices (MarkDuplicates, IndelRealignment and BaseRecalibration). Gene counts and fragments per million mapped reads values were calculated using featureCounts 46 and used to assess the expression of candidate mutations. The analysis pipeline (WES and RNA) is available at: 47 https://github.com/jfnavarro/hla_pipeline . Design and generation of TMGs NSMs for immunological screening were selected based on their detection in the tumor exome. For each NSM identified by WES, one minigene construct was designed encoding the mutated amino acid (aa) flanked by 12 aa of the wild-type (wt) sequence; up to 25 minigenes were strung together to generate a TMG in a single open reading frame. TMG constructs were codon optimized and subcloned into pcDNA3.1+ modified to include two copies of the β-globin 5′ untranslated region to enhance RNA stability. In vitro transcribed RNA was then generated using the TMG constructs as a template with the HiScribe T7 ARCA mRNA Kit with tailing (New England Biolabs), as instructed by the manufacturer, and was subsequently used to transfect autologous B cells.
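The variant-filtering thresholds and the minigene design rule described in this subsection can be summarized in a short sketch. The code below is only an illustration of those stated criteria, not the actual pipeline (which is available at the GitHub repository cited above); the record fields, function names and chunking helper are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Variant:                      # hypothetical record for one somatic call
    coverage: int                   # total reads at the position
    alt_reads: int                  # reads supporting the variant
    vaf: float                      # variant allele frequency, 0-1
    n_callers: int                  # number of callers reporting the variant
    is_indel: bool                  # insertion/deletion vs single nucleotide variant

def passes_filters(v: Variant) -> bool:
    """Thresholds stated in the text: >=10x coverage, >=4 variant reads, >7% VAF,
    and >=2 callers for SNVs (a single caller suffices for indels)."""
    min_callers = 1 if v.is_indel else 2
    return (v.coverage >= 10 and v.alt_reads >= 4
            and v.vaf > 0.07 and v.n_callers >= min_callers)

def minigene(wt_protein: str, mut_pos0: int, mut_aa: str, flank: int = 12) -> str:
    """Mutated amino acid flanked by up to `flank` wild-type residues on each side."""
    start = max(0, mut_pos0 - flank)
    end = min(len(wt_protein), mut_pos0 + flank + 1)
    return wt_protein[start:mut_pos0] + mut_aa + wt_protein[mut_pos0 + 1:end]

def tandem_minigenes(minigenes: List[str], max_per_tmg: int = 25) -> List[str]:
    """String up to 25 minigenes together into each tandem minigene (TMG)."""
    return ["".join(minigenes[i:i + max_per_tmg])
            for i in range(0, len(minigenes), max_per_tmg)]
```

Each TMG would then be codon optimized and transcribed in vitro, as described above; the sketch covers only the selection and sequence-design logic.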
Transfection of autologous B cells with TMG RNA Antigen-presenting B-cell lines were generated by CD19 microbead (Miltenyi Biotec) isolation from PBMCs. B cells were expanded using irradiated NIH 3T3 CD40L cells in B-cell medium, comprising Iscove’s modified Dulbecco’s medium (Quality Biological Inc.) with 10% human AB serum (processed in-house), penicillin 100 U/ml, streptomycin 100 μg/ml, L-glutamine 2 mM and IL-4 200 U/ml (PeproTech). On day 3, fresh B-cell medium was replenished. B cells were used fresh or cryopreserved at day 5-6. When used after cryopreservation, cells were thawed into B-cell medium the day before electroporation. Peptide synthesis and pulsing As a complementary neoantigen-reactive T-cell screening approach, 25-mer peptides each encoding one NSM, as well as minimal epitopes, were purchased from JPT. Crude peptides were used for the initial in vitro screening of T cells; to validate reactivities observed in the initial screen, selected high-performance liquid chromatography (HPLC)-purified mutant peptides and their wt counterparts were ordered. B cells were harvested, washed, resuspended at 2e6-5e6 cells/ml in their corresponding media supplemented with the appropriate cytokines, and pulsed with peptide at 10 μg/ml (25-mer peptides) or 1 μg/ml (minimal epitopes). Pulsing with peptide pools (PPs) was carried out at a final concentration of 10 μg/ml per peptide. After overnight pulsing, B cells were resuspended in T-cell medium and immediately used in coculture assays. Assessment of T-cell reactivity using interferon-γ ELISPOT assay and detection of 4-1BB To test reactivity to neoantigens, 2e4-5e4 T cells were co-incubated with 1e5 peptide-pulsed or 3e5 TMG RNA-electroporated B cells. Media alone were used as the assay background, and irrelevant PPs or TMGs were used as negative controls. Alternatively, T cells were cocultured with 1e5 tumor cells to evaluate tumor reactivity; in this case, T cells cultured with media alone and with an irrelevant HLA-mismatched TCL served as negative controls. All cocultures were carried out in the absence of exogenously added cytokines. For all assays, plate-bound anti-CD3 (OKT3) (1 μg/ml; Miltenyi Biotec) was used as a positive control. The cell-surface T-cell activation-induced receptors OX40 and 4-1BB were assessed by flow cytometry at ∼20 h after co-incubation. Briefly, cocultured cells were pelleted, resuspended in staining buffer and incubated with anti-CD3, -CD4, -CD8, -4-1BB and -OX40 antibodies for 30 min at 4°C. Cells were washed, resuspended in staining buffer containing propidium iodide (PI; Sigma-Aldrich) and acquired on a BD LSRFortessa flow cytometer. T-cell activation was also measured using an interferon (IFN)-γ ELISPOT assay to detect secreted IFN-γ. The criteria used to classify tested TIL cultures as tumor- or neoantigen-reactive were a frequency of expression of 4-1BB or OX40 on CD8+ or CD4+ T cells ≥0.5% and ≥2-fold the frequency of the corresponding negative control, and/or the presence of ≥40 IFN-γ spots and >2× the negative control. Moreover, the reactivity had to be detected in two consecutive coculture experiments.
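The reactivity criteria just described translate directly into a simple decision rule. The sketch below encodes them for a single coculture readout and for the two-consecutive-experiments requirement; the function names, input format and example numbers are hypothetical, and the code is an illustration rather than the screening software actually used.

```python
def positive_readout(marker_pct: float, marker_pct_ctrl: float,
                     ifng_spots: int, ifng_spots_ctrl: int) -> bool:
    """One coculture scores positive if 4-1BB/OX40 on CD8+ or CD4+ cells is >=0.5%
    and >=2x the negative control, and/or IFN-gamma spots are >=40 and >2x the
    negative control."""
    marker_positive = marker_pct >= 0.5 and marker_pct >= 2.0 * marker_pct_ctrl
    elispot_positive = ifng_spots >= 40 and ifng_spots > 2 * ifng_spots_ctrl
    return marker_positive or elispot_positive

def is_reactive(experiment_1: bool, experiment_2: bool) -> bool:
    """Reactivity must be detected in two consecutive coculture experiments."""
    return experiment_1 and experiment_2

# Hypothetical example: 3.1% 4-1BB+ vs 0.4% in the control; 120 vs 15 IFN-gamma spots
first = positive_readout(3.1, 0.4, 120, 15)
second = positive_readout(2.8, 0.5, 95, 20)
print("TIL culture classified as neoantigen-reactive:", is_reactive(first, second))
```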
Identification of the specific neoantigen recognized by reactive TILs in preclinical studies To identify the specific mutations recognized within the reactive TMG/PP, expanded TIL cultures were cocultured with TMG-electroporated autologous B cells for 20 h. CD3+CD8+ cells expressing 4-1BB were sorted on a BD FACSAria and expanded using a REP. Briefly, 4-1BB+CD8+ cells were seeded in T25 flasks in T-cell medium containing anti-CD3, IL-2 3000 IU/ml and irradiated PBMCs pooled from three allogeneic donors. At day 14, cells were harvested and either used in coculture experiments or cryopreserved until further analysis. Crude 25-mer peptide preparations were used for neoantigen screening, and the reactivities were further confirmed with HPLC-grade peptides. TIL expansion for GMP validations, pre-REP and REP GMP-grade validations of TIL expansion were carried out separately for the pre-REP phase, up to cryopreservation of the intermediate TIL cultures, and for the REP phase. Of note, the validation of the REP was carried out without screening the neoantigen reactivity of the intermediate TIL cultures. TIL expansion from tumor trucut biopsies was done following the same procedure as the TIL expansion for preclinical testing, at the qualified GMP facility of the Banc de Sang i Teixits (BST), Barcelona. Briefly, TIL cultures reaching confluency in 4 wells of a 24-well plate were analyzed by flow cytometry and, when TIL cultures contained a minimum of 9e6 viable total cells, a viability of >70% and a frequency of >95% CD45+ cells, they were cryopreserved in at least three cryovials, constituting the intermediate cell products. The validations of the REP phase were done following a procedure very similar to the one used for the preclinical studies. However, some modifications were incorporated to obtain a sufficient cell dose in the final product. Briefly, the intermediate TIL cultures were thawed and 1e6-3e6 TILs were seeded in G-REX vessels with irradiated PBMCs (pooled from 10-12 donors) in T-cell medium containing anti-CD3 and IL-2 3000 IU/ml. TIL cultures were split, and the REP was harvested on day 11. Results Preclinical testing of TIL expansion from core tumor biopsies and tumor and neoantigen reactivity assessment Given that surgical resection is not standard for patients with metastatic disease, and that it considerably increases the costs of TIL therapy, we aimed to expand TILs from image-guided tumor core biopsies. To test expansion of TILs from tumor core biopsies, we expanded TILs from n = 154 trucut tumor biopsies from patients presenting with different types of solid tumors, all of which were refractory to standard therapies before inclusion in a phase I clinical trial. TILs were successfully expanded from 82% of the biopsies (Figure 1). We observed high variability in the percentage of tumor fragments giving rise to ex vivo expanded TILs, but TILs could be generated irrespective of the origin of the tumor and irrespective of whether the patient was immunotherapy experienced (Figure 2A and B). We generated 10 short-term culture TCLs from these biopsies, which enabled us to test whether TILs expanded at the pre-REP phase recognized the corresponding autologous TCL. As shown in Figure 2C, for patient VHIO-055, 4 out of 18 TIL cultures expanded ex vivo contained tumor-reactive TILs and were classified as tumor-reactive (TIL F6, F8, F14 and F18), since 4-1BB was up-regulated on CD8+ cells and IFN-γ was secreted when these TIL cultures were cocultured with their autologous TCL, but not with an irrelevant one.
We carried out tumor reactivity assays by co-culturing the expanded TIL cultures against TCLs for 10 patients; in 9 out of 10 independent cases, the TIL cultures contained tumor-reactive TILs (Figure 2D). Importantly, the frequency of tumor-reactive TILs based on 4-1BB or OX40 up-regulation was very heterogeneous, ranging from 1.5% to 68% (Table 1). Next, to test TILs derived from tumor core needle biopsies for neoantigen reactivity in preclinical studies, we carried out WES of tumor and normal DNA from four patients (VHIO-008, VHIO-009, VHIO-029 and VHIO-055), complemented with RNA sequencing of the tumor samples, to identify all expressed tumor-specific NSMs, and we then screened the ex vivo expanded TIL cultures for recognition of B cells electroporated with TMGs or pulsed with PPs encoding the candidate neoantigens identified. This method to screen for neoantigen recognition was first reported in 2014 27 and can be exploited to identify and select for TILs capable of recognizing neoantigens and the autologous tumor. 33,48,49 The results of a representative screening of VHIO-009 TILs for neoantigen recognition are shown in Figure 3. In this patient, TIL cultures 8, 9 and 10 recognized neoantigens encoded by TMG7, TMG2, or TMG6 and TMG8, respectively, as measured either by IFN-γ spots or by up-regulation of 4-1BB on the CD3+CD8+ TILs (Figure 3A and B). As shown in Figure 3C, TIL culture 9 recognized TP53RKp.S108C but not other mutations included in TMG2/PP2; moreover, this reactivity was HLA-I restricted (data not shown). TIL culture 10 recognized NSMCE1p.N156D (HLA-II restricted) and GBPp.E359K (HLA-I restricted) (data not shown). The neoantigen within TMG7 that was recognized by TIL culture 8 is as yet unknown. The rest of the TIL cultures did not appear to recognize any of the neoantigens tested. Interestingly, TIL cultures 8, 9 and 10 were among the five TIL cultures that displayed the highest frequency of tumor-reactive CD8+ TILs based on the expression of 4-1BB (data not shown). TIL cultures derived from three additional patients, VHIO-008, VHIO-029 and VHIO-055, were also screened for neoantigen recognition in detail and the results are summarized in Table 2. The number of NSMs identified for these patients’ tumors ranged from 183 to 2485. Given the high number of mutations identified in VHIO-008, we selected the top-ranking candidate mutations predicted to bind to the patient’s HLA molecules and constructed 12 TMGs encoding up to 24 mutated minigenes each. VHIO-008 TIL3 and TIL4 contained CD8+ T cells targeting MAGEB2 p.E167Q, encoded by TMG4, and TIL5 contained CD8+ T cells recognizing RPL14 p.H20Y, encoded by TMG3. Hence, three TIL cultures recognized two HLA-I restricted neoantigens detected by WES. VHIO-029 TILs were screened for recognition of all candidate neoantigens identified by WES. In this patient, TIL cultures 3 and 5 were found to recognize the neoantigen ETV1 p.E455K, while TIL7 recognized GEMIN5 p.S1360L. Finally, TIL cultures derived from VHIO-055 recognized a neoantigen derived from TPD53BPp.629L. In summary, we were able to detect TIL cultures capable of recognizing neoantigens identified by tumor WES in four out of four patients screened. Consistent with previous data from our laboratory and others, using this approach neoantigen-specific TIL cultures can be detected in ∼85% of cancer patients screened. 49
Consequently, this technique represents an attractive approach to screen and select TILs for patient treatment, given that TCLs or fresh tumor targets are often not available. 50 Process development and validation of TIL expansion under GMP For the GMP validations, carried out at the classified cellular therapy facilities of the BST, patients with different epithelial cancers (colon, lacrimal gland adenocarcinoma, mesothelioma and cervical adenopathy) were enrolled irrespective of their mutational load. From a total of six additional trucut tumor biopsies processed, a variable number of TIL cultures expanded; these constituted the intermediate cell products and were subsequently cryopreserved in independent vials. The median duration of the pre-REP phase was 25 days in culture and, on average, 67% of the seeded tumor fragments expanded ex vivo (range 11%-83% depending on the tumor). Moreover, the viability of the expanded TIL cultures during pre-REP before cryopreservation was 87% ± 5% (Figure 4). Three intermediate product batches (TIL19001, TIL19003 and TIL19010) were selected to carry out the GMP validations of the REP, based on their different frequencies of CD3+, CD4+ and CD8+ populations and different pre-REP expansion times. As an exception, the GMP validations of the REP were carried out without prior assessment of neoantigen recognition in the pre-REP TIL cultures, since the goal of the GMP validations was to ensure that TILs could be expanded to very high numbers while meeting all quality standards established by the regulatory authorities. In total, six intermediate product batches (TIL19001, TIL19002, TIL19003, TIL19006, TIL19009 and TIL19010) as well as three finished products (REP20002/TIL19001, REP20003/TIL19003 and REP20004/TIL19010) were generated following the validation design defined in the manufacturing process. The generated batches met the established specifications in all cases (Tables 3 and 4). The number of total viable cells obtained in each case was 9.94e9 (REP20002), 1.10e11 (REP20003) and 4.30e9 (REP20004). The finished product consisted of expanded T lymphocytes derived from autologous tumor, adjusted to the defined dose range in conditioning solution (Plasmalyte supplemented to 2% w/v with human albumin). Clinical study design Based on the aforementioned preclinical data showing detection of neoantigen-reactive TILs in all four patients tested and on the literature, as well as on the GMP-grade validation of TIL expansion, we decided to design a clinical trial. This single-center, open-label, phase I trial aims to assess the safety and efficacy of neoantigen-reactive ex vivo expanded TIL cultures (NEXTGEN-TIL) in patients presenting with advanced epithelial tumors and ICB-resistant solid tumors. TIL-ACT with the NEXTGEN-TIL product aims to eliminate the tumor cells by using autologous neoantigen-selected TIL cultures expanded ex vivo, in combination with HD-IL-2 and preceded by the administration of an NMA-LD regimen. This study consists of two separate phases, depicted in Figure 5, each requiring separate informed consent and meeting specific inclusion criteria. The first phase (Figure 5A) is the pre-treatment/screening phase, where the patient’s tumor and blood samples are obtained to select TILs for ex vivo expansion (pre-REP), followed by screening for recognition of WES-identified neoantigens.
Patients with ≥1 neoantigen-reactive TIL culture who satisfy the eligibility criteria progress to the second phase (Figure 5B), the treatment phase, where the selected neoantigen-reactive TIL cultures undergo REP while the patient receives a preparative lymphodepleting chemotherapy. Then, the expanded selected TIL cultures are infused back, followed by administration of HD-IL-2. Given that the first phase can take between 1.5 and 3 months and that the patients’ clinical condition can deteriorate, patients can receive a bridging treatment in the meantime (between the first and second phases). The NMA-LD regimen involves cyclophosphamide (60 mg/kg) and fludarabine (25 mg/m2) on days −5 and −4, and fludarabine alone on days −3 to −1 (Figure 6). Post-regimen, IL-2 is administered intravenously at 720 000 IU/kg every 8 h, up to six doses as tolerated. This clinical protocol has been approved by both the institutional ethics committee and the national regulatory agency (Spanish Agency of Medicines and Medical Devices). All patients recruited in the trial will be asked to sign an informed consent form, as required. The trial has been registered both in the Spanish Trial Registry (EudraCT 2020-005778-90) and in ClinicalTrials.gov (NCT05141474). Selection of neoantigen-specific TIL cultures for patient treatment The technical details of the screening phase of this clinical study are depicted in Figure 5. Ex vivo expanded TIL cultures, constituting the intermediate pre-REP cell products, will be cryopreserved and, as soon as all the reagents needed are available (B cells, TMGs and PPs), they will be screened against neoantigen-loaded autologous antigen-presenting cells. TIL cultures comprising neoantigen-reactive TILs (specific criteria are detailed in the ‘Materials and Methods’ section) will be selected and pooled for REP to high cell numbers and subsequent patient treatment. In order to release intermediate pre-REP cell products to undergo REP, the following safety specification criteria need to be met: intermediate pre-REP cultures have to be sterile, mycoplasma negative and have an endotoxin level ≤1 EU/ml. Moreover, among all the cultures expanded from the seeded fragments, at least two pre-REP cultures need to be neoantigen-reactive. Selection of patients To be eligible for the study, patients must have histologically or cytologically proven metastatic or unresectable solid tumors. The disease must have progressed on at least one standard therapy (including at least one prior line with ICB for the group of patients with tumors where ICB is approved), or the patient is unable/unwilling to receive standard therapy, or no standard therapy exists for the particular disease. Additionally, patients must meet all the criteria defined in the clinical study before enrollment. The population in this study is heterogeneous. Based on the literature, we hypothesize that TIL cultures enriched for neoantigen recognition (NEXTGEN-TIL) may be superior to unselected TILs at mediating tumor regression in patients with epithelial tumors and other solid tumors where ICB is approved and used as part of standard therapy. The primary objective of this study is to evaluate the safety and tolerability of NEXTGEN-TIL products in patients with metastatic or unresectable epithelial tumors and ICB-resistant solid tumors. The secondary objectives are to determine the success of producing neoantigen-reactive TILs and to evaluate the initial clinical activity of the NEXTGEN-TIL products in our target patients. This study also has several exploratory objectives.
Firstly, to study the phenotypic and transcriptomic traits of TILs and their functionality, as well as their persistence in peripheral blood following transfer, and to explore the relationships between these features and clinical outcome. Secondly, to better understand the relationship between the diversity of neoantigens targeted, the clonality of the neoantigens and the diversity and frequency of the T-cell receptors (TCRs) targeting each of these neoantigens. Further, we aim to identify the contribution of heterogeneity of the specific neoantigens targeted, or of their loss of expression, to tumor progression and clinical outcomes. Lastly, we will carefully analyze the economic cost of this therapy at VHIO to determine the feasibility of escalating its application from a pilot to a regular health care procedure. Clinical study data analysis All safety parameters in the study will be summarized. Safety will be determined by adverse events, laboratory tests, vital signs, electrocardiograms, physical examinations and performance status. Adverse event data will be reported in listings. The first safety evaluation will be carried out with the first six patients. If there is a maximum of one treatment-limiting toxicity (TLT), the study will continue recruiting patients up to a total of 10 patients. If more than one TLT, or any of the other criteria defined in the clinical protocol, is observed in the first six patients, an adjustment in the TIL product will be needed for further examination. For categorical endpoints (i.e. ORR), counts and percentages, with 95% confidence intervals (CIs), will be calculated (see the illustrative sketch further below). For the univariate analysis, logistic regression will be carried out to identify prognostic factors for response. Continuous variables will be summarized with descriptive statistics (mean, standard deviation, range and median). For the ORR analysis, only patients who have measurable disease at baseline and have had their disease re-evaluated will be considered. Time-to-event variables (i.e. progression-free survival) will be analyzed according to the Kaplan–Meier method. Kaplan–Meier survival curves will be reported, along with associated 95% CIs. Waterfall plots will be used to describe the best change in the sum of target lesions during follow-up. Extensive longitudinal data analysis will be used to analyze the percentage change in tumor size from baseline at 6 and 12 weeks, then every 3 months in the first year and every 6 months in the second year, and thereafter at the principal investigator’s discretion. Discussion Recent publications have determined to what extent patients with common epithelial cancers contain TILs that recognize neoantigens, a key prerequisite for developing such personalized T-cell products. In 2019, Parkhurst et al. detected neoantigen-reactive TILs in 62 out of 75 (83%) patients with specific gastrointestinal (GI) cancers using high-throughput immunologic screening of TILs against candidate mutant gene products identified through WES. A total of 124 TIL populations reactive against neoantigens were identified, all of which were private except for one. Furthermore, the results of 50 in vitro T-cell recognition studies showed that 1.6% of candidate NSMs are immunogenic. These results indicate that most epithelial cancers induce T-cell responses to neoantigens, making a neoantigen-enriched TIL product a real possibility.
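As referenced in the analysis plan above, categorical endpoints such as ORR will be reported as counts and percentages with 95% CIs. The sketch below shows one common way of computing such an interval, an exact Clopper–Pearson interval, with placeholder counts; it is an illustration only, not the trial's prespecified statistical method or software.

```python
from scipy.stats import beta

def clopper_pearson(responders: int, n: int, alpha: float = 0.05):
    """Exact two-sided (1 - alpha) confidence interval for a binomial proportion."""
    lower = 0.0 if responders == 0 else beta.ppf(alpha / 2, responders, n - responders + 1)
    upper = 1.0 if responders == n else beta.ppf(1 - alpha / 2, responders + 1, n - responders)
    return lower, upper

# Placeholder counts: e.g. 3 objective responses among 10 evaluable patients
r, n = 3, 10
lo, hi = clopper_pearson(r, n)
print(f"ORR = {r / n:.0%}, 95% CI ({lo:.1%}, {hi:.1%})")
```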
TIL administration was associated with durable tumor regression in one BrCa patient and in two patients with HPV+ head and neck or cervical cancer in whom the infusion product showed predominantly neoantigen reactivity. 30 Subsequently, in 2022, it was reported that TILs were isolated and grown in culture from the resected lesions of 42 patients with metastatic BrCa, and a median number of 112 (range 6-563) NSMs per patient was identified. 31 Twenty-eight of 42 (67%) patients contained TILs that recognized at least one immunogenic somatic mutation (median 3 neoantigens per patient, range 1-11), and 13 patients demonstrated robust reactivity appropriate for adoptive transfer. In this study, six patients were enrolled on a protocol of ACT of enriched neoantigen-specific TILs in combination with pembrolizumab (≤4 doses). Objective tumor regression was noted in three patients, including one complete response (now ongoing over 5.5 years) and two partial responses (6 and 10 months). In addition, infusion of TIL products highly enriched for neoantigen recognition induced tumor regression in patients with cholangiocarcinoma and CRC. 32-34 Based on our preclinical testing of TILs for neoantigen recognition in four patients, only a small fraction of the TIL cultures screened comprised neoantigen reactivity (Figure 3 and Table 2). This, together with the previous literature showing tumor regression in selected patients with epithelial cancers following treatment with neoantigen-selected TILs, supports the selection of reactive TIL cultures as a means to enrich for neoantigen-reactive TILs and thereby potentially enhance clinical efficacy. Nonetheless, one consideration is that the screening and selection of neoantigen-reactive TIL cultures occurs in the intermediate pre-REP T-cell product, which needs to be further expanded for patient treatment. Unless the TCR clonotype(s) displaying neoantigen recognition is already highly oligoclonal, the rapid expansion of TILs could lead to changes in the TCR repertoire, which could be either beneficial or detrimental depending on whether they result in an increase or a decrease in the neoantigen-reactive TCR clonotypes. In the case of GI cancers, TIL expansion during the REP was found to decrease the frequency of neoantigen-reactive lymphocytes, 30-34 but whether this also occurs in TILs derived from other tumors is unknown. Further purifying neoantigen-reactive TCR clonotypes or driving their specific expansion are strategies that are being investigated by us as well as by other groups, to potentially maintain or enhance neoantigen reactivity during the REP. 49 Our results, as well as previous data, 29 support that TILs can expand from ICB-naive and ICB-refractory tumors and that they can recognize the autologous tumor. However, recent data show that anti-programmed cell death protein 1 (PD-1)-experienced patients harbor tumors with lower mutational burden, and TILs derived from these patients recognize fewer neoantigens, as compared with anti-PD-1-naive patients. 51 Despite this, in that study, TIL products capable of mediating antitumor responses following transfer in anti-PD-1-experienced patients still recognized more neoantigens than those of non-responders. The decrease in the detection of neoantigen-reactive TILs could represent a challenge for the feasibility of treating anti-PD-1-experienced patients with neoantigen-selected TIL products as described here. Conversely, it may also help to select anti-PD-1-experienced patients who are more likely to respond to TIL therapy.
29 Concerning safety, in the more recent studies in solid tumors other than melanoma, toxicities were found to be like the ones previously described, and treatment was described as well tolerated with manageable toxicities. 52 , Following this rationale, we do not anticipate a different safety profile in patients with other solid tumors from patients with metastatic melanoma, because they are treated by the same TIL-ACT standard therapy (i.e. NMA-LD chemotherapy and TIL infusion followed by HD-IL-2). TIL therapy has rarely shown toxicities that could be attributed to the T-cell product. In few instances when these occurred, patients developed autoimmune toxicities associated with infusion of TILs targeting antigens shared by the tumor and normal tissues such as melanoma differentiation antigens. 53 By enriching our TIL product for TIL cultures targeting neoantigens, which are exclusively expressed by the tumor, we intend to make our product safer, but also, potentially, more efficacious. Although the clinical experience with neoantigen-reactive TILs is yet limited, thus far all the neoantigen-selected TIL products have been used in patients with solid cancers (other than melanoma) and have demonstrated to be safe. 54 The results derived from this project might provide novel therapeutic options for patients with metastatic epithelial tumors and ICB-resistant solid tumors, where there is currently an unmet clinical need, as well as contribute to a better understanding of tumor and T-cell traits influencing the clinical efficacy of ACT in this scenario. Funding This work was supported by Instituto de Salud Carlos III (ISCIII) [grant number ICI20/00076] and co-funded by the European Union , and by the Comprehensive Program of Cancer Immunotherapy & Immunology II (CAIMI-II) supported by the BBVA Foundation [grant number 53/2021]. AG was the recipient of a Miguel Servet Contract from ISCIII [grant number MS15/00058] and the Investigator Consolidation Award [grant number CNS2023-145343] from the Ministerio de Ciencia, Innovación y Universidades, Spain. AYE was supported by the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR) [grant number 2021 FI_B 00365]. We are grateful to Ricardo Pujol for revising the manuscript. Disclosure AG is a member of the scientific advisory board (SAB) of Achilles Therapeutics plc, SingulaBIO, RootPath, Inc., BioNTech SE and is a consultant advisor for Instil Bio; is a co-inventor of patents licensed (E-059-2013/0 E-085-2013/0, E-149-2015/0) and with royalties from Intima Bioscience Inc., Intellia Therapeutics, Inc., Tailored Therapeutics, LLC, Cellular Biomedicine Group Inc. and Geneius Biotechnology, Inc. EG is a consultant advisor of Roche/Genentech, F. Hoffmann/La Roche, Ellipses Pharma, Neomed Therapeutics Inc, Boehringer Ingelheim, Janssen Global Services, SeaGen, TFS, Alkermes, Thermo Fisher-Bristol-Mayers Squibb, MabDiscovery, Anaveon, F-Star Therapeutics, Hengrui. EMC is a consultant advisor of Bristol Myers Squibb, Merck Sharp & Dohme, Novartis, Pierre Fabre, Roche, Sanofi, Regeneron and received research funding from MSD, Sanofi, BMS. She declares speaking symposiums with Amgen, Bristol Myers Squibb, Merck Sharp & Dohme, Novartis, Pierre Fabre, Regeneron and clinical trial participation (principal investigator) with Amgen, Bristol Myers Squibb, GlaxoSmithKline, Merck Sharp & Dohme, Novartis, Pierre Fabre, Roche, Sanofi, Iovance, Regeneron. 
PB is a consultant advisor of Allogene, Amgen, Autolus Therapeutics, Bristol Myers Squibb/Celgene, Jazz Pharmaceuticals, Kite/Gilead, Incyte, Miltenyi Biomedicine, Novartis, Nektar, Pfizer and Pierre Fabre. JT reports personal financial interest in the form of scientific consultancy role for Alentis Therapeutics, AstraZeneca, Aveo Oncology, Boehringer Ingelheim, Cardiff Oncology, CARSgen Therapeutics, Chugai, Daiichi Sankyo, F. Hoffmann-La Roche Ltd, Genentech Inc, hC Bioscience, Immodulon Therapeutics, Inspirna Inc, Lilly, Menarini, Merck Serono, Merus, MSD, Mirati, Neophore, Novartis, Ona Therapeutics, Ono Pharma USA, Orion Biotechnology, Peptomyc, Pfizer, Pierre Fabre, Samsung Bioepis, Sanofi, Scandion Oncology, Scorpion Therapeutics, Seattle Genetics, Servier, Sotio Biotech, Taiho, Takeda Oncology and Tolremo Therapeutics; stocks from Oniria Therapeutics, Alentis Therapeutics, Pangaea Oncology and 1TRIALSP; and also educational collaboration with Medscape Education, PeerView Institute for Medical Education and Physicians Education Resource (PER). GV reports speaker’s fee from Pfizer, MSD, GSK and Pierre Fabre; advisory role with AstraZeneca; and consultant fees from Reveal Genomics. All other authors have declared no conflicts of interest.
REFERENCES:
1. MORAD G (2021)
2. SHARMA P (2023)
3. GRANHOJ J (2022)
4. ROSENBERG S (2015)
5. GOFF S (2016)
6. BORCH T (2020)
7. DAFNI U (2019)
8. BESSER M (2020)
9. SEITTER S (2021)
10. SARNAIK A (2021)
11. ORCURTO A (2021)
12. ROHAAN M (2022)
13. CHESNEY J (2022)
14. MULLARD A (2024)
15. MARTINLLUESMA S (2024)
16. CREELAN B (2021)
17. STEVANOVIC S (2019)
18. AMARIA R (2024)
19. PEDERSEN M (2018)
20. SNYDER A (2014)
21. RIZVI N (2015)
22. VANALLEN E (2015)
23. VANROOIJ N (2013)
24. LAUSS M (2017)
25. ROBBINS P (2013)
26. KRISTENSEN N (2022)
27. LU Y (2014)
28. ANDERSEN R (2016)
29. LEVI S (2022)
30. ZACHARAKIS N (2018)
31. STEVANOVIC S (2017)
32. ZACHARAKIS N (2022)
33. TRAN E (2014)
34. TRAN E (2016)
35. JUNG Y (2022)
36. MARTIN M (2011)
37. VANDERAUWERA G (2013)
38. KOBOLDT D (2012)
39. KIM S (2018)
40. LARSON D (2012)
41. BENJAMIN D
42. MCLAREN W (2016)
43. KODYSH J (2020)
44. ODONNELL T (2018)
45. SZOLEK A (2014)
46. DOBIN A (2013)
47. LIAO Y (2014)
48. GROS A (2016)
49. GROS A (2019)
50. PARKHURST M (2019)
51. ANDERSEN R (2018)
52. PEDERSEN M (2018)
53. SCHOENFELD A (2024)
54. YEH S (2009)
|
10.1016_j.nme.2017.08.003.txt
|
TITLE: 22nd International Conference on Plasma-Surface Interactions in Controlled Fusion Devices
AUTHORS:
- Mazzitelli, G.
- Maddaluno, G.
- Apicella, M.L.
- Buratti, P.
- Crisanti, F.
- Tudisco, O.
- Viola, B.
- Visca, E.
ABSTRACT: No abstract available
BODY:
The 22nd International Conference on Plasma-Surface Interactions in Controlled Fusion Devices (PSI-22), organized by ENEA, the Italian National Agency for New Technologies, Energy and Sustainable Economic Development, was held at the Pontificia Università Urbaniana, Roma, Italy, from May 30th to June 3rd 2016. The PSI conference, held every 2 years, is the most important exchange of views among researchers working in the field of plasma edge physics and plasma-wall material interactions in magnetic confinement fusion devices. The main topics addressed at this conference were:
- Physics processes at the plasma-material interface
- Material erosion, migration, mixing, and dust formation
- Plasma fuelling, particle exhaust and control, tritium retention
- Wall conditioning and tritium removal techniques
- Impurity sources, transport and control
- Edge and divertor plasma physics
- Power exhaust, plasma detachment, and heat load control
- Far SOL transport and main chamber plasma-wall interaction
- Plasma edge and first wall diagnostics
- Plasma exhaust and plasma-material interaction for fusion reactors
A tutorial course with contributions from experts in the fields covered by the meeting was organized on May 29th. The attendance at the conference reached 464 participants. The number of abstracts submitted was 502, the largest number since the first PSI conference in 1974. After the selection, 4 review, 22 invited, and 36 contributed oral presentations were given at the conference, together with 366 posters. The resulting 228 papers appearing in these proceedings were reviewed and accepted by at least two independent peer reviewers. On behalf of the PSI program committee, we invite you to the next Conference on Plasma-Surface Interactions in Controlled Fusion Devices, which will be held at Princeton University, NJ, USA, on June 17-22, 2018, organized by the Princeton Plasma Physics Laboratory and chaired by Dr. Rajesh Maingi. Guest Editors
REFERENCES:
No references available
|
10.4103_2225-4110.136544.txt
|
TITLE: Ethnomedical Properties of Taxus Wallichiana Zucc. (Himalayan Yew)
AUTHORS:
- Juyal, Deepak
- Thawani, Vijay
- Thaledi, Shweta
- Joshi, Manoj
ABSTRACT:
Taxus wallichiana Zucc., known as Himalayan yew, belongs to the family Taxaceae. It is a medium-sized, temperate, Himalayan forest tree of medicinal importance. In India, this evergreen tree is found at altitudes between 1800 and 3300m above mean sea level (MSL). It has been used by the native populations for treating common cold, cough, fever, and pain. Its uses are described in Ayurveda and Unani medicine. It received attention recently as its leaves and bark were found to be the prime source of taxol, a potent anticancer drug. It possesses many other biological activities also. We focus on its importance in traditional medicine for its multiple medicinal properties.
BODY:
INTRODUCTION Taxus wallichiana Zucc., or Himalayan yew, belongs to the family Taxaceae and is found in India as an evergreen tree in the temperate Himalayas at altitudes between 1800 and 3300 m and in the hills of Meghalaya and Manipur at an altitude of 1500 m. [1] Taxus is distributed in Europe, North America, North India, Pakistan, China, and Japan. [1] It is a small to medium-sized evergreen tree growing from 10 to 28 m in height. The leaves are flat, dark green, and arranged spirally on the stem. [2] In Asia, its distribution stretches from Afghanistan through the Himalayas to the Philippines, and it is widely distributed in Pakistan and India. In India, it grows in its natural habitat in the Nanda Devi Biosphere Reserve (NDBR) of the Garhwal Himalayas, particularly on the north to north-west slopes. [3] The Himalayan yew, known as Thuner in the western Himalayas, has high medicinal value and ethnobotanical importance. [3] The plant holds an important place in traditional medicine, and its products are used by the local populations for treating common infections. It received wide attention recently because its leaves and bark were found to be the prime source of taxol, a potent anticancer drug which has the unique property of preventing the growth of cancerous cells and is used in the treatment of breast and ovarian cancers. [4] Taxol was first isolated from the bark of Taxus brevifolia, [5] and since then, taxol and related bioactive taxoids have been reported from various other species of the genus Taxus. [6,7] Excellent clinical results with taxol in the treatment of various cancers, particularly in refractory ovarian and breast cancers, have led to substantial demand for this drug. [8,9] The leaves and bark of T. brevifolia, T. wallichiana, and other Taxus species have been used for the extraction of taxol. Due to overexploitation, many species are now endangered and on the verge of extinction. [10] Moreover, several species are disappearing at an alarming rate, mainly at higher altitudes, due to over-harvesting, habitat destruction, and abrupt climate change. The available literature on T. wallichiana shows its analgesic, antipyretic, anti-inflammatory, immunomodulatory, antiallergic, anticonvulsant, antinociceptive, antiosteoporotic, antibacterial, antifungal, antiplatelet, and antispasmodic activities and vasorelaxing effect. [2,11–14] USES IN TRADITIONAL MEDICINE The Himalayan yew has a remarkable history of usage in traditional systems of medicine. The indigenous people live in nearby forests and possess a substantial amount of traditional wisdom on plant utilization. Himalayan medicinal plants form important constituents of alternative medicinal systems such as Amchi, Ayurveda, Han Chinese, Unani, and other traditional medicine systems that are prevalent in this region. Native populations and the inhabitants of the buffer zone villages of NDBR use these plants and their products in folk medicine for the treatment of common infections. The Himalayan yew has been used traditionally for the treatment of high fever and painful inflammatory conditions. It is consumed as decoctions, herbal tea, and juice for treating the common cold, cough, respiratory infections, indigestion, and epilepsy. As a poultice, it is applied locally to infected wounds and burns. [13,15] Its bark and leaves are used in steam baths to treat rheumatism, and the paste made from its bark is used to treat fractures and headaches. Extracts from the tree are also used in medicinal hair oils.
In Pakistan, a decoction of the stem is used in the treatment of tuberculosis. [16] The bark and leaves of T. wallichiana are used in Unani medicine as a source of the drug Zarnab, which is prescribed as a sedative, an aphrodisiac, and a treatment for bronchitis, asthma, epilepsy, snake bite, and scorpion stings. [3] Young shoots of the plant are used in Ayurveda to prepare a tincture for the treatment of headache, giddiness, feeble and falling pulse, coldness of the extremities, diarrhea, and severe biliousness. [1] ANTI-INFLAMMATORY AND ANALGESIC ACTIVITIES The analgesic and anti-inflammatory properties of the T. wallichiana bark extract have been studied. [12] Tasumatrol B, 1,13-diacetyl-10-deacetylbaccatin III (10-DAD), and 4-deacetylbaccatin III (4-DAB) were isolated from the bark extract of T. wallichiana Zucc. The compounds were assessed for anti-inflammatory and analgesic activities using an acetic acid-induced writhing model, a carrageenan-induced paw edema model, and an in vitro lipoxygenase inhibitory assay. All the compounds, especially tasumatrol B, showed significant anti-inflammatory activity in the carrageenan-induced paw edema model, [12] which is used extensively to determine the anti-inflammatory effect of new investigational agents. [17] Taxusabietane A, isolated from the bark extract of T. wallichiana, was analyzed for in vivo and in vitro anti-inflammatory activities using the lipoxygenase inhibitory assay and the carrageenan-induced paw edema model, where taxusabietane A showed significant anti-inflammatory activity. [11] Using the acetic acid-induced abdominal writhing model, the analgesic properties of the bark extracts were analyzed. All compounds, particularly tasumatrol B, revealed significant analgesic activity. [12] Acetic acid plays a critical role in nociception: [17] it triggers the release of arachidonic acid through the cyclooxygenase and prostaglandin biosynthetic pathway. The analgesic properties of the T. wallichiana extract may be due to its inhibitory effects on the biosynthesis of arachidonic acid metabolites. [12] The potential of tasumatrol B as a new lead compound for the management of pain and inflammation can be further explored. ANTICONVULSANT AND ANTIPYRETIC ACTIVITIES It was found that the methanol extracts of T. wallichiana possess potent anticonvulsant and antipyretic activities. [2] The plant extract controlled pentylenetetrazol-induced convulsions in mice. The plant extract, when administered in doses of 100 mg/kg and 200 mg/kg, significantly inhibited myoclonus and clonus, while the inhibition of tonus and hind limb tonic extension was found to be even more significant. In the same study, the antipyretic activity of the plant extract was also shown, where, in the yeast-induced pyrexia model, a 200 mg/kg dose caused significant inhibition. However, at doses of 50 mg/kg and 100 mg/kg, it caused less significant inhibition. The antinociceptive and antipyretic activities may be attributed to the presence of phenols, polyphenols, tannins, saponins, anthraquinones, alkaloids, steroids, and especially the diterpenes found in the crude extract of the plant. [1,7,13] The anticonvulsant and antipyretic activities of T. wallichiana Zucc. support its traditional uses in epilepsy and pyrexia. ANTICANCER ACTIVITIES After the discovery of the anticancer drug taxol (Paclitaxel) from the bark of the Pacific yew tree T.
brevifolia [5] in 1971, a great deal of work was carried out on the chemical investigation of almost all parts (needles, bark, root, seed, heartwood) of several yew species, [7,14,18–21] resulting in the isolation and characterization of 300 taxoids. Systematic studies conducted on the chemical constituents acquired from different parts of T. wallichiana revealed several taxoids of different structural types, five of them being novel molecules. [22] Three lignans, viz. taxiresinol 1, isotaxiresinol 2, and (−)-secoisolariciresinol 3, have been isolated from the heartwood of the plant; these possess anticancer activity. [22] Among them, taxiresinol 1 showed notable in vitro anticancer activity against liver, colon, ovarian, and breast cancer cell lines. [4,8,9,22] Taxol is a highly substituted polyoxygenated cyclic diterpenoid characterized by the taxane ring. It inhibits cell proliferation by promoting the stabilization of microtubules at the G2-M phase of the cell cycle, thereby blocking the depolymerization of microtubules to soluble tubulin. [23,24] ANTIBACTERIAL AND ANTIFUNGAL ACTIVITIES Extracts from various Taxus species have been reported to possess antibacterial and antifungal activities. Taxoids isolated from Taxus cuspidata var. nana have been reported to possess antifungal activity against plant pathogenic fungi. [25] Heartwood extract from Taxus baccata has potential antibacterial and antifungal activity. [26] Bilobetin, a biflavone obtained from the needles of T. baccata, has also been reported to possess antifungal activity. [27] Methanol extracts of the leaf, bark, and heartwood of T. wallichiana were tested against six bacterial and six fungal strains using the hole diffusion and microdilution methods. [13] All extracts and fractions from the plant displayed significant antimicrobial effects, and the minimum inhibitory concentration (MIC) values ranged from 0.23 to 200 mg/ml for the bacterial strains and from 0.11 to 200 mg/ml for the fungi. Taxol and related bioactive taxoids from T. wallichiana may be responsible for the antimicrobial activities. These activities may also be attributed to the presence of phenols, polyphenols, tannins, saponins, anthraquinones, alkaloids, steroids, and especially the diterpenes found in the plant extract. These families of natural products and phytochemical groups are known to display antimicrobial activities. [1,7,13] CONCLUSION The extracts of T. wallichiana Zucc. have been found to possess therapeutic potential, and the plant has an important place in traditional medicine. However, traditional knowledge, which passes orally from generation to generation, is on the verge of extinction due to the disruption of cultural set-ups caused by rapid socio-economic transformation and modernization of society. The diverse biological activities demonstrated by researchers open the door for its potential use in modern medicine. The extracts from various parts of the plant have significant activity against pain, inflammation, fever, fungal and bacterial infections, convulsions, and cancer. Further detailed studies can lead to the development of safe active compounds for therapeutic use in modern medicine and will offer a better understanding of their mechanisms of action.
REFERENCES:
1. KHAN M (2006)
2. NISAR M (2008)
3. PUROHIT A (2001)
4. KOVACS P (2007)
5. WANI M (1971)
6. BALA S (1999)
7. PRASAIN J (2001)
8. BISHOP J (1997)
9. GAUTAM A (2003)
10. SHINWARI Z (2011)
11. KHAN I (2011)
12. QAYUM M (2012)
13. NISAR M (2008)
14. CHATTOPADHYAY S (2006)
15. GONZALEZ J (1980)
16. AHMED E (2004)
17. KHAN H (2010)
18. WALL M (1995)
19. PARMAR V (1999)
20. BALOGLU E (1999)
21. WITHERUP K (1990)
22. CHATTOPADHYAY S (2003)
23. JENNEWEIN S (2001)
24. ASHRAFI S (2010)
25. TACHIBANA S (2005)
26. ERDEMOGLU N (2001)
27. KRAUZE-BARANOWSKA M (2003)
|
10.1016_j.sajce.2022.01.003.txt
|
TITLE: Numerical modeling of wastewater treatment using hollow fiber membrane contactors based on the stiff spring method
AUTHORS:
- Poormohamadian, Seyed Jalil
- Koolivand, Hadis
- Koolivand-Salooki, Mahdi
- Esfandyari, Morteza
ABSTRACT:
In this study, the efficiency of a porous hollow fiber membrane module (PHFMM) was studied using a simple and generic method. The stiff spring method (SSM) was applied to solve the two-dimensional transient model equations for the transport of diluted species through an extractive hollow fiber membrane contactor (HFMC). The influence of the equilibrium partition coefficient on membrane efficiency and exit concentration was also investigated. The predictions made by the present model matched the experimental data obtained for Cu2+ removal from wastewater with a kerosene solution of di-2-ethylhexyl phosphoric acid (D2EHPA). The results showed that the model predictions are in good agreement with the experimental data obtained for different values of species concentration. Furthermore, it was demonstrated that the membrane efficiency and exit concentration could not be predicted well over much of the range of equilibrium partition coefficients when the non-stiff method was applied: the measured membrane efficiency of 56% can be reproduced by the non-stiff method only if the partition coefficient is taken to be 800 or more. In contrast, upon application of the stiff spring-based method, the membrane efficiency was completely stable and in good agreement with the experimental data over a wide range of equilibrium partition coefficients, from a very low value of 0.02 to very high values of more than 800. Similarly, the stiff spring method predicted the exit concentration of Cu2+ well, at about 0.4 mol/m3 for an inlet concentration of 1 mol/m3, over a very wide range of partition coefficients, whereas the non-stiff method could not predict the outlet concentration of Cu2+ well unless the partition coefficient was assumed to be more than 800. Ultimately, the simulation results verify that the stiff spring method is more reliable and accurate than the non-stiff spring method.
BODY:
Nomenclature
Parameters (symbol – dimension – description):
- C_L (mol/m3): diluted species concentration in the lumen
- D_ds (m2/s): correlated diluted species diffusion coefficient in the lumen
- R_N (mol/m3·s): rate of extraction of diluted species in the lumen
- V_Z (m/s): fluid velocity in the lumen
- u̅ (m/s): average fluid velocity in the lumen
- C_0 (mol/m3): inlet diluted species concentration in the lumen
- C_M (mol/m3): diluted species concentration in the membrane
- D'_ds (m2/s): correlated diluted species diffusion coefficient in the membrane (the effective diffusivity)
- k (dimensionless): partition coefficient
- N (mol/m2·s): fluid flux
- C_sh (mol/m3): diluted species concentration in the shell
- D''_ds (m2/s): diluted species diffusion coefficient in the shell
- R''_N (mol/m3·s): rate of extraction of diluted species in the shell
- V''_Z (m/s): fluid velocity in the shell
- p_0 (Pa): outlet pressure in the shell
- S(t) (mol/m3): inlet diluted species concentration in the shell
- V''_{Z,0} (m/s): inlet fluid velocity in the shell
- V_res (m3): volume of the reservoir
- Q (m3/s): lumen side flow rate
- Q_res (m3/s): reservoir volumetric flow rate
- a (dimensionless): constant
- d (m): fiber diameter
- D_ds (m2/s): diluted species diffusion coefficient (ordinary diffusion coefficient)
- L (m): fiber length
- l (m): fiber web thickness
- r (m): radial coordinate
- r_1 (m): inner radius of the fiber
- r_2 (m): outer radius of the fiber
- r_3 (m): radius of the shell
- t (s): time
- z (m): axial coordinate
Greek letters:
- ε (dimensionless): fractional porosity
- τ (dimensionless): pore path tortuosity
- ε_p (dimensionless): percolation threshold
- τ_b (dimensionless): bulk tortuosity
- ρ (kg/m3): liquid density
- η (kg/m·s): dynamic viscosity
Abbreviations:
- CFD: computational fluid dynamics
- FEA: finite element analysis
- HFMC: hollow fiber membrane contactor

1 Introduction Since the inception of wastewater treatment, various efficient technologies have been used for the extraction of heavy and transition metals to provide water of considerably enhanced quality (Basha et al., 2008; El-Shafie et al., 2021; Hua et al., 2012; Manyuchi et al., 2018; Mbareck et al., 2009; Moyo et al., 2021; Sahu, 2019; Song et al., 2011; Stephenson et al., 2000; Susanto et al., 2020; Yang et al., 2020). The application of porous hollow fiber membranes for the removal of heavy/transition metals from wastewater combines the benefits of conventional processes (packed towers, mixer-settlers, etc.) to achieve performance superior to that of the individual processes (Gabelman and Hwang, 1999; Pabby and Sastre, 2013; Poormohammadian et al., 2015; Takassi et al., 2011). Membrane contactors with a non-dispersive process (Juang et al., 2000; Malamis et al., 2012) have many industrial applications compared with conventional dispersed-phase contactors. The absence of emulsions, no flooding at high flow rates, no unloading at low flow rates, and no density difference requirement between fluids are unique advantages of membrane contactors; other noted advantages are a surprisingly high interfacial active area, flexible capacity, no phase separation, low volume, low weight, and operation at extreme phase ratios (Pabby and Sastre, 2013; Rezakazemi et al., 2012).
In some cases, membrane contacting has emerged as a technology that fulfills certain commercial requirements, higher product quality requirements, environmental legislation, and energy efficiency demands, in addition to cost reduction (Ghadiri et al., 2013a; Pabby and Sastre, 2013). Porous membrane design is greatly dependent on precise insight into, and detailed control of, the interphase (Gabelman and Hwang, 1999). Much valuable information on the development of membrane processes may be provided by computational fluid dynamics techniques. Membrane efficiency in the vicinity of the aqueous-organic interface is of great importance; it is basically associated with the surface/volume ratio and the variation of the interphase interaction energy, and thus with the contact area of the two phases available for mass transfer (Hassan et al., 2013; Marjani and Shirazian, 2011). Membrane contactors (dispersion-free extraction), which use nano- and microporous membranes for heavy metal removal, and the associated mass transfer phenomena have been widely reviewed and discussed by a number of researchers (Ghadiri and Shirazian, 2013; Guo and Ho, 2008; Juang and Huang, 2003; Pabby and Sastre, 2013; Patil et al., 2008; Prasad and Sirkar, 1990; Saghatoleslami et al., 2011; Sciubba et al., 2009; Vajda et al., 2004; Yeh and Chen, 2001). Ghadiri et al. (2013) performed a modeling and CFD simulation of water desalination using nanoporous membrane contactors; the simulation was carried out to better choose the module configuration, and its results were validated against experimental data (Ghadiri et al., 2013a). A resistance-in-series model, or conservation equations for the metal ions in all phases, may be used to describe the mass transfer modeling of diluted species removal from wastewater using membrane contactors (Ghadiri et al., 2012; Juang and Huang, 2003). In membrane contactors, the total resistance can be expressed as three resistances in series: the individual resistances in each flowing phase and the membrane resistance (Ghadiri et al., 2012; Marjani and Shirazian, 2011; Pabby and Sastre, 2013). The resistance at the shell side is ignored due to the instantaneous chemical reaction between the diluted species and the extractive solvent (Lemanski and Lipscomb, 1995). The partition coefficient, defined as the ratio of the concentration at equilibrium in the solvent phase to the concentration in the feed, is an important factor affecting the membrane resistance. In hydrophobic membranes, the mass transfer coefficient of the membrane is expected to be greater than that of the feed for a large value of the partition coefficient. The selection of membrane properties is based on the partition coefficient value, which is often justified experimentally (Noble and Stern, 1995). The partition coefficients between the donor phase and the membrane can be measured using the membrane-coated fiber (MCF) technique (Xia et al., 2005). Chemical membrane contactors can be virtually prototyped by computational fluid dynamics (CFD). Because CFD is based on control volume or finite element methodology, the local variations of the fluid, thermal, and mass transport properties can be visualized, in contrast to simpler simulations, and CFD can therefore be applied in the design of HFMCs. CFD is the best analytical tool for membrane contactors since permeation depends on the local conditions near the membrane surface (Hajilary and Rezakazemi, 2018; Rezakazemi, 2018; Rezakazemi et al., 2012).
It is necessary to completely understand the fluid dynamics and mass transfer mechanisms in industrial membrane separation processes for appropriate equipment design and optimization. The application of CFD methods may provide a detailed prediction of membrane separation operations in any geometrical configuration and module scale; thus, CFD appears to be a very promising technique. However, the development of appropriate modeling strategies and strict tests of their predictive capabilities are still necessary for the reliable adoption of CFD codes as a design tool in this field. The hydrodynamic behavior of membrane separation processes using membrane modules has been extensively simulated by CFD (Fimbres-Weihs and Wiley, 2010; Ghidossi et al., 2006; Li et al., 2011; Schwinge et al., 2004; Tung et al., 2012). The method has been applied in the simulation of gas separation and solvent extraction processes in membrane contactors, with excellent agreement between the experimental data and the modeling and simulation results (Ghadiri et al., 2014, 2013a, 2013c, 2013b, 2012; Ghadiri and Shirazian, 2013; Ranjbar et al., 2013). However, there are some difficulties in the convergence of the proposed models. For example, in other works the partition coefficient is assumed to be independent of concentration (Sanaeepur et al., 2012) because of the discontinuity of concentrations at the boundaries; fluctuations of the partition coefficient can cause severe problems in solving the governing equations. In this work, a numerical technique based on the finite element method has been applied to solve the two-dimensional axisymmetric flow field and the convective diffusion equation for heavy metal transport in laminar flow over a permeable surface in a tubular membrane. A mathematical model has been developed and solved for the simulation of transition metal extraction in a membrane contactor; it is therefore convenient to describe the technology and the basics of its operation in terms of the mass transfer process in membrane contactors. The effects of variations of the partition coefficient on the membrane efficiency and the exit concentration of the heavy metals in the lumen side have also been investigated.

2 Mathematical modeling The reversible reaction of Eq. (1) can be used to express the extraction of divalent transition metal ions such as Cu2+ from sulfate solutions using a D2EHPA solution in kerosene (Hajilary and Rezakazemi, 2018):

(1) $Cu^{2+} + 2\,\overline{(HR)_2} \rightleftharpoons \overline{CuR_2(HR)_2} + 2H^{+}$

where the overbar refers to the organic phase and (HR)_2 denotes the dimeric form of D2EHPA. Hydrophobic membranes with a pore size in the range of 10^-3 to 10^-2 μm are not readily wetted by water; however, they may be wetted by hydrocarbons and most organic solvents (Juang and Huang, 2003). The assumptions of the model, such as the flow conditions, physical properties of the fluid, flow regime, and resistance to concentration polarization, are the same as in previous works (Tahvildari et al., 2016). The coupled mass and momentum balance equations are formulated and solved in the membrane contactor CFD model using numerical techniques such as FEM. These equations are non-linear and cannot in general be solved analytically; therefore, they must be linearized and solved over all the nodes of the grid. In order to predict the contactor performance, the conservation equations for the solute in the contactor were derived and solved.
Fig. 1 shows the hollow fiber for which the model is developed. There are three sections in the membrane contactor system, namely the lumen side, the membrane, and the shell side. As observed in Fig. 1, the feed phase in the lumen side of the microporous hollow fiber membrane is laminar with a fully developed velocity profile. The solvent, which flows in the direction opposite to the feed phase, surrounds the fiber. Experimental data on Cu2+ extraction reported by Juang and Huang (Juang and Huang, 2003) were used to confirm the results of the simulation, and the simulations were carried out under the same conditions as those experiments.

2.1 Mass balance for the lumen side Eq. (2) expresses mass conservation for heavy and transition metals in the lumen side:

(2) $\frac{\partial C_L}{\partial t} = D_{ds}\left[\frac{\partial^2 C_L}{\partial r^2} + \frac{1}{r}\frac{\partial C_L}{\partial r} + \frac{\partial^2 C_L}{\partial z^2}\right] - V_Z\frac{\partial C_L}{\partial z} + R_N$

The reaction rate, R_N, is zero since no chemical reaction takes place on the lumen side. Eq. (3) gives the velocity distribution on the lumen side:

(3) $V_Z = 2\bar{u}\left[1 - \left(\frac{r}{r_1}\right)^2\right]$

The feed phase enters the lumen side with a known concentration, C_0, and the outlet feed concentration gradient is zero. Thus, the boundary conditions for the mass balance equation in the lumen side are as follows:

(4) $C_L = C_0 \quad \text{at } z = 0$

(5) $\mathbf{n} \cdot (D_{ds}\nabla C_L) = 0 \quad \text{at } z = L$

All mass passing through the convective flux boundary in the radial direction is presumed to be transferred by the convection mechanism, and diffusive mass transfer is negligible.

(6) $\frac{\partial C_L}{\partial r} = 0 \quad \text{at } r = 0$

(7) $C_L = C_M \quad \text{at } r = r_1$

2.2 Mass balance for the membrane side The mass conservation equation for the heavy and transition metal component transferred through the membrane can be written as:

(8) $\frac{\partial C_M}{\partial t} = D'_{ds}\left[\frac{\partial^2 C_M}{\partial r^2} + \frac{1}{r}\frac{\partial C_M}{\partial r} + \frac{\partial^2 C_M}{\partial z^2}\right]$

No reaction term is included in the membrane equation because no chemical reaction takes place in the membrane. Fick's law can be applied because transport through the pores occurs only by ordinary molecular diffusion across the membrane. As on the lumen side, the effective diffusion coefficient in the membrane must be correlated as an anisotropic one for the numerical simulation. The boundary conditions for the membrane side are:

(9) $\mathbf{n} \cdot \mathbf{N} = 0 \ \text{at } z = 0; \qquad \mathbf{N} = -D'_{ds}\nabla C_M \ \text{at } z = L$

(10) $C_M = kC_L \ \text{at } r = r_1; \qquad C_M = C_{sh} \ \text{at } r = r_2$

2.3 Material balance for the shell side Fick's law of diffusion is used to obtain the mass balance equation estimating the diffusive flux of heavy metal components on the shell side of the membrane:

(11) $\frac{\partial C_{sh}}{\partial t} = D''_{ds}\left[\frac{\partial^2 C_{sh}}{\partial r^2} + \frac{1}{r}\frac{\partial C_{sh}}{\partial r} + \frac{\partial^2 C_{sh}}{\partial z^2}\right] + R''_N - V''_Z\frac{\partial C_{sh}}{\partial z}$

In the HFMC, the heavy or transition metal components flow through the lumen of the hollow fiber microporous membrane and diffuse through the membrane pores. The reaction of the transferring diluted species occurs on the shell side of the membrane, and the mass transfer is driven by the concentration gradient across the membrane. It should be noted that the concentrations of species at the membrane-organic interface are presumed identical, since the interface is essentially homogeneous because of the medium porosity of the membrane. Based on the kinetics of Cu2+ with D2EHPA, the formation and dissociation of the metal–D2EHPA complexes are fast in comparison with the diffusion in the aqueous and organic layers (Juang and Huang, 2003). Therefore, the rate-limiting step in this case is film diffusion, and the stoichiometry is maintained through the introduction of variable diffusion resistances. Consequently, in this model the bulk liquid flow (from a macroscopic point of view) on the shell side is presumed to follow the flow behavior of the liquid unaffected by the reaction, and one may ignore the reaction rate expression R''_N (Mulder and Mulder, 1996).
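The effective diffusivity mentioned in Section 2.2 is commonly correlated from the membrane porosity and pore tortuosity listed in the Nomenclature. As a minimal sketch, assuming the standard relation D' = (ε/τ)·D, which is a common choice but not necessarily the exact correlation used by the authors:

```python
# Hedged sketch: a common porosity-tortuosity correction for the membrane-side
# effective diffusivity. The relation D' = (eps/tau) * D is an assumption here;
# the paper states only that an anisotropic, correlated coefficient was used.
def effective_diffusivity(d_bulk: float, porosity: float, tortuosity: float) -> float:
    """Return the effective diffusivity D' (m^2/s) inside the porous membrane."""
    return d_bulk * porosity / tortuosity

# Illustrative values: D ~ 1e-9 m^2/s for a small ion in water, eps = 0.4, tau = 2.5
print(f"D' = {effective_diffusivity(1e-9, 0.4, 2.5):.2e} m^2/s")  # -> 1.60e-10 m^2/s
```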
A momentum balance using the Navier-Stokes equations is considered for the liquid flow. Simultaneous solution of the mass and momentum equations yields the concentration and velocity distributions. Eqs. (12) and (13) give the Navier-Stokes and continuity equations in this case (Bird et al., 2006):

(12) $\rho\frac{\partial V''_Z}{\partial t} + \rho V''_Z \cdot \nabla V''_Z = \nabla \cdot \left[\eta\left(\nabla V''_Z + (\nabla V''_Z)^T\right)\right] - \nabla p$

(13) $\nabla \cdot V''_Z = 0$

The radius of the shell side, r_3, was estimated using Happel's model (Happel, 1959) (Fig. 2). The shell side boundary conditions are given as follows.

Continuity equation:

(14) $C_{sh} = C_M \quad \text{at } r = r_2$

(15) $\mathbf{n} \cdot \mathbf{N} = 0; \quad \mathbf{N} = -D''_{ds}\nabla C_{sh} + C_{sh}V''_Z \quad \text{at } r = r_3$

(16) $C_{sh} = s(t) \quad \text{at } z = L$

(17) $\mathbf{n} \cdot (-D''_{ds}\nabla C_{sh}) = 0 \quad \text{at } z = 0$

Momentum equation:

(18) $V''_Z = 0 \quad \text{(no slip) at } r = r_2$

(19) $V''_Z = 0 \quad \text{(no slip) at } r = r_3$

(20) $V''_Z = V''_{Z,0} \quad \text{at } z = L$

(21) $p = p_0 \quad \text{at } z = L$

To complete the simulation in recycling mode, a large, completely mixed reservoir with constant density, connected to the shell side of the membrane module, is considered. Eq. (22) expresses the mass balance on the reservoir:

(22) $\frac{d\left(V_{res}\, s(t)\right)}{dt} = Q\, C_L(L,r,t) - Q_{res}\, s(t)$

in which the initial condition at time t = 0 is taken as s(t) = 0. Ignoring the pipe volumes with respect to the total volume of the system, the following mass balance for the metal component yields a relationship between the reservoir and the HFMC unit (Marcos et al., 2009):

(23) $Q\, C_L(L,r,t) = \int_0^{r_1} V_Z(L,r,t)\, C_L(L,r,t)\, 2\pi r\, dr$
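To make Eqs. (2)-(7) and the flow-averaged outlet of Eq. (23) concrete, the following minimal Python sketch marches a steady, simplified lumen-side balance down the fiber (transient and axial-diffusion terms dropped) and evaluates the mixing-cup outlet concentration. All geometry and property values are illustrative assumptions, and a perfect-sink wall stands in for the full membrane coupling of Eq. (7); this is a sketch of the transport structure, not the authors' COMSOL model.

```python
import numpy as np

# Steady, simplified lumen-side transport: V_z dC/dz = D [d2C/dr2 + (1/r) dC/dr],
# i.e. Eq. (2) without the transient and axial-diffusion terms. Assumed values:
nr, nz = 40, 20000
r1, L_f = 2e-4, 0.15           # fiber inner radius and length (m) - assumptions
D = 1e-9                       # Cu2+ diffusivity in water (m^2/s) - assumption
u_bar = 0.01                   # average lumen velocity (m/s) - assumption
r = np.linspace(0.0, r1, nr)
dr, dz = r[1] - r[0], L_f / nz
Vz = 2.0 * u_bar * (1.0 - (r / r1) ** 2)      # parabolic profile, Eq. (3)
C = np.ones(nr)                                # inlet C = C0 = 1 mol/m^3, Eq. (4)

for _ in range(nz):                            # march along z
    lap = np.zeros(nr)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dr**2 \
              + (C[2:] - C[:-2]) / (2.0 * dr * r[1:-1])
    lap[0] = 4.0 * (C[1] - C[0]) / dr**2       # axis symmetry, Eq. (6)
    C[:-1] += dz * D * lap[:-1] / Vz[:-1]      # update axis + interior nodes
    C[-1] = 0.0                                # perfect-sink wall (stand-in for Eq. (7))

# Mixing-cup (flow-averaged) outlet concentration, cf. the integral in Eq. (23)
C_out = np.sum(Vz * C * r) / np.sum(Vz * r)
print(f"flow-averaged outlet concentration ~ {C_out:.3f} mol/m^3")
```

Varying u_bar in this sketch reproduces the qualitative trend discussed later in Section 4.3: a higher flow rate means a shorter residence time and therefore a higher outlet concentration.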
3 Numerical solution methodology COMSOL Multiphysics software, version 4.3, was used to solve the equations by the finite element method (FEM). The UMFPACK direct solver was used since it suits 1D and 2D models; with this solver, the columns are permuted to minimize fill-in using the COLAMD and AMD approximate minimum degree preordering algorithms. With an implicit time-stepping scheme, this solver is an appropriate choice for solving stiff and non-stiff non-linear boundary value problems. Fig. 2 shows the schematic diagram of the meshes generated by the COMSOL software to determine the behavior of the diluted species in the membrane contactor. A scaling factor of 120 was used in the z-direction, and COMSOL automatically scaled the geometry back after meshing in order to avoid an excessive number of elements and nodes (Marjani and Shirazian, 2011); this generated an anisotropic mesh of about 2800 triangular elements. Adaptive mesh refinement in COMSOL, which gives the best and most economical meshes, was applied in meshing the extractive membrane contactor geometry (Comsol and Burlington, 2007).

3.1 Scaling Scaling is aimed at reducing the number of parameters in a given model; knowledge of the governing equations of the system is therefore a prerequisite for the scaling technique, and the output of scaling is not necessarily dimensionless quantities. A scale model generally represents an object physically, preserving accurate relationships between the important aspects of the model without preserving the absolute values of the original properties; this makes it possible to demonstrate important features of the original object without examining it directly. In order to avoid an excessive number of elements and nodes in the numerical solver, it is necessary to perform scaling in the z-direction because the r to z ratio is very small. As a result, a newly scaled z-coordinate, $\hat{z}$, and a corresponding differential for the mass transport have been introduced:

(24) $\hat{z} = \frac{z}{scale}$

(25) $dz = scale \cdot d\hat{z}$

Scaling the diffusive part of the flux can be regarded as an anisotropic diffusion coefficient in which the diffusion in the z-direction is scaled by the factor $(1/scale)^2$. This yields the following diffusion-coefficient matrix:

(26) $\bar{D} = \begin{bmatrix} D & 0 \\ 0 & \dfrac{D}{(scale)^2} \end{bmatrix}$

Because $\hat{z}$ is differentiated twice in the diffusion term of the mass transport equations, the z-component of the diffusive flux vector is effectively multiplied by $(1/scale)^2$. The convective component is differentiated only once and must consequently be multiplied by $(1/scale)$; the velocity vector is therefore multiplied by $(1/scale)$ to account for the newly scaled z-coordinate.

3.2 Stiff spring method Stability is particularly important: if stability is lost, linearly implicit methods are not suitable for solving stiff differential equations. Stiff differential equations appear in fluid mechanics, elasticity, electrical networks, chemical reactions, and many other areas of physical importance (Bui and Bui, 1979; Butcher, 2016; Flaherty and O'Malley, 1977; Ixaru et al., 2002; Kaps and Wanner, 1981). No unique definition of stiffness has been reported in the literature; nevertheless, stiff systems share essential properties under certain initial conditions: solutions that change slowly, and solutions in the neighborhood of these smooth solutions that converge to them quickly. Computation time may increase due to high-frequency fluctuations in second-order differential equations. In the dynamic simulation of multi-body systems, such high-frequency solution components are eliminated by the stiff spring method through the adaptation of appropriate boundary conditions. Increased model complexity in technical simulation may cause considerable problems in time integration: high-frequency oscillations of very small amplitude are typical model components that are irrelevant from the practical application point of view, yet they lead to stability problems in explicit time integration methods and can retard implicit integrators as a result of convergence problems in the corrector iteration. Since there are discontinuities in the concentration profile at the boundaries between the liquid and membrane phases in membrane contactors, three separate variables were used to describe the concentration in the respective phases. A special type of boundary condition was applied using the stiff spring method in order to obtain a continuous flux over the phase boundaries. Instead of defining Dirichlet concentration conditions, which destroy the flux continuity, continuous flux conditions were defined to force the concentrations to the desired values:

(27) $(-D_{ds}\nabla C_L) \cdot \mathbf{n} = M\,(C_M - kC_L) \quad \text{at } r = r_1 \text{ (inside the lumen)}$

(28) $(-D'_{ds}\nabla C_M) \cdot \mathbf{n} = M\,(kC_L - C_M) \quad \text{at } r = r_1 \text{ (inside the membrane)}$

where M is a non-physical velocity, large enough to let the concentration differences in the brackets approach zero, thus satisfying the concentration-matching conditions of Eqs. (7) and (10). These boundary conditions also give a continuous flux across the interfaces if M is sufficiently large.
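The penalty idea behind Eqs. (27) and (28) can be illustrated on a much smaller problem than the full contactor model. The following Python sketch (illustrative diffusivities, partition coefficient, and geometry; a steady 1-D two-layer system rather than the authors' 2-D COMSOL model) assembles the linear system for two diffusing layers coupled by the stiff-spring flux M(kC_L − C_M) and shows that the interfacial partition condition C_M = kC_L is enforced only once M is large:

```python
import numpy as np

K = 5.0  # assumed partition coefficient

def solve_two_layer(M, n=40, D1=1e-2, D2=1e-3, k=K, h=0.025):
    """Steady 1-D diffusion in two layers joined by a stiff-spring (penalty)
    interface flux M*(k*C_L - C_M), analogous to Eqs. (27)-(28)."""
    A = np.zeros((2 * n, 2 * n))
    b = np.zeros(2 * n)
    A[0, 0] = 1.0; b[0] = 1.0          # feed end: C_L = 1
    A[-1, -1] = 1.0                    # far end: C_M = 0 (sink)
    for i in range(1, n - 1):          # interior second differences, both layers
        A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
        j = n + i
        A[j, j - 1:j + 2] = [1.0, -2.0, 1.0]
    # interface nodes: diffusive flux balanced by the penalty flux
    A[n - 1, n - 2] = D1 / h
    A[n - 1, n - 1] = -D1 / h - M * k
    A[n - 1, n] = M
    A[n, n - 1] = M * k
    A[n, n] = -D2 / h - M
    A[n, n + 1] = D2 / h
    C = np.linalg.solve(A, b)
    return C[:n], C[n:]

for M in (1e-1, 1e1, 1e3, 1e5):
    CL, CM = solve_two_layer(M)
    print(f"M = {M:.0e}: C_M/(k*C_L) at the interface = {CM[0] / (K * CL[-1]):.4f}")
```

As M grows, the printed ratio approaches 1: the discontinuous partition condition is recovered without ever being imposed as a Dirichlet constraint. This is the same mechanism that keeps the solution stable over the wide range of partition coefficients reported in Section 4.4 below.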
4 Results and discussion The governing equations have been solved for water purification. Solving the governing equations with the given boundary conditions for the extractive membrane contactor requires the system specifications and physical conditions, which are shown in Table 1.

4.1 Lumen side concentration and velocity profiles The concentration distribution of Cu2+ is the most important factor in the process optimization. Fig. 3 shows the vectors of total flux (diffusive and convective) and the concentration profiles of Cu2+ in the lumen side. The Cu2+ concentration in the feed phase is at its maximum value at one side of the contactor (z = 0), and the flow direction is from this point. As the feed flows through the lumen side, the solute moves towards the membrane due to the concentration gradient, and its concentration decreases along the fiber length in the lumen side. In other words, Cu2+ is transferred toward the membrane by the concentration gradients in the radial direction; thus, the concentration gradient of Cu2+ increases in the radial direction while the concentration decreases in the axial direction. When the solute reaches the membrane interface, the chemical reaction of Eq. (1) occurs and the metallic complex is formed. Finally, the complex diffuses through the membrane pores, reaches the shell side, is swept along by the moving extractant, and leaves the extractor (Fadaei et al., 2011; Shirazian et al., 2012). Fig. 4 shows the Cu2+ concentration contour plot in the lumen side. As observed, the concentration gradient in the regions near the lumen-side inlet is substantial, which indicates that the higher mass fluxes in these regions are due to the higher concentration gradient and thus the higher driving force. As the feed flows along the lumen in the z-direction, the radial diffusion of Cu2+ to the shell side results in a lower concentration gradient at the lumen outlet. Fig. 5 shows the Cu2+ surface concentration profile in the lumen at four different times. According to the results, the concentration variations of Cu2+ are most pronounced at the beginning of the process: because the concentration gradient is highest at the entrance of the lumen, Cu2+ moves rapidly in the radial direction there. A comparison between Figs. 3 and 5 shows that after about 15 min the Cu2+ concentration in the lumen reaches its steady-state value.

4.2 Velocity field in the shell side Fig. 6 shows the surface velocity profile in the shell side of the membrane contactor. According to the simulation results (red area in Fig. 6), the velocity profile is nearly parabolic, with a mean velocity increasing with membrane length because of the continuous fluid permeation. In addition, the effect of viscous forces in the region adjacent to the shell wall is shown by the blue area of Fig. 6; the viscous forces in these regions are important and lead to zero velocity (no slip) at the shell wall. The velocity profile is not developed in the inlet regions but becomes fully developed within a short distance from the inlet.
4.3 Membrane efficiency The membrane efficiency in the lumen is defined as:

(29) $\text{Efficiency} = \frac{Q\left(C_L(0,r,t) - C_L(L,r,t)\right)}{Q\, C_L(0,r,t)}$

Fig. 7 shows the effect of the feed flow rate on the membrane efficiency. Upon increasing the feed flow rate, the residence time of the feed in the lumen decreases, and thus the purification efficiency of the membrane and the total mass transfer of Cu2+ from the lumen to the shell side are reduced. Increasing the feed flow rate increases the velocity of the components through the lumen side and affects their kinetic movement; there is then not enough time for the Cu2+ ions to react with the organic phase at the membrane interface, which inhibits good contact between the substances. Therefore, fewer organic complexes are produced, and much of the Cu2+ exits from the end of the lumen without reacting. The membrane efficiency values have been compared with the data reported by Fadaei et al. (2011), and the results were found to be in good agreement with the previously reported data, which confirms the data obtained from the stiff spring method of this study.

4.4 Effect of partition coefficient on membrane efficiency Figs. 8 and 9 show the effects of the partition coefficient on the membrane efficiency with and without the stiff spring method, respectively. The partition coefficient, defined as the distribution of the solute between the feed and the solvent phase at equilibrium, can affect the output results of the non-stiff method, as shown in Fig. 9: the membrane efficiency varies from very low values of about 0.3 at very low partition coefficients to nearly 56 at very high values of the partition coefficient. This variation of the membrane efficiency leads to different values of the Cu2+ exit concentration and Cu2+ removal for a specific, constant inlet flow with constant Cu2+ concentration. In contrast, the membrane efficiency does not fluctuate and is completely stable when the stiff spring method is used, as shown in Fig. 8. Thus, the results of the stiff spring method are accurate and more reliable in comparison with the non-stiff results. As observed in Fig. 9, at very high values of the partition coefficient, more than 800, the results of the non-stiff method approach those of the stiff spring method. Thus, when solving the equations by the non-stiff method, one must assume very high values of the partition coefficient, as mentioned in the literature (Fadaei et al., 2011; Tahvildari et al., 2016). Based on the simulation results shown in Fig. 9, an appropriate partition coefficient value should be selected in order to achieve a converged solution of the PDEs extracted from the mass balance equations and to obtain the correct membrane efficiency and exit concentration. However, as previously discussed, the partition coefficient follows from the choice of membrane properties and is justified experimentally. Fig. 8 shows, on the other hand, that the partial differential equations can be solved for different values of the partition coefficient when the stiff spring method is applied. The outlet concentration of Cu2+ in the lumen side is another criterion for the performance of hollow fiber membranes.
Although the exit concentration of the substances has been investigated with respect to the inlet flow rate by Shirazian et al. (2012), to our knowledge the effect of the partition coefficient on the exit concentration of the transition metals has not previously been investigated in the literature. Fig. 10 shows the effect of the partition coefficient on the Cu2+ exit concentration in the lumen side with and without the stiff spring method. The results show that with the non-stiff method, the exit concentration cannot be predicted correctly for partition coefficient values below 800. On the other hand, a reasonable value of the Cu2+ outlet concentration is obtained from the simulation when the stiff spring method is used to solve the PDEs. Furthermore, the problems caused by variations of the partition coefficient, which arise from the discontinuity of concentrations in the boundary conditions, are avoided by the stiff spring method.

4.5 Model validation The Cu2+ concentration profile was predicted by solving the unsteady-state mass balance equations for the transition metals in the lumen side of the membrane using the stiff spring method. To validate the simulation results of our model, we used the experimental data from a previous work by Juang and Huang (Juang and Huang, 2003), shown in Table 2. Fig. 11 shows the comparison of the results. A mass balance was carried out on the feed tank containing the aqueous copper solution to take the recycling mode into account (Eqs. (22) and (23)). As shown in Fig. 11, the trend of the simulation results is in good agreement with the experimental data. As confirmed by the results, the application of the stiff spring method in the simulation is more reliable and accurate than the non-stiff spring method.

5 Conclusion A mathematical solution of a stiff HFMC problem was proposed in the present work by applying combined finite element and stiff spring methods. The stiff spring integration concept was applied to solve the governing PDEs of the hollow fiber membrane module in wastewater treatment. A methodology for the construction of these schemes and an investigation of their performance on several characteristics of the HFMC have also been discussed. The solution converges very rapidly when the presented stiff spring method is applied. It is well known from stability theory that implicit methods are usually required for stiff problems; the objective of this work was to define implicit methods that reduce the computational complexity while remaining accurate and stable. In stiff PDE systems, in which the solution shows large variations over small intervals, some numerical methods are stable only when a very small step size is used. The present method usually achieves long-time accuracy while circumventing the stringent stability restrictions on the time step incurred by standard explicit methods. The comparison of stiff and non-stiff methods has shown that there are difficulties in using the non-stiff method in some cases. For example, the lumen-side exit concentration computed by the non-stiff method exceeded the inlet value for partition coefficients over 3, which is physically impossible. This signifies that the convergence and accuracy of the partial differential equation solution in the membrane depend on the partition coefficient value, and this dependency is removed by the stiff spring method. The greatest use of the stiff spring method is therefore in membrane operations in which the partition coefficient varies with the concentration profiles.
In future work, the influence of the parameters affecting the contactor performance may be investigated and optimized. CRediT authorship contribution statement Seyed Jalil Poormohamadian: Conceptualization, Writing – review & editing, Software. Hadis Koolivand: Data curation, Writing – original draft. Mahdi Koolivand-Salooki: Software, Investigation, Writing – review & editing, Validation. Morteza Esfandyari: Validation, Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. BASHA C (2008)
2. BIRD R (2006)
3. BUI T (1979)
4. BUTCHER J (2016)
5. COMSOL (2007)
6. EL-SHAFIE M (2021)
7. FADAEI F (2011)
8. FIMBRES-WEIHS G (2010)
9. FLAHERTY J (1977)
10. GABELMAN A (1999)
11. GHADIRI M (2014)
12. GHADIRI M (2013)
13. GHADIRI M (2013)
14. GHADIRI M (2013)
15. GHADIRI M (2013)
16. GHADIRI M (2012)
17. GHIDOSSI R (2006)
18. GUO J (2008)
19. HAJILARY N (2018)
20. HAPPEL J (1959)
21. HASSAN A (2013)
22. HUA M (2012)
23. IXARU L (2002)
24. JUANG R (2000)
25. JUANG R (2003)
26. KAPS P (1981)
27. LEMANSKI J (1995)
28. LI Y (2011)
29. MALAMIS S (2012)
30. MANYUCHI M (2018)
31. MARCOS B (2009)
32. MARJANI A (2011)
33. MBARECK C (2009)
34. MOYO L (2021)
35. MULDER M (1996)
36. NOBLE R (1995)
37. PABBY A (2013)
38. PATIL C (2008)
39. POORMOHAMMADIAN S (2015)
40. PRASAD R (1990)
41. RANJBAR M (2013)
42. REZAKAZEMI M (2018)
43. REZAKAZEMI M (2012)
44. SAGHATOLESLAMI N (2011)
45. SAHU O (2019)
46. SANAEEPUR H (2012)
47. SCHWINGE J (2004)
48. SCIUBBA L (2009)
49. SHIRAZIAN S (2012)
50. SONG J (2011)
51. STEPHENSON T (2000)
52. SUSANTO H (2020)
53. TAHVILDARI K (2016)
54. TAKASSI M (2011)
55. TUNG K (2012)
56. VAJDA M (2004)
57. XIA X (2005)
58. YANG H (2020)
59. YEH H (2001)
|
10.1016_j.sdentj.2017.01.003.txt
|
TITLE: Attitudes of dental professional staff and auxiliaries in Riyadh, Saudi Arabia, toward disclosure of medical errors
AUTHORS:
- Al-Nomay, Nora S.
- Ashi, Abdulghani
- Al-Hargan, Aljohara
- Alshalhoub, Abdulaziz
- Masuadi, Emad
ABSTRACT:
Aim
To collect empirical data on the attitudes of dental professionals and dental auxiliaries in Riyadh, Saudi Arabia, regarding the disclosure of medical errors.
Methods
A cross-sectional study was conducted, involving the administration of a questionnaire to a sample of 586 participants recruited from over 10 government and private dental institutions in Riyadh between August 2015 and January 2016. The questionnaire collected information regarding participant opinions on (a) personal beliefs, norms, and practices regarding medical errors, (b) the nature of errors that should be disclosed, and (c) who should disclose errors.
Results
Most (94.4%) participants preferred that medical errors should be disclosed. However, personal preferences, perceptions of the norm and current practices with respect to which type (seriousness) of error should be disclosed were inconsistent. Only 17.9% of participants perceived that it was the current practice to disclose errors resulting in “Major harm”. Over 68% of respondents reported a personal belief, a perception of the norm and a perception of current practice that errors should be disclosed by the erring dentist. Participants at government institutions were more likely to disclose errors than those at private institutions. There were also significant differences in the responses with respect to gender, age, and nationality. The implications for the development of guidelines to help Saudi dentists adopt ethical courses of action for the disclosure of errors are considered.
Conclusions
(1) The majority of participants personally believed that errors should be disclosed, (2) there was little agreement between participant personal beliefs and perceptions of the norm and practice with respect to which type of errors should be disclosed, (3) there was strong agreement that the erring dentist is responsible for reporting errors, and (4) the attitudes of the participants varied with respect to type of institution, age, gender, and nationality.
BODY:
1 Introduction The most cited definition of a medical error is “An act of omission or commission in planning or execution that contributes or could contribute to an unintended result.” This definition includes “the key domains of error causation (omission and commission, planning and execution), and captures faulty processes that can and do lead to errors, whether adverse outcomes occur or not” ( Grober and Bohnen, 2005 ). Historically, medical errors were rarely disclosed. However, more recently, with the implementation of professional codes of ethics, disclosure of medical errors in the healthcare setting has been reinforced to prevent or reduce harm to patients and their families ( Chafe et al., 2009; Ozar and Sokol, 2002; Ghazal et al., 2014; Williams, 2012 ). Non-disclosure of a medical error is now considered a violation of ethical principles and can lead to litigation ( Rosner et al., 2000 ). The disclosure of medical errors improves the quality of the healthcare system and helps to prevent future errors ( Ghazal et al., 2014 ). Patient response to and consequences of medical errors greatly influence the attitudes of healthcare providers. It is generally accepted that full disclosure of a medical error is necessary only if there has been an adverse event which has caused harm to a patient ( Ghazal et al., 2014 ). In situations where no harm or adverse event has occurred, disclosure may not be obligatory ( Elder et al., 2006 ) as it may unnecessarily increase patient stress and anxiety. Gallagher et al. (2009) reported that some physicians believe that if patients do not enquire then error disclosure is unnecessary. Many factors may influence the decision of healthcare providers to disclose medical errors. According to the conceptual model conceived by Fein et al. (2005) , the most important influences on the decision to disclose a medical error fall into four categories: (a) provider factors, including perceived professional responsibility, (b) patient factors, including a desire for information, (c) error factors, including the level of harm to the patient, and (d) institutional culture, including the perceived tolerance for error by healthcare professionals. Birks (2014) and Ghazal et al. (2014) proposed a set of guidelines for the disclosure of medical errors, citing conceptual reasons, such as the duty of candor, respect for autonomy, the imperative principle of truth-telling, the principles of beneficence and non-maleficence, and the deontology or Kantian obligation based theory. Healthcare providers, however, are not professional ethicists, and the disclosure of medical errors is not always a component of their ethical behavior. In dentistry, medical errors include (a) incorrect medication prescription, (b) neglect of current scientific evidence regarding treatment, (c) improper maintenance of equipment, and failure to (d) properly maintain patient records, (e) acquire informed consent, (f) establish and maintain appropriate infection control measures, (g) accurately diagnose a dental condition, (h) prevent accidents or complications, (i) pursue appropriate follow-up care, and/or (j) follow statutory rules or regulations reflecting quality standards for dental care ( Negalberg, 2015 ). Thusu et al. 
(2012) showed that the most frequently reported incidents in the practice of dentistry were clerical errors (36%), followed by patient injuries (10%), medical emergencies (6%), accidental ingestion or inhalation of clinical materials (4%), adverse reactions (4%), and erroneous tooth extractions (2%). Although dentists have an ethical responsibility to fully disclose errors, in practice there is considerable inconsistency in opinions regarding the information that should be disclosed and who should disclose it ( Blood, 2015 ). Thusu et al. (2012) reported a relatively low frequency of dental error disclosure, which they attributed to the voluntary nature of reporting and the reluctance of dentists to disclose incidents for fear of loss of earnings. The disclosure of medical errors varies between clinical specialties ( Blood, 2015; Chiodo et al., 1999; Ozar and Sokol, 2002; O’Connor et al., 2010; Yamalik and Perea, 2012 ). Accordingly, dentists may hold attitudes toward the ethical duty of disclosure that differ from those of medical doctors. Possible reasons for this discrepancy are hypothesized as follows. First, dental errors may be perceived as less serious. Second, medical care is most frequently provided at large institutions (e.g., hospitals), while dental care is generally more isolated, in private practices. Third, medical care is generally provided by a team of doctors, while dentistry is often handled individually. Despite these differences, all medical practitioners, including dentists, have the same ethical obligation to tell the truth, respect patient autonomy, and disclose errors. The disclosure of dental errors is desired by patients and is also recommended by ethicists and professional organizations to ensure that the dental profession can be trusted ( Chiodo et al., 1999; Blood, 2015 ). A critical examination of personal preferences and perceptions of the norm in current practice regarding the disclosure of dental errors is therefore necessary for the benefit of patients, dentists, and the practice of dentistry. The aim of the current study was to obtain empirical information on the patterns of dental error disclosure among dental professionals in Saudi Arabia. In addition, personal preference, perception of the norm, and perception of current practice relating to error disclosure were investigated at two levels: the nature (seriousness) of the error that requires disclosure, and the individual responsible for disclosure. To the best of our knowledge, this is the first study to investigate the disclosure of errors in the practice of dentistry in Saudi Arabia. 2 Materials and methods 2.1 Study sample A cross-sectional study was conducted. A questionnaire was administered to dental professional staff and auxiliaries (e.g., dental assistants, dental hygienists, laboratory technicians, and dental nurses) recruited from government and private institutions in Riyadh from August 2015 to January 2016. A power analysis indicated a recommended sample size of 586, based on an assumed prevalence of disclosure of any type of error of 50%, a 4% margin of error, and a specified confidence level, calculated using the sample size calculator at http://www.raosoft.com/samplesize.html .
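For orientation, the standard proportion-based sample-size formula behind calculators such as Raosoft's is n = z² p(1−p)/e², optionally shrunk by a finite-population correction. The sketch below assumes a 95% confidence level (the confidence level itself is garbled in the source text) and an illustrative population size, which lands near the study's 586; it is not a reconstruction of the authors' exact calculation:

```python
from math import ceil

def sample_size(p=0.5, e=0.04, z=1.96, population=None):
    """Cochran sample size for a proportion, with optional finite-population
    correction. z = 1.96 corresponds to an assumed 95% confidence level."""
    n0 = z**2 * p * (1 - p) / e**2                  # infinite-population size
    if population is None:
        return ceil(n0)
    return ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction

print(sample_size())                  # 601 for an effectively infinite population
print(sample_size(population=25000))  # a finite population lowers the requirement
```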
2.2 Questionnaire The questionnaire was modified from Hammami et al. (2010) and contained 11 items. Table 1 presents the first five items, which asked participants to report on which types of dental errors should be disclosed. Responses were classified as “Do not disclose” errors, disclose errors leading to “Major harm”, disclose errors leading to “Moderate harm”, disclose errors leading to “Minor harm”, and disclose errors even if “No harm” has been done to the patient. Participants were asked to report on their own personal belief, their perception of the norm (i.e., what is appropriate in general/should be done), and their perception of current practice. Table 2 presents the second part of the questionnaire, which consisted of six items inviting the participants to report on who (from a list of dental professionals and auxiliaries) should disclose dental errors. Again, participants were asked to report on their own personal belief, their perception of the norm, and their perception of current practice. Prior to the survey, the modified questionnaire was reviewed by several experts in the fields of ethics and epidemiology, and a pilot study with 10 dentists was conducted to ensure content validity. 2.3 Data collection Convenience and cluster sampling were used to recruit participants. Twitter, an online social media site, was used for convenience sampling. Cluster sampling was achieved by recruiting participants from medical and dental colleges in five regions (Central, North, East, West, and South) of Riyadh. A link was posted on Twitter to recruit participants and advertise the survey, and requests were sent to both individuals and organizations to re-tweet the survey link. A shortened version of the study’s URL was generated to fit within the 140-character limit of Twitter postings. All eligible participants were asked to complete the online questionnaire using Google Forms as the platform. This platform facilitated secure, anonymous data collection and ensured confidentiality. Duplicate data were excluded by reviewing the IP addresses of the respondents. 2.4 Statistical analysis Statistical analyses were conducted using SPSS version 20. The demographic characteristics of the participants were summarized, and two cross-tabulations were constructed from the questionnaire responses. The first cross-tabulation summarized the frequencies of participants reporting which dental errors to disclose (in the rows) depending on personal belief and perceptions of the norm and of current practice (in the columns). The second cross-tabulation summarized the frequencies of responses on who should disclose dental errors (in the rows) depending on personal belief and perceptions of the norm and of current practice (in the columns). Because each participant chose only one item from the list in Table 1 and one item from the list in Table 2, the frequency distributions of the choices in the cross-tabulations represent hierarchical rankings of the items, and the variables used in the statistical analysis were therefore measured at the ordinal level. Kendall’s coefficient of concordance for ranks was used to determine the extent of agreement between the three dimensions of personal belief, perception of the norm, and perception of practice with respect to which type of error should be disclosed. Wilcoxon’s signed rank tests were used to conduct pairwise comparisons between the related measures of which type of error should be disclosed across the three dimensions (i.e., belief vs. norm, belief vs. practice, and norm vs. practice).
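As a minimal illustration of these two tests (on invented ratings, not the study's data), Kendall's W can be obtained from the Friedman chi-square statistic, and the pairwise contrasts from scipy's Wilcoxon signed-rank test:

```python
import numpy as np
from scipy import stats

# Invented ordinal ratings (1-5) of eight items under the three dimensions
# (belief, norm, practice); purely illustrative, not the study's data.
ratings = np.array([
    [5, 4, 3], [5, 5, 4], [4, 3, 2], [5, 4, 3],
    [3, 3, 2], [5, 4, 4], [4, 4, 3], [5, 3, 2],
])
n_objects, m_judges = ratings.shape   # objects = items, judges = the 3 dimensions

# Kendall's W via the identity chi2_Friedman = m * (n - 1) * W,
# with blocks = judges and treatments = objects.
chi2, p = stats.friedmanchisquare(*ratings)
W = chi2 / (m_judges * (n_objects - 1))
print(f"Kendall's W = {W:.3f} (p = {p:.4f})")

# Pairwise Wilcoxon signed-rank tests across the three dimensions
pairs = [(0, 1, "belief vs. norm"), (0, 2, "belief vs. practice"), (1, 2, "norm vs. practice")]
for i, j, label in pairs:
    stat, pw = stats.wilcoxon(ratings[:, i], ratings[:, j])
    print(f"{label}: statistic = {stat}, p = {pw:.4f}")
```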
The demographic data violated the theoretical assumptions of an ordinal logistic regression ( Hosmer and Lemeshow, 2000 ); therefore, the types of errors that should be disclosed were collapsed into a binary format, and a binary logistic regression analysis was conducted. The forward stepwise procedure was applied to select the predictors, so that only the statistically significant demographic categories (indicated by p < 0.05 for the Wald test statistic) were included in the models; demographic categories that were not significant predictors ( p ≥ 0.05) were excluded. Hosmer and Lemeshow goodness-of-fit tests confirmed that the data fitted the logit function at the 0.05 level. It is imperative to report and explain the coding of all variables to reliably interpret the results of a logistic regression ( Bagley et al., 2001 ). For the first analysis, the dependent variable was coded as 1 = “Disclose” (i.e., all choices except “Do not disclose”) or 0 = “Do not disclose”. For the second analysis, the dependent variable was coded as 1 = error resulting in “No harm” or 0 = error resulting in “Harm” (i.e., all choices except “No harm”). To ensure that the sample size in each of the demographic categories was large enough to provide sufficient statistical power to construct accurate models without Type II errors ( Bagley et al., 2001 ), the categories for the institutions and occupations were collapsed and coded as follows. Institution: 1 = Government, 0 = Private; Occupation: 1 = Dental specialist, 0 = Other occupations; Age (years): 1 = <30, 2 = 31–40, 3 = >40. The other demographic categories were coded as follows: Nationality: Saudi = 1, Non-Saudi = 0; Gender: Male = 1, Female = 0.
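A compact sketch of this kind of analysis, using the coding above on synthetic respondents (the data and effect sizes are invented, and statsmodels is used here for illustration, whereas the study used SPSS), shows how reported odds ratios and confidence intervals come out of fitted logistic coefficients:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "institution": rng.integers(0, 2, n),   # 1 = government, 0 = private
    "gender": rng.integers(0, 2, n),        # 1 = male, 0 = female
    "age": rng.integers(1, 4, n),           # ordinal: 1 = <30, 2 = 31-40, 3 = >40
})
# Invented effects loosely echoing Table 6: government raises, and age lowers,
# the log-odds of disclosing; the intercept and magnitudes are arbitrary choices.
logit_p = 1.0 + 1.8 * df["institution"] - 0.7 * (df["age"] - 1)
df["disclose"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["institution", "gender", "age"]])
fit = sm.Logit(df["disclose"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)            # exponentiated coefficients = ORs
ci = np.exp(fit.conf_int())                 # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```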
3 Results 3.1 Demographic characteristics of participants The demographic characteristics of the participants are summarized in Table 3 . The sample size was N = 586, with approximately equal proportions of male and female participants. The most frequent age group was <30 years ( n = 364, 62.1%) and the least frequent was >50 years ( n = 27, 4.6%). The majority of participants were Saudi ( n = 427, 72.9%). Most participants were recruited from six government institutions ( n = 353, 60.3%), with the remainder from more than three private colleges or clinics. The most frequent occupations were dental specialist ( n = 148, 25.3%) and student ( n = 204, 34.8%), with smaller proportions of residents, academics, and dental auxiliaries (e.g., dental assistants, dental hygienists, laboratory technicians, and dental nurses). 3.2 Which type of dental errors should be disclosed? Table 4 presents the cross-tabulation of the frequencies and percentages of the 586 participants endorsing which type of dental errors should be disclosed, depending on the participant’s personal belief and perceptions of the norm and current practice. Very few participants reported a personal belief of not disclosing dental errors (5.6%), implying that 94.4% preferred disclosure. Slightly higher proportions of participants perceived that not disclosing errors was the norm (5.8%) or was currently practiced (9.6%). Relatively few participants reported that disclosing errors resulting in “Major harm” was their preference (9.6%), a perceived norm (8.4%), or current practice (17.9%). The most frequent responses were: (a) 49.0% reported that their personal belief was to disclose errors resulting in “No harm”; (b) 45.4% reported that the norm was to disclose errors resulting in “Minor harm”; and (c) 32.3% reported that the current practice was to disclose errors resulting in “Minor harm”. Kendall’s coefficient evaluated participant agreement across the three dimensions of disclosure (belief, norm, and practice) for each type of dental error, on a scale from 0 (no agreement) to 1 (100% agreement). While Kendall’s coefficient was statistically significant ( p < 0.001), the low W value (0.173) reflected a weak level of agreement. Pairwise comparisons using Wilcoxon’s signed rank tests showed significant ( p < 0.001) differences between the ranked responses for belief vs. norm, belief vs. practice, and norm vs. practice. 3.3 Who should disclose dental errors? Table 5 presents the cross-tabulation of the frequencies and percentages of the 586 participants endorsing the person responsible for disclosure of dental errors, depending on personal belief and perceptions of the norm and of current practices. Most participants chose the erring dentist as the person responsible for disclosing errors, as a personal belief (68.9%), as the perceived norm (73.2%), and as the current practice (68.4%). Disclosure of dental errors by the division head, the dental assistant, the patient relations service, or the chairman was less frequently reported as a personal belief, norm, or practice. The least frequent response was for the dental error to be disclosed by the receptionist (personal belief: 2.0%; perceived norm: 3.6%; current practice: 3.9%). Kendall’s coefficient evaluated the degree to which participant responses agreed among the three dimensions of disclosure (belief, norm, and practice) for each category of person responsible for disclosure. Kendall’s coefficient was statistically significant ( p < 0.001), but the W value (0.366) reflected only a moderate level of agreement. Pairwise comparisons using Wilcoxon’s signed rank tests showed no significant ( p > 0.05) differences between the ranked responses for belief vs. norm, belief vs. practice, and norm vs. practice. In conclusion, the participants generally agreed on who should disclose dental errors: the consensus, based on the consistently high levels of endorsement in Table 5 , was that disclosure was mainly the responsibility of the erring dentist. 3.4 Prediction of the likelihood of disclosing dental errors Table 6 presents the three binary logistic regression models predicting the likelihood of disclosing dental errors according to the resulting patient harm (i.e., “Major harm”, “Moderate harm”, “Minor harm”, or “No harm”) vs. “Do not disclose” as the reference, with respect to the three dimensions of disclosure. The preference for disclosing dental errors tended to be less frequent among older than among younger participants: the odds ratio indicated that the likelihood of preferring to disclose errors decreased by a factor of 0.48 (95% CI = 0.30, 0.77) for every one-unit increase on the ordinal age scale (i.e., between 1 = <30 and 3 = >40). In addition, the preference for disclosing errors was greater for participants at government institutions than for participants at private institutions.
The odds ratio indicated that the likelihood of a participant at a government institution disclosing errors was 6.32 (95% CI = 2.73, 14.66) times greater than that of a participant at a private institution. Participants also reported a perceived norm in which government institutions were more likely to disclose dental errors than private institutions: the odds ratio indicated that the perceived norm was for participants at government institutions to be 6.48 times (95% CI = 2.77, 15.14) more likely to disclose dental errors than those at private institutions. Likewise, participants at government institutions were perceived, in practice, to be more likely to disclose dental errors than those at private institutions; the odds ratio indicated that participants perceived government institutions to be 2.94 times (95% CI = 1.62, 5.32) more likely to disclose dental errors than private institutions. Lastly, participants perceived that Saudis were 1.92 times more likely than non-Saudis to disclose dental errors (95% CI = 1.07, 3.42). 3.5 Prediction of the likelihood of disclosing errors resulting in “No Harm” Table 7 presents the three binary logistic regression models using demographic variables to predict the likelihood of disclosing errors resulting in “No harm” vs. harm (i.e., “Major harm”, “Moderate harm”, “Minor harm”) or not disclosing errors at all (“Do not disclose”), with respect to the three dimensions of disclosure. The odds ratio indicated that the likelihood of preferring to disclose errors resulting in “No harm” was lower in males than in females by a factor of 0.51 (95% CI = 0.37, 0.72). Furthermore, the preference for disclosing errors resulting in “No harm” was 2.36 (95% CI = 1.67, 3.35) times greater for participants at government institutions than for those at private institutions. The odds ratio indicated that participants at government institutions were 1.50 times more likely (95% CI = 1.02, 2.19) than participants at private institutions to perceive the disclosure of errors resulting in “No harm” as the norm. The odds ratios also indicated that the likelihood of preferring to disclose such errors decreased by a factor of 0.72 (95% CI = 0.55, 0.93) for every one-unit increase on the ordinal age scale; consequently, older participants were less likely than younger participants to disclose errors resulting in “No harm”. With respect to perceived practice, participants at government institutions were more likely to disclose errors resulting in “No harm”: the odds ratio indicated that government institutions were perceived in practice to be 1.83 times (95% CI = 1.15, 2.92) more likely than private institutions to disclose “No harm” errors. 4 Discussion Empirical data were obtained from 586 participants at over 10 dental institutions in Riyadh, Saudi Arabia, regarding issues related to the disclosure of dental errors. The research focused on the associations between the professional responsibility for disclosure (categorized by personal belief, perception of the norm, and perception of current practice), the error factors (specifically, the nature of the dental error to be disclosed), and the institutional culture (specifically, who should disclose the error). Statistical evidence based on the analysis of the cross-tabulated data was consistent with the conceptual model of Fein et al. (2005) , which posits that three of the most important influences controlling whether to disclose medical errors involve provider factors, error factors, and institutional culture.
The current findings revealed considerable differences of opinion among participants regarding the level of dental error disclosure. The personal beliefs of participants, their perceptions of the norm, and the current practices with respect to which type of dental errors should be disclosed were inconsistent, and the level of agreement between the three dimensions of disclosure was weak. These findings are in agreement with those of Blood (2015), who reported considerable inconsistency among dentists regarding how much information they disclosed about their mistakes. The finding that 94.4% of the participants in the current survey preferred that dental errors be disclosed was consistent with the high proportions of participants in previous studies, including both patients and providers, who have endorsed the disclosure of medical errors (Hammami et al., 2010). Despite the high level of preference for disclosing errors, the percentage of participants who perceived that it is current practice to disclose errors was relatively low (17.9%, 22.0%, 32.3%, and 18.3% for errors resulting in “Major harm”, “Moderate harm”, “Minor harm”, and “No harm”, respectively). These results imply a perception that there is extensive non-reporting of dental errors and are in agreement with other studies suggesting that less than half of medical errors may be disclosed to patients (Blendon et al., 2003; Hammami et al., 2010). It appears that although healthcare providers are ethically bound to admit mistakes to patients, in reality, most practitioners and institutions do not disclose errors. Hammami et al. (2010) found a similar distribution of responses for perception of the norm and personal beliefs regarding the type of medical errors that should be disclosed among patients attending outpatient clinics in Saudi Arabia, suggesting that personal beliefs in the Saudi culture tended toward the norm. In the current study, however, the level of agreement between personal beliefs and the norm was relatively weak, implying a possible difference in attitudes between dentists and medical doctors. Although there were inconsistencies as to who should disclose dental errors, the majority (over 68%) of the participants reported a personal belief, a perception of the norm, and a perception of practice that dental errors should be disclosed by the erring dentist, with a moderate level of agreement. Consequently, the level of agreement between personal belief, perceived norm, and perceived practice was stronger for the person responsible for reporting errors than for which errors should be disclosed. The current study found that the likelihood of disclosing dental errors differed between participants at government and private institutions. It is possible that participants at government institutions are more likely than those at private institutions to report dental errors, including errors leading to “No harm” to the patient, as a personal belief, perceived norm, and perceived practice, because they feel more accountable to the public for their actions. Other significant demographic predictors were that (a) males were less likely than females to personally prefer the disclosure of errors leading to “No harm”, (b) older participants were less likely than younger participants to prefer disclosure of dental errors as well as disclosure of errors resulting in “No harm” as the norm, and (c) there was a perception that Saudis were more likely than non-Saudis to disclose dental errors in practice.
Hammami et al. (2010) similarly found age and gender to be predictors of the disclosure of medical errors in Saudi Arabia. Older age and female gender predicted a preference for disclosure of errors leading to “No harm”, while younger age and male gender predicted a preference not to disclose errors. Healthcare organizations in several countries have developed guidelines to help providers adopt an ethical course of action for the disclosure of medical errors (Chafe et al., 2009; Williams, 2012). However, according to Hammami et al. (2010), the Implementation for Regulation of the Practice of Medicine and Dentistry released by the Saudi Ministry of Health in 1990 and the Ethics of Medical Profession released by the Saudi Commission of Health Specialists are “silent on this issue”. Different cultures may require different levels of disclosure, and the needs and demands related to the disclosure of medical errors may differ in Saudi Arabia compared to other countries. The findings from the present study can assist the Ministry of Health in the development of new ethical policies for the disclosure of dental errors in Saudi Arabia. Further research regarding the attitudes and practices of dentists toward the disclosure of dental errors in the context of the Islamic/Arabic culture will also be necessary to complement the current findings. Ethical statement This survey was conducted in compliance with ICH-GCP Ethical Standards and Research Protocol #RSS 15/045, approved by the Institutional Review Board (IRB) of King Abdullah International Medical Research Center (KAIMRC), in the period from August 2015 to January 2016. All participants provided verbal consent. Conflict of interests The authors have no known conflicts of interest associated with this study that could have influenced its outcome.
REFERENCES:
1. BAGLEY S (2001)
2. BIRKS Y (2014)
3. BLENDON R (2003)
4. BLOOD J (2015)
5. CHAFE R (2009)
6. CHIODO G (1999)
7. ELDER N (2006)
8. FEIN S (2005)
9. GALLAGHER T (2009)
10. GHAZAL L (2014)
11. GROBER E (2005)
12. HAMMAMI M (2010)
13.
14.
15. OCONNOR E (2010)
16. OZAR D (2002)
17. ROSNER F (2000)
18. THUSU S (2012)
19. WILLIAMS L (2012)
20. YAMALIK N (2012)
|
10.1593_neo.09230.txt
|
TITLE: The Plasticity of Oncogene Addiction: Implications for Targeted Therapies Directed to Receptor Tyrosine Kinases
AUTHORS:
- Pillay, Vinochani
- Allaf, Layal
- Wilding, Alexander L.
- Donoghue, Jacqui F.
- Court, Naomi W.
- Greenall, Steve A.
- Scott, Andrew M.
- Johns, Terrance G.
ABSTRACT:
A common mutation of the epidermal growth factor receptor (EGFR) in glioblastoma multiforme (GBM) is an extracellular truncation known as the de2-7 EGFR (or EGFRvIII). Hepatocyte growth factor (HGF) is the ligand for the receptor tyrosine kinase (RTK) c-Met, and this signaling axis is often active in GBM. The expression of the HGF/c-Met axis or de2-7 EGFR independently enhances GBM growth and invasiveness, particularly through the phosphatidylinositol-3 kinase/pAkt pathway. Using RTK arrays, we show that expression of de2-7 EGFR in U87MG GBM cells leads to the coactivation of several RTKs, including platelet-derived growth factor receptor β and c-Met. A neutralizing antibody to HGF (AMG 102) did not inhibit de2-7 EGFR-mediated activation of c-Met, demonstrating that it is ligand-independent. Therapy of parental U87MG xenografts with AMG 102 resulted in significant inhibition of tumor growth, whereas U87MG.Δ2-7 xenografts were profoundly resistant. Treatment of U87MG.Δ2-7 xenografts with panitumumab, an anti-EGFR antibody, only partially inhibited tumor growth as xenografts rapidly reverted to the HGF/c-Met signaling pathway. Cotreatment with panitumumab and AMG 102 prevented this escape, leading to significant tumor inhibition through an apoptotic mechanism, consistent with the induction of oncogenic shock. This observation provides a rationale for using panitumumab and AMG 102 in combination for the treatment of GBM patients. These results illustrate that GBM cells can rapidly change the RTK driving their oncogene addiction if the alternate RTK signals through the same downstream pathway. Consequently, inhibition of a dominant oncogene by targeted therapy can alter the hierarchy of RTKs, resulting in rapid therapeutic resistance.
BODY: No body content available
|
10.1016_j.bioactmat.2025.07.052.txt
|
TITLE: Stable, bioactive hydrogel coating on silicone surfaces for non-invasive decontamination via photochemical treatment
AUTHORS:
- Berger, Romina
- Rahtz, Alina
- Schweigerdt, Alexander
- Stöbener, Daniel D.
- Cosimi, Andrea
- Dempwolf, Wibke
- Menzel, Henning
- Johannsmeier, Sonja
- Weinhart, Marie
ABSTRACT:
Polydimethylsiloxane (PDMS) is widely used in biomedical applications due to its biocompatibility, chemical stability, flexibility, and resistance to degradation in physiological environments. However, its intrinsic inertness limits further (bio)functionalization, and its hydrophobic recovery compromises the longevity of conventional surface modifications. To address these challenges, we developed a nanoprecipitation method for the straightforward colloidal deposition, covalent thermal crosslinking, and surface anchoring of a chemically tunable, biocompatible polyacrylamide with reactive hydroxyl groups, enabling further surface modifications. This polymer incorporates ∼6 % bioinspired catechol units, introduced via an elegant one-pot Kabachnik-Fields reaction, to facilitate thermally induced network formation and enhance adhesion to plasma-activated PDMS. The resulting uniform coatings exhibited tunable dry layer thicknesses up to 44 ± 7 nm and effectively suppressed PDMS chain rearrangement even after steam autoclaving, ensuring long-term stability in aqueous and ambient environments for at least 90 days.
The bioactive post-modification potential was demonstrated in a proof-of-concept study by immobilizing the photosensitizer rose bengal at surface concentrations of 20 or 40 μg cm−2. The coating exhibited antimicrobial activity against S. aureus, achieving a 4-log reduction (99.99 %) in colony-forming units after 30 min of irradiation at 554 nm (342 J cm−2), even when bacteria were suspended in liquid, without direct surface contact. In contrast, antimicrobial activity against E. coli was only observed with minimized liquid volume, bringing the motile bacteria into close contact with the surface.
This work established a straightforward and versatile strategy for the stable and bioactive functionalization of PDMS surfaces for application in non-invasive surface decontamination.
BODY:
1 Introduction Polydimethylsiloxane (PDMS), commonly known as silicone, is a versatile and cost-effective elastomer renowned for its hydrophobicity, gas permeability, optical transparency, and exceptional biocompatibility [ 1 ]. Its ease of processing, including rapid prototyping techniques like (micro)molding and 3D printing, enables the creation of highly customizable shapes and structures [ 2 ]. This adaptability has positioned PDMS as a cornerstone material in the fabrication of microfluidic devices and medical equipment, including catheters and shunts, widely used in clinical and biomedical applications [ 1–3 ]. However, the inherent hydrophobicity of PDMS poses significant challenges for biological and biomedical applications. This property drives nonspecific protein adsorption and denaturation – processes generally undesirable to device functionality [ 1 , 4 ] – while simultaneously promoting bacterial settlement and biofilm formation through hydrophobic interactions [ 5 , 6 ]. The clinical consequences are severe: silicone-based indwelling devices like urinary catheters and endotracheal tubes exhibit high infection rates due to bacterial colonization, directly compromising patient outcomes [ 7 ]. Such biofouling events compound clinical risks while impairing device performance. Additionally, the hydrophobic surface impedes efficient wetting in aqueous environments, limiting the utility of capillary forces − an essential feature in many microfluidic applications [ 3 ]. Addressing this issue requires surface modification to enhance hydrophilicity, which is complicated by the lack of functional groups on native PDMS for chemical attachment. To overcome this limitation, functional groups such as hydroxyl, carboxyl, or amine/amide groups can be introduced via surface activation techniques. Chemical treatments, such as piranha solution or gas-phase deposition, are effective but often involve hazardous reagents. Among these methods, plasma oxidation using oxygen or nitrogen is widely regarded as the most efficient and practical approach due to its speed, effectiveness, and ability to generate functionalized surfaces suitable for further modification. This technique has become a standard procedure for improving the hydrophilicity and functionality of PDMS surfaces in diverse applications [ 1 ]. Nevertheless, the established hydrophilic state of the interface is not sustained over long periods of time due to the inherently high chain dynamics of bulk PDMS, particularly when exposed to air. Hydrophobic chains migrate from the bulk to the surface to minimize surface energy, leading to the well-documented phenomenon of hydrophobic recovery [ 8 , 9 ]. This recovery process typically occurs within hours and is accelerated in thicker PDMS layers (>1 mm) compared to thinner ones (<1 mm) [ 10 ]. To delay hydrophobic recovery, activated PDMS is often stored in polar liquids, which temporarily preserves its hydrophilic state [ 11 ]. For more durable modifications, covalent attachment of hydrophilic polymer coatings is employed [ 12 , 13 ], offering the possibility to introduce functional groups for additional chemical modification. Among the established PDMS coating strategies [ 1 , 2 , 4 , 14 ], relatively few comprehensively address the challenge of long-term stability [ 3 , 15 ]. Polymer coatings are typically anchored to the silanol groups of activated PDMS surfaces through a reactive alkoxysilane layer, which provides functional groups for grafting-from or grafting-to processes [ 1 ]. 
This basal silane layer also retards hydrophobic recovery up to nine days [ 16 ]. Grafting-to approaches, such as the attachment of polyethylene glycol (PEG) or polysaccharide-based coatings, have demonstrated stable hydrophilicity for at least 30 days, with only slight decreases observed over six months [ 17 , 18 ]. Alternative grafting-from techniques, including thiol-initiated radical polymerizations using ethylene glycol dicyclopentenyl ether acrylate or di(ethylene glycol) methyl ether methacrylate, have maintained hydrophilicity for at least six months [ 19 ]. The close proximity of radicals in this approach promotes cross-linking, resulting in complex architectures with improved hydrophilic stability, i.e., up to 100 days for plasma polymerized tetraglyme [ 15 ]. However, further functionalization of such coatings is challenging due to the lack of accessible functional groups, restricting their broader application potential. The introduction of mussel-inspired dopamine coatings by Messersmith et al. in 2007 [ 20 ] provides a versatile, substrate- and geometry-independent strategy for producing amine-functionalized PDMS surfaces. This method relies on the straightforward dip-coating of substrates in an alkaline aqueous dopamine solution, leading to the deposition of a polydopamine (pDA) layer presenting reactive amine functional groups. Building on this catechol-based chemistry, universal coatings incorporating functional polymers have been successfully developed on various substrates [ 12 , 21–23 ]. Of particular interest are polymer coatings exhibiting reactive groups for postmodification with bioactive compounds. However, so far, there is no information on the long-term stability of mussel-inspired coatings on PDMS substrates. With the pressing need for antibiotic alternatives, surface-confined antimicrobial photodynamic systems have gained significant attention as a non-invasive method for surface decontamination [ 24–26 ]. Photosensitizers (PS) are the key component of such bioactive surfaces, generating highly reactive ions, radicals, and/or reactive oxygen species (ROS) upon light activation. These reactive species can effectively kill bacteria by damaging their cell membrane. The covalent immobilization of PS offers significant advantages over physisorption by preventing leaching and ensuring the safety and reusability of substrates for repeated disinfection cycles under light irradiation. This not only improves economic efficiency but also enhances sustainability [ 26 ]. Notably, no resistance against antimicrobial photodynamic approaches has been observed so far [ 27 ], highlighting their potential to combat bacterial resistance. In general, the efficacy of a PS depends on its ability to generate reactive oxygen species (ROS) upon irradiation and the lifetime of these species within the given environment. Key criteria for selecting a PS for antibacterial photodynamic systems include photostability, quantum yield, ROS generation efficiency, and an absorption range that aligns with the intended light source [ 28 ]. A notable limitation of ROS, however, is their highly localized activity due to short lifetimes and limited diffusion ranges, which reduces antibacterial effectiveness as the distance between the PS and bacterial target increases [ 24 , 29 ]. To maximize efficacy, surface-immobilized PS must directly interact with bacteria before a biofilm matrix shields them. 
Thus, primary applications of surface sterilization based on antibacterial photodynamic approaches include medical devices, packaging, cell culture systems, biomedical analytical tools, and clinical surfaces, where localized and effective antimicrobial action is essential. We anticipate that integrating long-term stable functional coatings on PDMS with PS immobilization offers significant potential for creating bioactive surfaces capable of non-invasive decontamination across manifold applications, including microfluidics and long-term implantable catheters equipped with optical fibers. A key advantage of using PDMS as a substrate is its intrinsic gas permeability, which ensures high O2 availability for efficient ROS production by PS such as rose bengal (RB), even in enclosed systems like the inner surfaces of tubing or microfluidic devices [30]. Building on current research into stable hydrophilic coatings on PDMS, we hypothesize that thermal crosslinking of hydroxyl-bearing polymers after deposition can produce durable hydrogel coatings suitable for bioactive surface fabrication. With the reduced flexibility in the crosslinked coating layer, we expect the modified silicone surface to withstand hydrophobic recovery, unlike conventional brush coatings on PDMS reported in the literature [31, 32]. To test our hypothesis, we employed a bioinspired approach by incorporating a low percentage of catechol groups into a hydrophilic copolymer and established a geometry-independent nanoprecipitation coating on activated PDMS. After thermal crosslinking, we evaluated the robustness of the coating under standard autoclaving conditions and during long-term storage in air, water, and physiological salt solution for up to 120 days, finding no significant decrease in layer thickness and no change in contact angle for at least 90 days. In a proof-of-concept study, we demonstrate the straightforward immobilization of RB on the hydroxyl-functionalized surface, its stability during multiple rounds of irradiation, and its antibacterial activity, enabling non-invasive decontamination of the silicone surface. 2 Materials and methods 2.1 Materials for synthesis and bacterial culture Acetone (analysis grade), N,N-dimethylformamide (DMF, 99.5 % analytical grade), anhydrous DMF (99.8 %), methanol (MeOH, 99.9 % analytical grade), rose bengal (RB, 85 %), diethyl phosphite (DEP, 96 %), 3,4-dihydroxybenzaldehyde (DHBA, 98 %), triethylamine (TEA, 99 %) and phosphate buffered saline (PBS, research grade) were obtained from Fisher Scientific GmbH (Schwerte/Geisecke, Germany). Ethanol (EtOH, absolute) was ordered from Merck (Darmstadt, Germany), N,N′-dicyclohexylcarbodiimide (DCC) from J&K Scientific bvba (Lommel, Belgium), and N-(3-aminopropyl)methacrylamide hydrochloride (APMAA; 95 %, stabilized with MEHQ) and 2,2,4-trimethylpentane from abcr GmbH (Karlsruhe, Germany). 4-(Dimethylamino)pyridine (DMAP, ≥99 % ReagentPlus), N-hydroxyethyl acrylamide (HEAA; 97 %, stabilized with HQ), and sodium pyruvate solution (100 mM, sterile) were supplied by Sigma-Aldrich Chemie GmbH (Taufkirchen, Germany). 9,10-Anthracenediyl-bis(methylene)dimalonic acid (ABDA, >97 %) and nitroblue tetrazolium chloride (NBT) were ordered from Biomol GmbH (Hamburg, Germany). All chemicals were used as received. Ultrapure water (H2O) was obtained from a PURELAB Classic filtration system by ELGA LabWater/Veolia Water Technologies Deutschland GmbH (Celle, Germany) with a minimum resistivity of 18.5 MΩ cm (25 °C).
Silicon wafers were supplied by Silchem GmbH (Freiberg, Germany), cut into small samples (11 × 11 mm), thoroughly washed with EtOH and H2O, and dried under a stream of argon before use. QSense® Sensors QSXT 301 Au were supplied by Biolin Scientific (AB Stockholm, Sweden) and were used as received. Staphylococcus aureus (S. aureus, wild type; strain SH1000) and Escherichia coli K12 (E. coli) were kindly provided by the Institute for Medical Microbiology and Hospital Epidemiology (MHH, Hannover, Germany). Lysogeny broth (LB) and LB agar were purchased from Carl Roth GmbH + Co. KG (Karlsruhe, Germany). Maximal recovery diluent (MRD) medium and LIVE/DEAD™ BacLight™ Bacterial Viability Kits L7012 were purchased from Thermo Fisher Scientific (Schwerte, Germany). Bacteria were cultivated and expanded on LB agar plates for 24 h at 37 °C before use. Human dermal fibroblasts (HDF) were isolated and cultured according to our previously established procedures [72]. Dulbecco's modified Eagle medium (DMEM), with or without GlutaMAX, pyruvate, and phenol red, as well as GlutaMAX supplement, were obtained from Gibco (Life Technologies GmbH, Darmstadt, Germany). Fetal calf serum (FCS) was obtained from PAN-Biontech GmbH (Aidenbach, Germany). Cell-Counting-Kit-8 (CCK-8) was purchased from MedChemExpress via BIOZOL Diagnostica Vertrieb GmbH (Hamburg, Germany), and tissue culture dishes (TCPS) were purchased from VWR/Avantor (Darmstadt, Germany). 2.2 Polymer synthesis 2.2.1 Synthesis of P1 A statistical copolymer P1 composed of HEAA with a low percentage of APMAA was synthesized via free radical polymerization. Specifically, HEAA (23.75 mmol, 2.40 g, 95 eq), APMAA (1.25 mmol, 0.223 g, 5 eq), and AIBN (0.25 mmol, 0.041 g, 1 eq) were dissolved in ethanol (7.4 mL, 70 wt% of the total solution) and degassed with Ar for 15 min. The reaction mixture was stirred at 60 °C in a preheated oil bath for 10 h. The polymerization was terminated by exposure to air at 0 °C. Precipitation from the crude reaction mixture into acetone yielded the copolymer P1 (95 %) as a white solid. 1H NMR (600 MHz, MeOD, δ): 8.08 (s, 0.7H); 5.49 (s, 0.6H); 4.63 (s, 0.8H); 3.71–2.97 (m, 72H); 2.22–1.37 (m, 50H); 1.07 (m, 3H). Molecular weight determined by GPC (0.1 M aq. NaNO3, pullulan standard): Mn = 11 715 g mol−1, Ð = 2.9. 2.2.2 Synthesis of P2 Crosslinkable catechol units were introduced into copolymer P1 via a Kabachnik-Fields reaction at the amine functionality of the APMAA units, adapted from the procedure previously reported by Zhang et al. [21]. Briefly, copolymer P1 (0.085 mmol, 1 g, 1 eq) and DHBA (0.84 mmol, 0.116 g, 2 eq per APMAA unit) were dissolved in methanol (7 mL) and heated to 40 °C. To this solution, a mixture of DEP (1.26 mmol, 0.174 g, 3 eq per APMAA unit) and TEA (0.84 mmol, 0.085 g, 2 eq per APMAA unit) in methanol (1 mL) was added dropwise, to give a final copolymer concentration of 0.125 g mL−1. The reaction mixture was stirred at 40 °C for 48 h. Afterward, it was diluted with an equal volume of methanol, and the functional copolymer P2, presenting catechol groups in its side chains, was precipitated into acetone (2 × 1 L) at 0 °C. The precipitate was dissolved in water, dialyzed against 1 M NaCl and deionized water (2 days each, with 3–4 daily changes), and lyophilized to yield 1.05 g (93 %) of P2 as an off-white solid. 1H NMR (600 MHz, MeOD, δ): 8.07 (s, 0.8H); 6.95–6.79 (m, 3H); 5.50 (s, 0.5H); 4.64 (s, 0.9H); 4.10–3.11 (m, 87H); 2.59 (m, 2H); 2.22–1.37 (m, 51H); 1.31 (m, 6H); 1.03 (m, 3H).
13 C NMR (150 MHz, MeOD, δ ): 176.9 (O= C -NH); 146.6; 127.6; 121.7; 116.9; 116.5 (aromatic); 64.5; 61.7 (P-O-C H 2 -CH 3 ); 60.9; 57.5; 54.2; 47.9; 46.7; 44.5; 43.8; 43.1; 37.8; 36.7; 20.8; 16.9 (P-O-CH 2 -C H 3 ). 31 P NMR (243 MHz, MeOD, δ ): 27.09–24.71 (1P). 2.3 Polymer characterization Gel permeation chromatography (GPC) was conducted on a PSS SECcurity 2 GPC System (Polymer Standards Service GmbH, Mainz, Germany) in 0.1 M NaNO 3 aqueous solution at a concentration of 3 mg mL −1 and a flow rate of 1 mL min −1 at 25 °C. The system consists of a PSS Suprema pre-column with dimensions of 8 × 50 mm and a particle size of 10 μm and three linear PSS Suprema columns with dimensions of 8 × 300 mm, a particle size of 10 μm, and a pore size of 30 Å and 2 × 1000 Å, respectively. All columns were arranged in line with a refractive index detector. Pullulan standards from PSS (Mainz, Germany) were used for calibration. Nuclear Magnetic Resonance (NMR) spectra were recorded on a Jeol ECZ at 600, 150, and 243 MHz, respectively, and were processed with the software MestreNova (version 14.2.2). The spectra were referenced to the deuterated solvent peak unless stated otherwise, and the chemical shifts δ were reported in ppm. Dynamic light scattering (DLS) was performed using a Malvern Zetasizer Nano (Malvern Panalytical GmbH, Malvern, UK) equipped with a red laser (636 nm) as the light source. Samples were analyzed at a fixed back-scattering angle of 173°. For each sample, three independent measurements were conducted, and each measurement consisted of 15 scans, with a duration of 10 s per scan. Prior to measurement, samples were equilibrated at the set temperature for 5 min to ensure thermal stability. Data were analyzed using the Malvern Zetasizer software, applying the cumulants method to calculate the hydrodynamic diameter according to the size distribution by volume curves. 2.4 Surface preparation, coating, and characterization PDMS surface preparation (bulk and thin-films) PDMS elastomers were prepared using Sylgard® 184 silicone elastomer kit from Dow Corning purchased from VWR International GmbH (Darmstadt, Germany). The liquid starting material was mixed in a 10:1 ratio with the crosslinking agent and poured into glass Petri dishes (3 cm in diameter, 6.6 cm 2 surface area; Carl Roth GmbH + Co. KG, Karlsruhe, Germany), generating round disks of approx. 3 mm height after curing overnight at 80 °C. Surface modifications and biological testing were conveniently performed inside the Petri dishes, while for surface characterization, round specimens with a diameter of 8 mm were cut from the bulk material using a biopsy punch (Medic GmbH, Wesel, Germany). PDMS thin-films on silicon wafers (11 × 11 mm) as model substrates for dry thickness analysis, as well as on QCM-D gold sensors, were prepared by spin coating using a SPIN150i-NPP Single Substrate Spin Processor from SPS-Europe (Ingolstadt, Germany). The freshly prepared silicone elastomer mix was dissolved in 2,2,4-trimethylpentane (6 % (w/w)) while vortexing. The solution (60 μL per wafer) was spin-coated at 3000 rpm for 60 s with subsequent curing at 80 °C overnight. Nanoprecipitation coating Prior to coating, the PDMS substrates (spin-coated or bulk disk) were activated using air plasma generated by an Atto 1.5.2. Typ B plasma generator (Diener electronic GmbH + Co. KG, Ebhausen, Germany) at 200 W for 180 s. 
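As background to the DLS cumulants analysis described above, the hydrodynamic diameter follows from the fitted translational diffusion coefficient via the Stokes-Einstein relation. The sketch below uses an illustrative diffusion coefficient, not a measured value; temperature and solvent viscosity are standard literature values for water at 25 °C.

```python
# Stokes-Einstein relation behind the DLS size analysis: d_H = k_B*T / (3*pi*eta*D).
import math

k_B = 1.380649e-23   # Boltzmann constant, J K^-1
T = 298.15           # temperature, K (25 deg C)
eta = 0.89e-3        # viscosity of water at 25 deg C, Pa s
D = 6.0e-11          # hypothetical translational diffusion coefficient, m^2 s^-1

d_H = k_B * T / (3.0 * math.pi * eta * D)   # hydrodynamic diameter, m
print(f"hydrodynamic diameter ~ {d_H * 1e9:.1f} nm")
```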
The activated PDMS substrates were then extracted in ethanol for 2–4 h, dried in a stream of Ar, and coated with the catechol group-modified copolymer P2 through a straightforward nanoprecipitation coating procedure at its solubility limit in a solvent/non-solvent mixture. P2 was dissolved in a 9:1 methanol/water mixture referred to as ‘solvent’ (1 mL) at a concentration of 2 mg mL−1, and acetone, referred to as ‘non-solvent’ (3 mL), was added until a metastable solution with visual turbidity was obtained. The activated PDMS substrates were statically incubated in the freshly prepared coating solution of P2 (0.5 mg mL−1; 215 μL cm−2) for 120 min at room temperature (RT). After incubation, the supernatant was carefully decanted from the substrates and the adsorbed polymer coating was thermally annealed at 120 °C for 120 min to covalently crosslink the deposited polymer layer. Loose polymer chains were subsequently removed by dynamic extraction of the substrates in water for 3 days on a shaker with daily water changes. To investigate the impact of polymer concentration on layer thickness, this procedure was adapted to a P2 stock solution of 1 mg mL−1 in the solvent and a final concentration of 0.25 mg mL−1 in the solvent/non-solvent mixture. The P2-based hydrogel-coated PDMS substrates were stored in air at RT for a maximum of two weeks until use, except for long-term stability experiments. 2.4.1 Immobilization of rose bengal Covalent surface immobilization of RB was performed using two different concentrations of RB (50 and 100 mg mL−1) following an analogous procedure. The P2-coated PDMS substrate (either spin-coated wafer or PDMS disk) was first dried at 50 °C for 60–90 min. Meanwhile, a solution containing RB (0.05 mmol, 50 mg mL−1, 1 eq), DMAP (0.064 mmol, 7.8 mg mL−1, 1.3 eq), and DCC (0.064 mmol, 13.2 mg mL−1, 1.3 eq) in anhydrous DMF was prepared and stirred for 10 min before applying it to the substrate (2 mL per 6.6 cm2). The ester coupling reaction of RB to the P2-coated PDMS substrate was conducted under ambient conditions in the dark for 48 h in closed Petri dishes. Unbound RB was subsequently removed by extensive washing of the substrates with ultrapure water, followed by extraction in water in the dark for 3 days (with 3 water changes). The light pink coloration of the substrate provided a clear visual indication of the reaction's success, in contrast to control experiments where either the coupling agents or the dye were omitted. Additionally, the amount of bound RB was quantified using UV–Vis spectroscopy. RB-modified substrates (RB-P2@PDMS) were stored in air at RT in the dark for a maximum of four weeks until use. Control surfaces with physisorbed RB in P2-coated PDMS substrates were prepared from an RB (8 mg) stock solution in ethanol (4 mL) and serial dilution with ethanol to final concentrations of 2, 1.5, 1, 0.5, 0.25, 0.125, 0.0625, and 0.03125 mg mL−1. The P2-coated PDMS substrates were incubated with the dye solutions (1 mL per substrate) in the dark for 3 days, allowing the dye to soak into the substrate (6.6 cm2) until a homogeneous coloration occurred. Remaining ethanol was evaporated by drying at 50 °C for 8 h. The surface concentration of adsorbed dye was calculated according to equation (1), surface concentration (mg cm−2) = RB concentration in ethanol (mg mL−1) × 1 mL / PDMS surface area (cm2), revealing surface concentrations of 0.3, 0.23, 0.15, 0.076, 0.038, 0.019, 0.0095, and 0.00475 mg cm−2, respectively.
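Equation (1) is a simple mass-per-area calculation; the short sketch below reproduces the listed surface concentrations of the physisorbed control series from the dilution series, the applied volume (1 mL), and the substrate area (6.6 cm2). It is an illustration of the arithmetic only.

```python
# Worked version of equation (1): deposited dye mass (c * 1 mL) per substrate area.
concentrations_mg_per_mL = [2, 1.5, 1, 0.5, 0.25, 0.125, 0.0625, 0.03125]
volume_mL = 1.0
area_cm2 = 6.6

for c in concentrations_mg_per_mL:
    gamma = c * volume_mL / area_cm2   # mg cm^-2 for mg mL^-1 inputs
    print(f"{c:>7} mg/mL -> {gamma:.5f} mg/cm^2")
```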
For each concentration, a set of three control samples was prepared and stored in air at RT in the dark. 2.4.2 Surface characterization Spectroscopic ellipsometry (SE) (SENpro – SENTECH GmbH, Berlin, Germany) was employed to determine the dry layer thickness of bare and gel-coated PDMS thin-films on silicon wafers at an incident angle of 70° and a wavelength range of 370–1050 nm. Cauchy layers were used to model the surface composed of a SiO 2 bottom layer with a fixed refractive index of 1.46 and a PDMS layer with a fixed refractive index of 1.4. The averaged thicknesses for SiO 2 and PDMS layers were used to determine the thickness of the P2 -based gel layer on top, modeled as Cauchy layer with a fixed refractive index of 1.5. For substrates containing immobilized RB, the Cauchy layer model was extended with an extinction coefficient k fitting to address dye absorbance effects around 554 nm. In general, average dry layer thicknesses were determined on at least six independent samples ( N ≥ 6 with 5 technical replicates). Static water contact angles (CA) were determined via the sessile drop method and fitted with the Young-Laplace model using an OCA 25 system from DataPhysics Instruments GmbH (Filderstadt, Germany), equipped with software package 21. Measurements were performed with 2 μL water drop volume under ambient conditions. Average values were calculated from at least six independent samples ( N ≥ 6 with 3 technical replicates). Long-term stability of the P2 -based gel coating on PDMS was examined by storing the coated silicon wafers in air, pure water, or PBS at room temperature ( N = 4, with 2 technical replicates each). For this purpose, the wafers were placed in polystyrene Petri dishes (35 mm in diameter; Falcon®, Corning Incorporated, Corning NY, USA) in the respective medium. The layer thicknesses and water CAs were determined on days 0, 1, 2, 7, 14, 21, 28, 56, 86, and 116. Prior to each measurement, samples were dried for 1 h at 50 °C. Those stored in PBS were rinsed with pure water beforehand. Atomic force microscopy (AFM) was used to analyze the topography of the gel-functionalized/non-functionalized PDMS disks. A NanoWizard IV AFM (JPK Instruments, Berlin, Germany) mounted on top of an LSM800 confocal microscope (Carl Zeiss AG, Oberkochen, Germany) was applied. The AFM head was equipped with a 15 μm z -range linearized piezoelectric scanner and an infrared laser. Imaging of the samples was performed in air in tapping mode with PointProbePlus® Non-Contact – Long Cantilever – Reflex Coating (PPP-NCLR) silicon sensors with a tip radius of approximately 7 nm, a resonant frequency of 170 kHz, and a spring constant of 30 ± 3 N m −1 . Imaging parameters were adjusted to minimize the force applied to the substrate surface. The scanning speed was optimized to 0.2–0.5 Hz with 512 × 512 acquisition points. Imaging data were analyzed with the JPK image processing software v.6.4.21 (JPK Instruments, Berlin, Germany). X-ray photoelectron spectroscopy (XPS) measurements were conducted using a Kratos Axis Supra instrument (Kratos Analytical Ltd., Manchester, UK) equipped with a monochromatic Al Kα X-ray source (1486.6 eV). Survey spectra were recorded using a pass energy of 160 eV and 150 W emission power, and 20 eV at 225 W emission power at high-resolution spectra, with a spot size of 300 × 700 μm. Charge compensation was applied to all samples to mitigate surface charging effects. Binding energy calibration was performed by referencing the Si 2p 3/2 peak of PDMS to 101.79 eV. 
The obtained spectra were processed and analyzed using CasaXPS software (version 2.3.19PR1.0). Curve fitting was performed using a GL(30) lineshape and a U2 Tougaard background subtraction. Quartz Crystal Microbalance with Dissipation (QCM-D) measurements were performed with a one-channel Q-Sense E1 device from LOT-Quantum Design GmbH (Darmstadt, Germany) equipped with a Reglo Digital peristaltic pump from Ismatec (Wertheim, Germany). The software QSoft401 version 2.5.22 was used for data acquisition, and QTools 3 version 3.1.25 from Biolin Scientific AB (Stockholm, Sweden) was used for data analysis. AT-cut crystals with a fundamental resonance frequency of 4.95 MHz were mounted in a standard flow module (Biolin Scientific AB, Stockholm, Sweden) with the polymer-coated side exposed to the flow chamber. The temperature was controlled within ±0.1 °C for all experiments. The mass sensitivity constant C of the sensor was 17.7 ng cm−2 Hz−1 and was used to convert the detected frequency changes Δf on the sensor chip into mass changes Δm via the Sauerbrey equation. All results in the present study were obtained from the evaluation of the frequency change of the third overtone (n = 3). For real-time assessment of protein adsorption via frequency and dissipation shift monitoring, P2-coated PDMS on QCM-D gold sensors with and without additional RB modification, showing dry layer thicknesses of 20 ± 7 nm for the P2 layer and 145 ± 10 nm for the PDMS layer as determined via SE (N = 3), were prepared. 10 % FCS in PBS was used as a complex, protein-containing medium for non-specific protein adsorption tests. The solution was injected into the flow chamber, which housed the coated sensors, after establishing a stable baseline with PBS buffer at 20 °C under a constant flow of 0.1 mL min−1. After a 20-min flow of protein-containing medium, the system was switched back to PBS to remove loosely adsorbed proteins until the frequency shift reached a stable plateau again. Singlet oxygen (1O2) generation was quantified using ABDA as a selective chemical probe. A 10 mM ABDA stock solution was prepared in DMSO and serially diluted in MRD medium to final concentrations of 0.01–0.1 mM. Absorbance measurement at 400 nm (BioDrop Duo, Biochrom, UK) yielded a linear calibration curve (Fig. S21). For experimental assays, RB-functionalized substrates (20 and 40 μg cm−2) were incubated with 1.5 mL of 0.1 mM ABDA in MRD medium (initial A400 = 1). Samples were irradiated at 554 nm (190 mW; MINTL 5, Thorlabs Inc., USA) for 15, 30, or 60 min or maintained in dark conditions. Post-irradiation, supernatants were isolated for residual A400 determination. Control experiments confirmed negligible ABDA self-degradation under identical conditions (i.e., 0.1 mM ABDA solution without substrates, 60 min irradiation). All assays were performed in triplicate using independently prepared samples (N = 3). 2.5 Cell compatibility and protein adsorption 2.5.1 Leachable assay Sterile PDMS substrates (native, P2-coated, and RB-functionalized; in duplicate) and empty TCPS control dishes were incubated in 2 mL non-supplemented DMEM (no FCS/antibiotics, phenol red-free) under static conditions (37 °C, 24 h). Extracts were collected, sterile-filtered (0.22 μm cellulose acetate), and pooled per condition. The pooled extract was supplemented to final concentrations of 10 % FCS and 1 % Pen/Strep. HDFs (5000 cells/well in 96-well plates) were cultured for 24 h (37 °C, 5 % CO2) in 100 μL DMEM supplemented with 10 % FCS.
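The Sauerbrey conversion used in the QCM-D evaluation above relates the normalized overtone frequency shift to an areal mass uptake. The sketch below uses the stated sensitivity constant and overtone number; the frequency shift itself is purely illustrative, not a measured value.

```python
# Sauerbrey conversion: delta_m = -C * delta_f_n / n (thin, rigid film assumption).
C = 17.7           # mass sensitivity constant, ng cm^-2 Hz^-1
n = 3              # overtone used for evaluation
delta_f_n = -45.0  # hypothetical frequency shift of the 3rd overtone, Hz

delta_m = -C * delta_f_n / n   # areal mass uptake, ng cm^-2
print(f"adsorbed mass ~ {delta_m:.0f} ng cm^-2")
```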
Culture medium was replaced with 100 μL of the supplemented extracts. Cell viability was assessed via a CCK-8 assay according to the manufacturer's instructions after 24 and 72 h of extract exposure at 37 °C and 5 % CO2. In brief, 10 μL CCK-8 reagent was added per well, incubated for 3 h (37 °C, 5 % CO2), and the absorbance was measured at 450 nm. Cell viability was calculated according to viability [%] = (OD_probe − OD_blank) / (OD_neg control − OD_blank) × 100, with negative control: TCPS extract; positive control: 0.1 % Triton X-100 (100 % cytotoxicity); and blank: each supplemented extract without cells, containing 10 μL CCK-8 reagent. According to ISO 10993-5, a value below 70 % was considered cytotoxic (N = 2 biological replicates, with 3 technical replicates each). 2.6 Antimicrobial activity tests 2.6.1 Bacterial culture E. coli and S. aureus were transferred from the agar plate into suspension in MRD medium with an inoculation loop until an optical density of OD600 = 0.5 (BioDrop Duo, Biochrom, UK) was reached, corresponding to approximately 5 × 10⁷–10⁸ CFU mL−1. For further experiments, this suspension (1 mL) was diluted in MRD medium (9 mL) and used as the bacterial test suspension. 2.6.2 Design of test chamber For antibacterial testing of the coatings under controlled light irradiation, a test chamber (height × length × depth = 35 cm × 29.5 cm × 27.5 cm) was constructed from item panels with an aluminum surface to guarantee controlled experimental conditions, excluding any external light. A mounted LED (554 nm) with a power output of 190 mW (MINTL 5, Thorlabs Inc., USA) served as the light source and was placed 21.5 cm above the sample. A collimator (SM1U25-a, Thorlabs Inc., USA) was installed to achieve uniform irradiation across one Petri dish, minimizing light divergence and ensuring consistent sample exposure. Additionally, a power meter continuously monitored the LED's power during experiments to ensure reproducibility. The light dose was calculated according to equation (3), dose (mJ cm−2) = intensity (mW cm−2) × time (s). Optimized irradiations operated for 30 min with a total light dose of 342 J cm−2. For antibacterial tests in solution, an RB stock solution (100 mg mL−1) in MRD medium was prepared. The respective bacterial test suspension (2 mL) containing either S. aureus or E. coli K12 was added to a sterile glass Petri dish (∅ 3 cm), and different amounts of the RB stock solution were added to adjust final concentrations of 0.005, 0.01, 0.05, and 0.1 mg mL−1. The dishes were then placed in the irradiation chamber and irradiated at 554 nm for 5, 10, 20, or 30 min. The number of surviving bacteria in the irradiated suspensions (1 mL) was quantified via spot-plating on agar, performed in triplicate. For each bacterial strain, concentration, and irradiation time, three independent experiments with 3 technical replicates were performed (N = 3). For antibacterial tests on surfaces, the aforementioned procedure was slightly adapted. All samples, including PDMS bulk substrates and PDMS samples prepared in glass Petri dishes, were sterilized via autoclaving (120 °C, 20 min) prior to use. Subsequently, the respective bacterial test suspension (1.5 mL) was added to the PDMS substrates with covalently attached RB (20 μg cm−2 or 40 μg cm−2) on the substrate surface in Petri dishes. P2-coated PDMS substrates without RB modification were used as a control.
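The light dose of equation (3) can be verified with the values given above. Assuming the stated 190 mW corresponds to roughly 190 mW cm−2 at the sample plane (an assumption, since the illuminated area is not stated explicitly), 30 min of irradiation reproduces the reported 342 J cm−2.

```python
# Worked example of equation (3): dose = intensity * time.
intensity_mW_cm2 = 190.0   # assumed irradiance at the sample plane
time_s = 30 * 60           # 30 min irradiation

dose_J_cm2 = (intensity_mW_cm2 / 1000.0) * time_s
print(f"light dose = {dose_J_cm2:.0f} J cm^-2")   # -> 342 J cm^-2
```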
Irradiation in the test chamber at 554 nm was performed for 30 min with three independent samples ( N = 3) per bacterial strain and 3 technical replicates each. Live/Dead staining of bacteria was performed separately after irradiation on RB-modified PDMS substrates and hydrogel-coated controls, as direct staining on the samples was not feasible due to spectral overlap of RB and propidium iodide (PI) with the used filter set. Thus, the E. coli containing test suspension (100 μL) was placed onto the RB-modified substrates or control. After 60 min of irradiation at 554 nm in the test chamber, the surface was rinsed with MRD media and the collected suspension was stained with 0.1 μL of each dye (SYTO9 and PI from LIVE/DEAD™ Bac Light™ Bacterial Viability Kit) via a 20 min incubation at RT in the dark. For analysis, the stained suspension was transferred to a glass dish and images were taken with an upright two-photon fluorescence microscope (Trim Scope 2, LaVision/Miltenyi Biotec B.V. &Co. KG, Bergisch Gladbach, Germany) equipped with a 495LP dichroic mirror and a 500/20 nm as well as a 615/25 nm filter. An adapted procedure was employed for Live/Dead staining of S. aureus : The S. aureus - containing test suspension (2 mL) was placed onto the RB-modified substrate or control. After 30 min irradiation, 1 mL of the suspension was collected, while the remaining suspension underwent an additional 30 min irradiation before being collected. Subsequently, both suspensions were stained according to the LIVE/DEAD™ Bac Light™ Bacterial Viability Kit with each dye (1 μL). Fluorescence images were acquired using an Axiovert 200 fluorescence microscope (Carl Zeiss AG, Oberkochen, Germany) equipped with a camera (Andor Zyla 5.5 sCMOS) and filters for specific dyes: excitation at 455/30 nm and emission at 535/25 nm with a dichroic mirror 500LP for SYTO9, and excitation at 535/25 nm and emission at 671/41 nm with a dichroic mirror 550LP for PI. 2.7 Statistical analysis Statistical analyses were conducted using OriginPro 2022 (OriginLab Corporation, Northampton, MA, USA). Data that did not follow a normal distribution were analyzed using the Kruskal-Wallis test to assess differences among multiple groups, followed by Dunn's test for post-hoc pairwise comparisons. Alternatively, the Mann-Whitney U test was applied when comparing two groups. A p -value of <0.05 was considered statistically significant. For all experiments, N states the number of independent samples/measurements with 3–5 technical replicates as stated in the experimental part. 3 Results and discussion 3.1 Polymer design and coating strategy To covalently functionalize plasma-activated silicone surfaces with a robust polymer coating in a geometry-independent manner, we aimed at a nanoprecipitation strategy with thermal post-curing covalently immobilizing and crosslinking the deposited polymer film simultaneously ( Fig. 1 ). Preserving the general bioinertness of the silicone substrate, the coating design was based on biocompatible and protein-repellent poly(hydroxyethyl acrylamide) (pHEAA) [ 33 , 34 ]. The acrylamide backbone ensures the hydrolytic stability of the polymer while providing reactive hydroxyl groups in the side chain for post-modification. Mussel-inspired catechol units were incorporated to anchor and crosslink the polymer on the PDMS surface. To simplify their integration without requiring protecting groups, a low percentage of aminopropyl methacrylamide (APMAA) was copolymerized with the hydroxyethyl acrylamide (HEAA) monomer. 
The primary amino-groups of the resulting copolymer P1 were then selectively functionalized in a two-step, one-pot Kabachnik-Fields reaction [ 21 ], yielding the final copolymer P2 for surface coating ( Fig. 1 A). This reaction also introduced phosphonate groups, which - although not expected to enhance affinity for PDMS - can aid surface characterization and may improve polymer attachment on alternative metallic substrates [ 35 , 36 ]. Thus, this versatile design offers a straightforward and robust strategy for producing crosslinked, functional coatings on a range of substrates. To achieve a homogeneous and uniform coating with controlled thickness in the nanometer range, we opted for a nanoprecipitation coating approach instead of the widely utilized dip-coating methods reported in the literature. Nanoprecipitation relies on the formation of polymer colloids in a solvent/non-solvent mixture, creating a metastable solution that facilitates controlled polymer deposition on the substrate. Parameters such as colloidal size, particle concentration, and coating time have the potential to influence the thickness of the resulting nanoscale layers. Alternative dip coating methods have yielded layer thicknesses ranging from as thin as 4 nm for single-chain polymers to as much as 3.7 μm for multi-armed polymers, albeit with a substantial increase in surface roughness [ 37 , 38 ]. In the case of catechol-functionalized polymers, basic conditions during dip-coating, typically in the presence of primary amine groups, induce colloid formation and crosslinking [ 39 ]. In our nanoprecipitation approach, basic coating conditions were avoided to enhance the stability of the metastable colloidal solution of the catechol-functionalized copolymer P2 . We aimed at an alternative, facile heat-induced crosslinking of the surface-deposited nanoprecipitates of P2 to create a stable, surface-immobilized polymer network on the plasma-activated PDMS substrate ( Fig. 1 B). Mechanistically, the heat-induced crosslinking is assumed to follow an autoxidation-induced polymerization of the catechol units, resulting in the covalent linkage of oxidized and non-oxidized catechols with simultaneous attachment to the PDMS surface [ 40 , 41 ]. 3.1.1 Polymer synthesis and characterization As previous studies have demonstrated that even small quantities of catechol units are effective for surface attachment and covalent gel formation [ 42 ], we targeted a copolymer containing 5 mol% catechol groups. Consequently, copolymer P1 , synthesized via free radical polymerization, was designed to comprise 95 mol% hydroxyethyl acrylamide (HEAA) and 5 mol% aminopropyl methacrylamide (APMAA) ( Fig. 1 A). 1 H NMR spectroscopy revealed 6 % APMAA incorporation ( Fig. S1 ). Furthermore, aqueous GPC measurements using pullulan as the standard indicated a molecular weight of M n = 11.7 kDa, corresponding to a degree of polymerization x n = 100, with moderate dispersity Ð = 2.9 ( Fig. S2 ). For polymer post-modification, the pendant primary amine groups were functionalized via a Kabachnik–Fields reaction, as recently demonstrated by Zhang et al. [ 21 ]. In this one-pot synthesis, the amine undergoes a Schiff base reaction with the aldehyde-presenting catechol to form an imine, followed by a nucleophilic addition of diethyl phosphonate, leading to the formation of an α-amino phosphonate ( Fig. 1 A). To ensure complete conversion of the amino groups, an excess of aldehyde and phosphonate reagents was used. NMR spectroscopy ( Fig. 
S3 ) showed full conversion of the free amine into the catechol-containing moiety, resulting in a calculated molecular weight of 12.9 kDa for polymer P2 . 3.1.2 Establishment of nanoprecipitation coating conditions To establish effective nanoprecipitation coating conditions for P2 , its solubility behavior in various solvents was systematically examined using dynamic light scattering (DLS) at a polymer concentration of 2 mg mL −1 ( Fig. 2 A). In water (blue curve), P2 exhibited a hydrodynamic diameter of approximately 7 nm, aligning well with theoretical predictions according to the Flory-Huggins-Theory for a 12.9 kDa acrylamide-based polymer in a good solvent (8 nm, assuming a monomer length of a = 0.25 nm) [ 43 ]. In contrast, methanol, as a poorer solvent for P2 , induced slight polymer aggregation, evidenced by a hydrodynamic diameter of ∼15 nm (green curve). Larger aggregates (∼22 nm diameter) of P2 were observed in a 9:1 methanol/water mixture (grey curve), indicating further destabilization of the fully transparent polymer solution. Importantly, the P2 solution remained transparent and colorless even after one week of storage under light and air exposure at RT with no change in hydrodynamic diameter (data not shown), confirming the absence of significant catechol-based crosslinking. Hereafter, this 9:1 methanol/water mixture is referred to as “solvent” for preparing the initial P2 stock solution. For nanoprecipitation, a destabilizing non-solvent has to be added to the stock solution, generating mesoscopic colloids. Acetone was selected as the non-solvent for P2 due to its full miscibility with methanol and water, preventing phase separation. Moreover, acetone's ability to swell PDMS [ 44 ] may facilitate enhanced polymer anchoring onto the PDMS surface through chain entanglement during the coating process. To facilitate uniform nanoprecipitation and ensure a homogeneous, controllable coating, we targeted colloidal sizes between 200 and 300 nm. The time-dependent formation of P2 colloids was analyzed using DLS at a polymer concentration of 2 mg mL −1 and solvent/non-solvent ratios of 1:1, 1:2, and 1:3 ( Fig. 2 B–D), complemented by macroscopic images of the colloidal solutions ( Fig. 2 E). In the 1:1 mixture, dynamic aggregation was observed, with colloids reaching a maximum size of 165 nm after 1 h. In contrast, P2 in the 1:2 and 1:3 mixtures exhibited rapid aggregation, forming aggregates ≥50 nm within 5 min. Stable colloids were achieved after 1 h, with sizes of 240–260 nm for the 1:2 ratio and 250–300 nm for the 1:3 ratio, both remaining stable for at least 1 h. The larger colloid size and increased turbidity observed at the 1:3 ratio on a macroscopic scale made it the preferred choice for the envisioned coating strategy. 3.1.3 Surface characterization and time-dependent stability To ensure proper wetting of PDMS substrates with the aqueous colloidal solution, their surfaces were air-plasma activated, reducing the water CA from >100° to ∼30° ( Fig. 3 A). Otherwise, droplet formation occurred after depositing the colloidal polymer solution, leading to inhomogeneous coatings as shown in Fig. S4 . Additional PDMS model substrates were prepared on silicon wafers alongside bulk PDMS disks (Ø 2.9 cm, 3 mm height) for precise determination of the coating layer thickness via SE. Therefore, silicon wafers (1 × 1 cm) were spin-coated with a freshly prepared silicone elastomer (10:1) mix in 2,2,4-trimethylpentane (6 % w/w), yielding approximately 200 nm-thick PDMS films. 
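One way to read the ∼8 nm prediction quoted in Section 3.1.2 is as twice the Flory end-to-end distance of a roughly 100-unit chain in a good solvent; this reading, and the monomer length of 0.25 nm, are assumptions used only for illustration, not the authors' stated calculation.

```python
# Rough cross-check of the chain-size estimate: R ~ a * N**(3/5) for a good solvent.
a_nm = 0.25   # assumed effective monomer length, nm
N = 100       # approximate degree of polymerization of P2 (x_n ~ 100 from GPC)

R_nm = a_nm * N ** 0.6     # end-to-end distance estimate, ~4 nm
diameter_nm = 2.0 * R_nm   # ~8 nm, comparable to the hydrodynamic diameter in water
print(f"estimated coil diameter ~ {diameter_nm:.0f} nm")
```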
After plasma treatment and ethanolic extraction, the dry layer thickness of the PDMS films decreased by 20–30 nm, indicating partial removal of the outermost polymer segments due to plasma-induced chain scission (Fig. 3B). To control the layer thickness during the nanoprecipitation process, final colloidal polymer concentrations of 0.25 and 0.5 mg mL−1 were prepared from 1 or 2 mg mL−1 stocks of P2, which were dissolved overnight. After adding acetone to the stock solutions, the freshly prepared colloidal solution was applied to bulk PDMS disks or spin-coated thin-films. The substrates were incubated statically in closed dishes for 2 h and thermally post-cured at 120 °C for 2 h to yield fully transparent materials without macroscopically visible signs of inhomogeneity after extraction (Fig. 4A and S5). In situ hydration and viscoelasticity analysis by QCM-D measurements of P2 coatings, prepared with varying curing times between 1 and 4 h, showed no substantial differences, suggesting that thermal post-curing for 2 h is sufficient to achieve full crosslinking (Fig. S6). Combined QCM-D hydration monitoring (Fig. S6) and ultrabalance measurements (Fig. S7) revealed a fourfold mass increase upon equilibrium swelling in water, indicating effective hydration while excluding over-crosslinking of the hydrogel coating. The overall step-by-step coating strategy is schematically illustrated in Fig. S8 with accompanying photographs of the functionalized PDMS surface on silicon wafers, visually confirming successful coating with half-masked substrates. Surface wettability was assessed via static water CA measurements using the sessile drop method on three different spots on a sample to test for homogeneity. Following P2 coating, the CA was ∼50° on PDMS model substrates, as shown in Fig. 3A. Notably, P2 coatings on bulk PDMS disks yielded a CA value of 61 ± 16° (see Fig. 4A). As a control, plasma-activated samples were subjected to the nanoprecipitation process without polymer in the coating solution. The CA of these substrates reverted to 98° due to hydrophobic recovery within hours (Fig. S9). The discrepancy in CA after P2 immobilization on spin-coated PDMS model substrates and bulk PDMS disks is likely due to differences in PDMS surface morphology. While variations between individual measurements at different locations on the same sample were minimal, confirming coating homogeneity, sample-to-sample variations led to larger SDs, particularly on PDMS disks. Nevertheless, a clear correlation was observed between the colloidal polymer concentration and the resulting dry layer thickness (Fig. 3B), with higher concentrations producing nearly double the thickness, reaching 44 nm. Thus, further experiments were performed using P2 coatings prepared from a 0.5 mg mL−1 colloidal solution. AFM imaging (Fig. 3C) revealed the surface morphology of the native and treated/coated bulk PDMS disks. Native PDMS surfaces displayed a relatively smooth topography, while plasma activation induced lamellar patterns in the range of 50–100 nm in height (Fig. S10). Similar patterns after plasma treatment, around 160 nm in height, were previously reported by Chua et al. [45]. The periodicity of the patterns and the peak-to-trough distances depend on the plasma source energy and exposure duration. This phenomenon is attributed to the buckling of a thin, silica-like layer formed by silicone oxidation during plasma treatment.
The stiffer silica layer experiences compressive stress due to the local thermal expansion of the underlying silicone, resulting in the observed wavy patterns. Thus, in agreement with the determined layer thickness of 44 nm after nanoprecipitation, the P2 -coated substrates reflect the underlying lamellar patterns. Complementary amplitude images generally confirm the results of the topography images and demonstrate a homogeneous coating across the surface. X-ray photoelectron spectroscopy (XPS) was employed to analyze the elemental composition and chemical structure of the surfaces ( Fig. 3 D). The uncoated reference substrate, based on PDMS, exhibited characteristic peaks for silicon (Si), oxygen (O), and carbon (C) ( Fig. 3 D left), consistent with the expected composition of PDMS [ 46 ]. Upon coating the PDMS surface with P2 , the wide scan spectrum revealed new peaks for nitrogen (N) and phosphorus (P) ( Fig. 3 D right), attributed to amide groups in the polymer backbone and 6 % phosphonate functionalities. Additionally, O 1s and C 1s peaks exhibited shifts and broadening indicative of diverse functional groups, including hydroxyl, carbonyl, and aromatic catechol moieties, highlighting the more complex surface chemistry introduced by the coating ( Fig. 3 E and more detailed S11A, B ). For complete spectral deconvolution and analysis, see Supporting Information Chapter 8. One major challenge with surface coatings on PDMS is their limited stability, often compromised by the material's intrinsic chain dynamics. To evaluate the long-term stability of the P2 coating on bulk PDMS disks, samples were stored at RT in air, water, and phosphate-buffered saline (PBS) for up to four months ( Fig. 3 F and G). Ellipsometry measurements indicated that the layer thickness of the P2 coating, modeled by a Cauchy layer, remained constant in all three conditions during this period. The ability to consistently fit the layer thickness over time using the originally determined Cauchy coefficients indicates no significant degradation or hydrophobic burial of the coating, which would otherwise alter the optical constants. The higher SD observed for the dry layer thickness of samples stored in water likely stems from challenges in fully drying the hydrophilic hydrogel coating in a short time [ 47 ]. Water CA's of P2 -coated substrates remained stable for at least 90 days under all conditions. By day 90, substrates stored in aqueous environments exhibited a slight CA increase from around 60°–∼72°. In contrast, air-stored samples maintained their original hydrophilicity (∼60°) for 110 days with a minimal, non-significant CA shift to ∼65° ( Fig. 3 G). These results confirm that crosslinked coatings preserve surface integrity and effectively suppress PDMS hydrophobic recovery, supporting our initial hypothesis. Complementary to long-term stability in aqueous and ambient environments, the coating's short-term resistance to clinically relevant detergents was evaluated. Following ISO 19227 cleanliness for implants and ISO 14630 cleaning validation, substrates underwent dynamic immersion in sodium dodecyl sulfate solutions of 0.1 % and 1 % w/v. No significant thickness reduction occurred after 12 or 72 h ( Fig. S12 ), confirming robust stability under stringent detergent exposure. 3.2 Establishment and characterization of the bioactive coating To enable non-invasive decontamination for biomedical applications, we combined the established P2 coating on PDMS with the covalent immobilization of a PS. 
3.2 Establishment and characterization of the bioactive coating To enable non-invasive decontamination for biomedical applications, we combined the established P2 coating on PDMS with the covalent immobilization of a PS. RB was selected as an ideal PS due to its high singlet oxygen (¹O₂) quantum yield (0.76) [48,49] and its ability to covalently bind to a hydroxyl group-presenting substrate via its carboxyl group [50]. Interestingly, beyond photoactivation, RB has also been reported to produce ROS under ultrasound, though at a lower yield compared to light activation [51]. Covalent immobilization of RB addresses key limitations of traditional soaking methods: (1) ¹O₂, the primary ROS produced by RB, is generated directly at the interface with adhering bacteria, minimizing diffusion distances; and (2) the risk of RB leaching into the aqueous environment is eliminated [29]. Both factors significantly enhance the safety and enable repeated use of PS-functionalized materials for non-invasive decontamination. 3.2.1 Immobilization of rose bengal RB was covalently immobilized on P2-coated PDMS disks through Steglich esterification, employing DCC as the coupling agent and DMAP as the catalyst (Fig. 4A). The reaction was carried out in DMF for 48 h to ensure the solubility of the reagents and effective swelling of the P2 coating while avoiding swelling of the PDMS substrate [44], thereby preventing RB penetration into the bulk PDMS. Considering the practical inconvenience of maintaining anhydrous surface-coupling conditions throughout the reaction, relatively high concentrations of RB (50 and 100 mg mL−1) were utilized to achieve efficient coupling even under ambient conditions. Successful functionalization was visually evident from the permanent pink coloration of the modified P2-coated PDMS substrates, in contrast to control experiments without DCC, which left the substrates colorless (Fig. 4A). The immobilized RB was quantified spectroscopically (Fig. 4B) using a calibration curve generated from defined amounts of RB physisorbed into P2-coated PDMS disks by soaking (Fig. S8). Integrating the absorbance measured in transmission between 500 and 600 nm and referencing it to the calibration curve yielded areal densities of covalently attached RB of 20 μg cm−2 for the lower and 40 μg cm−2 for the higher coupling concentration (a numerical sketch of this quantification closes this subsection). Unless stated otherwise, further experiments were performed with the 40 μg cm−2 RB-functionalized coatings. Additionally, due to the hydrophobic nature of RB, a water CA increase from ∼61° to ∼76° was observed following immobilization on the P2-coated PDMS disks, with this change occurring for both coupling concentrations. XPS analysis of the high-resolution spectra in Fig. 4C and S11C revealed that the peaks corresponding to carbon 1s (C) and oxygen 1s (O) remained largely unchanged upon RB immobilization, with shake-up satellite peaks for aromatic carbons already present in the P2-coated substrates due to the inherent catechol units within the coating. Notably, the nitrogen 1s (N) signal also exhibited no detectable variation. In contrast, additional peaks associated with halogens, specifically chlorine 2p (Cl) and iodine 3d (I), emerged in the XPS spectra (Fig. 4C) compared to the P2-coated substrate, confirming the successful attachment of RB.
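A minimal numerical sketch of the quantification described above: integrate the background-corrected absorbance between 500 and 600 nm and map the integral onto a linear calibration built from substrates with known amounts of physisorbed RB. All spectra and calibration points below are fabricated placeholders chosen only to illustrate the workflow, not values from this study.

```python
import numpy as np

# Hypothetical spectra: absorbance sampled every 2 nm between 500 and 600 nm.
wavelengths = np.arange(500.0, 601.0, 2.0)

def integrated_absorbance(absorbance: np.ndarray) -> float:
    """Trapezoidal integral of absorbance over the 500-600 nm window (a.u. nm)."""
    return float(np.sum((absorbance[1:] + absorbance[:-1]) / 2.0
                        * np.diff(wavelengths)))

# Calibration from substrates with known physisorbed RB loadings (placeholders):
cal_loadings = np.array([5.0, 10.0, 20.0, 40.0])   # ug cm^-2
cal_integrals = np.array([4.1, 8.0, 16.3, 32.5])   # a.u. nm
slope, intercept = np.polyfit(cal_integrals, cal_loadings, 1)  # linear fit

# Unknown sample: a synthetic Gaussian band near the RB absorption maximum.
sample = 0.20 * np.exp(-((wavelengths - 560.0) ** 2) / (2.0 * 18.0 ** 2))
area = integrated_absorbance(sample)
print(f"integral = {area:.1f} a.u. nm -> "
      f"RB loading ~ {slope * area + intercept:.1f} ug cm^-2")
```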
3.3 Stability against steam sterilization and irradiation To evaluate the stability of the P2 coating and the immobilized RB under stress conditions such as autoclaving or prolonged irradiation in ROS-generating environments, we systematically analyzed the dry layer thickness and water CAs on PDMS model substrates and quantified the immobilized RB on PDMS disks. Thermal stress from autoclaving could potentially induce hydrolytic bond cleavage or delamination of the coating, while extended ROS exposure might lead to oxidative degradation. However, the negligible changes observed in dry layer thickness (Fig. 5A–C) and water CAs indicate that the P2 coating, before and after RB immobilization, remains unaffected by standard autoclaving conditions (120 °C, 20 min). Similarly, repeated 30-min irradiation cycles at 554 nm in air for up to 90 min did not significantly affect the dry layer thickness of the coating. To assess the impact of both stress conditions on the presence of immobilized RB, we integrated the absorbance between 500 and 600 nm before and after autoclaving (Fig. 5B) and quantified the immobilized RB after defined irradiation periods (Fig. 5D). Interestingly, all samples exhibited a slight increase in absorbance following autoclaving, which can be attributed to incomplete drying and the retention of small water inclusions in the bulk PDMS. These inclusions scatter the light, resulting in a minimal apparent increase in absorbance. Proper drying of PDMS samples after steam sterilization typically requires approximately two days [52]; however, in this study, the samples were processed within a standard workflow with shorter drying times post-autoclaving. Despite this, no significant effects of autoclaving on the absorbance properties of RB-functionalized substrates or P2-coated samples were observed (Fig. S14). Additionally, the data highlight the transparency of the P2 coating, as its absorbance closely matches that of native PDMS. Likewise, repeated 30-min irradiation cycles at a wavelength of 554 nm with an intensity of 190 mW cm−2 showed no evidence of RB bleaching, as indicated by the constant RB areal concentration in Fig. 5D and the consistent absorbance across the corresponding UV–Vis spectra (Fig. S15). Notably, antimicrobial evaluation further demonstrated that prolonged pre-irradiation for 90 min does not compromise the coating's antibacterial function compared to non-pre-irradiated controls (Fig. S16). For on-demand antibacterial testing, the stability of the coating against bacterial exposure is critical. To assess this, the RB-functionalized P2-coated PDMS substrate was incubated with E. coli and S. aureus for 24 h in MRD medium. Following centrifugation to remove the bacteria, the absorbance between 500 and 600 nm was measured in the supernatant. Although RB immobilization in this study relies on ester bonds, which are susceptible to cleavage, no bacteria-derived RB release could be detected in the background-corrected UV–Vis spectra (Fig. S17). Thus, our approach demonstrates negligible RB leaching, even in the presence of metabolically active planktonic bacteria. In contrast, traditional physisorption of RB to PDMS suffers from significant PS leaching when the substrates are exposed to aqueous media, immediately producing pink-colored solutions (Fig. 5E).
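The irradiation conditions in this section are specified as an intensity and a time; the corresponding radiant exposures follow from the standard radiometric relation H = E·t. A one-line check using the stated 190 mW cm−2 LED intensity (the relation itself is generic, not specific to this study):

```python
# Radiant exposure (dose) H = E * t: irradiance (W cm^-2) times time (s).
E = 0.190  # W cm^-2, i.e. the 190 mW cm^-2 delivered by the 554 nm LED

for minutes in (5, 15, 20, 30, 60, 90):
    print(f"{minutes:>2} min -> {E * minutes * 60:5.0f} J cm^-2")
# 5 min -> 57, 20 min -> 228, 30 min -> 342 J cm^-2, matching the doses
# quoted for the antibacterial assays in the next section.
```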
In summary, no coating-derived degradation products were observed under autoclaving or irradiation that could interfere with subsequent test outcomes, reduce the coating's lifespan, or cause unintended cytotoxic effects. These findings confirm the stability of the coating under the tested conditions, a prerequisite for the subsequent non-invasive decontamination tests. 3.4 Protein adsorption and cell compatibility To assess the biocompatibility of P2-coated and additionally RB-modified substrates for biomedical applications, their non-specific interactions with complex biological fluids were evaluated first. For real-time monitoring of protein adsorption from 10 % FCS-containing PBS, QCM-D sensors were functionalized with PDMS, coated with P2, and subsequently modified with RB. After initial equilibration in PBS, the samples were perfused with the protein solution for 20 min and subsequently washed with PBS to remove loosely adsorbed proteins. Frequency shifts (Δf) revealed minimal interaction between the P2-coated surface and serum components, evidenced by a gradual decrease over the 20-min exposure period (Fig. 5F, yellow curve). After washing off loosely adsorbed proteins, an adsorbed mass of 168 ± 17 ng cm−2, corresponding to hydrated serum components, remained on P2-coated surfaces. This adsorption level lies at the lower end of the range reported for single-protein adsorption on pHEMA hydrogels (170–270 ng cm−2 [53]), even though FCS is a more complex mixture including serum proteins like albumin, fibrinogen, and diverse growth factors, as well as hormones and lipids, all contributing to the detected adsorbed mass. Importantly, despite this, P2-coated PDMS substrates did not support cell adhesion of HDFs in serum-supplemented cell culture medium (Fig. S18), in agreement with the general bio-inertness of pHEAA. RB-modified P2 coatings displayed a slightly increased hydrophobicity compared to bare P2 coatings (Fig. 5A), which would, in theory, increase their protein adsorption tendency. Accordingly, the frequency shifts (Δf) showed a rapid initial decrease, indicating enhanced serum component interaction (Fig. 5F, pink curve). After washing, a hydrated mass of 211 ± 78 ng cm−2 remained adsorbed on the substrates, comparable to the P2-coated substrates. These results indicate no substantially altered protein interactions of P2 coatings in the presence of surface-immobilized RB. Additional leaching assays with fibroblasts cultured in medium extracts of the coated and native PDMS substrates revealed no cytotoxicity of the materials, according to cell viability after 24 and 72 h (Fig. 5G). Overall, these results demonstrate the biocompatibility of P2-coated surfaces, even after RB modification, in the presence of complex physiological fluids. Furthermore, the confirmed absence of cytotoxic leachables underscores their potential for biomedical applications. 3.5 On-demand antibacterial properties To evaluate the on-demand antibacterial efficiencies of surface-immobilized versus free RB against planktonic bacteria, concentration- and dose-dependent measurements were performed. For standardized irradiation experiments, a fully enclosed test chamber (35 cm height × 27.5 cm length × 29.5 cm depth) was designed to provide a secure and controlled environment for the tests, using E. coli and S. aureus as representative Gram-negative and Gram-positive strains, respectively (Fig. 6A).
The chamber ensures precise control over the irradiation intensity via an adjustable distance between the light source (LED, 554 nm) and the sample while effectively blocking ambient light to prevent unintended PS activation. At a set distance of 21.5 cm, the calibrated light source delivered a consistent light intensity of 190 mW cm−2 to the sample, ensuring reproducibility and accuracy in the experimental results. First, the antibacterial efficiency of unbound RB was assessed in a concentration- and dose-dependent manner using bacterial suspensions (∼1 × 10⁶ CFU mL−1). This solution-phase study provides a critical internal benchmark for validating the performance of surface-immobilized RB under identical experimental conditions. Such standardization is essential because direct comparisons with the literature are hindered by substantial methodological variations – particularly in RB purity and concentration, light source specification (wavelength, power density), irradiation times, and bacterial strains [54]. Establishing a setup-dependent solution-phase baseline under controlled parameters enables direct validation of the antibacterial efficacy of surface-functionalized systems. As shown in Fig. 6B, E. coli required at least 50 μg mL−1 RB and 20 min of irradiation (228 J cm−2) to achieve a 2.5-log reduction (99.68 %) in bacterial growth. Extending irradiation to 30 min (342 J cm−2) resulted in a 3.5-log reduction (99.97 %) at 10 μg mL−1 and a 5-log reduction (99.999 %) at 100 μg mL−1. No antibacterial effect against E. coli was observed from light exposure alone. In contrast, S. aureus exhibited greater sensitivity towards light and ROS, requiring lower RB concentrations and light doses for comparable effects (Fig. 6C). Even 5 min of light exposure without a PS reduced bacterial growth by half a log. Generally, no significant dose-response was observed, as the concentration-dependent profiles remained similar after 5 min (57 J cm−2) and longer exposures. A 5-min irradiation at 50 μg mL−1 fully eradicated the bacteria, as did a 30-min exposure at 10 μg mL−1. RB is known to start aggregating in aqueous solutions at concentrations above 10 μg mL−1, which reduces its singlet oxygen production efficiency [55]. This explains the lack of a clear concentration-dependent increase in activity between 10 and ≥50 μg mL−1 for both strains. The higher sensitivity of S. aureus compared to E. coli aligns with previous reports, confirming that negatively charged PS, like RB, are generally less effective against Gram-negative bacteria: (1) the strong negative charge of the Gram-negative outer membrane impedes the penetration of negatively charged PS, reducing their antibacterial efficacy [54,56]; (2) the active efflux pumps of Gram-negative bacteria are generally more effective at counteracting PS than those in Gram-positive bacteria [57]; and (3) a robust antioxidant defense system in E. coli comprises multiple ROS-detoxifying enzymes regulated by oxidative stress-responsive transcriptional activators, enabling rapid detoxification of exogenous ROS [58,59]. Membrane penetration and efflux pump-related effects are less relevant when RB is immobilized on a substrate, as membrane insertion – if possible at all – requires direct contact with the bacteria. Thus, the antimicrobial activity of covalently PS-modified substrates in bacterial suspensions relies primarily on the diffusion range of the generated ROS, which is 10–20 nm for singlet oxygen [60].
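The log-reduction figures quoted above convert to percent growth inhibition via R = log₁₀(N₀/N); a small helper makes the correspondence explicit. The CFU counts in the last line are hypothetical, chosen only to illustrate the conversion.

```python
import math

def log_reduction(cfu_before: float, cfu_after: float) -> float:
    """log10 reduction factor between initial and surviving CFU counts."""
    return math.log10(cfu_before / cfu_after)

def percent_kill(log_red: float) -> float:
    """Percentage of the initial population inactivated at a given log reduction."""
    return (1.0 - 10.0 ** (-log_red)) * 100.0

for r in (2.5, 3.5, 5.0):
    print(f"{r}-log reduction -> {percent_kill(r):.3f} % growth inhibition")
# 2.5 -> 99.684 %, 3.5 -> 99.968 %, 5.0 -> 99.999 %, matching the figures above.

# Hypothetical counts: 1e6 CFU/mL reduced to 3.2e2 CFU/mL is a 3.5-log reduction.
print(f"{log_reduction(1e6, 3.2e2):.1f}-log")
```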
Given this short diffusion range, lower efficacy is anticipated for the RB-functionalized P2-coated PDMS disks compared to the free PS in solution, justifying the use of longer irradiation times starting at 30 min. Bacterial suspensions (1.5 mL, ∼1 × 10⁶ CFU mL−1) were applied to RB-functionalized substrates (20 or 40 μg cm−2) and non-functionalized P2-coated PDMS controls. After 30 min of irradiation, the number of CFU in the suspensions, indicative of surviving bacteria, was determined (Fig. 6D). As expected, the P2-coated PDMS control disks without RB (yellow bars) showed no antimicrobial activity. Interestingly, while RB achieved a 3.5-log reduction against E. coli in solution at 10 μg mL−1, surface immobilization completely abolished its antibacterial activity. Similar observations were reported for RB immobilized on chitosan (surface concentration unspecified), which showed no effect on E. coli (10⁴ CFU mL−1) after 1 h of irradiation [28]. Additionally, RB immobilized on bacterial cellulose membranes or glass slides (0.09 nM m−2) exhibited no antibacterial effect against Gram-negative bacteria (10⁸ CFU mL−1) even after 2 h of irradiation [44,61]. Together with the solution-based results, our data suggest that E. coli, due to its additional outer membrane, is less efficiently penetrated by hydrophilic RB, limiting ¹O₂ generation within the membrane, periplasm, or cytoplasm. Furthermore, extracellularly produced ROS are less effective in killing suspended bacteria located within a 3 mm distance from the RB-modified surface, even after prolonged irradiation. This limited efficacy occurs despite the primary targets of extracellular ROS being membrane proteins and lipids. Protein oxidation induces enzyme inactivation, aberrant cross-linking, and loss of structural integrity – molecular disruptions that typically compromise cellular function and cause membrane leakage, ultimately leading to cell death [62,63]. However, E. coli counteracts these effects through its protective outer membrane barrier and constitutive enzymatic defenses, including ROS-detoxifying enzymes regulated by oxidative stress-responsive transcription factors (e.g., OxyR, SoxRS). This dual protective system enables rapid neutralization of exogenous ROS, enhancing survival in oxidative environments [58,59,63,64]. In contrast, surface-bound RB efficiently killed S. aureus, achieving a 3–4 log reduction (99.98 %) in CFUs (Fig. 6D). Interestingly, increasing the RB surface concentration did not enhance efficacy, indicating that 20 μg cm−2 is sufficient for a bioactive coating. A similar effect was observed for the photosensitizing dyes toluidine blue O and methylene blue immobilized in polymeric coatings, suggesting that increasing the PS concentration either inhibits ROS formation or promotes mutual termination of the produced ROS [65]. This concentration-independent efficacy was further investigated by quantifying the ROS production of RB-modified substrates. Based on the literature, RB predominantly generates ¹O₂ (quantum yield ΦΔ = 0.76 in water), with minimal superoxide production [48]. To quantify both species, photometric assays employing 9,10-anthracenediyl-bis(methylene)dimalonic acid (ABDA) for ¹O₂ and nitroblue tetrazolium (NBT) for superoxide (O₂•−) detection were performed (Fig. 6E and S20). To this end, substrates functionalized with 20 or 40 μg cm−2 RB were incubated in dye-containing MRD buffer under dark conditions or time-dependent irradiation at 554 nm.
Conversion of NBT to purple formazan was undetectable, indicating minimal O₂•− generation by surface-immobilized RB (Fig. S20). In contrast, ¹O₂-mediated ABDA bleaching revealed time-dependent ROS accumulation in the supernatant, from 2 nmol cm−2 after 15 min up to 16 nmol cm−2 after 60 min of irradiation. The cumulative ¹O₂ production shown in Fig. 6E was calculated as detailed in SI Section 12.2. No significant difference between samples with 20 or 40 μg cm−2 RB surface concentration was detected, aligning with theoretical calculations confirming near-complete photon absorption (>99 %) at 20 μg cm−2 under the given experimental conditions (SI Section 12.3). Optical density estimation reveals that 20 μg cm−2 RB achieves an absorption efficiency of >99.8 % for incident photons (190 mW, 15 min irradiation). Consequently, increasing the surface concentration to 40 μg cm−2 provides no measurable enhancement in ¹O₂ production, as all available photons are already utilized (SI Section 12.3). Discrepancies between theoretical solution-phase ¹O₂ yields (mmol range) and experimental detection (nmol range) in surface-conjugated systems arise, on the one hand, from competitive scavenging by endogenous solvent components, an effect amplified by the restricted ¹O₂ diffusion range, which limits ABDA accessibility. On the other hand, a localized ¹O₂ concentration gradient develops from the surface outward due to the confined ROS generation, and the resulting elevated ¹O₂ concentrations at the surface promote ¹O₂–¹O₂ termination [48]. Consequently, solvent-mediated ¹O₂ consumption and self-quenching mechanisms dominate over the ABDA reaction kinetics, leading to an underestimation of ROS production in assays containing surface-immobilized PS [48]. Nevertheless, both the theoretical and experimental results confirm that prolonged irradiation times consistently enhance ROS production. Thus, extending the irradiation time in antimicrobial tests against S. aureus to 60 min further minimized the number of viable bacteria, as shown by live/dead fluorescence staining (Fig. 6F). Compared to previous studies, our system outperforms a more complex RB-functionalized PDMS coating that required plasma polymerization and multi-step post-modification, yet demonstrated only a 0.3-log reduction in CFUs without proven long-term stability [28]. Notably, that study did not directly quantify the RB surface concentration, which was estimated at 0.1 mol% RB within a 10 mg cm−2 coating. This suggests that the effective surface concentration may have been lower than the levels explored in our approach. In contrast, another study utilizing sol-gel technology on glass substrates covalently incorporated 2.5 wt% RB within a polymeric matrix and achieved a >4-log reduction only after 3 h of irradiation [65]. In that study, only 100 μL of bacterial suspension was applied to the surface, which may limit the generalizability of its findings. Despite the reduced efficiency of surface-immobilized RB compared to free RB in solution, a 3–4 log reduction still corresponds to up to 99.99 % bacterial growth inhibition – an acceptable threshold for disinfection applications [66]. These findings underscore the potential of the developed bioactive surface for efficient, non-invasive decontamination.
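The near-complete photon absorption argument can be approximated with a Beer–Lambert estimate. The sketch below assumes a molar absorptivity for RB of roughly 9.5 × 10⁴ M−1 cm−1 near its absorption maximum and a molar mass of ≈1017.6 g mol−1; these are literature ballpark values, not parameters taken from the study's SI, so the resulting efficiencies (≈98–99.9 %) are indicative of the trend rather than a re-derivation of the authors' >99.8 % figure.

```python
# Beer-Lambert estimate of the photon fraction absorbed by surface-bound RB.
# Assumed literature ballpark values (not taken from the paper's SI):
EPSILON = 9.5e4   # M^-1 cm^-1, molar absorptivity of RB near 550 nm (assumed)
M_RB = 1017.6     # g mol^-1, molar mass of rose bengal disodium salt

def absorbance(areal_ug_cm2: float) -> float:
    """Absorbance of an RB layer: A = eps * Gamma (1 M^-1 cm^-1 = 1000 cm^2 mol^-1)."""
    gamma_mol_cm2 = areal_ug_cm2 * 1e-6 / M_RB
    return EPSILON * 1000.0 * gamma_mol_cm2

for loading in (20.0, 40.0):  # ug cm^-2, the two coupling densities studied
    A = absorbance(loading)
    print(f"{loading:.0f} ug cm^-2: A ~ {A:.2f}, photons absorbed ~ {1 - 10**-A:.2%}")
# With these assumptions, already the lower loading absorbs the vast majority of
# incident photons, so doubling the loading cannot double the 1O2 output.
```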
However, more in-depth mechanistic studies are required to better understand the efficacy differences between surface-bound and free antibacterial photodynamic systems across different bacterial species [56,57]. Beyond differences in the bacterial envelopes of Gram-negative and Gram-positive strains, motility plays a crucial role in the differential efficacy of the RB-functionalized surface. Unlike the non-motile S. aureus, flagellated E. coli in planktonic conditions does not readily settle onto the surface, reducing the direct PS–bacteria interactions that are crucial for maximizing photodynamic antibacterial effects. Given the short lifetime of singlet oxygen in aqueous media [67], the observed efficacy against suspended S. aureus is particularly remarkable. Further supporting this concept, reducing the bacterial suspension volume from 1.5 mL to 100 μL – while maintaining the same E. coli concentration (1 × 10⁶ CFU mL−1) – led to a substantial reduction in viable bacteria after 60 min of irradiation, as evident in live/dead-stained fluorescence images (Fig. 6G). This recovered efficacy highlights the importance of minimizing ROS interference with the surrounding media, a condition met in most applications where non-invasive surface decontamination is desired. Conventional strategies to safeguard PDMS-based medical devices include surface coatings, either passively antifouling or actively antimicrobial; the loading of antimicrobial compounds, such as antibiotics, silver nanoparticles, and polymeric imidazolium salts, into the bulk of the material; or hybrid approaches thereof [7,68–70]. Passive coatings inhibit bacterial adhesion but lack bactericidal capability. Active antimicrobial coatings kill pathogens, yet suffer from the accumulation of non-viable cellular debris, which compromises surface functionality and promotes biofilm nucleation. In contrast, bulk-loaded systems embed antimicrobial agents within the PDMS matrix and exhibit biphasic release kinetics: an initial burst release followed by diffusion-limited sustained release [68,69]. This profile restricts long-term efficacy, increases the risk of antimicrobial resistance, and raises cytotoxicity concerns, such as high local antibiotic concentrations, as well as environmental concerns, including heavy-metal leaching [70]. Our approach circumvents these limitations by covalently immobilizing the PS, eliminating reservoir depletion while enabling on-demand photodynamic activation. This provides (1) no burst-release toxicity, (2) reusability across multiple treatment cycles, and (3) the absence of leachable environmental contaminants. Ongoing investigations address mechanical robustness under physiological stress and real-world reusability parameters. In summary, our results provide a compellingly simple approach for the fabrication of RB-functionalized PDMS surfaces with on-demand photodynamic antibacterial properties, demonstrated in a proof of concept. Such bioactive surfaces are particularly promising for preventing biofilm formation by effectively targeting surface-attached bacteria at an early stage of colonization in biomedical or food packaging contexts. 4 Conclusion In conclusion, we have successfully established a stable, bio-inspired hydrophilic surface coating for PDMS through nanoprecipitation, with adjustable thicknesses of 24 and 44 nm, which demonstrates remarkable resistance to hydrophobic recovery over 90 days.
This stability is attributed to the reduced polymer flexibility in crosslinked coatings, highlighting its significance in maintaining surface integrity. Unlike traditional dopamine-based coatings, which impart a yellow to brownish color, our catechol-based coating preserves the original transparency of the PDMS, a critical feature for optical applications. Rapid and efficient functionalization of the coating was achieved by immobilization of 20 and 40 μg cm−2 RB, enabling quantifiable singlet oxygen production (2–16 nmol cm−2 over 15–60 min of irradiation) – paving the way for non-invasive decontamination of PDMS surfaces. Notably, a 4-log reduction against S. aureus in medium was achieved, and by reducing the distance between the surface and motile bacteria through a reduced medium volume, initial success against E. coli was observed. Biocompatibility was confirmed via ISO 10993-5 leachable testing (>70 % viability), underscoring the coating's general suitability for clinical applications. Given its stability under both storage and autoclaving conditions, this coating is well-suited for long-term biomedical applications, such as in catheters with integrated light guides for continuous surface decontamination. Future studies will focus on optimizing the antibacterial performance, particularly against Gram-negative bacteria like E. coli, through strategies such as combining multiple PS, coupling with stable ROS amplifiers (e.g., nitric oxide donors for the formation of persistent RNS/ROS hybrids), or incorporating membrane-disrupting agents like EDTA or cationic polymers [24,29,71]. This work lays a strong foundation for the development of durable, bioactive coatings with significant promise for enhancing medical device functionality and safety. CRediT authorship contribution statement Romina Berger: Writing – original draft, Visualization, Validation, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Alina Rahtz: Writing – review & editing, Methodology, Investigation, Formal analysis. Alexander Schweigerdt: Writing – review & editing, Validation, Methodology, Investigation, Formal analysis. Daniel D. Stöbener: Writing – review & editing, Methodology, Investigation, Formal analysis, Conceptualization. Andrea Cosimi: Writing – review & editing, Methodology, Investigation, Formal analysis. Wibke Dempwolf: Writing – review & editing, Visualization, Methodology, Investigation, Formal analysis. Henning Menzel: Writing – review & editing, Supervision, Resources, Methodology, Formal analysis. Sonja Johannsmeier: Writing – review & editing, Supervision, Resources, Project administration, Methodology, Funding acquisition, Formal analysis, Data curation, Conceptualization. Marie Weinhart: Writing – review & editing, Writing – original draft, Validation, Supervision, Resources, Project administration, Methodology, Funding acquisition, Formal analysis, Data curation, Conceptualization. Ethics approval and consent to participate This study does not include a clinical study, animal experiments, or any experiments involving human subjects (including organ/tissue donors). Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements We thank the Dr. Rolf M. Schwiete Foundation and the Federal Ministry of Education and Research, Germany (BMBF; FKZ: 13GW0439B) for the financial support of this work. Furthermore, B.Sc.
Maren Leuker, M.Sc. Andrea De Martines, and Johanna Scholz are kindly acknowledged for their support in surface preparation and cell culture experiments. Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.bioactmat.2025.07.052 .
REFERENCES:
1. LAM M (2021)
2. MIRANDA I (2021)
3. WOLF M (2018)
4. GOKALTUN A (2017)
5. VANDENBERG D (2023)
6. AKTHER F (2020)
7. ARMUGAM A (2021)
8. HILLBORG H (2000)
9. FRITZ J (1995)
10. NGUYEN L (2014)
11. ZHAO L (2012)
12. ZHANG W (2020)
13. ZHANG Z (2011)
14. SHAKERI A (2021)
15. FORSTER S (2012)
16. DECAMPOS R (2014)
17. YANG L (2010)
18. ZHANG Z (2013)
19. MAGENNIS E (2016)
20. LEE H (2007)
21. ZHANG Y (2020)
22. YAZDANIAHMADABADI H (2024)
23. LI M (2021)
24. GNANASEKAR S (2023)
25. LOPEZFERNANDEZ A (2022)
26. KHARKWAL G (2011)
27. PEDIGO L (2009)
28. FERREIRA A (2013)
29. QMESQUITA M (2018)
30. SHIKU H (2006)
31. DING X (2012)
32. MAHESHWARI N (2010)
33. ZHAO C (2011)
34. KAKWERE H (2011)
35. SHAO H (2010)
36. LI A (2015)
37. MIZRAHI B (2013)
38. BONDA L (2023)
39. BALL V (2023)
40. LUO H (2021)
41. MALOLLARI K (2019)
42. LUDECKE N (2023)
43. KUBOTA K (1990)
44. LEE J (2003)
45. CHUA D (2000)
46. LOUETTE P (2005)
47. KOZLOWSKA J (2018)
48. ENTRADAS T (2020)
49. LAMBERTS J (1984)
50. SPAGNUL C (2015)
51. NAKONECHNY F (2019)
52. MCINTOSH K (2011)
53. LORD M (2006)
54. KUROSU M (2022)
55. MOKRZYNSKI K (2024)
56. DAHL T (1988)
57. CIEPLIK F (2018)
58. MAISCH T (2015)
59. ALFEI S (2024)
60. OZOG L (2018)
61. KIM H (2019)
62. HANAKOVA A (2014)
63. ALVES E (2015)
64. PUDZIUVYTE B (2011)
65. AKARSU E (2018)
66. (2019)
67. SCHWEITZER C (2003)
68. NEOH K (2017)
69. ANJUM S (2018)
70. CATTO C (2018)
71. ASHKENAZI H (2003)
72. STOBENER D (2017)
TITLE: Preservice teachers’ behavioural intention to use artificial intelligence in lesson planning: A dual-staged PLS-SEM-ANN approach
AUTHORS:
- Acquah, Bernard Yaw Sekyi
- Arthur, Francis
- Salifu, Iddrisu
- Quayson, Emmanuel
- Nortey, Sharon Abam
ABSTRACT:
In the ever-changing landscape of education, the integration of technology has become an inevitable force that reshapes the foundations of teaching and learning. Amidst this transformative wave, the concept of Artificial Intelligence (AI) has taken center stage, promising innovative approaches and increased efficiency. Within this context, the exploration of preservice teachers' behavioural intention to employ AI in lesson planning has emerged as a critical issue for examination. This study used a descriptive cross-sectional survey design and employed a purposive sampling technique to recruit 783 preservice teachers. By employing a cutting-edge dual-staged partial least squares structural equation modelling-artificial neural network (PLS-SEM-ANN) approach, this study investigated the influence of the following essential variables on preservice teachers' intentions to incorporate AI into their lesson planning endeavours: performance expectancy, effort expectancy, habit, hedonic motivation, social influence, and facilitating conditions. Social influence emerged as the most significant positive predictor of preservice teachers' behavioural intention to use AI in lesson planning. Additionally, habit, performance expectancy, effort expectancy, and facilitating conditions substantially positively influenced preservice teachers' behavioural intention to use AI in lesson planning. Conversely, hedonic motivation did not significantly affect preservice teachers' behavioural intention to use AI in lesson planning. This study not only enhances our understanding of technology integration in pedagogy from a theoretical standpoint but also provides practical recommendations for refining educational curricula and instructional strategies that promote effective AI integration.
BODY:
1 Introduction A robust education system that prioritises student achievement has been shown to make a significant contribution to national development and individual well-being (Buerkle et al., 2023; Long et al., 2023). While a number of aspects shape student outcomes, the importance of teachers' instructional strategy is arguably paramount. Within this framework, lesson planning has been identified as a central element in improving student learning outcomes, even when the influence of other variables is acknowledged (Gess-Newsome et al., 2019; Iqbal et al., 2022). The design of effective lesson plans thus emerges as a critical competency for teachers. Defined as the careful planning and adaptation of instructional materials with appropriate strategies, models and media, lesson planning enables teachers to facilitate students' success in achieving their learning goals (Anggrella et al., 2023). A well-structured lesson plan serves as a cornerstone for effective instruction and has been shown to influence student success (Hagermoser et al., 2018; Nagro et al., 2019). The processes of planning, designing and reflecting on instructional strategies are central to the development of teacher competence (Enama, 2021). Indeed, successful teachers demonstrate a propensity for meticulous planning and critical reflection in the classroom. This meticulousness ensures that lessons are tailored to learning objectives and that teaching methods are optimised for effective delivery. Through the act of lesson planning, teachers become adept at identifying student needs and adapting their approaches to promote optimal learning within the specific context of their classrooms. Lesson planning skills are fundamental for all teachers, as contemporary research highlights their positive impact on both teacher development and student engagement (Aimah et al., 2017). This renewed focus on lesson planning reflects its growing recognition as a cornerstone of teacher effectiveness in the classroom (Großmann & Krüger, 2024; Janssen et al., 2019; Mutton et al., 2011; Zaragoza et al., 2021). Undeniably, lesson planning guides inservice and preservice teachers in facilitating their teaching practice (Cuñado & Abocejo, 2019). As such, lesson planning is integral to the professional development of teachers, serving as a bridge between pre-service training and effective classroom practice (Hejji Alanazi, 2019). Despite its acknowledged importance, lesson planning remains a persistent challenge for both novice and experienced teachers (Anggrella et al., 2023). Aligning lesson plans with curriculum competencies and incorporating a scientific approach to teaching can be particularly difficult (Anggrella et al., 2023). Factors contributing to these challenges are multifaceted and include, but are not limited to, access to instructional materials, variation in student interests and abilities, and, most importantly, time constraints. Recent research exploring the use of lesson planning workshops for independent learning models highlights the particular difficulties teachers face in selecting appropriate learning resources and media (Anggrella et al., 2023). In addition, some teachers perceive lesson planning as an overly rigid process that requires a significant investment of time to design each activity (Anggrella et al., 2023). Teacher proficiency in lesson planning remains a critical area of concern (Cuñado & Abocejo, 2019).
Technology, a frequent catalyst for innovation in higher education pedagogy (Breines & Gallagher, 2023; Popenici & Kerr, 2017), has immense potential to transform lesson planning practices. Advances in educational technology have revolutionised teaching and learning practices. In particular, innovative technologies have incentivised the adoption of student-centred approaches such as authentic and project-based learning, fostering deeper engagement and meaningful learning outcomes (Popenici & Kerr, 2017). In support of this notion, Janssen et al. (2019) explored the impact of technological support on pre-service teachers' lesson planning by comparing lesson plans created by pre-service teachers with and without access to technology-integrated resources. The findings revealed that teachers who received technology-supported materials incorporated stronger design justifications in their lesson plans (Janssen et al., 2019). Similarly, Onyango et al. (2017) concluded that technology integration empowers teachers to design technology-supported lessons, potentially mitigating teachers' capacity limitations. By facilitating the creation of relevant learning experiences that resonate with students, technology emerges as an essential tool for effective lesson planning. The burgeoning field of Artificial Intelligence (AI) is attracting considerable interest in the educational landscape. AI is highly sophisticated, and its potential to transform teaching and learning is undeniable (Dai & Ke, 2022; Popenici & Kerr, 2017; Salifu et al., 2024). The emergence of AI promotes a paradigm shift away from traditional methods in favour of dynamic, personalised and cost-effective learning and teaching experiences. As supported by previous research, the impact of AI extends across the educational spectrum, influencing instructional design, delivery, assessment, and the learning process itself (McCormack, 2024; Richardson & Clesham, 2021; Roll & Wylie, 2016). Chatbots, a rapidly evolving AI technology, serve as valuable assistants for educators and learners alike. These tools provide information, facilitate subject-specific discussions, answer students' questions and provide instant feedback (Pillai et al., 2023; Zhou et al., 2020). Chatbots can be broadly categorised according to their functionality: addressing specific problems (applied potential) or generating content such as text, images and audio. Driven by their interactive nature and wide accessibility, chatbots are experiencing a surge in popularity in educational settings (Zhou et al., 2020). This has led numerous institutions to explore the integration of AI as a novel approach to teaching and learning (Al-Ghadhban & Al-Twairesh, 2020; Malott, 2020). In particular, the AI-powered 'Teacherbot' (T-Bot) stands out as a prominent chatbot designed specifically for teaching purposes in educational settings. As a virtual teaching assistant, T-Bot empowers teachers by streamlining the organisation of content and providing immediate answers (Pillai et al., 2023). In addition, T-Bot extends its functionality to personalised learning and teaching processes via virtual means. Proponents argue that T-Bot seamlessly complements traditional teaching methods by providing teachers with personalised feedback, constructive suggestions and real-time responses to their queries. T-Bots therefore have the potential to revolutionise the practice of lesson planning.
As previously argued, successful lesson planning depends on the thoughtful integration of technology to enhance instructional effectiveness in the classroom. In particular, T-Bots hold the promise of personalising the academic journey for teachers and students with less planning effort (Hwang & Chang, 2021; Pillai et al., 2023; Zhang & Aslan, 2021). To this end, some universities around the world are embracing the potential of T-Bots, along with other AI-powered educational tools and platforms such as Ani, Botter, CourseQ, Differ, Duolingo, Genie, Ivy and MOOCBuddy, which are helping to enhance the learning experience through different approaches, from language learning to personalised tutoring and course support (Pillai et al., 2023). The development of the chatbot ALBELA by the Indian Institute of Technology is another example of AI that plays a similar role to T-Bots in supporting teaching (India Today, 2019; Pillai et al., 2023). Ghana's education landscape is currently experiencing a surge in senior high school (SHS) enrolment, due to the introduction of the free senior high school policy. However, the influx of students has not been accompanied by a corresponding increase in teacher recruitment. The resultant increase in teacher workload poses a challenge to effective lesson planning, particularly the development of lesson plans, which can hamper teachers' ability to respond effectively to students' queries (Osei & Agyei, 2023). While the widespread adoption of T-Bots in the Ghanaian education system is still in its early stages, a growing number of pre-service teachers are familiar with this AI technology and its functionality to support both teaching and learning endeavours, particularly in the area of lesson planning. A 'pre-service teacher' is someone who is undergoing formal education and training to become a certified teacher. This period precedes their official teaching career and includes academic coursework, practical teaching experience and professional development that equip them with the necessary knowledge, skills and competencies to teach effectively in the classroom before they complete their certification and become fully qualified teachers. Given the emphasis on lesson planning as a cornerstone of teacher competence, T-Bots hold great promise for providing personalised support to pre-service teachers. Consequently, exploring the determinants impacting pre-service teachers' acceptance of T-Bots in the Ghanaian context is of paramount importance. There is a critical gap in current scholarship, as there is a dearth of studies specifically examining teachers' embrace of T-Bots for lesson planning purposes. This gap highlights the need for an empirical investigation of the antecedents of T-Bot adoption from a behavioural perspective. Such an investigation would aim to elucidate pre-service teachers' views on, and willingness to employ, T-Bots. The findings could inform interventions designed to facilitate the successful implementation of T-Bots among pre-service teachers. Guided by this rationale, the present study seeks to investigate: What factors influence pre-service teachers' intentions to use T-Bots for lesson planning purposes? To delve into the factors shaping pre-service teachers' intention to employ T-Bots for lesson planning, this study uses a social psychological model in the light of the Unified Theory of Acceptance and Use of Technology (UTAUT2) developed by Venkatesh et al. (2012).
UTAUT2 has emerged as a prominent theory that guides the understanding of technology adoption in various domains, and its applicability is validated by previous studies such as Arthur et al. (2023), Yidana et al. (2023), and Salifu et al. (2024) in the context of Ghana. While UTAUT2 provides a valuable lens, further research on the acceptance of T-Bots by pre-service teachers using this model is needed. This research strives to expand upon the current understanding of AI tool adoption by exploring what informs the integration of T-Bots for lesson planning in the Ghanaian context. The novelty of this study is the decision to integrate the robust statistical validation of structural equation modelling (SEM) with the ability of artificial neural networks (ANN) to recognise complex non-linear data patterns. The dual SEM-ANN approach supports accurate prediction of pre-service teachers' intentions to incorporate AI into lesson planning, providing deeper insights than either method used alone. SEM facilitates the rigorous testing of hypothesised relationships within the UTAUT2 framework (Salifu et al., 2024). Conversely, ANNs excel at identifying complex non-linear patterns within the data, potentially revealing nuanced insights that may be missed by SEM alone (Arpaci, 2023a, 2023b). This hybrid analytical strategy exploits the strengths of each technique for a deeper insight into the factors shaping preservice teachers' adoption of T-Bots. By using both methods and comparing their results, the study increases the reliability and robustness of its findings, ultimately leading to more convincing conclusions. The subsequent sections of this study are organised as follows: Section Two delves into the theoretical underpinnings of the study, presenting the UTAUT2 framework and the hypotheses developed. Section Three outlines the methods. Section Four presents the findings, while Section Five provides a thorough discussion. Section Six offers the conclusion, highlighting the theoretical contributions and practical significance, as well as limitations and future research directions. 2 Theoretical background As mentioned, Teacher Bot (T-Bot) is an AI tool that has the ability to streamline lesson planning by providing real-time, personalised support to pre-service teachers. T-Bot's AI features, including content suggestions, adaptive learning strategies and instant feedback, increase the efficiency and creativity of lesson design. Its user-friendly interface and integration with educational platforms are particularly beneficial for pre-service teachers, allowing them to focus on pedagogical effectiveness while using advanced planning technologies. Among the various theories of technology adoption, including that of AI, the Unified Theory of Acceptance and Use of Technology (UTAUT2) proposed by Venkatesh et al. (2012) is often used to support the theorisation of its antecedents from the perspectives of behavioural intention and actual usage behaviour (see Fig. 1). Thus, the UTAUT2 represents an extended framework within the field of social psychological theories (Venkatesh et al., 2012). Its core function is to explain individuals' attitudes and behavioural intentions towards specific technological products, services or experiences (Venkatesh et al., 2012).
UTAUT2 integrates insights from several established theories such as the Theory of Reasoned Action (TRA), Theory of Planned Behaviour (TPB), Innovation Diffusion Theory (IDT), Technology Acceptance Model (TAM), Motivation Model (MM), Model of PC Utilisation (MPCU), and Social Cognitive Theory (SCT). The UTAUT2 framework posits that user intention, a concept that reflects individuals' perceptions of usefulness and willingness to adopt a technology, serves as the primary driver of human behaviour towards technological innovations (Venkatesh et al., 2012). UTAUT2 identifies seven key determinants that influence behavioural intention: performance expectancy, effort expectancy, social influence, facilitating conditions, habit, hedonic motivation and price value. In addition, UTAUT2 recognises the moderating effects of gender, age and experience on these core constructs (Venkatesh et al., 2012). The broad applicability of UTAUT2 makes it a prominent theoretical lens for studying user uptake of emerging technologies (Arpaci et al., 2022). The UTAUT2 model was selected for this study because of its robust framework for predicting technology adoption behaviour. Recent studies have demonstrated that UTAUT2 offers a more nuanced perspective, with the ability to explain a substantial share of the variability in behavioural intentions towards technology adoption (up to 74%) and in actual usage (up to 52%) compared to alternative models (Das & Datta, 2024; Gansser & Reich, 2021; Granić, 2022; Li et al., 2023; Marikyan et al., 2023; Tamilmani et al., 2021). The comprehensive nature of the UTAUT2 model allows for a nuanced analysis of the individual and contextual factors that affect technology adoption. Therefore, UTAUT2 presents itself as a suitable theoretical lens for understanding pre-service teachers' intentions to use AI in lesson planning. In the context of Ghana, Salifu et al. (2024) used the UTAUT2 to explore drivers of higher education economics students' propensity to employ ChatGPT. Their findings support the theoretical and practical contributions of UTAUT2, which can serve as a guide for the thoughtful and responsible integration of AI-based tools as a future strategy to enhance education accessibility and inclusivity. Their findings contribute to the expanding corpus of knowledge supporting the comprehensiveness of UTAUT2 in assessing technology adoption. The model's robust theoretical foundation and explanatory power make it a reliable tool for understanding user integration of innovative AI technologies such as ChatGPT. Similarly, the UTAUT2 framework can provide a strong theoretical basis for investigating the potential adoption of emerging AI tools for educational purposes, such as the T-Bot. 2.1 Hypotheses formation 2.1.1 Teacherbot usage intention (outcome variable) We chose the UTAUT2 model because of its well-established ability to capture various factors influencing technology acceptance, which is particularly relevant to the uptake of AI (Venkatesh et al., 2003). We operationalised the UTAUT2 framework by adapting its core constructs – performance expectancy, effort expectancy, social influence, facilitating conditions, habit and hedonic motivation – to the uptake of T-Bots for lesson planning. This adaptation was informed by previous research on AI adoption intentions.
By exploring how these factors influence pre-service teachers' preference for using T-Bots in lesson planning, we aim to develop a comprehensive set of hypotheses that explain the potential for T-Bot integration in teacher education. 2.1.2 Performance expectancy (PE) and intention to adopt T-bot Performance expectancy (PE) is a core concept in UTAUT and UTAUT2 that reflects a user's belief that a technology will enhance their productivity and task performance. Similar to the concept of perceived usefulness (PU) in TAM, PE focuses on the usefulness aspect of educational technology adoption. Studies have consistently shown a positive influence of PE on technology incorporation and intention to use (Pillai et al., 2023). Furthermore, PE acts as a pivotal factor in fostering the adoption of AI tools (García de Blanes Sebastián et al., 2022; Pillai et al., 2023; Salifu et al., 2024). These findings from the literature align with the well-established notion that preservice teachers might expect T-Bots to improve their lesson planning, making PE a critical factor in their intention to adopt T-Bots. Put simply, pre-service teachers who believe that T-Bots are valuable tools for lesson planning are more likely to intend to use them. This understanding of PE is fundamental to predicting whether preservice teachers will adopt or reject T-Bots in their teaching practice. Based on this premise, we set forth the following hypothesis. H1 Performance expectancy will have a significant positive influence on pre-service teachers' intentions to use the T-bot for lesson planning purposes. 2.1.3 Effort expectancy (EE) and intention to adopt T-bot Effort expectancy (EE) is a key construct in the UTAUT2 model that influences user acceptance and technology adoption. It reflects users' beliefs about the ease of learning, understanding and operating a technology. In essence, EE captures users' perceptions of the mental and physical effort required to use a technology successfully. This concept is also important in other models such as UTAUT and TAM. Understanding users' EE is critical to designing user-friendly technology solutions that promote adoption. Research suggests a positive relationship between EE and teachers' intentions to adopt technology (Al-Adwan et al., 2024; Du & Liang, 2024; Ogegbo et al., 2024). Furthermore, EE is a critical element shaping the uptake of AI-powered tools (Cortez et al., 2024; Das & Datta, 2024; Salifu et al., 2024; Strzelecki, 2023; Wu et al., 2022). T-Bots, which are designed to facilitate easy lesson planning by answering questions, explaining complex concepts and addressing various queries, may positively influence preservice teachers' intention to adopt them. A high EE associated with T-Bots means that minimal training is required for preservice teachers to use them effectively, which is likely to foster positive attitudes towards integrating them into lesson planning. Conversely, a low EE – signifying perceived difficulty in using T-Bots due to complex procedures, unfamiliar terminology or a steep learning curve – could discourage adoption among preservice teachers. Based on these considerations, this study posits the following hypothesis.
H2 Effort expectancy will have a significant positive influence on pre-service teachers' intentions to use the T-bot for lesson planning purposes. 2.1.4 Social influence (SI) and intention to adopt T-bot Social influence (SI) reflects the importance that individuals attach to the opinions and judgments of others, which in turn shapes their receptiveness to a particular technology (Arpaci et al., 2022). It involves interactions between individuals or groups that can lead to changes in thoughts, feelings, attitudes or behaviours (Bhukya & Paul, 2023; Bower et al., 2020; Tseng et al., 2022). These interactions can occur in a variety of social contexts, including peer groups, social media networks, professional communities, and societal norms (Bhukya & Paul, 2023). Both the UTAUT and UTAUT2 models recognise the influence of social factors, through SI, on an individual's inclination to adopt technology (Venkatesh et al., 2003, 2012). In essence, SI captures the impact of the perceptions, behaviours and attitudes of influential peers, such as friends, family and colleagues, and of prevailing social norms. The emphasis on interpersonal relationships and social contexts highlights the important role of SI in shaping individuals' attitudes and intentions towards AI adoption (Aziz et al., 2022; Baydas & Goktas, 2016; Cortez et al., 2024; Strzelecki, 2023; Suhail et al., 2024; Wu et al., 2022). In our study, SI is conceptualised as preservice teachers' perceptions of how significant others, such as colleagues, school administrators, and policymakers, will respond to their use of T-Bots for lesson planning. Therefore, SI is defined by preservice teachers' anticipation of their social circle's approval or disapproval of the integration of T-Bots into their lesson planning practices. Based on this understanding, our hypothesis is as follows. H3 Social influence will have a significant positive impact on the intention of pre-service teachers to use the T-bot for lesson planning purposes. 2.1.5 Facilitating conditions (FC) and intention to adopt T-bot Facilitating conditions (FC), as described in UTAUT and UTAUT2, refer to the external factors and resources that support technology adoption and use (Venkatesh et al., 2003, 2012). These conditions include infrastructure such as hardware and software, top management support, relevant policies, and readily available resources that facilitate technology integration for individuals (AlQudah & Shaalan, 2022; Lopez-Perez et al., 2019; Yuan et al., 2023). FC serves an indispensable role in shaping users' perceptions and behaviours towards educational technology adoption (Albanna et al., 2022; An et al., 2023; Arpaci et al., 2022; Arthur et al., 2023; Du & Liang, 2024; Garcia, 2023; Salifu et al., 2024). In our study, FC is operationalised to reflect preservice teachers' perceptions of factors that may influence their adoption of T-Bots for lesson planning. These factors include access to training and support, technical assistance, compatibility with existing educational systems, and the presence of organisational policies that encourage or discourage the uptake of AI. By examining these facilitating conditions, we aim to provide valuable insights for policymakers and stakeholders. Understanding these conditions can inform the development of targeted interventions that promote successful T-Bot adoption and integration into classroom practice.
H4 Facilitating conditions will have a significant positive influence on pre-service teachers' intentions to use the T-bot for lesson planning purposes. 2.1.6 Hedonic motivation (HM) and intention to adopt T-bot Hedonic motivation (HM) is a key construct within UTAUT2 that captures the intrinsic enjoyment individuals derive from interacting with a technology system (Venkatesh et al., 2012). HM influences user behaviour by determining whether individuals are motivated or discouraged to engage with particular technologies (Al-Azawei & Alowayr, 2020). Within the sphere of technology uptake, HM refers to the enjoyment and satisfaction that users experience when interacting with technology (Nikolopoulou et al., 2021; Tamilmani et al., 2019). The inclusion of HM in UTAUT2 represents a significant advance, as it recognises the crucial role of emotional factors in shaping user perceptions. Previously, these emotional aspects were largely absent from the original UTAUT model (Venkatesh et al., 2012). Research suggests that HM is a significant determinant of students' propensity to embrace technology (Arthur et al., 2023; Lin & Yu, 2024; Moorthy et al., 2019). Other evidence has supported the positive influence of HM on the adoption of AI tools and other technologies in educational contexts (Al-Emran et al., 2023; Emon et al., 2023; Qu & Wu, 2024; Salifu et al., 2024; Tseng et al., 2022). Building on this research, we posit that HM is a predictor of preservice teachers' intentions to adopt T-Bots for lesson planning. We theorise that preservice teachers who perceive T-Bots as enjoyable and pleasurable will exhibit a stronger intention to integrate them into their lesson planning practices. Conversely, a lack of perceived enjoyment may discourage adoption. Building upon this, we propose the following hypothesis. H5 Hedonic motivation will have a significant positive influence on pre-service teachers' intentions to use the T-bot for lesson planning purposes. 2.1.7 Habit (HB) and intention to adopt T-bot Habit, as described in UTAUT2, reflects the extent to which individuals enact behaviours automatically as a result of learned sequences over time (Venkatesh et al., 2012). This concept suggests that past experiences with technology can influence future adoption intentions. Put simply, individuals are apt to integrate new technologies if these are consistent with their established behavioural patterns (e.g., using similar interfaces or performing similar tasks). Based on the notion that past usage experiences shape future usage patterns, habit is theorised to positively influence technology adoption (Tamilmani et al., 2019; Venkatesh et al., 2012). Research suggests that habit significantly influences the adoption of technology in educational settings, especially AI tools (Al-Azawei & Alowayr, 2020; Al-Emran et al., 2023; Garcia, 2023; Salifu et al., 2024). We propose that pre-service teachers with a higher propensity to use technology automatically (a stronger habit) will have a stronger intention to use T-Bots for lesson planning. This stems from the understanding that past experiences with similar technologies shape user behaviour and influence future technology adoption choices. In other words, if preservice teachers have a strong positive habit of using AI technology, they are more likely to adopt T-Bots as another AI tool that can enhance their teaching repertoire.
H6 Habit will have a significant positive influence on pre-service teachers' intentions to use the T-bot for lesson planning purposes. 2.2 Conceptual Model A conceptual model was developed based on the research hypotheses formulated for the study. Fig. 2 shows the conceptual framework that was constructed to guide the study. 3 Methods 3.1 Study design, population and sampling technique The design of this research relied on a survey to capture a snapshot of pre-service teachers' intentions to embrace the T-bot, an AI tool, for lesson planning purposes. This design aligns with Dilman et al.'s (2014) perspective on the suitability of surveys for gathering information from a group of people at one point in time. Furthermore, this design aligns with the study's objective of capturing the current state of practice without manipulating the variables under investigation (Yidana & Arthur, 2023; Yidana et al., 2022; Yidana et al., 2023). The target population for the study was all final-year education students at the University of Cape Coast who had participated in off-campus teaching practice in the 2022/2023 academic year and had used AI in lesson preparation. Because the total number of students who had used AI could not be determined, purposive sampling was employed. A questionnaire was administered to all 1506 final-year preservice teachers, and the 800 who responded that they used AI were included in the study. This purposive sampling method was chosen for its ability to gather pertinent and valuable information (Campbell et al., 2020; Obilor, 2023; Rahman, 2023). The researchers used statistical software (G∗Power) to calculate the appropriate number of participants needed for the study with multiple predictors (Hair et al., 2016). A model with six predictors, an effect size of 0.15 (medium effect), a power level of 0.95, and a significance level of 0.05 (Awang et al., 2020; Cheah et al., 2019; Memon et al., 2020) was used to determine the minimum required number of participants, which was found to be 146 (see Appendix A). Recognising the importance of wider applicability and reliable results, the researchers decided to increase the sample size to 800 participants.
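For readers who wish to verify the a priori power analysis without G∗Power, the short sketch below reproduces the same calculation for a fixed-effects multiple regression with six predictors, f² = 0.15, α = 0.05 and power = 0.95. It is an illustrative script written for this summary, not part of the study's materials, and it assumes G∗Power's usual noncentrality parameterisation (λ = f²·N).

# Illustrative power-analysis sketch (not from the study); assumes G*Power's
# noncentrality parameterisation lambda = f^2 * N for multiple regression.
from scipy.stats import f as f_dist, ncf

def minimum_sample_size(predictors=6, f2=0.15, alpha=0.05, target_power=0.95):
    n = predictors + 2  # smallest n leaving at least one denominator df
    while True:
        df1, df2 = predictors, n - predictors - 1
        ncp = f2 * n
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        power = 1 - ncf.cdf(f_crit, df1, df2, ncp)
        if power >= target_power:
            return n, power
        n += 1

n, achieved = minimum_sample_size()
print(n, round(achieved, 3))  # expected to land near the reported minimum of 146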
3.1.1 Research instrument To understand pre-service teachers' perspectives on the T-bot, we developed an instrument based on the established UTAUT2 model (Venkatesh et al., 2003, 2012). The instrument covered seven key constructs and included 27 questions: behavioural intention (four items), effort expectancy (five items), facilitating conditions (four items), habit (three items), hedonic motivation (three items), performance expectancy (four items), and social influence (four items). Each construct was assessed using a five-point Likert scale, following previous studies by Kurt and Tingöy (2017), Salifu et al. (2024), Steil et al. (2020), and Venkatesh et al. (2012). 3.1.2 Procedure for data collection The data collection process commenced with the training and briefing of seven research assistants regarding the survey instruments. These assistants were subsequently deployed in various lecture theatres to facilitate questionnaire administration. Respondents were allotted 25–30 min to complete the questionnaire within the designated timeframe. The data collection spanned from July to August. Among the cohort of preservice teachers, 783 completed questionnaires were obtained from a total distribution of 800. This outcome signifies a robust questionnaire return rate of 97.87%. After the questionnaires were completed, research assistants meticulously reviewed the responses to guarantee thoroughness and accuracy. 3.1.3 Data processing and analysis After removing invalid and incomplete questionnaires, the data were processed using SPSS version 28. A two-stage analytical approach (PLS-SEM-ANN) was then used to validate the research model and assess the hypotheses. First, the PLS-SEM approach was used to assess the validity and reliability of the indicators and constructs. This assessment included evaluating convergent validity using Average Variance Extracted (AVE), discriminant validity, and internal consistency using Cronbach's Alpha. These measures ensured that the constructs were not only reliably measured but also distinct from one another, with AVE confirming that the indicators adequately captured the intended constructs and Cronbach's Alpha demonstrating strong internal consistency across the items. Subsequently, the ANN method, known for its superior predictive accuracy compared to traditional regression techniques, was used to verify the factors predicting the intention to use AI, as supported by previous studies (Arpaci et al., 2022; Al-Sharafi, Al-Emran, Arpaci, et al., 2022; Salifu et al., 2024). SPSS software was utilised to conduct the ANN analysis. This research extends the existing literature confirming the effectiveness of ANN in this area (Al-Sharafi, Al-Emran, Arpaci, et al., 2022; Salifu et al., 2024). 3.1.4 Ethical consideration This study prioritised ethical considerations when investigating how future teachers view the use of AI in lesson planning. To protect the rights and well-being of participants, each teacher received a detailed informed consent form that explained the goals, methods, and potential risks of the study (Diaz-Asper et al., 2024). This form ensured clear communication and allowed teachers to make an informed decision about participation. It emphasised confidentiality and voluntary participation, with the freedom to withdraw at any time without repercussions (Wang & Ma, 2024). Maintaining ethical research practices and respectful interactions with participants is crucial to building trust and creating a positive environment for exploring how future teachers feel about incorporating AI into lesson planning (Ahmad, 2024). 4 Results 4.1 Profile of preservice teachers The demographic characteristics of the preservice teachers were examined. Table 1 shows the profile (gender and age) of the preservice teachers. Table 1 reveals that a sizable majority of the pre-service teachers were male (n = 501, 64.0%). Additionally, the majority of the preservice teachers (n = 589, 75.2%) were aged 25 years or younger. The results in Table 1 suggest that most of the pre-service teachers were men, concentrated in the age group of 25 years or younger. 4.2 Measurement model Table 2 summarises the tests carried out to ensure the accuracy and reliability of the key measures in the study. Each construct, including Behavioural Intention (BI), Effort Expectancy (EE), Facilitating Condition (FC), Habit (HT), Hedonic Motivation (HM), Performance Expectancy (PE), and Social Influence (SI), was evaluated based on its respective items, loadings, Cronbach's Alpha (CA), Composite Reliability (CR), and Average Variance Extracted (AVE).
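As a hedged illustration of how the reliability and validity statistics reported in Table 2 are typically obtained, the following sketch computes composite reliability and AVE from standardised outer loadings; the BI loadings are taken from the cross-loading table in Appendix B, while the values passed to the Fornell-Larcker check are hypothetical.

# Illustrative computation of CR and AVE from standardised loadings; not the
# authors' script, and the Fornell-Larcker inputs below are hypothetical.
import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings, dtype=float)
    error_var = 1.0 - loadings**2
    return loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())

def average_variance_extracted(loadings):
    return float(np.mean(np.asarray(loadings, dtype=float)**2))

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    # sqrt(AVE) of each construct must exceed their inter-construct correlation
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

bi_loadings = [0.846, 0.891, 0.878, 0.872]  # BI items, from Appendix B
print(round(composite_reliability(bi_loadings), 3))       # about 0.93
print(round(average_variance_extracted(bi_loadings), 3))  # about 0.76
print(fornell_larcker_ok(0.76, 0.70, 0.62))               # hypothetical AVEs and correlation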
The high loadings of the items for each construct indicate strong relationships with their respective latent variables. Cronbach's Alpha (CA) is a measure of internal consistency, indicating how closely a set of items within a construct are related to each other. A high CA, typically above 0.7, suggests that the items are reliably measuring the same underlying construct. In Table 2 , the CA values range from 0.868 to 0.941 across the constructs, demonstrating strong internal consistency and reliability of the measurement items. Also, CR is another measure of internal consistency, but is considered more accurate than CA because it takes into account the actual loadings of each indicator. CR values above 0.7 are considered acceptable and indicate that the constructs are being measured reliably. The CR values in Table 2 are above 0.9 for all constructs, reflecting excellent reliability and suggesting that the items collectively share a high degree of variance. Moreover, satisfactory CA and CR values suggest internal consistency and reliability within each construct ( Hair et al., 2019 ; Hair & Sarstedt, 2019 ). Furthermore, Average Variance Extracted (AVE) assesses convergent validity by measuring the average variance captured by the items in a construct relative to the variance due to measurement error. An AVE value greater than 0.5 indicates that more variance is accounted for by the construct than by error, supporting the validity of the construct. The AVE values in Table 2 range from 0.665 to 0.849, confirming that the constructs have sufficient convergent validity. The AVE values exceeding 0.5 demonstrate good convergent validity ( Bagozzi et al., 1991 ; Fornell & Larcker, 1981 ) [see Table 2 ]. Variance Inflation Factor (VIF) values for the Outer Model (OM) and Inner Model (IM) assess the absence of multicollinearity, with generally acceptable values across constructs. This comprehensive assessment underscored the robustness and validity of the measurement model used in this study. Moreover, Fig. 3 shows the PLS-SEM algorithm results. 4.3 Discriminant validity Discriminant validity is a measure of the extent to which a construct is truly distinct from other constructs in a model, ensuring that it captures phenomena not represented by other variables. Discriminant validity, a crucial aspect of construct validity, was assessed using two established methods. Table 3 shows the Fornell-Larcker criterion, where the square root of the Average Variance Extracted (AVE) for each construct exceeds its correlations with other constructs. This result ( Fornell & Larcker, 1981 ) suggests that the measurement model successfully distinguishes between the constructs, supporting the reliability and validity of the structural model. Further confirmation of discriminant validity is provided in Table 4 , which utilises the Heterotrait-Monotrait Ratio (HTMT). This ratio compares the strength of the correlations between constructs (heterotrait) with the correlations within constructs (monotrait). As recommended by Hair et al. (2022) and Henseler et al. (2015) , HTMT ratios below 0.85 indicate satisfactory discriminant validity. All HTMT ratios in Table 4 fall below this threshold, providing additional evidence for the discriminant validity of the constructs. Finally, the strong indicator loadings on the respective constructs ( Appendix B ) support the overall measurement model. 4.4 Evaluation of model fit indices Evaluating model fit indices using PLS-SEM is crucial, as emphasised by Sarstedt et al. 
(2017, 2021) and Ringle et al. (2023). To assess the fit of the model, we employed two commonly used metrics: the Standardised Root Mean Square Residual (SRMR) and the Normed Fit Index (NFI) (Linge et al., 2023; Ringle et al., 2015). The SRMR and NFI benchmarks were set at SRMR <0.08 and NFI >0.90 (Ringle et al., 2015). Model fit analysis indicated that the SRMR value was 0.046, which satisfied the specified criterion. Moreover, the calculated NFI was 0.857, approximately equal to the stipulated threshold. Consequently, the researchers proceeded to examine the structural model, as the model fit metrics satisfied these conditions. 4.5 Assessment of the structural model Before testing the hypothesised paths, multicollinearity concerns were examined. The Variance Inflation Factors (VIFs) ranged from 2.295 to 3.534 (Table 2), indicating no significant issues, in alignment with established recommendations set forth by Hair et al. (2022). To assess the formulated research hypotheses, a bootstrapping approach with 10,000 resamples was employed (Becker et al., 2023). The detailed results are presented in Table 5, which aligns with the hypotheses outlined to guide this study. Additionally, Fig. 4 visually depicts the bootstrapping outcomes for the PLS-SEM model. Structural path analysis revealed several significant relationships between the key constructs and BI. In Table 5, the results revealed that performance expectancy (PE) had a positive and significant influence on behavioural intention (BI) [β = 0.145, p = 0.005, CI = (0.047; 0.248)]. Hence, H1 is supported. Effort expectancy (EE) had a significant positive effect on behavioural intention (BI) [β = 0.127, p = 0.018, CI = (0.020; 0.228)]; thus, H2 is supported. Also, SI showed a significant positive impact on BI (β = 0.249, p < 0.001, CI = [0.147; 0.353]). Similarly, the facilitating condition (FC) had a significant positive influence on BI (β = 0.118, p = 0.050, CI = [−0.003; 0.236]). Hence, H3 and H4 are supported. Surprisingly, hedonic motivation had a non-significant effect on behavioural intention (β = 0.031, p = 0.601, CI = [−0.085; 0.151]). Hence, H5 is not supported (see Table 5). Habit also exhibited a significant influence on BI (β = 0.189, p < 0.001, CI = [0.100; 0.280]). Therefore, H6 is sustained. To deepen our understanding of how strongly one construct influences another, we analysed effect sizes (f²) following Cohen's (1988) guidelines. In predicting BI, PE (f² = 0.014), EE (f² = 0.010), SI (f² = 0.045), FC (f² = 0.009), and HT (f² = 0.033) demonstrated small effect sizes. The model's overall explanatory power is reflected in an R² value of 0.528 (see Table 5), with an adjusted R² indicating that the model accounts for 52.5% of the variance in Behavioural Intention. This result suggests that PE, EE, SI, FC and HT together explained 52.8% of the variation in BI.
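To make the effect-size and explanatory-power figures above concrete, the following sketch applies the standard PLS-SEM formulas for Cohen's f² and the adjusted R². The excluded-model R² and the assumption that all 783 usable responses entered the model are illustrative, not values taken from the paper.

# Standard formulas behind the Section 4.5 statistics; illustrative values only.
def f_squared(r2_included, r2_excluded):
    # effect of omitting one exogenous construct from the structural model
    return (r2_included - r2_excluded) / (1 - r2_included)

def adjusted_r2(r2, n, k):
    # n = sample size, k = number of predictors of the endogenous construct
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(0.528, 783, 6), 3))  # about 0.525, matching the text
print(round(f_squared(0.528, 0.507), 3))     # about 0.045; 0.507 is a hypothetical excluded R^2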
4.6 Out-of-sample predictive power We employed the PLSpredict methodology to evaluate the predictive capability of the model with respect to behavioural intention (BI). Comparing the root mean squared error (RMSE) metrics from the PLS-SEM analysis with those from the linear regression model benchmark, we observed that, except for BI3 and BI4, the former yielded lower prediction errors across all indicators of the outcome variable (behavioural intention) [see Table 6]. These results suggest medium predictive capacity (Shmueli et al., 2019). Furthermore, the predictive significance of the structural model was appraised using the Q²predict values for the BI items. These values ranged from 0.379 to 0.404 for the individual BI items, indicating a considerable degree of predictive precision. Moreover, the overall predictive efficacy of the BI model was represented by a Q² value of 0.516 (see Table 6). 4.7 Importance-Performance Map Analysis (IPMA) Importance-Performance Map Analysis (IPMA) is a technique used to visually show how important different aspects of a model are relative to how well they are currently performing (Fakfare & Manosuthi, 2023; Hauff et al., 2024; Ringle & Sarstedt, 2016; Sop et al., 2024). By placing these aspects on a grid, IPMA helps to identify which aspects need the most attention, taking into account both their importance and their current performance. Table 7 provides details on the importance and performance of the factors influencing the main outcome variable in our study. According to the results, Social Influence (SI) emerged as the most pivotal factor, achieving the highest importance value (0.249) [see Table 7]. This suggests that the perceived influence of social networks, colleagues, or mentors plays a crucial role in shaping preservice teachers' intentions to adopt AI for lesson planning. Following closely behind, Habit (HT) holds a notable importance value (0.189), indicating that the habitual inclination or routine of using AI in lesson planning contributes significantly to preservice teachers' intentions. This suggests that if preservice teachers have developed the habit of using AI tools for lesson planning, they are more likely to continue doing so in the future. Furthermore, Performance Expectancy (PE) and Effort Expectancy (EE) are also important factors, albeit to a slightly lesser extent, with importance values of 0.145 and 0.127, respectively. This suggests that preservice teachers' beliefs about the effectiveness and ease of use of AI tools in lesson planning influence their intentions. Finally, Facilitating Conditions (FC) is identified as a relevant factor with a lower importance value (0.118). However, in terms of performance, FC garners the highest score (61.785), indicating that the availability of resources, support, and infrastructure to implement AI in lesson planning is perceived favourably by preservice teachers, even though it does not hold the highest importance value among the factors considered. Overall, the findings suggest that several factors play a role in how likely future teachers are to use AI in lesson planning. These factors include social influence, habit, performance expectancy, effort expectancy, and facilitating conditions. Interestingly, social influence and ingrained habits stand out as particularly strong motivators for using AI. Fig. 5 provides a visual representation of the importance of these factors compared to how well they are currently supported in the context of our study. In addition, Fig. 6 illustrates the effectiveness of each factor in influencing pre-service teachers' intentions.
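The IPMA quantities reported in Table 7 follow a standard recipe: a construct's importance is its total effect on BI, and its performance is the mean of its latent variable scores rescaled to a 0-100 range. The sketch below illustrates that logic; the total effects are the values reported above, whereas the latent scores are hypothetical placeholders.

# Illustrative IPMA logic; total effects are from Table 7, latent scores are made up.
import numpy as np

def performance(latent_scores, scale_min=1.0, scale_max=5.0):
    scores = np.asarray(latent_scores, dtype=float)
    return float(np.mean((scores - scale_min) / (scale_max - scale_min) * 100.0))

importance = {"SI": 0.249, "HT": 0.189, "PE": 0.145, "EE": 0.127, "FC": 0.118}
fc_scores = [4.2, 3.9, 4.5, 3.6, 4.1]  # hypothetical 5-point latent scores for FC
print(sorted(importance.items(), key=lambda kv: kv[1], reverse=True))
print(round(performance(fc_scores), 2))  # FC's position on the 0-100 performance axis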
4.7.1 Evaluation of the model with artificial neural network After using PLS-SEM to explore the initial relationships, this study dug deeper by using a different approach, an Artificial Neural Network (ANN), to analyse the factors that influence future teachers' intention to use AI in lesson planning. ANNs have an advantage over traditional methods because they can capture more complex relationships, not just linear ones (Al-Sharafi, Al-Emran, Arpaci, et al., 2022; Arpaci et al., 2022; Kalinic et al., 2021). An ANN typically has layers that process information. This study used a specific setup with an input layer, hidden layers that analyse the data, and an output layer that provides the results (Kalinic et al., 2021). To ensure that the model performed well, certain settings were used, such as a dedicated activation function for processing data within the network and restrictions on the range of input and output values. In addition, a technique called tenfold cross-validation was used to prevent the model from becoming overly dependent on particular data (Alam et al., 2021; Kalinic et al., 2021). In this approach, the data are divided into sections for training and testing the model. As shown in Fig. 6, the model treated teachers' intention to use AI as the final outcome. Fig. 7 illustrates the initial structure of the ANN, in which nodes for the five key factors identified earlier (teachers' belief that AI will improve teaching, perception of ease of use, influence of others, access to resources, and ingrained habits) feed into a single node representing the intention to use AI. The specific details of the ANN model are expressed in Eq. (1): BI = f(PE, EE, SI, FC, HT) (ANN model). 4.7.2 Artificial neural network (ANN) model Fig. 7 depicts the artificial neural network (ANN) model comprising an input layer with five (5) neurons (PE, EE, SI, FC and HT) and one output layer (BI). The results pertaining to the ANN model's training and testing performance are detailed in Table 8. 4.8 Sensitivity analysis for the ANN model Sensitivity analysis was performed to investigate the significance of the independent factors and to assess the effectiveness of the neural network (Arpaci, 2023a, 2023b). The examination of significant predictors of preservice teachers' intention to use AI was conducted through a sensitivity analysis, which determined the "normalised importance" (NI) and "average importance" (AI), and identified the "importance" (I) of each factor. Table 9 shows the sensitivity analysis for the ANN model. According to the sensitivity analysis results presented in Table 9, the ANN model underscores the paramount importance of Social Influence (SI), with a sensitivity value of 100%, in forecasting the Behavioural Intention (BI) to employ Artificial Intelligence (AI) for lesson planning. This finding corroborates the outcomes of the structural model, where a significant positive relationship between SI and BI was observed (β = 0.249∗∗∗). Subsequently, Habit (HT) emerged as the second most influential factor, with a sensitivity value of 68.0%, reaffirming the structural model's result of a positive association between HT and BI (β = 0.189∗∗∗). Following in significance were Facilitating Conditions (FC) at 60.0%, Performance Expectancy (PE) at 59.0%, and Effort Expectancy (EE) at 54.0%, reflecting their respective contributions to shaping preservice teachers' intentions to utilise AI for lesson planning. 4.8.1 Coefficient of determination (R²) for the ANN model The R-squared value is an essential metric for assessing the efficacy and predictive accuracy of the ANN model. Following the approach proposed by Hew and Kadir (2016), we calculated the R² value of the ANN model as R² = 1 − (RMSE/SSE), resulting in R² = 1 − (0.47/18.46) = 1 − 0.0255 = 0.9745 (approximately 97.5%), indicating a 97.5% accuracy level in forecasting preservice teachers' behavioural intention to use AI for lesson planning. Note: the RMSE and SSE values mentioned pertain to the mean RMSE and SSE, respectively, obtained during the testing phase of the ANN model.
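The study ran the neural network in SPSS, so the sketch below is only a schematic re-creation of the procedure described in Sections 4.7.1-4.8: a small multilayer perceptron with the five predictors as inputs, tenfold cross-validation, and a permutation-style sensitivity analysis whose scores are then expressed as normalised importance. The synthetic data, the network settings and the permutation approach are assumptions made for illustration, not the study's actual configuration.

# Schematic ANN step (assumptions: scikit-learn MLP, synthetic data, permutation
# sensitivity); SPSS's Multilayer Perceptron was used in the actual study.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(783, 5))  # placeholder PE, EE, SI, FC, HT scores
y = X @ np.array([0.15, 0.13, 0.25, 0.12, 0.19]) + rng.normal(0, 0.3, 783)

Xs = MinMaxScaler().fit_transform(X)
ys = MinMaxScaler().fit_transform(y.reshape(-1, 1)).ravel()

rmses, importances = [], []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(Xs):
    net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                       max_iter=5000, random_state=0).fit(Xs[train], ys[train])
    rmses.append(mean_squared_error(ys[test], net.predict(Xs[test])) ** 0.5)
    baseline = net.score(Xs[test], ys[test])
    fold_importance = []
    for j in range(Xs.shape[1]):  # sensitivity: permute one predictor at a time
        Xp = Xs[test].copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        fold_importance.append(baseline - net.score(Xp, ys[test]))
    importances.append(fold_importance)

average_importance = np.mean(importances, axis=0)
normalised = 100 * average_importance / average_importance.max()
print("mean test RMSE:", round(float(np.mean(rmses)), 3))
print("normalised importance (%):", np.round(normalised, 1))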
4.8.2 Contrasting insights from PLS-SEM and ANN analyses This research compared the results of the two methods used: PLS-SEM and ANN. This comparison looked at how much each factor influenced the final outcome (path coefficient) and the relative importance of each factor (normalised relative importance). These comparison metrics are explained in detail by Ng et al. (2022) and Yidana et al. (2023). Table 10 summarises the main differences between the results of PLS-SEM and ANN, providing a clear picture of how the two methods differ in their results. The results in Table 10 demonstrate a congruent ranking of SI and HT across both the PLS-SEM results and the ANN model. However, FC was ranked third in the ANN model, compared with fifth in the PLS-SEM analysis. In addition, PE was ranked fourth in the ANN model, compared with third in the PLS-SEM analysis. 4.9 Revised conceptual model Fig. 8 shows the revised conceptual model of the study. 5 Discussion The world of education is increasingly experiencing a paradigm shift towards technology, so teachers need to find new and creative ways to adapt their teaching practices. Artificial intelligence (AI)-based tools are emerging technologies with the potential to make teaching even better. However, in order to successfully use AI in the classroom, it is important to understand how future teachers feel about using it. This study looks at the social and psychological factors that influence how likely future teachers are to use AI, specifically T-bots, when planning their lessons. The study used the UTAUT2 model to explore the influence of its constructs: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, and habit on pre-service teachers' behavioural intentions to use T-bots for lesson planning purposes. This study used a survey design and a hybrid analytic approach to determine the factors that influenced pre-service teachers at the University of Cape Coast (UCC) to use T-bots for lesson planning. This research is in line with the UN's goal of providing quality education for all. The results demonstrated that performance expectancy, effort expectancy, social influence, facilitating conditions, and habit significantly influenced pre-service teachers' intention to use T-bots for lesson planning. These findings support hypotheses H1, H2, H3, H4, and H6. Notably, social influence and habit emerged as the most compelling motivators in both analytical approaches. However, a comparison of Partial Least Squares Structural Equation Modeling (PLS-SEM) and Artificial Neural Networks (ANN) highlighted the distinct advantages of each method. PLS-SEM effectively validated the hypothesised relationships by demonstrating statistical significance and path coefficients, particularly in the direct effects of performance expectancy (PE) on behavioural intention (BI). On the other hand, ANN, which identifies non-linear patterns, revealed complex interactions, underscoring its higher predictive accuracy. The significance of facilitating conditions and performance expectancy varied slightly between the two analyses, indicating that these factors may affect teachers' decisions in different ways. Thus, while PLS-SEM excelled in clarifying causal relationships, ANN was superior in uncovering intricate interactions that might not be apparent through PLS-SEM and other linear models.
Notably, social influence followed by habit emerged as the most compelling motivators in both analytical approaches. These findings attest to previous studies showing that social influence and habit are key predictors in the adoption of the T-bot as an AI tool for lesson planning. Specifically, regarding performance expectancy, the result implies that individuals are more likely to use the T-bot if they believe it helps them teach better. Thus, individuals are more likely to use a new technology if they believe it will help them teach better (García de Blanes Sebastián et al., 2022; Pillai et al., 2023; Salifu et al., 2024). Although these studies did not specifically look at T-bots, they all found the same thing: if teachers find a technology useful, they are more likely to want to use it. In our study, the positive influence of believing that T-bots will improve teaching (performance expectancy) suggests that future teachers who see T-bots as valuable AI tools for lesson planning are more likely to use them. This could lead them to feel more positive about T-bots overall and ultimately be more likely to use them in their teaching endeavours. Our results also showed that effort expectancy is a significant element explaining pre-service teachers' intention to employ the T-bot. This is consistent with previous research showing that individuals are more likely to adopt new technologies if they perceive them as easy to use (Al-Adwan et al., 2024; Cortez et al., 2024; Das & Datta, 2024; Du & Liang, 2024; Ogegbo et al., 2024; Salifu et al., 2024; Strzelecki, 2023; Wu et al., 2022). While these studies may not have looked specifically at T-bots, they all consistently show that ease of use leads to a more positive outlook on the integration of AI in education. In the context of lesson planning with T-bots, this suggests that features that promote ease of use and clear, understandable functionalities are crucial for encouraging pre-service teacher adoption. Policy makers and stakeholders should prioritise the development of intuitive AI interfaces specifically designed for lesson planning tasks. This focus on user-centred design can allay potential fears of complexity and ultimately increase T-bot adoption among pre-service teachers. The current study's findings on social influence echo previous research (Aziz et al., 2022; Baydas & Goktas, 2016; Bhukya & Paul, 2023; Bower et al., 2020; Cortez et al., 2024; Strzelecki, 2023; Suhail et al., 2024; Tseng et al., 2022; Venkatesh et al., 2003, 2012; Wu et al., 2022). These studies highlight the significant influence of social networks and established norms on the adoption of educational technologies. This suggests that pre-service teachers are more likely to seek guidance from their peers and social circles when integrating T-bots into lesson planning. To capitalise on this social influence, educational institutions should actively promote the benefits of T-bots and encourage peer training initiatives among teachers. Facilitating discussion and knowledge sharing around AI experiences can raise awareness and ultimately lead to wider adoption of T-bots in professional teaching practice.
Similar to other research (Albanna et al., 2022; AlQudah & Shaalan, 2022; An et al., 2023; Arpaci et al., 2022; Arthur et al., 2023; Du & Liang, 2024; Garcia, 2023; Lopez-Perez et al., 2019; Salifu et al., 2024; Yuan et al., 2023), this study found that access to certain resources plays a big role in how likely future teachers are to use AI tools like T-bots in their classrooms. These include having the right equipment (infrastructure), support from school leaders (administrative support), and opportunities to learn how to use the tools (training). Our results showed that when pre-service teachers felt that these resources were readily available, they were more likely to want to use T-bots. This suggests that people who make decisions about schools (policymakers) should focus on investing in equipment, support systems, and training so that teachers can successfully use AI tools in their classrooms. Interestingly, our study found that how comfortable pre-service teachers are with technology in general (habit) also plays a role in how likely they are to use T-bots. This is consistent with other research showing that the more teachers use a technology, the more comfortable they become with it and the more likely they are to continue using it (Al-Azawei & Alowayr, 2020; Al-Emran et al., 2023; Garcia, 2023; Salifu et al., 2024). Essentially, the more teachers get used to using new technology, the easier it becomes and the more likely they are to stick with it. In the context of this study, this means that hands-on experience with T-bots, such as asking questions for information and receiving feedback, can lead pre-service teachers to develop a habit of using them for lesson planning. To capitalise on this finding, policymakers should consider integrating structured approaches into teacher education programmes. This could include incorporating AI applications and encouraging their regular use both in the classroom and in independent practice. By promoting consistent engagement and familiarity with T-bots, teacher education programmes can increase the likelihood that pre-service teachers will adopt this habit and use T-bots effectively in their future classrooms. Unexpectedly, the current study did not find a significant relationship between hedonic motivation and pre-service teachers' intention to adopt T-bots. This finding contradicts previous research suggesting that enjoyment and pleasure may be drivers of AI adoption (Al-Emran et al., 2023; Emon et al., 2023; Qu & Wu, 2024; Salifu et al., 2024; Tseng et al., 2022). Our findings suggest that preservice teachers' decisions to use T-bots for lesson planning may be driven by practical considerations rather than enjoyment. In other words, they may prioritise the efficiency and effectiveness of T-bots in improving learning outcomes over the inherent enjoyment of using the tool. However, future research could explore this concept further to determine whether the novelty of AI tools in education might influence hedonic motivation in the early stages of adoption. 6 Conclusion and implications This study is a pioneer in exploring how T-bots, a type of AI tool, can be used for lesson planning. We focused on how future teachers perceive and feel about using T-bots, using a model that looks at behavioural factors (UTAUT2).
We used powerful statistical methods to analyse the data and found that performance expectancy, effort expectancy, social influence, facilitating conditions and habit significantly influence pre-service teachers' intentions to adopt T-bots. In particular, social influence and habit emerged as the most consistently dominant factors in both PLS-SEM and ANN analyses. This information is valuable for people who make decisions about schools (policy makers), educators, and anyone else interested in bringing AI into the classroom. By understanding which elements of the T-bot are important to pre-service teachers, policymakers and stakeholders can inform T-bot selection and implementation strategies, which could ultimately lead to better lesson planning and a greater role for AI in education. 6.1 Theoretical and methodological implications In terms of educational theories and pedagogies, the findings offer valuable insights for educators and policymakers by identifying the critical factors driving AI adoption, specifically T-bots, in lesson planning. By applying the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, the study offers a nuanced exploration of the factors influencing the adoption of T-bots among pre-service teachers in Ghana. The UTAUT2 framework deepens our understanding by incorporating additional constructs such as social influence, facilitating conditions, and habit, elements that have been underexplored in previous models (e.g., Pillai et al., 2023). This comprehensive approach enables a more thorough analysis of the socio-psychological determinants shaping pre-service teachers' readiness to integrate AI tools into their teaching practices. The research thus represents a crucial step towards optimising the use of AI tools like T-bots in lesson planning, ensuring that pre-service teachers are well-prepared to navigate the evolving technological landscape in education. Moreover, the study provides a methodological blueprint for future research exploring technology integration in educational settings. Methodologically, this study pioneers a hybrid approach that combines Partial Least Squares Structural Equation Modeling (PLS-SEM) with Artificial Neural Network (ANN) analysis to predict teachers' intentions to use AI. This two-step approach is particularly robust, as PLS-SEM effectively handles outliers and standardises variables, reducing bias, while ANN enhances prediction accuracy through its feature selection capability. By integrating these methods, the study offers a more reliable and accurate predictive model, contributing to the broader discourse on AI adoption in education. 6.2 Practical policy implications The results of the study highlight the significant potential of AI-based tools such as T-bots to improve the educational landscape. The results show that preservice teachers' intentions to use T-bots for lesson planning are significantly influenced by factors such as performance expectancy, effort expectancy, social influence, facilitating conditions and habit. In other words, preservice teachers are more likely to adopt T-bots if they perceive them to be useful, easy to use, endorsed by their peers, supported by adequate facilities and compatible with their existing teaching practices. These findings have several practical implications for stakeholders investing in the integration of AI in education.
The positive influence of performance expectancy on the adoption of T-bots highlights the importance of increasing their usefulness and of equipping pre-service teachers with the necessary skills and knowledge to achieve these useful outcomes. Policy makers and key stakeholders in higher education institutions should prioritise the allocation of resources to training and support programmes that focus on the integration of T-bots into lesson planning. These programmes should effectively communicate the practical benefits and functionalities of T-bots, while providing ongoing technical support. By equipping pre-service teachers with these skills, policy makers can cultivate their confidence and competence in using educational technology, ultimately fostering a positive perception of the effectiveness of T-bots (performance expectancy). The critical role of effort expectancy in T-bot adoption highlights the need for simplified user experiences within educational AI tools. Policy makers should prioritise initiatives that improve the usability and accessibility of T-bots. This can include developing intuitive interfaces, providing comprehensive instructional materials, and ensuring readily available technical support. By minimising the perceived effort required to use T-bots, policymakers can empower pre-service teachers to use them for lesson planning tasks with greater confidence and competence. Ultimately, this can lead to improved educational outcomes through the effective integration of AI tools in the classroom. The study's emphasis on social influence as a key driver of T-bot adoption underscores the importance of fostering collaborative and supportive learning environments within educational institutions. Policymakers should prioritise initiatives that promote peer engagement, mentorship programmes focused on T-bot integration, and knowledge-sharing platforms for educators. These strategies can facilitate discussions, sharing of experiences and awareness-raising campaigns about the benefits of AI tools. Ultimately, by cultivating a culture of collaboration and knowledge sharing around T-bots, policymakers can empower educators to use these tools with greater confidence, paving the way for their seamless integration into regular classroom practices. The study's emphasis on facilitating conditions underlines the crucial role of leadership within the education sector. Policymakers and top management must work together to invest in infrastructure improvements and robust support systems, such as dedicated IT helpdesks and online T-bot user guides. This includes ensuring reliable internet access to minimise technical barriers that could hinder AI adoption. It is also essential to provide pre-service teachers with comprehensive training initiatives and ongoing technical support frameworks. By increasing their skills and confidence in using AI tools for instructional planning, policymakers can effectively address these facilitating conditions. Ultimately, this will pave the way for the successful integration of T-bots into lesson planning practices, potentially leading to improvements in educational outcomes. The results of the study show a significant relationship between habit formation and preservice teachers' intentions to use T-bots in lesson planning. This highlights the need for policy makers to develop structured approaches within teacher education programmes that promote habitual use of T-bots. One approach might be to integrate AI applications directly into lesson planning tasks.
Another strategy might be to facilitate frequent hands-on practice with T-bots, both on campus and during off-campus teaching practice sessions. By encouraging regular engagement with T-bots, pre-service teachers can develop a sense of comfort and familiarity with these tools. This, in turn, may increase the likelihood of sustained use of T-bots in their professional practice. In addition, providing ongoing access to resources and support can cultivate positive attitudes towards T-bots and solidify their integration into preservice teachers' future classroom practice. While the current study did not identify a significant relationship between hedonic motivation (enjoyment and pleasure) and preservice teachers' intentions to use T-bots, prior research in educational technology adoption suggests a potential positive correlation (e.g., Al-Emran et al., 2023 ; Emon et al., 2023 ; Qu & Wu, 2024 ; Salifu et al., 2024 ; Tseng et al., 2022 ). In light of these findings, policymakers and educational institutions aiming to promote T-bot adoption as an innovative technology should consider incorporating elements that enhance user enjoyment and satisfaction. This could involve designing T-bot interfaces that are user-friendly and engaging, or providing positive feedback mechanisms that reinforce successful T-bot use within lesson planning. By fostering a more positive user experience, policymakers can potentially cultivate positive associations and habits around T-bot integration in the classroom. 6.3 Strength and limitations of the study This research provides valuable insights, but there are a number of areas that future studies could explore further. Firstly, the study only involved pre-service teachers from one university in Ghana. It would be helpful to test the model with teachers from different cultures and educational backgrounds to see if the findings apply more widely. Secondly, the study only looked at a single point in time. Following teachers over a longer period of time could provide more insight into the factors that influence teachers who continue to use T-bots in their classrooms. This long-term approach could give educators and policymakers a clearer picture of how best to integrate T-bots into lesson planning. 6.4 Statements on open data and ethics The data are available on request from the corresponding author. All study procedures were carried out in accordance with relevant legislation and institutional research ethics guidelines, ensuring a strict commitment to ethical principles. No personally identifiable or sensitive information was collected, ensuring complete anonymity of participants. Participants were given detailed information about the study prior to commencement of the survey, including assurances of their right to volunteer and withdraw. Informed consent was obtained from all participants when they submitted the survey. CRediT authorship contribution statement Bernard Yaw Sekyi Acquah: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Resources, Methodology, Investigation, Conceptualization. Francis Arthur: Writing – review & editing, Writing – original draft, Software, Methodology, Formal analysis, Data curation, Conceptualization. Iddrisu Salifu: Writing – review & editing, Writing – original draft, Validation, Methodology, Investigation, Data curation, Conceptualization. Emmanuel Quayson: Writing – review & editing, Writing – original draft, Validation, Methodology, Investigation, Data curation. 
Sharon Abam Nortey: Writing – review & editing, Writing – original draft, Validation, Methodology, Investigation, Data curation, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A Sample Size Determination from G∗Power software (Image 1). Appendix B Cross Loadings
Item BI EE FC HT HM PE SI
BI1 0.846 0.577 0.536 0.505 0.531 0.545 0.560
BI2 0.891 0.525 0.563 0.506 0.530 0.498 0.557
BI3 0.878 0.479 0.512 0.543 0.494 0.480 0.581
BI4 0.872 0.484 0.531 0.569 0.514 0.510 0.568
EE2 0.554 0.920 0.562 0.516 0.582 0.763 0.550
EE3 0.526 0.929 0.528 0.499 0.571 0.747 0.563
EE4 0.546 0.916 0.548 0.505 0.580 0.749 0.558
EE5 0.550 0.909 0.536 0.540 0.594 0.760 0.550
FC1 0.490 0.484 0.868 0.568 0.619 0.461 0.696
FC2 0.554 0.531 0.888 0.593 0.669 0.516 0.689
FC3 0.547 0.524 0.891 0.588 0.683 0.514 0.658
FC4 0.574 0.548 0.884 0.581 0.762 0.495 0.662
HM1 0.538 0.552 0.744 0.607 0.882 0.509 0.626
HM2 0.548 0.587 0.716 0.629 0.917 0.531 0.675
HM3 0.510 0.565 0.624 0.612 0.890 0.514 0.598
HT1 0.579 0.542 0.592 0.889 0.620 0.534 0.617
HT2 0.529 0.489 0.581 0.898 0.603 0.483 0.582
HT3 0.514 0.461 0.587 0.882 0.611 0.471 0.601
PE1 0.511 0.713 0.468 0.487 0.514 0.922 0.496
PE2 0.538 0.763 0.536 0.521 0.561 0.924 0.554
PE3 0.556 0.768 0.557 0.556 0.548 0.929 0.554
PE4 0.542 0.783 0.509 0.493 0.506 0.910 0.521
SI1 0.579 0.502 0.569 0.539 0.576 0.436 0.820
SI2 0.585 0.505 0.561 0.578 0.565 0.446 0.842
SI3 0.532 0.524 0.671 0.531 0.572 0.535 0.812
SI4 0.460 0.465 0.681 0.548 0.575 0.481 0.804
SI5 0.470 0.462 0.664 0.558 0.602 0.465 0.797
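The cross-loading criterion implied by Appendix B can be checked mechanically: every indicator should load more strongly on its own construct than on any other. The snippet below is a small illustration using three rows copied from the table; it is not part of the study's analysis scripts.

# Cross-loading check for a few Appendix B rows (illustrative, not exhaustive).
cross_loadings = {
    "BI1": {"BI": 0.846, "EE": 0.577, "FC": 0.536, "HT": 0.505, "HM": 0.531, "PE": 0.545, "SI": 0.560},
    "EE2": {"BI": 0.554, "EE": 0.920, "FC": 0.562, "HT": 0.516, "HM": 0.582, "PE": 0.763, "SI": 0.550},
    "SI5": {"BI": 0.470, "EE": 0.462, "FC": 0.664, "HT": 0.558, "HM": 0.602, "PE": 0.465, "SI": 0.797},
}
assigned = {"BI1": "BI", "EE2": "EE", "SI5": "SI"}

for item, loadings in cross_loadings.items():
    strongest = max(loadings, key=loadings.get)
    status = "OK" if strongest == assigned[item] else "review"
    print(item, "loads highest on", strongest, "->", status)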
REFERENCES:
1. AHMAD A (2024)
2. AIMAH S (2017)
3. ALADWAN A (2024)
4. ALAZAWEI A (2020)
5. ALEMRAN M (2023)
6. ALGHADHBAN D (2020)
7. ALSHARAFI M (2022)
8. ALSHARAFI M (2022)
9. ALAM M (2021)
10. ALBANNA H (2022)
11. ALQUDAH A (2022)
12. AN X (2023)
13. ANGGRELLA D (2023)
14. ARPACI I (2023)
15. ARPACI I (2023)
16. ARPACI I (2022)
17. ARTHUR F (2023)
18. AWANG H (2020)
19. AZIZ F (2022)
20. BAGOZZI R (1991)
21. BAYDAS O (2016)
22. BECKER J (2023)
23. BHUKYA R (2023)
24. BOWER M (2020)
25. BREINES M (2023)
26. BUERKLE A (2023)
27. CAMPBELL S (2020)
28. CHEAH J (2019)
29. CHIN W (2020)
30. CORTEZ P (2024)
31. CUNADO A (2019)
32. DAI C (2022)
33. DAS S (2024)
34. DIAZASPER C (2024)
35. DILMAN D
36. DU W (2024)
37. EMON M (2023)
38. ENAMA P (2021)
39. FAKFARE P (2023)
40. FORNELL C (1981)
41. GANSSER O (2021)
42. GARCIA M (2023)
43. GARCIADEBLANESSEBASTIAN M (2022)
44. GESSNEWSOME J (2019)
45. GRANIC A (2022)
46. GROMANN L (2024)
47. HAGERMOSERSANETTI L (2018)
48. HAIR J (2016)
49. HAIR J (2022)
50. HAIR J (2019)
51. HAIR J (2019)
52. HAUFF S (2024)
53. HEJJIALANAZI M (2019)
54. HENSELER J (2015)
55. HEW T (2016)
56. HWANG G (2023)
57. IQBAL S (2022)
58. JANSSEN N (2019)
59. KALINIC Z (2021)
60. KURT E (2017)
61. LI B (2023)
62. LIN Y (2024)
63. LINGE A (2023)
64. LONG S (2023)
65. LOPEZPEREZ V (2019)
66. MALOTT C (2020)
67. MARIKYAN D (2023)
68. MCCORMACK V (2024)
69. MEMON M (2020)
70. MOORTHY K (2019)
71. MUTTON T (2011)
72. NAGRO S (2019)
73. NG F (2022)
74. NIKOLOPOULOU K (2021)
75. OBILOR E (2023)
76. OGEGBO A (2024)
77. ONYANGO G (2017)
78. OSEI W (2023)
79. PILLAI R (2023)
80. POPENICI S (2017)
81. QU K (2024)
82. RAHMAN M (2023)
83. RICHARDSON M (2021)
84. RINGLE C (2015)
85. RINGLE C (2016)
86. RINGLE C (2023)
87. ROLL I (2016)
88. SALIFU I (2024)
89. SARSTEDT M (2017)
90. SARSTEDT M (2021)
91. SHMUELI G (2019)
92. SOP S (2024)
93. STEIL A (2020)
94. STRZELECKI A (2023)
95. SUHAIL F (2024)
96. TAMILMANI K (2019)
97. TAMILMANI K (2021)
98. TSENG T (2022)
99. VENKATESH V (2003)
100. VENKATESH V (2012)
101. WANG Y (2024)
102. WU W (2022)
103. YIDANA M (2023)
104. YIDANA M (2022)
105. YIDANA M (2023)
106. YUAN Z (2023)
107. ZARAGOZA A (2021)
108. ZHANG K (2021)
109. ZHOU L (2020)
|
10.1016_j.ebiom.2021.103617.txt
|
TITLE: Deiodinase-3 is a thyrostat to regulate podocyte homeostasis
AUTHORS:
- Agarwal, Shivangi
- Koh, Kwi Hye
- Tardi, Nicholas J.
- Chen, Chuang
- Dande, Ranadheer Reddy
- WerneckdeCastro, Joao Pedro
- Sudhini, Yashwanth Reddy
- Luongo, Cristina
- Salvatore, Domenico
- Samelko, Beata
- Altintas, Mehmet M.
- Mangos, Steve
- Bianco, Antonio
- Reiser, Jochen
ABSTRACT:
Background
Nephrotic syndrome (NS) is associated with kidney podocyte injury and may occur as part of thyroid autoimmunity such as Graves’ disease. Therefore, the present study was designed to ascertain if and how podocytes respond to and regulate the input of biologically active thyroid hormone (TH), 3,5,3′-triiodothyronine (T3); and also to decipher the pathophysiological role of type 3 deiodinase (D3), a membrane-bound selenoenzyme that inactivates TH, in kidney disease.
Methods
To study D3 function in healthy and injured (PAN, puromycin aminonucleoside and LPS, Lipopolysaccharide-mediated) podocytes, immunofluorescence, qPCR and podocyte-specific D3 knockout mouse were used. Surface plasmon resonance (SPR), co-immunoprecipitation and Proximity Ligation Assay (PLA) were used for the interaction studies.
Findings
Healthy podocytes expressed D3 as the predominant deiodinase isoform. Upon podocyte injury, levels of Dio3 transcript and D3 protein were dramatically reduced both in vitro and in the LPS mouse model of podocyte damage. D3 was no longer directed to the cell membrane, it accumulated in the Golgi and nucleus instead. Further, depleting D3 from the mouse podocytes resulted in foot process effacement and proteinuria. Treatment of mouse podocytes with T3 phenocopied the absence of D3 and elicited activation of αvβ3 integrin signaling, which led to podocyte injury. We also confirmed presence of an active thyroid stimulating hormone receptor (TSH-R) on mouse podocytes, engagement and activation of which resulted in podocyte injury.
Interpretation
The study provided a mechanistic insight into how D3-αvβ3 integrin interaction can minimize T3-dependent integrin activation, illustrating how D3 could act as a renoprotective thyrostat in podocytes. Further, injury caused by binding of TSH-R with TSH-R antibody, as found in patients with Graves’ disease, explained a plausible link between thyroid disorder and NS.
Funding
This work was supported by American Thyroid Association (ATA-2018-050.R1).
BODY:
Research in context Thyroid hormones (TH) are circulating iodinated signaling molecules that orchestrate physiological and developmental processes in nearly all cells. Deiodinase 3 (D3) is a membrane-bound catabolic enzyme that deactivates the bioactive TH, 3,5,3′-triiodothyronine (T3). Although deiodinase-mediated regulation of TH activity has been studied extensively in several tissues, how their anomalous expression, function or regulation impacts renal physiology is poorly understood. Dysregulation of TH has been associated with glomerular diseases and renal complications, indicating an obvious crosstalk between thyroid and kidneys. Additionally, other evidence points towards an overlap between them: (a) chronic kidney disease (CKD) has been characterized by a low T3 syndrome; (b) patients with thyroid cancer exhibit a genetic predisposition for the development of renal cell carcinomas (RCC), and vice versa; (c) thyroid gland assessment is recommended in subjects with idiopathic kidney disease (KD); and (d) drugs used to combat thyroid disorders or KDs display adverse effects on the other organ's functions. Despite these precedents, how TH signaling locally affects renal cells has remained underexplored. Therefore, the present study was undertaken to bridge this chasm in our understanding of how these two organs function synergistically. Added value of this study We demonstrate that healthy podocytes express the D3 isoform in abundance. However, when podocytes were injured, a dramatic reduction in Dio3 mRNA and protein levels was observed, accompanied by a loss of D3 activity. Concomitantly, translocation of D3 to the plasma membrane was compromised; the protein was instead found in nuclear and Golgi compartments. These effects were recapitulated in vivo. The lipopolysaccharide (LPS)-induced kidney injury mouse model exhibited a substantial decrease in glomerular Dio3 mRNA levels. Podocyte-specific D3 deletion led to severe proteinuria along with podocyte foot process effacement. Upon exposing mouse podocytes to excess T3, which mimics the absence of D3, we were able to activate the deleterious αvβ3 integrin signaling pathway. Our findings thus identified that D3 dysfunction or down-regulation, defined by reduced T3 deactivating capacity, enhanced local thyroid hormone action in podocytes, leading to an increased susceptibility to nephrotic injury via activation of the integrin signaling cascade. Additionally, D3 was shown to interact directly with αvβ3 integrin in vitro and in cells, probably as an alternative protective mechanism to block the integrin receptor from activation. To mechanistically couple thyroid hormone dysfunction with KDs, we also show that the thyroid stimulating hormone receptor (TSH-R) is not only expressed but is functionally active on the surface of mouse podocytes, activation of which induces injury. This explains how nephrotic syndrome (NS) is associated with thyroid malfunction as seen in Graves’ disease, an autoimmune disorder characterized by circulating antibodies directed against TSH-R. Implications of all the available evidence Our study unequivocally demonstrated a novel renoprotective role for D3 in podocytes and provided a missing link that integrated the thyroid-kidney axis. The study also provides a proof of concept that interjecting TH signaling in podocytes via deiodinases could be exploited as a potential therapeutic strategy to prevent or treat KDs.
Further, down-regulation of D3 expression could be used as a sensor/biomarker for assessing the progression and severity of KDs. Our data also advises clinicians to monitor TH levels in patients with NS. 1 Introduction Podocytes, terminally differentiated epithelial cells within the kidney glomerulus, play an instrumental role in the ultrafiltration process. Malfunction of the glomerular filtration barrier (GFB) is generally attributed to podocyte injury [1] . Podocytes are metabolically active and demand a large energy supply, and evidence suggests a correlation between mitochondrial dysfunction and podocytopathy [ 2 , 3 ]. Although podocytes were originally considered as mere structural components of the glomerulus, they have recently been shown to possess mechanisms that respond to both glomerular-derived and systemically circulating hormones and other humoral factors [4] . Here we explored a potential link between podocytes and thyroid hormones (TH), which are circulating iodinated signaling molecules that regulate physiological and developmental processes in virtually all cells [5] . In support of this, dysregulation of TH in the kidney has been associated with glomerular diseases and other renal complications [6–9] . While hyperthyroidism has been associated with increased renal blood flow and absorption capacity, hypothyroidism can be associated with thickening of the glomerular basement membrane (GBM) and reduced filtration rate, indicating a distinct overlap between the proper functioning of the thyroid gland and health of kidneys [7] . Additionally, patients with Graves’ disease, an autoimmune form of hyperthyroidism characterized by presence of thyroid stimulating autoantibodies directed against the thyroid stimulating hormone receptor (TSH-R) leading to receptor activation [10] , exhibit membranous glomerulonephritis with nephrotic syndrome (NS) [11] , and also some sporadic cases of membranoproliferative glomerulonephritis or minimal change disease have been reported [12] . On the other hand, hypothyroidism has been associated with NS, but only as a secondary outcome due to increased urinary loss of TH [13] . Since NS is more closely and causally linked with hyperthyroidism as its primary outcome, the present study was specifically designed to ascertain if and how podocytes respond to and regulate T3 input, the biologically active TH. TH signaling can be locally regulated by deiodinases [14] . Thyroxine (T4) secreted by the thyroid gland is a prohormone with minimal activity and is converted to T3 via either type 1 or 2 iodothyronine deiodinases (D1/D2) in extrathyroidal tissues [15] . Alternatively, the levels of T3 are reduced by type 3 iodothyronine deiodinase (D3; encoded by Dio3 ), an enzyme that irreversibly converts T3 to, 3,3´-diiodothyronine (T2). D3 also converts T4 to reverse T3 (rT3), an inactive molecule, dampening TH signal [ 16 , 17 ]. Thus, iodothyronine deiodinases can initiate or terminate thyroid hormone action locally, independent of changes in TH serum concentrations. Our study demonstrates that TH acts on podocytes via extranuclear non-canonical pathway. Accordingly, a podocyte-specific reduction in D3 activity locally enhanced TH signaling, increasing susceptibility to nephrotic injury. 2 Methods 2.1 Reagents Lipopolysaccharide (LPS) was procured from E. 
coli O111:B4 (LPSEB; Invitrogen, tlrl-eblps), collagen I from rat tail (Gibco, #A10483), puromycin aminonucleoside (Sigma-Aldrich, #P7130), 3,5,3′-triiodo-L-thyronine sodium salt (Sigma-Aldrich, #T6397) and 3,3′,5,5′-Tetraiodothyroacetic acid abbreviated as Tetrac (Sigma-Aldrich, #T3787) and Cyclo [Arg-Gly-Asp-D-Phe-Val] peptide (Enzo Life Sciences, BML-AM100-0001). Alexa Fluor™ 488 Phalloidin was from Invitrogen (A12379). Anti-TSH Receptor (TSH-R) (extracellular) Antibody (# ATR-006; RRID:AB_2341080) was from Alomone Labs. Recombinant mouse TSH alpha/beta Heterodimer protein (# 8885-TH-010/CF) was from R&D Systems. Mouse C57 thyroid tissue lysate was from Zyagen Labs (Fisher Scientific, #50-171-8790). 2.2 Cell culture Immortalized human podocytes were cultured at 37°C for 10–14 days for full differentiation as previously described [18] . The culture medium was RPMI-1640 medium (Gibco, 11875) enriched with 10% fetal bovine serum (FBS; Denville Scientific, FB5001-H), insulin, transferrin and selenium (10.0 μg/ml, 5.5 μg/ml, and 6.7 ng/ml, respectively) supplement (Gibco, 41400045), 100 U/ml penicillin, and 100 μg/ml streptomycin (Gibco, A15140). Immortalized mouse podocytes were cultured as described [19] . All tissue culture flasks were coated with collagen I prior to seeding mouse podocytes. Briefly, cells were cultured at 33°C for proliferation in RPMI-1640 medium containing 10% FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin supplemented with mouse recombinant interferon-γ (Cell Sciences, CR2041), at an initial concentration of 50 U/ml for the first 2 passages and then 20 U/ml for continuous passages. For differentiation, cells were thermoshifted to 37°C for 10–14 days without interferon-γ. Prior to drug treatment, podocytes were serum starved using charcoal stripped sterile FBS (Millipore, Sigma). 2.3 Reverse transcription and quantitative polymerase chain reaction (qPCR) assays Total RNA was isolated from cultured cells grown in three independent wells (biological replicates) using Trizol reagent (Invitrogen) or RNAeasy Mini Kit (Qiagen) following the manufacturer's instructions. The concentration and quality of RNA was determined spectrophotometrically by measuring absorbance values at 260 nm (A260) and 280 nm (A280) and evaluating A260/A280 ratios using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific). Mouse thyroid total RNA was purchased from Takara Bio (#636674). The cDNAs were synthesized using a High-Capacity cDNA Reverse Transcription Kit following standard methods (Applied Biosystems, 4368813) and stored at -20°C until further use. PCR reactions were performed in triplicates (technical replicates) using a CFX96 Real-Time System thermal cycler (Bio-Rad). For analysis, results were expressed as fold change using the gene expression levels normalized to gapdh mRNA levels, as indicated, using the 2 –ΔΔCt method. Dio2 was taken as the reference gene for calculating the fold-change. The following TaqMan gene probes were purchased from Thermo Fisher Scientific: mouse Dio1 (Mm00839358_m1), mouse Dio2 (Mm00515664_m1), mouse Dio3 (Mm00548953_s1), mouse Gapdh (Mn99999915_g1), human DIO3 (Hs00956431_s1), human GAPDH (Hs02758991_g1), mouse Pgc-1a (Mm01208835_m1), mouse Hr (Mm00498963_m1) and mouse Tshr (Mm00442027_m1). 2.4 Immunofluorescence microscopy Mouse or human podocytes were seeded onto glass coverslips (Marienfeld) at 20 × 10 3 cells/ml in 12-well plates and allowed to differentiate as described above. 
The cells were rinsed with ice-cold PBS and fixed with 4% PFA for 10 min at room temperature followed by permeabilization with 0.1% Triton X-100 for 5 min. After washing with PBS twice for 5 min each, the coverslips were incubated with blocking buffer containing 5% donkey serum (Sigma-Aldrich, D9663) for 1 h at room temperature. For immunofluorescence staining, cells were incubated with custom rabbit anti-human D3 (1:100; Novus Biologicals, NBP1-05767; RRID:AB_1556282) and/or mouse anti-human synaptopodin (1:300; D-9; Santa Cruz Biotechnology, sc-515842) antibodies at 4°C overnight. The cells were washed with cold PBS and incubated with the appropriate Alexa Fluor 488–labeled donkey anti-rabbit IgG (1:1000; Molecular Probes, A-21206; RRID:AB_2535792) and/or Alexa Fluor 647–labeled donkey anti-mouse IgG (1:1000; Molecular Probes, A-31571; RRID:AB_162542) secondary antibodies at room temperature for 1 h. Cells were stained with 0.1 μg/ml DAPI (Invitrogen, D1306) in PBS. Cells were then examined using an LSM 700 laser scanning fluorescence confocal microscope with ZEN software (Zeiss). 2.5 Flow cytometry After 12 days of differentiation, mouse podocytes were trypsinized and rinsed with PBS. In a final volume of 50 µl containing 2 × 10^5 cells, staining was performed with a 1:50 dilution of either rabbit anti-TSH-R-FITC conjugated antibody (Bioss Antibodies, BS-0460R-FITC; RRID:AB_11042712) or normal mouse IgG-FITC (Santa Cruz, sc-2856; RRID:AB_737238) for 1 h at 4°C. Cells were washed with FACS buffer (PBS with 0.5% BSA and 0.1% azide) twice at 1000 × g at 4°C. After the final wash, podocytes were resuspended in 50 µl PBS and fixed with 200 µl of 1.2% paraformaldehyde in PBS. The samples were read on a FACSCalibur (Becton Dickinson). 5000 events were collected in each case and data were analyzed using FlowJo software. 2.6 Generation of podocyte-specific D3 knockout (D3KO) mouse model Using well-established Cre-loxP technology, we inactivated D3 selectively in adult glomerular podocytes. Homozygous mice ( Dio3 fl/fl ), designated as D3-flox, were generated in the laboratory of Domenico Salvatore (Department of Public Health, University of Naples "Federico II," Naples, Italy) [20] and were generously donated by Antonio Bianco (Department of Medicine, University of Chicago). D3 contains a selenocysteine residue in its catalytic domain, which is encoded by an in-frame UGA stop codon. However, to override the stop codon and insert a selenocysteine instead, a cis-acting stem-loop sequence called the selenocysteine insertion sequence (SECIS) is required in the 3′ UTR of the mRNA [21] . Since the SECIS element is essential for the full-length translation of the D3 protein, this region was ablated in the Dio3 gene. To accomplish this, a plasmid was generated harboring floxed sites in the Dio3 locus that specifically flank the SECIS structure, located between nt 1,001 and nt 1,706 downstream of the ATG in the Dio3 mRNA. In the absence of the SECIS, the TGA codon within the D3 catalytic domain will be recognized as a stop codon, and protein translation will be terminated [20] . We crossed these D3-flox mice with a driver mouse strain expressing Cre recombinase under the podocyte-specific Nphs2 promoter (Pod-Cre mice) (obtained from Jackson Laboratory) to generate D3KO mice (Pod-Cre +/− ::Dio3 flox/flox ). All mice were crossed onto a pure C57BL/6J (B6) background.
In the presence of the Pod-Cre transgene, D3KO mice will have podocyte-specific excision of the floxed Dio3 SECIS, resulting in a null allele with termination at the UGA codon and no D3 catalytic activity. Podocin-Cre littermates served as controls. The expected recombination event occurred in the progeny, which was confirmed by tail-clip genotyping using CflipU and FflipL primers to confirm the lox sites and pod-cre-f and pod-cre-r primers to confirm the cre sites. Further, to validate the lack of D3 expression in the podocytes of D3KO mice, primary podocytes were isolated from glomeruli of 10-week-old D3KO mice or littermate controls as described below in section 2.7 . Western blot analysis was done using rabbit anti-D3 antibody (10 μg/ml, Novus Biologicals # NBP1-05767), and the relative density of D3 bands was calculated via ImageJ software, normalized to GAPDH, and compared to littermate controls (n=3 per group). 2.7 Isolation of primary podocytes Primary mouse podocytes were isolated using Dynabeads magnetic separation as described previously [22] with some modifications. Briefly, mice (male, wild-type (C57BL/6) or D3-flox (no Podocin-Cre), 8-10 weeks old) were anesthetized and perfused through the heart with 20 ml Hank's Balanced Salt Solution (HBSS) containing 8 × 10^7 M-450 Dynabeads (Invitrogen, 14013). Kidneys were harvested, minced into small pieces, and digested at 37°C for 30 min in HBSS buffer containing 1 mg/ml collagenase A (Sigma-Aldrich, C-6885) and 100 U/ml DNase I (New England Biolabs, M0303L). The digested tissue was then passed twice through a 100 μm nylon mesh (BD Biosciences, 352360) and washed with HBSS buffer, followed by isolation of the magnetic particles. Isolated glomeruli were then cultured for 5 days on dishes previously coated with collagen I. Cells were trypsinized, filtered using a 40-μm cell strainer (BD Biosciences, 352340), centrifuged and seeded on collagen I–coated dishes for sub-culturing. 2.8 LPS-induced proteinuric and hypothyroid mouse models Proteinuria was induced in mice as described previously [23] with some modifications. Mice were intraperitoneally injected with a single dose of LPS-EB at 2.5 mg/kg body weight followed by a single intraperitoneal injection of 150 µl of iopanoic acid (IOP) in 2% (v/v) ethanol solution (0.11 g/kg) [24] . Ethanol alone was used as a vehicle control. After 24 h, the kidneys were removed and processed for microscopy as described in subsequent sections. 2.9 Measurement of ACR levels Mouse urine samples were collected for the measurement of urinary albumin and creatinine using a mouse albumin ELISA kit (Bethyl Laboratories, E99-134) and a creatinine assay kit (Cayman Chemical, 500701), respectively, according to the instructions provided with each kit. The ratio of urinary albumin to creatinine (ACR, mg/g) was then calculated. 2.10 Electron microscopy (EM) Kidneys were extracted from the control and treated mice. Renal tissue was first fixed in PFA overnight at 4°C and post-fixed for 1 h in 1% osmium tetroxide (OsO4) on ice. Tissues were washed, dehydrated, and embedded in Embed 812 Resin (Electron Microscopy Sciences, 14120). Ultrathin kidney sections (70 nm) obtained on an EM UC7 Ultramicrotome (Leica) were placed on Formvar-coated nickel grids (EMS, FF-2010-Ni) and counterstained with 5% uranyl acetate and 0.1% lead citrate. EM micrographs were taken using a Sigma HD VP Electron Microscope (Zeiss).
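As a small illustration of the albumin-to-creatinine ratio described in Section 2.9 above, the sketch below converts hypothetical assay readouts into an ACR in mg/g; the concentration units chosen here are assumptions made for the example, not values taken from the kits.

# Illustrative ACR (mg albumin per g creatinine) calculation for Section 2.9.
# Assumed units for this example: albumin in ug/mL, creatinine in mg/dL.
def acr_mg_per_g(albumin_ug_per_ml, creatinine_mg_per_dl):
    albumin_mg_per_dl = albumin_ug_per_ml * 100.0 / 1000.0   # ug/mL -> ug/dL -> mg/dL
    creatinine_g_per_dl = creatinine_mg_per_dl / 1000.0      # mg/dL -> g/dL
    return albumin_mg_per_dl / creatinine_g_per_dl           # mg albumin per g creatinine

# Hypothetical mouse urine sample
print(acr_mg_per_g(albumin_ug_per_ml=150.0, creatinine_mg_per_dl=30.0))  # 500 mg/g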
Foot process effacement was quantified from transmission electron microscopy (TEM) micrographs of glomeruli as described previously [25] . Briefly, multiple capillary loops were imaged at 5000 × and glomerular basement membrane (GBM) length was measured for 10 different glomeruli from a minimum of 4 mice per condition using ImageJ software (version 1.52a; National Institutes of Health). To quantify effacement, secondary processes (i.e., foot processes, FPs) were tallied manually, and this count was divided by the length of GBM examined to calculate FPs per unit length (i.e., µm of GBM). 2.11 Western blotting Total cell lysates (10 μg) were prepared in RIPA buffer supplemented with Halt protease and phosphatase inhibitor cocktails (Thermo Fisher Scientific). For fractionated samples, lysates corresponding to at least 25-50 μg were used. A Plasma Membrane Protein Extraction Kit (Abcam, ab65400) was used for organelle fractionation and NE-PER™ Nuclear and Cytoplasmic Extraction Reagents were used for cellular fractionation, according to the manufacturer's protocols. The samples were separated on an SDS-PAGE gradient gel (NuPAGE 4–12% Bis-Tris, Invitrogen) followed by transfer to a nitrocellulose membrane (LI-COR Biosciences, 926-31092). Blots were blocked using TBS-based Odyssey blocking buffer (LI-COR Biosciences, 927-50000) or 5% blotting-grade skimmed milk (Bio-Rad) in TBST for 1 h at room temperature with constant shaking. Blots were incubated with primary antibodies [Cell Signaling Technology: total Src #2108 (RRID:AB_331137), phospho-Src (Y416) D49G4 #6943S (RRID:AB_10013641), total PKCα #2056T (RRID:AB_2284227), phospho-PKCα/β II (Thr638/641) #9375T (RRID:AB_2284224), total FAK #3285T (RRID:AB_2269034), phospho-FAK (Tyr397) #3283S (RRID:AB_2173659), phospho-Paxillin (Tyr118) #2541S (RRID:AB_2174466), total Paxillin (D9G12) #12065S (RRID:AB_2797814), total ERK1/2-HRP conjugated #4348 (RRID:AB_10693601), phospho-ERK1/2 (T202/Y204) #9101S (RRID:AB_331646); Abcam: phospho-Paxillin (Tyr 113) #ab32084 (RRID:AB_779033); Santa Cruz: p-PKCα (A11) #Sc-377565 (RRID:AB_2877652)] diluted in TBS-Tween (either with skimmed milk or BSA as recommended by the supplier) overnight at 4°C with gentle rotation. IRDye 800RD donkey anti-rabbit IgG (H+L) (LI-COR Biosciences, 926-32213; RRID AB_621848) and IRDye 800 donkey anti-mouse IgG (H+L) (LI-COR Biosciences, 926-32212; RRID AB_621847), diluted in TBST, were used as secondary antibodies for 1 h at room temperature. Protein detection and image capture were performed using an Odyssey CLx imaging system (LI-COR Biosciences). 2.12 Ultra-performance liquid chromatography (UPLC) In brief, 10-day-old differentiated mouse podocytes were placed in serum-free medium for 24 h prior to T3 (10^−7 M) treatment. Cells were then washed in PBS, harvested in buffer (PBS containing 2 mM EDTA, 1 mM DTT and protease inhibitor cocktail), pelleted and either analyzed right away or snap frozen and stored at −80°C until analysis. Cell pellets were processed in lysis buffer (PBS containing 1 mM EDTA, 10 mM DTT with 0.25 M sucrose) and the lysates were used for the D3 activity assay as described [26] . 2.13 Molecular cloning and purification of recombinant human D3 protein The gene encoding full-length human DIO3 was codon optimized and synthesized as a linear DNA strand (G-block, Integrated DNA Technologies) for subsequent cloning procedures.
This fragment was cloned in-frame with N-terminal GFP and 6x-His tags, after digestion with the restriction enzyme SspI, using Gibson cloning into a commercially available plasmid from Addgene (pET His6 GFP TEV LIC cloning vector [1GFP], plasmid 29663). Expression of GFP-tagged full-length D3 (1-304 aa), abbreviated as D3 FL , was induced in BL21(λDE3) cells at 25°C for 16 h in enriched terrific broth. Cell pellets were subjected to three cycles of repeated freeze-thaw followed by lysis using sonication in sonication buffer (25 mM HEPES, pH 8.0, 500 mM NaCl, 2.5 mM imidazole, 5% sucrose wt/vol, 0.1% Triton X-100), then spun at 35,000 rpm for 45 min. The soluble fraction was incubated with 4 ml of 50% nickel immobilized metal ion affinity chromatography resin (Qiagen) for 2 h at 4°C. The beads were washed extensively with 100 ml wash buffer (25 mM HEPES, pH 8.0, 500 mM NaCl, 2.5 mM imidazole, 5% sucrose wt/vol) and eluted with a gradient of 50-500 mM imidazole. Fractions containing the protein were pooled and dialyzed against PBS. The protein was determined to be >95% pure by SDS-PAGE analysis. Protein concentration was estimated using both a spectrophotometer and a BCA protein estimation kit. Protein was flash frozen in liquid nitrogen and stored at −80°C until further use. 2.14 Surface plasmon resonance (SPR) Protein interactions were measured and analyzed on a Biacore T200 instrument (GE Healthcare). Experiments were performed at 25°C. Briefly, to measure the binding affinities of the candidate analyte proteins [human integrin αvβ3 (R&D Systems, 3050-AV-050) and human uPAR (R&D Systems, 807-UK-100/CF)] to recombinant human D3 FL protein, the latter was immobilized onto the flow channel of a CM5 sensor chip using a standard amine coupling method. For immobilization, D3 FL was diluted in 10 mM sodium acetate, pH 4.5, and loaded onto the chip after activation of the sensor surface with 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) and N-hydroxysuccinimide (NHS), followed by blocking of the unoccupied surface area using 1 M ethanolamine. Integrin αvβ3 protein, in a series of increasing concentrations (i.e., 0-150 nM in 2-fold serial dilutions), was applied to the channels as an analyte at a flow rate of 25 μl/min. The running buffer for the binding experiments was 10 mM HEPES, 150 mM NaCl, 0.05% n-octyl-β-D-glucopyranoside, pH 7.1. To study the binding of analytes to activated integrin, 2 mM MnCl2 and 0.1 mM MgCl2 were added to the binding buffer. For inhibition, a combination of cRGDfv and Tetrac (15 μM each) was pre-incubated with increasing concentrations of αvβ3 integrin during sample preparation on a plate and injected following the same procedure as described above. Data were double-referenced with blank (ethanolamine) RU values on flow channel 1 and the zero-concentration analyte signal. Sensorgrams were analyzed using the Biacore T200 evaluation software 2.0.3, and response units (RU) were measured during the equilibration phase at each concentration for steady-state affinity fittings. Kinetic fittings were performed with the 1:1 Langmuir binding model embedded within the Biacore T200 evaluation software 2.0.3. 2.15 Co-immunoprecipitation Cultured HEK293T cells (procured from ATCC; RRID:CVCL_0063) were grown at 37°C with 5% CO2 in Dulbecco's Modified Eagle's medium (DMEM, Life Technologies) containing 10% fetal bovine serum, 100 U/ml penicillin and 1 μg/ml streptomycin, until 50-60% confluence was reached.
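The steady-state affinity fitting mentioned in Section 2.14 above follows a one-site binding isotherm, RU_eq = Rmax·C/(KD + C). The sketch below fits hypothetical equilibrium response units against analyte concentration; it is not the Biacore evaluation-software workflow, only an illustration of the underlying model, and all data points are invented.

# Illustration of a steady-state (equilibrium) SPR affinity fit (Section 2.14):
# RU_eq = Rmax * C / (KD + C). Concentrations and responses below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, rmax, kd_nM):
    return rmax * conc_nM / (kd_nM + conc_nM)

conc = np.array([0.0, 2.3, 4.7, 9.4, 18.75, 37.5, 75.0, 150.0])   # nM, 2-fold series
ru_eq = np.array([0.0, 11.0, 20.0, 33.0, 48.0, 62.0, 73.0, 81.0])  # hypothetical RU

popt, _ = curve_fit(one_site, conc, ru_eq, p0=[100.0, 20.0])
rmax_fit, kd_fit = popt
print(f"Rmax ~ {rmax_fit:.1f} RU, KD ~ {kd_fit:.1f} nM")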
Transfection of plasmids was carried out using FuGENE® HD Transfection Reagent (Promega) for 48 h according to the manufacturer's protocol. The plasmid used for transfection of DIO3 , NM_001362 (abbreviated as D3Cys), from GenScript (OHu24700D), contained a C-terminal FLAG tag and a TGA stop codon that is read as selenocysteine. This stop codon was replaced with a codon for cysteine (Cys) using site-directed mutagenesis for efficient translation in HEK-293T cells. The plasmid used for transfection of Integrin β3 (NM_000212.2) was procured from Sino Biologicals (HG10787-CM) and contained a C-terminal Myc tag. The transfected cells were harvested, washed twice with 1× PBS, and lysed in RIPA buffer supplemented with protease inhibitor (Roche). One percent of the cell lysate volume was loaded as the input to assess the presence of the desired proteins in each case. One mg of total cell protein was used per IP, and an isotype control mouse IgG antibody was used along with antibodies against FLAG (F1804, Sigma; RRID:AB_262044) and c-Myc (Origene, clone OTI3F2, formerly 3F2, TA500003, RRID:AB_2148581). Anti-FLAG M2 affinity beads (Sigma), anti-c-Myc agarose and protein A/G beads (Pierce, Thermo Fisher Scientific) were used to isolate protein-antibody complexes, which were then eluted by boiling the beads with 2× Laemmli reducing sample buffer (Invitrogen). Proteins were resolved on 4–12% Bis-Tris NuPAGE (Invitrogen) gels and transferred to nitrocellulose membranes (LI-COR Biosciences) for Western blotting. The interacting partners were detected by Western blotting using the appropriate antibodies and developed as described above using an Odyssey CLx imaging system (LI-COR Biosciences). 2.16 In vitro scratch assay An artificial gap, or 'scratch', was created on a confluent monolayer of differentiated mouse podocytes using a 200 µl plastic pipette tip. The detached cells were removed by washing the wells twice with 1× PBS, followed by the addition of fresh growth medium in the presence or absence of T3 (10^−7 M) or PAN (30 µg/ml). Brightfield microscopy images were taken immediately after the creation of the scratch and were considered as T0. At 24 and 48 h post treatment, images were captured again (T24 and T48). The area covered by migrating cells in each case was quantified using ImageJ software. The images were saved as 8-bit multi-page TIFF files before data analysis. Experiments were performed in triplicate. 2.17 In situ Proximity ligation assay (PLA) PLA was performed using the Duolink® in Situ Orange Starter Kit for Mouse/Rabbit antibody combination (Sigma Aldrich, DUO92102) according to the manufacturer's instructions. This kit included mouse and rabbit secondary antibodies with probes, blocking solution, wash buffers A and B, amplification solution, ligase solution and detection reagent. Human kidney biopsy sections were rinsed with ice-cold PBS and fixed with 4% PFA for 10 min at room temperature followed by permeabilization with 0.1% Triton X-100 for 5 min. After washing with PBS twice for 5 min each, the tissues were blocked for 1 h with agitation in the PLA blocking solution (Duolink®).
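For the scratch assay in Section 2.16 above, migration is commonly summarized as the percentage of the initial wound area that has closed at each time point. The sketch below shows that arithmetic on hypothetical ImageJ area measurements; both the numbers and the percent-closure formulation are assumptions made for illustration, not the study's recorded data.

# Illustrative percent-wound-closure calculation for the scratch assay (Section 2.16).
# Areas are hypothetical ImageJ measurements in pixel^2.
def percent_closure(area_t0, area_t):
    """Percentage of the initial wound area covered by migrating cells."""
    return 100.0 * (area_t0 - area_t) / area_t0

wound_areas = {"T0": 50_000, "T24": 31_000, "T48": 12_500}   # one hypothetical well
for tp in ("T24", "T48"):
    print(tp, f"{percent_closure(wound_areas['T0'], wound_areas[tp]):.1f}% closed")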
After the blocking step, slides were incubated with a combination of either (a) rabbit anti-D3 (10 μg/ml, Novus Biologicals # NBP1-05767) and mouse anti-αvβ3 (10 μg/ml, Abcam clone LM609 # ab190147) antibodies, or (b) 10 μg/ml each of normal rabbit IgG (Santa Cruz Biotechnology, sc-2027; RRID:AB_737197) and normal mouse IgG (Santa Cruz Biotechnology, sc-2025; RRID:AB_737182) as controls, diluted in Duolink® antibody dilution buffer, overnight at 4°C. This was followed by incubation with secondary antibodies conjugated with complementary oligonucleotide PLA probes PLUS (anti-rabbit PLUS: DUO92002; RRID:AB_2810940) and MINUS (anti-mouse MINUS: DUO92004; RRID:AB_2713942) at 37°C for 1 h. The ligation (30 min), amplification (100 min) and washing steps with Buffers A and B were performed exactly as recommended. All incubations were performed in a humidified chamber. The slides were stained with DAPI and mounted, and confocal images were captured using an LSM 700 laser scanning fluorescence confocal microscope equipped with ZEN software (Zeiss). Mean fluorescence intensity (MFI) was measured using ImageJ software (version 1.52a; NIH). Each disease group was normalized to healthy controls and is shown as fold change. 2.18 Measurement of cAMP levels Twelve-day-old differentiated and serum-starved mouse podocytes (1 × 10^6 per well in 6-well plates) were exposed to 50 nM TSH, normal rabbit IgG isotype or anti-TSH-R antibody (3 µg/ml each), or a known adenylate cyclase activator, Forskolin (50 µM) (Tocris Bioscience #1099), for 1 h. cAMP (pmol/ml) was measured in the culture supernatants using the Mouse/Rat cAMP Parameter Assay Kit from R&D Systems (#KGE012B), based on a competitive enzyme immunoassay, exactly according to the manufacturer's instructions. Briefly, 40 μL of 1N HCl was added to 200 μL of culture supernatant to inactivate the phosphodiesterases. The samples were incubated for 10 min at room temperature followed by neutralization with 40 μL of 1N NaOH. Samples were then diluted with 280 μL of Calibrator Diluent RD5-55 and assayed immediately. 2.19 Statistics Statistical analyses were performed using Prism 6.0 software (GraphPad). Data are presented as mean ± SEM. For comparison of two groups, statistical significance was evaluated using an unpaired two-tailed Student's t-test with the assumption that the data were normally distributed. All P values less than or equal to 0.05 were considered significant and are indicated in the text as appropriate: * P < 0.05, ** P < 0.01, *** P < 0.001, and **** P < 0.0001. 2.20 Human and animal subjects study approval All animal experiments were carried out according to the NIH Guide for the Care and Use of Laboratory Animals (National Academies Press, 2011) and approved by the Institutional Animal Care and Use Committee (IACUC#18-049) at Rush University (Chicago, Illinois, USA). Human kidney biopsy sections from healthy donors and patients with FSGS, DN and MCD were procured after informed consent, in accordance with the guidelines on human research and with approval of the Institutional Review Board (#14051401-IRB01) of Rush University Medical Center (Chicago, Illinois, USA). Role of funding source This work was supported by the American Thyroid Association (ATA-2018-050.R1). The funding agency had no role in the conceptualization or design of this study; in the collection, analysis, or interpretation of the data; or in the writing of the manuscript or the decision to submit it for publication.
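To illustrate the two-group comparison described in Section 2.19 above, the sketch below runs an unpaired two-tailed Student's t-test on hypothetical replicate values with SciPy; it reproduces the type of test used, not the actual study data.

# Illustration of the unpaired two-tailed Student's t-test from Section 2.19.
# The replicate values are hypothetical, not experimental data.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.92, 1.08])   # e.g., normalized fold changes, n = 3
treated = np.array([0.41, 0.35, 0.52])

t_stat, p_value = stats.ttest_ind(control, treated)   # two-sided, equal variances by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")          # p <= 0.05 -> considered significant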
3 Results 3.1 Injured podocytes display reduced Dio3 expression To understand how TH effects are regulated in podocytes, we first sought to establish the relative expression levels of the 3 different deiodinases that function to either activate (D1 or D2) or inactivate (D3) TH ( Fig. 1 a). The qPCR data revealed that the Dio3 transcript was the most abundant in cultured mouse podocytes (>20-fold compared to Dio2 ), whereas only minimal expression of Dio1 could be found ( Fig. 1 b). We next sought to assess the Dio3 mRNA profile after podocytes were exposed to stress-inducing conditions. Cultured mouse and human podocytes were treated with puromycin aminonucleoside (PAN) ( Fig. 1 c) or LPS ( Fig. 1 d), respectively, both of which reduced Dio3 mRNA levels. These findings were in accordance with publicly available data in the Nephroseq database (The Regents of the University of Michigan, Ann Arbor, MI), which show that DIO3 mRNA levels were reduced in the glomerular and tubulointerstitial compartments of patients with chronic kidney disease such as FSGS, in addition to those who had acute kidney rejection after transplantation ( Fig. 1 e). 3.2 The subcellular localization of D3 and its activity are altered upon injury to podocytes To evaluate the consequence of stress-inducing conditions on cell membrane D3 expression, mouse podocytes were injured using PAN and analyzed for D3 subcellular localization. Notably, 24 h post-PAN treatment, confocal micrographs showed a dramatic reduction of D3 expression at the plasma membrane of injured podocytes ( Fig. 1 f, left panel). An optical section through the nuclear plane further revealed that D3 expression was compartmentalized and enhanced in the nucleus and perinuclear regions (PNR) ( Fig. 1 f, right panel). Moreover, the co-localization of D3 with cortical F-actin, which was evident in the untreated podocytes, was lost upon PAN treatment ( Fig. 1 f, right panel). Quantification of the fluorescence intensities revealed that the subcellular localization of D3 switched from the plasma membrane to the nucleus after PAN treatment. In PBS-treated control cells, 22±5% of the D3 signal was found in the nucleus. This percentage increased to 82±8% in podocytes treated with PAN ( Fig. 1 g). Western blots performed on the fractionated cellular compartments also showed enhanced D3 levels in the nuclear and organelle compartments after PAN treatment, with a concomitant loss of total D3 and a reduction in the plasma membrane fraction ( Fig. 1 h). To confirm that this subcellular redistribution also occurs in cultured human podocytes, D3 localization was assessed after LPS treatment. Confocal images showed that LPS-treated human podocytes also displayed a loss of plasma membrane D3 and an increased concentration of the D3 signal in the nucleus and Golgi, as indicated by co-localization with the Golgi marker GM-130 (Supplementary Fig. 1). To assess whether nuclear accumulation of D3 in PAN-injured podocytes is accompanied by reduced transcription of T3-responsive genes, we performed qPCR to detect the mRNA levels of peroxisome proliferator-activated receptor-γ coactivator-1α (Pgc-1α) and Hairless (Hr) . As anticipated, transcript levels of Pgc-1α and Hr were decreased in PAN-treated mouse podocytes compared to the untreated cells ( Fig. 1 i), indicating that nuclear D3 could degrade the residual T3 in the nucleus, dampening the transcription of genes positively regulated by T3.
Next, we studied D3 activity (i.e., conversion of T3 to T2) in the cell lysates of PAN-treated mouse podocytes by adding 125I-labeled T3 and measuring deiodination products via ultra-performance liquid chromatography (UPLC). Indeed, D3 activity in PAN-injured podocytes dropped markedly ( Fig. 1 j). 3.3 TH signaling modulates cytoskeletal organization and D3 subcellular distribution Next, we wished to bypass D3 and increase T3 signaling in podocytes by exposing mouse podocytes to 100 nM T3 while the cells were kept in 2% charcoal-stripped FBS. Similar to PAN treatment, exposure to T3 for 48 h was sufficient to cause phenotypic changes in podocytes, as evident in the confocal micrographs of phalloidin-stained cells. There was a loss of F-actin polarity ( Fig. 2 a) and an increase in cell size ( Fig. 2 b). T3 also reduced plasma membrane D3 while increasing it in the cell nucleus ( Fig. 2 a). Our data suggest that increasing TH signaling in podocytes elicits a cellular response that resembles PAN treatment, i.e., redirection of D3 to the cell nucleus and loss of cytoskeletal organization. 3.4 T3 activates integrin signaling in podocytes Recently, αvβ3 integrin has been identified as a receptor for T3 [27] . Therefore, we treated mouse podocytes with T3 to test whether TH action produces downstream signaling via the αvβ3 integrin receptor. Indeed, incubation with T3 caused an increase (1.5-2-fold) in the phosphorylation of FAK at Tyr397, PKC-α at Ser634 and Thr638/641, paxillin at Tyr118 and Tyr113, and Src kinase at Tyr416 ( Fig. 2 c, d). There are two T3-binding sites on αvβ3 integrin, Site-1 and Site-2. Although both sites are sensitive to inhibition by Tetrac, Site-1 can be blocked only by the cRGDfv peptide, an αvβ3 integrin inhibitor [28] . Consistent with this, we were initially unable to block T3-mediated activation of αvβ3 integrin signaling using only cRGDfv; however, addition of cRGDfv and Tetrac together for 2 h prior to T3 exposure diminished the activation of the αvβ3 integrin-mediated downstream signaling factors ( Fig. 2 e). An early event following integrin activation on podocytes is increased motility [4] . We therefore measured the podocyte migration rate upon T3 treatment. For this, a scratch assay was employed in which a mouse podocyte monolayer was vertically scratched using a pipette tip and then treated with either T3 or PAN, a known promoter of podocyte motility [29] . Both T3- and PAN-treated podocytes migrated twice as fast as the untreated controls ( Fig. 2 f, g). Previous studies have shown that T3 induced phosphorylation of ERK1/2 via αvβ3 integrin, followed by cellular proliferation and migration, in several cancer cell lines [30] . We sought to determine if a similar mechanism was acting in T3-treated podocytes. We observed phosphorylation of ERK1/2 in mouse podocytes upon T3 treatment as early as 20 min ( Fig. 2 h, i), which was inhibited by adding a combination of both cRGDfv and Tetrac. This suggests that the αvβ3 integrin receptor expressed on podocytes may play a pivotal role in responding to and integrating T3 signaling, effectively acting as a focal point for crosstalk in the thyroid-kidney axis. 3.5 D3 interacts with the αvβ3 integrin receptor The presence of D3 in the plasma membrane raises the possibility that it could interact with αvβ3 integrin. To test this hypothesis, FLAG-tagged CysD3 and Myc-tagged β3 integrin were transiently co-expressed in HEK-293T cells. Indeed, immunoprecipitation using anti-FLAG antibody pulled down both D3 and β3 integrin ( Fig.
3 a, left panel). In a reciprocal experiment, the pull down of Myc-tagged β3 integrin with anti-Myc antibody was able to immunoprecipitate FLAG-tagged D3 ( Fig. 3 b). This interaction seems to be highly specific, given that we were not able to pull down FLAG-tagged CysD3 using anti-Myc antibody from cells co-transfected with FLAG/Myc-tagged β1 integrin and CysD3 ( Fig. 3 a, right panel). To further test the direct binding of D3 with αvβ3 integrin and to measure the strength of this interaction, we used Surface Plasmon Resonance (SPR). Full-length recombinant D3 (D3 FL ) was bacterially expressed, purified, and used for the analysis ( Fig. 3 c). The sensorgrams revealed that D3 FL could directly bind αvβ3 integrin with a KD of 16 nM ( Fig. 3 d). Addition of Mn2+, a potent integrin activator, did not substantially alter the binding affinity (KD of 45 nM) between the two proteins ( Fig. 3 e). We also tested if the binding between D3 and αvβ3 integrin could be competitively inhibited by cRGDfv and/or Tetrac. Remarkably, pre-incubation of αvβ3 with cRGDfv alone resulted in a marked reduction in the binding affinity of the receptor for D3 FL (KD = 2107 nM) ( Fig. 3 f). However, a combination of cRGDfv with Tetrac almost abolished binding with D3 FL (KD > 2 mM) ( Fig. 3 g), indicating that the interaction between D3 FL and αvβ3 integrin is quite specific. Neither cRGDfv alone nor cRGDfv in combination with Tetrac was able to bind to the immobilized D3 FL (Supplementary Fig. 2). It is noteworthy that the SPR data are in consonance with our Western blot data in Figure 2 , wherein complete inhibition of integrin signaling via T3 is observed only when both inhibitors are used. In agreement with the SPR data, co-immunoprecipitation of Myc-tagged β3 integrin by FLAG-tagged D3 was also compromised at least 2-fold when the plasmids were transfected in the presence of both inhibitors ( Fig. 3 h). Taken together, our data strongly suggest that D3 interacts with αvβ3 integrin. To demonstrate a viable in situ interaction between D3 and αvβ3 integrin in healthy podocytes and its loss in diseased conditions, a Proximity Ligation Assay (PLA) was used on human kidney biopsies from healthy individuals and from patients with FSGS, DN or MCD. While the tissue from healthy donors showed orange punctate fluorescence indicating a positive interaction between D3 and αvβ3 integrin, the tissues obtained from diseased samples showed a significant reduction in the fluorescence intensities ( Fig. 3 i). Faint or no signal from the IgG isotype control antibodies argued against non-specific binding. Since the urokinase receptor (uPAR) has been shown to bind to and activate podocyte β3 integrins, causing outside-in activation of the FP microfilament system, podocyte FP motility, and thus their effacement [31] , we wanted to test if D3 could bind to human uPAR using SPR. The rationale was that if D3 could form a complex with uPAR, this bipartite unit might bind to and activate αvβ3 integrin more effectively than the individual proteins, thereby augmenting integrin-mediated podocytopathies. However, no appreciable binding between D3 and uPAR was observed (KD ≥ 2 mM; data not shown). 3.6 D3 expression is reduced in the glomeruli of LPS injected mice To determine if the phenotypes observed in injured cultured mouse podocytes can be recapitulated in vivo , we induced acute kidney injury in mice by LPS injection.
Mice were challenged with LPS for 24 h and their glomeruli were isolated and stained for D3 and synaptopodin. Consistent with our in vitro data, confocal analysis clearly showed a reduction of D3 staining in the glomeruli of LPS-treated mice ( Fig. 4 a). Quantification of the fluorescent signals revealed that D3 levels in LPS-treated mice declined by at least 2-fold compared to the untreated controls ( Fig. 4 b). The qPCR analysis further corroborated the confocal data and showed a significant decline in the Dio3 mRNA levels after 24 or 72 h of LPS treatment, as compared to the vehicle treated mice ( Fig. 4 c). 3.7 Podocyte-specific D3 knockout (D3KO) mice display severe foot process effacement and proteinuria Our data clearly show that Dio3 transcript levels, protein expression levels and D3 enzymatic activity decrease in mouse podocytes when exposed to an external assault. This suggests that reduced D3 levels correlate with a diseased phenotype and that its absence may have an adverse impact on glomerular filtration functions carried out by podocytes in kidney tissues. However, to test this hypothesis and to directly gauge the contribution of D3 to podocyte integrity, we generated a podocyte-specific D3 knockout (D3KO) mouse model (Supplementary Fig. 3). Proteinuria, determined by establishing the albumin to creatinine ratio (ACR), was measured in both control and D3KO mice after treatment with a low dose of LPS to induce kidney damage. Elevated ACR was observed in LPS-treated D3KO mice as compared to the LPS-treated WT mice, indicating that the absence of D3 expression on podocytes leads to an increased susceptibility for the development of kidney disease ( Fig. 4 d). The glomerular capillary loops of LPS-treated WT control and D3KO mice were visualized using transmission electron microscopy (TEM). While the control group showed mild loss of podocyte foot processes (FPs) following LPS treatment, the D3KO group exhibited distinct FP effacement ( Fig. 4 e). Quantification of the FPs demonstrated a marked reduction in the number of FPs in the latter case ( Fig. 4 f). Finally, to ascertain if inhibition of deiodinases globally would impact the podocytes locally, we used iopanoic acid (IOP), which is a potent pan-deiodinase inhibitor [32] , and examined podocyte morphology. Ultrastructural analysis of the glomerular capillary loops from IOP-treated mice using TEM in combination with image quantification indeed showed a significant reduction in the number of FPs and profound effacement, when compared to the vehicle-treated control mice (Supplementary Fig. 4). While IOP also inhibits other deiodinases unbiasedly, we are confident that the effects observed are mediated via D3 inhibition given that the expression of D1 and D2 in podocytes is negligible. 3.8 Podocytes express TSH receptor (TSH-R) and its activation causes podocyte injury A common association of nephrotic syndrome (NS) with thyroid malfunction is seen in Graves’ disease, which is an autoimmune disorder caused by circulating autoantibodies against TSH-R, leading to overproduction of thyroid hormones. Therefore, we wanted to investigate if podocytes express TSH-R and if so, were they susceptible to anti-TSH-R antibody-mediated injury. Notably, our qPCR data revealed expression of Tshr mRNA in mouse podocytes, albeit lesser than what was observed in mouse thyroid ( Fig. 5 a). 
Further, immunoblotting of lysates from mouse podocytes and mouse thyroid tissue (used as a positive control), with anti-TSH-R antibody identified the presence of TSH-R in both glycosylated (holoreceptor, >100 kDa) and non-glycosylated (84 kDa) forms; whilst no bands were detected in K562 cell lysate, taken as a negative control ( Fig. 5 b). Additionally, we also performed flow cytometry to validate TSH-R expression on cultured mouse podocytes. A clear shift was observed for the FITC signal on the X-axis along with a ∼2-fold increase in the mean fluorescence intensity (MFI) when podocytes were incubated with FITC-conjugated TSH-R antibody, compared to the isotype control which overlapped with the unstained podocytes ( Fig. 5 c and inset). Although, the results clearly indicate expression of TSH-R, whether this receptor is functionally active on the surface of podocytes and if the engagement of TSH-R with its ligand can provoke downstream signaling in podocytes was yet to be ascertained. Since it is known that the TSH-R, in addition to its natural ligand [thyroid stimulating hormone (TSH)], can also be activated by TSH-R autoantibodies found in patients with Graves’ disease [ 33 , 34 ], mouse podocytes were exposed to both TSH and anti-TSH-R antibody. We expected TSH-R antibody to function as a clinically relevant TSH-R agonist. Interestingly, upon stimulation of TSH-R either directly by TSH or indirectly by TSH-R antibody, there were clear signs of cellular injury. α-TSH-R- ( Fig. 5 d, f) or TSH- ( Fig. 5 e, g) treated mouse podocytes exhibited changes in cell shape, F-actin depolarization and reduction in cell size; akin to the morphological changes seen in PAN-induced injury ( Fig. 1 ). A known outcome of TSH-R activation is robust cAMP induction [ 35 , 36 ]. Indeed, as shown in Fig. 5 h, ∼2-fold increase in the cAMP was observed in the culture supernatant of podocytes treated with TSH in comparison to the untreated podocytes. As anticipated, cAMP levels were substantially higher in mouse podocytes exposed to Forskolin, a known inducer of cAMP. Intriguingly enough, we did not observe cAMP induction in TSH-R treated podocytes, and the levels were comparable to the podocytes treated with a non-specific antibody (isotype control) or resting podocytes ( Fig. 5 h). Similarly, injury caused by TSH ( Fig. 5 i) but not TSH-R (data not shown) was accompanied by a reduction in the Dio3 mRNA levels. 4 Discussion There is strong precedence for a functional intersection between thyroid and kidney complications [ 37 , 38 ]. Our data support a concept of podocytes being regulated by a cellular TH signaling pathway with implications in kidney diseases. The classical understanding of the primary role for D3 is to minimize TH signaling and possibly reduce cellular energy demands, especially during disease states. D3 is predominantly expressed during embryonic life, but it is also found in the placenta, where it is linked to limiting the transfer of maternal TH to the fetus [39–41] . Interestingly, fully differentiated podocytes of diabetics undergo a reversion to a state that more closely resembles the early embryonic kidney [42] . These “ fetal-like ” changes in diabetic podocytes (characterized by cytoskeletal rearrangement, increased expression of fetal/mesenchymal markers, de-differentiation, maladaptive cell cycle induction/arrest, and hypertrophy) were attributed to high D3 expression with low levels of T3 within the podocyte. 
Similarly, in most reported models of tissue disease or injury, such as inflammation, liver regeneration, cardiac hypertrophy and infarct, and cancer, D3 levels are generally found to be elevated [43–45] , presumably to impede or slow down cellular metabolism. However, our data showing elevated amounts of D3 in healthy podocytes and its reduction in diseased/injured podocytes are the opposite of what one would anticipate, challenging the existing paradigm. We surmise that D3 regulation in podocytes is clearly distinct from that in other cell types. This is indeed substantiated by the reduced DIO3 mRNA levels in diseased human kidneys. Further, high D3 expression can also be found in healthy adult cells such as neurons and keratinocytes [24] . D3 is an integral plasma membrane protein that has been shown to rapidly undergo clathrin-mediated endocytosis followed by accumulation in early endosomes, from where it recycles back to the plasma membrane; a phenomenon that is thought to facilitate its longer half-life (12 h) compared to the other, short-lived isoforms, D1 and D2 [46] . The present data demonstrate that exposure to either PAN, LPS or T3 leads to cellular D3 redistribution in podocytes, i.e., accumulation of the enzyme in the Golgi apparatus and cell nucleus. These findings are reminiscent of the redistribution of D3 from the plasma membrane to the cell nucleus in hypoxic cultured neurons [47] and in the ischemic rat cerebral cortex [48] . One question that should be explored is whether the redistribution of D3 in podocytes observed under stressful conditions involves internalization of D3, or truncation of Golgi export to the plasma membrane and redirection of newly synthesized D3 to the cell nucleus. Studies performed in hypoxic neurons support the latter mechanism [47] . Nonetheless, D3 apparently joins the expanding list of podocyte cell membrane proteins that undergo changes in subcellular localization, like the crucial slit diaphragm protein Nephrin and several integrins [49] . It is not surprising that these important structural and regulatory proteins present on the cell membrane would require tight regulation to cope with the rigorous flux of metabolites while the cell performs fundamental filtration functions. Exactly why the subcellular distribution of D3 is so dramatically affected under stressful conditions, and what this accomplishes, can be addressed by considering that TH receptors are located in the cell nucleus and that D3 relocation to the nucleus has been shown to limit TH signaling [47] . In fact, nuclear accumulation of D3 in response to PAN-induced injury did decrease the transcription of two T3-responsive genes ( Pgc-1α and Hr ), indicating that D3 translocation to the nucleus upon external injury provides podocytes with an adaptive compensatory mechanism that allows for a rapid, localized inactivation of T3, thereby reducing T3-induced gene transcription. Our study thus illustrates how D3 safeguards podocytes from exhaustion and death by averting futile energy expenditure on gene transcription ( Fig. 6 ). The classical, well-studied and characterized genomic actions of THs are carried out by the binding of T3 to high-affinity nuclear TH receptors (TRs), which recognize specific TH-response elements (TREs) on target genes and activate or repress transcription in response to T3. However, there is evidence showing that TH induces non-genomic (extranuclear) events independent of TRs.
Since our previous work has established the importance of integrin signaling in podocyte health, we hypothesized that T3 signaling could lead to the activation of αvβ3 integrin, too much of which is known to result in podocyte damage. The present study indicates that the presence of D3 on the podocyte membrane also modulates this novel pathway of T3 signaling. We envisioned a model that functions in several interconnected ways: (a) D3 in the healthy podocyte cell membrane inactivates T3 to dampen TH signaling. (b) D3 could also mitigate T3-mediated signaling in healthy podocytes by directly interacting with the αvβ3 integrin receptor and interfering with its activation. However, the absence of D3 on the plasma membrane, as seen during injury or stress and when T3 levels are in excess, leads to enhanced T3-αvβ3 integrin interaction and subsequent signaling through the αvβ3 integrin receptor, culminating in podocytopathy. Having said that, we do acknowledge that the effects observed upon T3 addition to podocytes may not be solely due to activation of the αvβ3 integrin-mediated non-genomic branch. It is certainly possible that nuclear or genomic signaling, which is initiated upon T3 binding to the TRs in the nucleus, might have contributed to some of the observed T3 outcomes. TH-mediated induction of a proliferative signaling pathway in an otherwise differentiated podocyte is quite deleterious for its critical cellular functions. The ability of TH-stimulated podocytes to migrate faster could eventually translate into an inability to adhere properly to the GBM. Thus, we speculate that the enhanced motility of podocytes observed in our in vitro model of hyperthyroidism could lead to podocyte detachment (loss of number) in addition to podocyte effacement (structural and functional loss). It is expected that by preventing or reducing some of the events that increase podocyte motility, it may be possible to prevent podocyte effacement and thus limit glomerular injury. Thus, although the pathways that affect podocyte physiology warrant comprehensive investigation, the knowledge gained from these investigations may prove fruitful for future therapeutic interventions. In their entirety, our findings cogently demonstrate that TH feeds a metabolic input into αvβ3 integrin on podocytes and that D3 acts as a thyrostat by providing a mechanism whereby TH-activated integrin signaling can be modulated to offer renoprotection to podocytes ( Fig. 6 ). Our data support a model wherein D3 interferes with the binding of αvβ3 integrin to T3. Under healthy conditions, high D3 levels not only keep local T3 levels low (checkpoint 1) but also occupy binding sites on the integrin receptor (checkpoint 2) and interfere with activation by T3. When D3 levels at the plasma membrane are reduced during periods of stress or injury, D3-αvβ3 integrin coupling is relieved. In this situation, the integrin receptor is free to bind T3, leading to the rapid onset of T3-mediated αvβ3 integrin signaling pathway activation. Finally, our study attempts to delineate the plausible mechanisms that lead to podocyte damage and NS in hyperthyroid Graves' disease, which is caused by abnormal activation of the TSH-R. We demonstrate for the first time that podocytes express an active and functional TSH-R and that its activation is detrimental to podocytes.
In support of our findings, there exists a plethora of evidence in favor of extra-thyroidal expression of TSH-R and the ubiquitous presence of this receptor is becoming widely known [50] . Remarkably, renal expression of two thyroid-specific genes, TSH-R and thyroglobulin (Tg) has been reported [51] . Further, Dutton et al had confirmed TSH-R expression in human renal tubular cells with a prominence in distal tubules and collecting ducts [52] . Thus, TSH-R clearly has other moonlighting functions to perform besides controlling thyroid hormone production. Indeed, it will be quite intriguing to explore the function and regulation of TSH-R in human kidneys, particularly in podocytes. Signaling from the constitutively active TSH-R is augmented by its binding to TSH itself or by stimulating autoantibodies to the TSH-R. However, engagement of TSH-R by different ligands and different antibodies (stimulating, blocking or neutral) has been shown to contribute to thyroid pathology in a way that is quite distinct from its cognate ligand, TSH [53] . Thus, in the light of this, we can explain why activation of TSH-R by TSH resulted in a cAMP spike and a reduced Dio3 transcript, but activation of the same receptor by TSH-R antibody yielded different results. From these illuminating observations, we can envisage that although both TSH-R-antibody and TSH induce podocyte injury at the morphological/structural level, they have a unique signaling imprint at the TSH-R. Further, cAMP induction is known to confer protection to podocytes against injury [54] . Interestingly, we observed that despite cAMP induction upon TSH-TSH-R engagement, podocytes underwent rounding and loss of actin cytoskeleton. Since our aim was to show that the TSH-R on podocytes is responsive to TSH and is thus functionally viable, we only looked at one aspect of TSH-TSH-R coupling, which is cAMP production. However, it is possible that the receptor-ligand engagement might have led to the parallel activation of other more dramatic pathological pathways, which potentially outweighed the protective effects of cAMP. In support of our explanation, a recent study by Yang et al [55] demonstrates that TSH binding to extrathyroidal TSH-Rs exerts different cell type-specific effects [ 56 , 57 ]. Nevertheless, to the best of our knowledge, this novel study is a step forward in unraveling one of the many missing links in the regulation of thyroid-kidney axis. Contributors NJT, CC, AB and JR conceptualized the study; NJT and JR secured the research funding, SA, NJT and KHK designed and executed majority of the experiments; KHK carried out the SPR, analyzed and interpreted the data, prepared the figure and provided valuable suggestions; RRD performed the ACR ELISA; JPW did the UPLC; YRS helped in purification of the recombinant proteins at Northwestern University; CL and DS generated the Flox-D3 mouse; BS provided her expertise in performing animal experiments; MMA and SM supervised the research and provided intellectual inputs; SA drafted the original manuscript and with contributions from NJT, MMA, SM, AB and JR, reviewed, edited and prepared the final version. SA, KHK and NJT are the guarantors of this work and had full access to all the data. SA, KHK, NJT, AB and JR have verified the underlying data and take responsibility for the integrity and accuracy of the data analysis. Further, all authors have read and approved the final version of the manuscript. 
Declaration of Competing Interest Jochen Reiser has patents on novel strategies for kidney therapeutics and stands to gain royalties from their commercialization. He is the co-founder of Walden Biosciences (Cambridge, MA, USA), a biotechnology company in which he has financial interest, including stock. Antonio Bianco is a consultant for Synthonics, Allergan, Abbvie, and BLA Technology. Other authors have nothing to disclose and there are no competing or conflicting interests. Acknowledgements We thank all the laboratory members for helpful discussions. Funding support from the American Thyroid Association (ATA) to NJT (ATA-2018-050.R1) is appreciated. Authors thank Dr. Dileep Varma (Department of Cell and Developmental Biology) at Northwestern University for providing the facility for generation of recombinant proteins for SPR. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.ebiom.2021.103617 .
REFERENCES:
1. REISER J (2016)
2. IMASAWA T (2013)
3. MULLERDEILE J (2014)
4. REISER J (2013)
5.
6. MARIANI L (2012)
7. BASU G (2012)
8. BRAUNLICH H (1984)
9. KAPTEIN E (1986)
10. WEETMAN A (2000)
11. ESTEVESIMO V (2008)
12. HASNAIN W (2011)
13. CHADHA V (1999)
14. KOHRLE J (2000)
15. BIANCO A (2019)
16. DENTICE M (2011)
17. HERNANDEZ A (2005)
18.
19. MUNDEL P (1997)
20. DENTICE M (2014)
21. BERRY M (1991)
22. TAKEMOTO M (2002)
23. HAHM E (2017)
24. HUANG M (2011)
25. LEE H (2017)
26. SIMONIDES W (2008)
27. BERGH J (2005)
28. DAVIS P (2011)
29. REISER J (2004)
30. SHINDERMANMAMAN E (2016)
31. WEI C (2011)
32. RAMOSDIAS J (1999)
33. GINSBERG J (1983)
34. LIBERT F (1989)
35. BOUTIN A (2020)
36. NEUMANN S (2010)
37. CHONCHOL M (2008)
38. IGLESIAS P (2009)
39. GALTON V (1988)
40. HUANG H (1999)
41. BERRY D (1998)
42. BENEDETTI V (2019)
43. BOELEN A (2005)
44. DENTICE M (2012)
45. KESTER M (2009)
46. BAQUI M (2003)
47. JO S (2012)
48. FREITAS B (2010)
49. INOUE K (2015)
50. DAVIES T (2019)
51. SELLITTI D (2000)
52. DUTTON C (1997)
53. MORSHED S (2009)
54. LI X (2014)
55. YANG C (2021)
56. KLEIN J (2003)
57. ZHANG W (2009)
|
10.1016_j.bj.2021.05.002.txt
|
TITLE: Long non-coding RNA LINC01559 serves as a competing endogenous RNA accelerating triple-negative breast cancer progression
AUTHORS:
- Yang, Xue
- Yang, Yunqing
- Qian, Xueke
- Xu, Xiaodong
- Lv, Pengwei
ABSTRACT:
Background
Long non-coding RNA (lncRNA) is an endogenous RNA over 200 nt in length involved in gene regulation. LINC01559 is a novel lncRNA that has been identified as a fundamental player in human cancer. However, its role in triple-negative breast cancer (TNBC) remains unknown. Here, we explored the expression, function and clinical implication of LINC01559 in TNBC.
Methods
RNA expression was detected by qRT-PCR analysis. Cell Counting Kit-8 (CCK-8), 5-Ethynyl-2′-deoxyuridine (EdU), wound healing and Transwell assays were used to test cell viability, DNA synthesis rate, migration and invasion, respectively. The competing endogenous RNA (ceRNA) axis involved in LINC01559 was determined by RNA pull-down and luciferase reporter assays. The xenograft model was used to verify the function of LINC01559 in vivo.
Results
LINC01559 was significantly increased in TNBC tissues as compared to matched normal tissues, which was due to high levels of H3K4Me3 and H3K27Ac in the promoter region. Knockdown of LINC01559 inhibited TNBC cell proliferation, migration and invasion in vitro, and also retarded tumor growth and reduced lung metastasis in vivo. Mechanistically, LINC01559 served as a ceRNA that sponged miR-370-3p, miR-485-5p and miR-940, thereby increasing the expression of a cohort of oncogenes and accelerating TNBC progression.
Conclusions
Our data provide a comprehensive analysis of LINC01559 in TNBC; we found that LINC01559 functions as a carcinogenic ceRNA by sponging miRNAs. Targeting LINC01559 may be a potential treatment for TNBC patients.
BODY:
At a glance of commentary Scientific background on the subject LINC01559 is a novel lncRNA that has been identified as a fundamental player in human cancer. However, its role in triple-negative breast cancer (TNBC) remains unknown. Here, we explored the expression, function and clinical implication of LINC01559 in TNBC. What this study adds to the field Our data suggest that LINC01559 functions as a novel driver of TNBC progression through serving as a carcinogenic ceRNA sponging multiple miRNAs. Targeting of LINC01559 may be a potential treatment for TNBC patients. Breast cancer is one of the most common malignant tumors in women; according to statistics, breast cancer accounts for 7–10% of all malignant tumors of the whole body [ 1 ]. Triple-negative breast cancer (TNBC) is a special type of breast cancer that is negative for the estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2), accounting for about 15% of all breast cancer types [ 2 ]. TNBC is characterized by high histological grade, early onset age, large tumor volume, and a high incidence of visceral and bone metastases [ 3 ]. The lack of ER, PR and HER2 expression means that endocrine therapy and HER2-targeted therapy are not effective for TNBC [ 4 ]. Although TNBC is sensitive to chemotherapy, only about 20% of patients respond well to conventional adjuvant chemotherapy [ 5 ]. Therefore, the search for effective therapeutic targets is a hot spot in TNBC research; it may bring new directions for TNBC diagnosis and treatment and thereby improve its poor prognosis. Long non-coding RNA (lncRNA) is a class of non-coding RNA more than 200 nt in length that is transcribed by RNA polymerase II [ 6 ]. Most lncRNAs are distributed in the nucleus, and some are specifically distributed in the cytoplasm [ 7 ]. Their loci are largely composed of highly conserved promoter sequences and introns or exons marked by the "K4–K36" chromatin domain, and the transcripts also adopt secondary structures [ 8 , 9 ]. Although very few lncRNAs are translated into peptides, they play important roles in controlling gene expression at the transcriptional, post-transcriptional and epigenetic levels [ 10 ]. New evidence suggests that abnormal lncRNA expression is one of the hallmarks of cancer [ 11 ]. LncRNAs function as oncogenes or tumor suppressors via affecting different targets and signaling pathways [ 12 , 13 ]. One of the universally acknowledged mechanisms of lncRNA action is as a "miRNA sponge", namely the competing endogenous RNA (ceRNA) network, in which a lncRNA sponges miRNAs, relieving the repressive effect of the miRNA on its targets and indirectly increasing mRNA expression [ 14 ]. The ceRNA network is frequently deregulated in various human cancers; for example, lncRNA KTN1-AS1 was upregulated in non-small cell lung cancer and promoted cell growth and inhibited apoptosis through sponging miR-130a-5p and elevating PDPK1, a key regulator of autophagy [ 15 ]. Recently, and almost at the same time, a novel lncRNA, LINC01559, has been identified as a critical driver in different cancers, including gastric cancer [ 16 ], hepatocellular carcinoma [ 17 ] and pancreatic cancer [ 18 , 19 ]. Nevertheless, its role in TNBC remains unexplored. In the present study, we found that LINC01559 was also increased in TNBC, where it promoted TNBC cell proliferation and invasion both in vitro and in vivo .
Importantly, we further investigated the underlying mechanisms of its dysregulation and pro-oncogenic effect. Materials and methods TNBC tissue samples We collected a total of 96 pairs of TNBC and corresponding non-tumor normal tissues from patients pathologically diagnosed with TNBC between January 2013 and September 2018 at The First Affiliated Hospital of Zhengzhou University. Patients with other serious diseases or who had received any anti-tumor therapy before surgical resection, such as radiotherapy, chemotherapy, neoadjuvant chemoradiotherapy, immunotherapy, etc., were excluded. The clinicopathological characteristics of the above patients are shown in Table 1 . We obtained written informed consent from each patient, and followed them up every six months. This study was approved by the Ethics Committee of The First Affiliated Hospital of Zhengzhou University. qRT-PCR analysis Total RNA was isolated from surgically resected fresh tissues and cultured cells by using TRIzol (Invitrogen, CA, USA) according to the manufacturer's instructions. Cell nucleus/cytoplasm fraction isolation was performed using the Nuclear and Cytoplasmic Extraction Kit (Thermo, MA, USA) according to the supplier's recommendation. The first-strand cDNA was generated by using MMLV transcriptase (Promega, WI, USA) with random primers. Real-time qRT-PCR was performed on a CFX96 real-time PCR detection system (Bio-Rad, CA, USA). The fold change of each gene was calculated by the 2^(−ΔΔCt) method. U6 and GAPDH were used as control references for the nuclear and cytoplasmic fractions, respectively. The detailed primer sequences are presented in Supplementary Table 1 . Cell culture and transfection Two TNBC cell lines, MDA-MB-231 and BT-20, and the normal MCF-10A cell line were commercially purchased from the American Type Culture Collection (ATCC); they were regularly authenticated by STR profiling and tested for mycoplasma contamination every two months. Cells were cultured in RPMI 1640 medium with 10% FBS at 37 °C with 5% CO2. Cell transfection was conducted using Lipofectamine 3000 (Invitrogen) as per the supplier's instructions. Three miRNA inhibitors (miR-370-3p, miR-485-5p and miR-940) were commercially purchased from RiboBio (Guangzhou, China). Stable LINC01559 knockdown cell lines Two shRNA oligonucleotides targeting LINC01559 were designed and cloned into the pLV2-U6-Puro lentiviral vector. Then, MDA-MB-231 and BT-20 cells were infected with the lentivirus in the presence of the infection enhancer polybrene. Forty-eight hours later, the fluorescence signals were observed under a fluorescence microscope. Stable cell lines were selected by adding puromycin to the culture medium, and the knockdown efficiency was determined by qRT-PCR analysis. Chromatin immunoprecipitation (ChIP) assay The ChIP assays were carried out by using the SimpleChIP® Plus Kit according to the manufacturer's instruction (Cell Signaling Technology). In brief, protein was cross-linked to DNA with 1% formaldehyde for 10 min at room temperature, followed by quenching with glycine for 5 min. Micrococcal nuclease was added to digest the DNA into fragments of 200–1000 bp. Then, the antibodies against Histone H3 (#ab1791, Abcam), H3K27Ac (#ab4729, Abcam) and H3K4Me3 (#ab8580, Abcam) were added and incubated overnight. Lastly, the immunoprecipitated DNA was eluted and enriched.
Quantification was calculated as a percentage relative to the input DNA using Eq. (1) (see the sketch below): (1) % input = 2^(Input Ct − Target Ct) × 100. CCK-8 and EdU assays MDA-MB-231 and BT-20 cells were plated onto 96-well plates and cultured for 3 days for the CCK-8 assay. At each time point, 10 μL of CCK-8 solution was added to each well, followed by incubation at 37 °C for 1 h. The plate was then taken out and the absorbance of each well was measured with a microplate reader. To detect the DNA synthesis rate, EdU staining was conducted using the Cell-Light™ EdU kit purchased from RiboBio as per the manufacturer's instructions. Wound healing and Transwell assays For the wound healing assay, a sterile pipette tip was used to generate three vertical scratches in each well of a 6-well plate. Cells were then cultured for 24 h and the migratory distance in each well was recorded. For the Transwell assay, the basement membrane was coated by diluting Matrigel matrix 1:8 and applying it to the upper surface of the Transwell chamber bottom membrane. 200 μL of cell suspension was added to the upper chamber and 600 μL of culture medium containing 10% FBS was added to the lower chamber. After 20 h of incubation, cells on the upper side of the basement membrane were carefully wiped off with a cotton swab, and cells on the lower side were fixed with 4% paraformaldehyde for 20 min and stained with crystal violet solution for 15 min. Photographs were taken under an inverted microscope. For each sample, 10 fields were counted at random, and the mean value was taken for statistical analysis. RNA pull-down and RNA immunoprecipitation (RIP) assays LINC01559 was transcribed in vitro using the T7 High Yield RNA Synthesis Kit (Ambion, TX, USA) according to the manufacturer's instructions. The transcribed RNA was labeled using the RNA 3′-End Biotinylation Kit (Thermo, MA, USA). Antisense LINC01559 was biotinylated and served as a control. TNBC cell lysates were collected, mixed with the above probes and incubated overnight with agitation. Then, streptavidin magnetic beads were added and incubated at room temperature for 30 min with agitation. The miRNAs bound by the LINC01559 probe were eluted and extracted using TRIzol reagent for qRT-PCR analysis. In addition, the RIP assay was carried out using the Magna RIP™ RNA-Binding Protein Immunoprecipitation kit (Millipore, MA, USA) with 5 μg anti-AGO2 antibody (#186733, Abcam) according to the manufacturer's instructions. Luciferase reporter assay The wild-type and mutant binding sequences between LINC01559 and miR-370-3p, miR-485-5p or miR-940 were cloned into the pmirGLO luciferase vector (Promega, WI, USA). miR-370-3p, miR-485-5p or miR-940 mimics were then co-transfected with the above vectors into MDA-MB-231 and BT-20 cells using Lipofectamine 3000. After 48 h, the dual-luciferase reporter assay was conducted using a commercial kit (Promega) as per standard protocols. Tumor xenografts in vivo All animal procedures were approved by the Institutional Animal Care and Use Committee of The First Affiliated Hospital of Zhengzhou University. Five BALB/c nude mice per group were used to establish experimental models of subcutaneous neoplasia and lung metastasis. For in vivo tumorigenesis, 1 × 10^7 MDA-MB-231 cells with or without LINC01559 knockdown in 0.2 mL PBS were subcutaneously injected into nude mice. After five weeks of growth, all mice were sacrificed and tumor tissues were photographed and weighed. For tail vein injection, 1 × 10^7 MDA-MB-231 cells with or without LINC01559 knockdown in 0.2 mL PBS were injected into the lateral tail vein of nude mice.
After 6 weeks of injection, the mice were killed by cervical dislocation and the lungs were collected to count surface metastases under a dissecting microscope. The extracted lungs were embedded in paraffin using the routine method for hematoxylin and eosin (H&E) staining. Analysis of the downstream miRNAs of LINC01559 Three online tools, miRanda ( http://www.microrna.org/ ), PITA ( https://genie.weizmann.ac.il/pubs/mir07/mir07_data.html ) and RNAhybrid ( http://bibiserv2.cebitec.uni-bielefeld.de/rnahybrid ), were used to predict the miRNAs bound by LINC01559. miRNAs concurrently predicted by all three databases were used for subsequent experimental validation. Statistical analysis Each experiment was repeated at least three times. Statistics were analyzed using SPSS 21.0 software, and figures were generated using GraphPad Prism v7.0 software. Differences between groups were analyzed by Student's t-test (2 groups) or one-way analysis of variance (ANOVA) (>2 groups). Survival curves were generated by the Kaplan–Meier method and analyzed by the log-rank test. In addition, the survival curve based on LINC01559 level from the Kaplan–Meier plotter database ( http://kmplot.com/analysis/index.php?p=service&start=1 ) was generated using the auto-select best cutoff value, with a False Discovery Rate of 10%. The Pearson correlation coefficient was used to test the correlation between LINC01559 expression and H3K4Me3 or H3K27Ac enrichment on the LINC01559 promoter. p < 0.05 was indicative of statistical significance. Results LINC01559 is an upregulated lncRNA in TNBC We first performed qRT-PCR analysis in 96 pairs of TNBC and adjacent normal tissues. As shown in Fig. 1 A, LINC01559 was significantly increased in TNBC tissues compared with adjacent normal tissues (ANT). We then analyzed the subcellular localization of LINC01559; the results showed that LINC01559 was predominantly located in the cytoplasm in both MDA-MB-231 and BT-20 TNBC cell lines [ Fig. 1 B, C]. Clinically, high LINC01559 was positively associated with larger tumor size, advanced TNM stage and lymph node metastasis [ Table 1 ]. Importantly, patients with high LINC01559 had a shorter survival time than patients with low LINC01559 [ Fig. 1 D], which was confirmed by data from the Kaplan–Meier plotter database ( http://kmplot.com/analysis/index.php?p=background ) [ Fig. 1 E]. These data suggest that LINC01559 is frequently upregulated in TNBC, which may be linked to TNBC progression. High LINC01559 expression in TNBC is due to gain of H3K4me3 and H3K27ac By analyzing the UCSC Genome Browser ( http://genome.ucsc.edu/ ) [ 20 ], we found high enrichment of H3K4me3 and H3K27ac (transcriptional activation marks) on the LINC01559 promoter region [ Fig. 2 A], hinting that these histone modifications may underlie high LINC01559 expression. To verify this observation, we performed ChIP assays; the results showed that H3K4me3 and H3K27ac were indeed abundantly enriched on the LINC01559 promoter in both MDA-MB-231 and BT-20 cells in comparison to normal MCF-10A cells [ Fig. 2 B, C]. Consistently, high H3K4me3 and H3K27ac on the LINC01559 promoter were also observed in TNBC tissues compared with matched normal tissues, and LINC01559 expression was strongly positively correlated with H3K4me3 (r = 0.759, p = 0.001) and H3K27ac (r = 0.64, p = 0.01) enrichment [ Fig. 2 D–G]. These results indicate that LINC01559 expression is regulated by histone modification.
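The quantitative read-outs above reduce to one-line formulas: the qRT-PCR fold change (2^(−ΔΔCt)), the ChIP percent-input of Eq. (1), and the Pearson correlation between promoter mark enrichment and expression. A minimal, self-contained Python sketch with hypothetical Ct values and measurements (assuming an ideal amplification efficiency of 2; the numbers are illustrative only, not the study's data):

```python
from scipy.stats import pearsonr

def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method (assumes ~100% PCR efficiency)."""
    ddct = (ct_target_case - ct_ref_case) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

def chip_percent_input(ct_input, ct_target):
    """ChIP enrichment as percent of input DNA, Eq. (1): 2^(Input Ct - Target Ct) x 100."""
    return 2 ** (ct_input - ct_target) * 100

# Hypothetical example values (illustrative only)
print(fold_change_ddct(22.1, 18.0, 25.3, 18.2))       # fold upregulation, tumor vs normal
print(chip_percent_input(ct_input=24.5, ct_target=28.2))  # % input for an H3K4me3 ChIP

# Correlation between hypothetical promoter H3K4me3 enrichment and expression per sample
enrichment = [1.2, 2.5, 3.1, 4.0, 5.2, 6.3]
expression = [0.9, 1.8, 2.6, 3.9, 4.4, 6.1]
r, p = pearsonr(enrichment, expression)
print(f"r = {r:.3f}, p = {p:.3g}")
```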
Knockdown of LINC01559 inhibits TNBC cell proliferation, migration and invasion in vitro To study the biological function of LINC01559 in TNBC, we constructed stable LINC01559-knockdown TNBC cell lines using a lentiviral vector; qRT-PCR verified the knockdown efficiency in both MDA-MB-231 and BT-20 cells [ Fig. 3 A]. We then performed a series of functional assays. As shown in Fig. 3 B, C, cell viability was significantly weakened after depletion of LINC01559. Similarly, the DNA synthesis rate of LINC01559-silenced TNBC cells was remarkably slowed compared with control TNBC cells [ Fig. 3 D, E]. Moreover, knockdown of LINC01559 resulted in a significant reduction in the migration distance of TNBC cells, as illustrated by the wound healing assay [ Fig. 3 F, G], and the invasion ability was also dramatically attenuated after LINC01559 silencing [ Fig. 3 H, I]. These data suggest that LINC01559 is a driver of the TNBC malignant phenotype. LINC01559 is a ceRNA that sponges miR-370-3p, miR-485-5p and miR-940 Given that LINC01559 is a cytoplasmic lncRNA, we inferred that LINC01559 may function as a ceRNA. To test this hypothesis, we performed a RIP assay using an anti-AGO2 antibody. The results showed that a large amount of LINC01559 was enriched by AGO2 in both MDA-MB-231 and BT-20 cells [ Fig. 4 A]. By intersecting the prediction results of miRanda ( http://www.microrna.org/ ) (113 miRNAs predicted to bind LINC01559) [ 21 ], PITA ( https://genie.weizmann.ac.il/pubs/mir07/mir07_data.html ) (68 miRNAs) [ 22 ] and RNAhybrid ( http://bibiserv2.cebitec.uni-bielefeld.de/rnahybrid ) (47 miRNAs) [ 23 ], we found five miRNAs potentially sequence-complementary to LINC01559, which were used for experimental validation [ Fig. 4 B] (this intersection step is sketched below). RNA pull-down with the LINC01559 probe then showed that miR-370-3p, miR-485-5p and miR-940 were concurrently enriched by LINC01559 in both MDA-MB-231 and BT-20 cells [ Fig. 4 C, D]. The binding sites between LINC01559 and these three miRNAs are shown in Fig. 4 E–G; we mutated them one by one to perform luciferase reporter assays. The results showed that overexpression of miR-370-3p, miR-485-5p or miR-940 significantly reduced the luciferase activity of the wild-type LINC01559 vector but did not affect that of the mutant one [ Fig. 4 H, I]. In addition, LINC01559 knockdown led to a marked upregulation of miR-370-3p, miR-485-5p and miR-940 [ Fig. 4 J, K]. Importantly, the levels of the validated target oncogenes of these three miRNAs were significantly reduced after LINC01559 knockdown, and these effects were partially reversed by silencing the respective miRNAs [ Fig. 4 L–N]. Functionally, knockdown of miR-370-3p, miR-485-5p or miR-940 effectively rescued the attenuated cell viability [ Fig. 4 O] and invasion [ Fig. 4 P] caused by LINC01559 depletion. These results demonstrate that LINC01559 sponges miR-370-3p, miR-485-5p and miR-940 in TNBC cells. Knockdown of LINC01559 inhibits TNBC growth and metastasis in vivo Lastly, we explored the in vivo function of LINC01559 through subcutaneous and tail vein injection of control and LINC01559-silenced MDA-MB-231 cells into nude mice. The results showed that tumor volume and size in the LINC01559-silenced group were significantly smaller than those in the control group [ Fig. 5 A–C].
Likewise, the average number of lung metastases in the LINC01559-silenced group was markedly lower than that in the control group [ Fig. 5 D, E]. These data indicate that depletion of LINC01559 represses TNBC progression, consistent with the in vitro data. Discussion TNBC is a highly heterogeneous and malignant disease with poor prognosis. At present, the pathogenesis of TNBC is still poorly understood. In the current study, we identified a TNBC-related lncRNA, LINC01559, which was upregulated in TNBC tissues through high H3K4me3 and H3K27ac enrichment on its promoter. LINC01559 promoted malignant behaviors of TNBC cells both in vitro and in vivo . Mechanistically, LINC01559 was identified as a ceRNA that sponges miR-370-3p, miR-485-5p and miR-940, resulting in the upregulation of a cohort of proto-oncogenes and thereby facilitating TNBC progression. Our data demonstrate that LINC01559 is a novel driver of TNBC, and dysregulation of the LINC01559/miR-370-3p/miR-485-5p/miR-940 axis may be responsible for TNBC tumorigenesis and dissemination. Gene expression is strictly controlled at the transcriptional level, in which abnormal histone modification plays an important role [ 24 ]. Histones maintain DNA structure and protect genetic information; their N-terminal domains extend from the nucleosome and interact with other regulatory proteins and DNA [ 25 ]. Emerging evidence suggests that histone modification imbalance can lead to tumorigenesis [ 26 ], and loss or gain of methylation and acetylation of histone H3 residues has been shown to be involved in gene silencing or activation [ 27 ]. For example, methylation at H3K4 and H3K36 and monomethylation of H3K27 can activate gene transcription, while methylation at H3K9 and H3K79 and dimethylation and trimethylation at H3K27 can inhibit the transcription of target genes [ 28 , 29 ]. On the other hand, histone H3 acetylation adds an acetyl group to lysine and neutralizes its positive charge, leading to chromatin opening and promoting gene transcription; thus high H3 acetylation is a mark of actively transcribed genes [ 30 ]. To date, many genes including lncRNAs have been found to be regulated by histone modification, such as HOXC-AS3 [ 31 ], SATB2-AS1 [ 32 ], SAMMSON [ 33 ] and UCA1 [ 34 ]. Herein, by in silico analysis, we found high enrichment of H3K4me3 and H3K27ac on the LINC01559 promoter. ChIP assays confirmed this finding using specific anti-H3K4me3 and anti-H3K27ac antibodies: more H3K4me3 and H3K27ac occupancy was observed on the LINC01559 promoter in both TNBC tissues and cells in comparison to the respective normal controls. Moreover, high H3K4me3 or H3K27ac enrichment was strongly positively correlated with LINC01559 expression in TNBC tissues. These data suggest that upregulated LINC01559 in TNBC is mainly caused by gain of H3K4me3 and H3K27ac on its promoter region. LncRNAs have a variety of modes of action, among which the miRNA molecular sponge has been widely verified: by sequestering a miRNA, a lncRNA inhibits the interaction between the miRNA and its target mRNA, thereby elevating gene expression [ 35 ]. miRNAs are widely distributed in eukaryotes, are 18–25 nt in length, and regulate about one third of human genes [ 36 ]. Most mature miRNAs bind to the human protein AGO2 and assemble a gene-silencing complex (RNA-induced silencing complex, RISC), which then binds specifically to the mRNA 3′-UTR, preventing the translation of target genes or directly degrading them.
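The three-database candidate screen described in the Results, in which only miRNAs predicted by miRanda, PITA and RNAhybrid alike were retained, amounts to a plain set intersection. A minimal sketch with hypothetical candidate lists (illustrative only; the real tool outputs are far larger):

```python
# Hypothetical prediction outputs from the three tools (illustrative names only)
miranda = {"miR-370-3p", "miR-485-5p", "miR-940", "miR-21-5p", "miR-155-5p"}
pita = {"miR-370-3p", "miR-485-5p", "miR-940", "miR-200c-3p"}
rnahybrid = {"miR-370-3p", "miR-485-5p", "miR-940", "miR-34a-5p"}

# Keep only miRNAs predicted by all three databases
candidates = miranda & pita & rnahybrid
print(sorted(candidates))  # -> ['miR-370-3p', 'miR-485-5p', 'miR-940']
```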
Herein, by performing RIP, RNA pull-down and luciferase reporter assays, we found that LINC01559 was abundantly enriched by the AGO2 protein and could concurrently sponge three tumor-suppressive miRNAs, namely miR-370-3p, miR-485-5p and miR-940. Consistently, the respective target oncogenes of these miRNAs were uniformly downregulated after LINC01559 knockdown, and silencing of these miRNAs alone significantly rescued the decreased malignant phenotype of TNBC cells caused by LINC01559 depletion, suggesting that the LINC01559/miR-370-3p/miR-485-5p/miR-940 ceRNA axis does exist in TNBC cells. Of note, a lncRNA can play different or even opposite roles in different contexts: studies have shown that LINC01559 sponges miR-1343-3p and binds EZH2 in gastric cancer [ 16 ], and absorbs miR-6783-3p [ 17 ] and miR-607 [ 19 ] in hepatocellular carcinoma and pancreatic cancer, respectively. However, we did not observe these phenomena in TNBC (data not shown). It will be of great interest to elucidate the mechanism of this functional variation. Moreover, further studies are needed to explore the expression and mechanism of LINC01559 in other malignant tumors, so as to reveal whether it is a pan-oncogene. Certainly, there are some limitations to our study. For example, shRNA can have off-target and incomplete interference effects, and the results would be more convincing if LINC01559 were completely knocked out by gene-editing technology such as CRISPR/Cas9. In addition, the samples were collected retrospectively, which cannot exclude potential bias between different treatments. Conclusion In this study, we show for the first time that abnormal histone modification-mediated activation of LINC01559 enhances TNBC growth and metastasis by acting as a ceRNA that concurrently sponges three miRNAs, which provides evidence for the use of LINC01559 as a promising prognostic biomarker and therapeutic target for TNBC patients. Conflicts of interest The authors declare no competing interests. Acknowledgments This work was supported by the Medical Science and Technology Research Plan of Henan (NO. 2018020084 ). Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.bj.2021.05.002 .
REFERENCES:
1. BRAY F (2018)
2. NAVRATIL J (2015)
3. KUMAR P (2016)
4. LEBERT J (2018)
5. LYONS T (2019)
6. GUTSCHNER T (2012)
7. QUINN J (2016)
8. CABILI M (2011)
9. GUTTMAN M (2012)
10. GAO F (2020)
11. JIANG M (2019)
12. KUMAR S (2020)
13. DU X (2020)
14. PARASKEVOPOULOU M (2016)
15. LI C (2020)
16. WANG L (2020)
17. DONG S (2021)
18. CHEN X (2020)
19. LOU C (2020)
20. KENT W (2002)
21. KRUGER J (2006)
22. HUANG H (2014)
23. JENUWEIN T (2001)
24. AUDIA J (2016)
25. FERRARI P (2015)
26. ALABOUD N (2022)
27. LONGBOTHAM J (2020)
28. REA S (2000)
29. ZHANG E (2018)
30. XU M (2019)
31. SHAO L (2019)
32. GUO X (2018)
33. PANDA A (2018)
34. JONAS S (2015)
35. ACHKAR N (2016)
36. BATISTA P (2013)
10.1016_j.redox.2023.102760.txt
TITLE: Energy stress modulation of AMPK/FoxO3 signaling inhibits mitochondria-associated ferroptosis
AUTHORS:
- Zhong, Sufang
- Chen, Wenjin
- Wang, Bocheng
- Gao, Chao
- Liu, Xiamin
- Song, Yonggui
- Qi, Hui
- Liu, Hongbing
- Wu, Tao
- Wang, Rikang
- Chen, Baodong
ABSTRACT:
Cancer cells and ischemic diseases exhibit unique metabolic responses and adaptations to energy stress. Forkhead box O3a (FoxO3a) is a transcription factor that plays an important role in cell metabolism, mitochondrial dysfunction and the oxidative stress response. Although the AMP-activated protein kinase (AMPK)/FoxO3a signaling pathway plays a pivotal role in maintaining energy homeostasis under conditions of energy stress, the role of AMPK/FoxO3a signaling in mitochondria-associated ferroptosis has not yet been fully elucidated. We show that glucose starvation induced AMPK/FoxO3a activation and inhibited ferroptosis induced by erastin. Inhibition of AMPK or loss of FoxO3a in cancer cells under glucose starvation can sensitize these cells to ferroptosis. Glucose deprivation inhibited mitochondria-related gene expression, reduced mitochondrial DNA (mtDNA) copy number, decreased the expression of mitochondrial proteins and lowered the levels of respiratory complexes by inducing FoxO3a. Loss of FoxO3a promoted mitochondrial membrane potential hyperpolarization, oxygen consumption and lipid peroxide accumulation, and abolished the protective effects of energy stress on ferroptosis in vitro. In addition, we identified an FDA-approved antipsychotic agent, the potent FoxO3a agonist trifluoperazine, which largely reduced ferroptosis-associated cerebral ischemia-reperfusion (CIR) injuries in rats through AMPK/FoxO3a/HIF-1α signaling and mitochondria-dependent mechanisms. We found that FoxO3a binds to the promoter of SLC7A11 and reduces CIR-mediated glutamate excitotoxicity by inhibiting the expression of SLC7A11. Collectively, these results suggest that energy stress modulation of AMPK/FoxO3a signaling regulates mitochondrial activity and alters the ferroptosis response. The regulation of FoxO3a by AMPK may play a crucial role in mitochondrial gene expression that controls energy balance and confers resistance to mitochondria-associated ferroptosis and CIR injuries.
BODY:
1 Introduction Ferroptosis is a major mechanism of cell death associated with various human diseases, such as ischemia/reperfusion injury (IRI), brain damage, and cancer [ 1–3 ]. Ferroptosis is characterized by the accumulation of lipid peroxidation products, especially those resulting from the oxidation of polyunsaturated fatty acids (PUFA) in membrane phospholipids. Mitochondria are the central hubs of cellular bioenergetics and the most important source of ROS in mammalian cells, which is vital for regulating ferroptosis [ 4 ]. Mitochondria are crucial players in erastin-induced or cysteine deprivation-induced ferroptosis, but not in ferroptosis induced by RSL3, an inhibitor of glutathione peroxidase-4 (GPX4) [ 4 ]. Ferroptosis differs from other forms of cell death in its dramatic mitochondrial morphological changes, including cristae enlargement and mitochondrial fragmentation; mitochondrial fragmentation and accumulation around the nucleus increase in response to erastin toxicity [ 5 ]. The role of mitochondrial morphology and function in ferroptosis remains hotly debated. The maintenance of cellular energy homeostasis is pivotal for organismal homeostasis throughout life. Glucose deprivation induces energy stress, resulting in adaptive responses to restore energy homeostasis. AMP-activated protein kinase (AMPK) plays an important role in cellular and systemic energy homeostasis. Energy stress increases the phosphorylation of FoxO3a and acetyl-CoA carboxylase (ACC), two known AMPK substrates, and AMPK-mediated ACC phosphorylation inhibits fatty acid synthesis and confers resistance to ferroptosis under glucose starvation conditions [ 6 ]. Whether energy-stress-mediated AMPK/FoxO3a signaling regulates ferroptosis remains largely unknown. AMPK promotes the transcriptional functions of FoxO3a and plays an important role in regulating mitochondrial homeostasis [ 7 ]. FoxO3a activation inhibits mitochondrial gene expression through down-regulation of c-Myc function and alters the hypoxia response [ 8 ]. FoxO3a is induced in hypoxia and can prevent hypoxia-inducible factor 1α (HIF-1α)-dependent gene expression by directly binding to HIF-1α [ 9 ]. In particular, mitochondria-derived ROS are required for HIF-1α induction in hypoxia [ 10 ], and FoxO3a activation prevents an increase in ROS levels in hypoxic cells by decreasing HIF-1α accumulation [ 11 ]. HIF-1α promotes expression of the cystine-glutamate antiporter (SLC7A11/xCT), which contributes to cerebral ischemia-reperfusion (CIR)-mediated glutamate release and excitotoxicity [ 12 ]. Conversely, inhibition of SLC7A11 activity by erastin deprives cells of cysteine and GSH and disrupts cellular redox homeostasis, resulting in ferroptosis [ 13 ]. Thus, the functional relevance of HIF-1α/SLC7A11 and mitochondria-associated ferroptosis in CIR is still highly debatable. In this study, we investigated the relationship between AMPK/FoxO3a activity and mitochondrial function in ferroptosis under energy stress. Our findings suggest that energy-stress-mediated AMPK/FoxO3a signaling activation inhibits ferroptosis via mitochondria-dependent mechanisms. We identified the FoxO3a activator trifluoperazine (TFP), an FDA-approved antipsychotic agent, which reduced ferroptosis-associated cerebral ischemia/reperfusion (CIR) injury in rats through AMPK/FoxO3a/HIF-1α/SLC7A11 signaling and mitochondria-dependent mechanisms.
2 Materials and methods 2.1 Animals Male Sprague-Dawley (SD) rats (SPF grade, weighing 250–280 g) were purchased from Nanjing Coris Biotechnology Co. (Nanjing, China). The rats were housed under standard lighting conditions (12 h light/dark cycle), controlled temperature (21–25 °C) and humidity (40–60%), with free access to food and water. All animal experiments were approved by the Ethics Committee of Jiangxi University of Chinese Medicine. 2.2 Cell lines, primary cells and cell transfection MCF-7 cells, HEK293T cells and BV-2 cells were purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences. Primary mouse embryonic fibroblasts (MEFs) were established from embryos at E13.5 as previously described [ 14 ]. All cell lines and MEFs were cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum and 1% (v/v) penicillin/streptomycin in a 37 °C incubator with a humidified atmosphere of 5% CO2. The small interfering RNA (siRNA) against FoxO3a was synthesized by GenePharma Co., Ltd. (Shanghai, China), with the following sequences: FoxO3a sense: 5ʹ-GGAACGUGAUGCUUCGCAATT-3ʹ, antisense: 5ʹ-UUGCGAAGCAUCACGUUCCTT-3ʹ. BV-2 cells were transfected with the FoxO3a siRNA or negative control siRNA using Lipofectamine 3000 reagent according to the manufacturer's protocol. A lentivirus (Santa Cruz Biotechnology, sc-37887-V) was transduced to knock down FoxO3a in MCF-7 cells (multiplicity of infection = 50). From day 5–6 onward, stable clones expressing the shRNA were selected with puromycin dihydrochloride (Santa Cruz Biotechnology, sc-108071); the medium was replaced with fresh puromycin-containing medium every 3–4 days until resistant colonies could be identified. Several colonies were picked, expanded, and assayed for stable shRNA expression by Western blot. 2.3 Cell viability assay The CCK-8 kit (Sigma, 969920) was used to measure cell viability. HEK293T, MCF-7 and MEF cells in logarithmic growth were seeded into 96-well plates at a density of 3 × 10^4/well. The medium was removed the next day, replaced with high-glucose or glucose-free medium, and the cells were treated for 24 h with a gradient of erastin concentrations. Following treatment, 10 μL of CCK-8 reagent (per 100 μL of culture medium/well) was added to each well, followed by incubation for 1 h in a 37 °C, 5% CO2 incubator. The absorbance at 450 nm was determined using a microplate reader. 2.4 Western blot analysis Cells and brain tissue were lysed in ice-cold RIPA buffer containing protease inhibitors, and equal amounts of protein were separated by SDS-PAGE and transferred onto PVDF membranes.
The membranes were blocked in 5% skim milk and then incubated overnight at 4 °C with the following primary antibodies: mouse AMPK (1:1000, Proteintech, #66536-1-Ig), rabbit p-AMPK(Thr172) (1:1500, Cell Signaling Technology, #2535), rabbit FoxO3a (1:1000, Proteintech, #66428-1-Ig), rabbit p-FoxO3a(Ser413) (1:1000, Cell Signaling Technology, #8174), rabbit CYCs (1:1000, Cell Signaling Technology, #11940S), rabbit HMOX1 (1:1000, ABclonal, #A19062), rabbit SOD2 (1:1000, Proteintech, #6647-1-Ig), mouse PGC1α (1:1000, Proteintech, #66369-1-Ig), mouse TOM20 (1:1000, Proteintech, #66777-1-Ig), rabbit GPX4 (1:1000, ABclonal, #A11243), rabbit SLC7A11 (1:1000, ABclonal, #A2413), rabbit DRP1 (1:1500, Proteintech, #12957-1-AP), mouse Hsp60 (1:1000, Proteintech, #66041-1-Ig), rabbit p-MFF (Ser172/Ser146) (1:1500, Cell Signaling Technology, #AF2365), mouse Total OXPHOS (1:2500, Abcam, #ab110413), and mouse β-actin (1:1000, TRANS, #HC201). 2.5 Quantitative real-time PCR (qRT-PCR) Total RNA was extracted using TRIzon Reagent (CWBIO, Beijing, China) according to the manufacturer's protocol. Reverse transcription was performed using the First-Strand cDNA Synthesis Kit (Thermo Scientific, USA). Real-time PCR was performed with SYBR Green PCR Master Mix (Yeasen Biotech Co., Ltd., Shanghai, China) under the following conditions: 95 °C for 5 min, followed by 40 cycles of 95 °C for 10 s and 60 °C for 30 s. Mitochondrial DNA (mtDNA) copy number was determined by quantitative real-time PCR as described [ 8 , 15 ]; mtDNA was quantified as the ratio of a mitochondrial gene ( cytochrome b ) to a nuclear gene ( β-actin ) (a computational sketch appears at the end of this Methods section). Other gene expression levels were normalized to GAPDH. The primers are listed in Table S1 . 2.6 Intracellular ATP level measurement Intracellular ATP levels were detected with an ATP Assay Kit (Beyotime, S0026) according to the manufacturer's instructions. MCF-7 cells were seeded in 6-well culture plates. The next day, the medium was changed to high-glucose or glucose-free medium, and erastin was added at a concentration of 16 μM for 24 h. The medium was discarded and the cells were washed three times with PBS. 200 μL of ATP detection lysis buffer (S0026-3) was added to each well to lyse the cells, and the luminescence of each well was subsequently measured with a microplate reader. 2.7 Determination of lipid peroxidation MCF-7 cells were seeded in 24-well plates containing cell slides. The next day, the medium was changed to high-glucose or glucose-free medium, and erastin was added at a concentration of 16 μM for 24 h. The medium was discarded and the cells were washed three times with PBS. C11-BODIPY581/591 (Invitrogen) was then added at a concentration of 10 μM and incubated in the dark for 30 min, followed by three further washes with PBS. Cells were fixed with 4% paraformaldehyde and observed with a fluorescence microscope at 400× magnification. For co-staining, cells were seeded at a density of 2.0 × 10^5 per well on coverslips placed in a 6-well dish. The next day, the medium was changed to high-glucose and/or glucose-free medium, and erastin was added at a concentration of 16 μM for 24 h. Coverslips were washed in HBSS and incubated in HBSS containing 10 μM BODIPY 581/591 C11 (Invitrogen) and 200 nM MitoTracker Deep Red FM (Invitrogen) for 20 min. Coverslips were then inverted onto microscope slides, observed with a fluorescence microscope at 400× magnification, and images were processed in Photoshop (Adobe). 2.8 Mitochondrial membrane potential (MMP) measurement The change in MMP was assessed using a JC-1 MMP Assay Kit (Solarbio Science&Technology, Beijing, China).
Briefly, after incubation with the JC-1 staining working solution for 20 min at 37 °C, the MCF-7 cells were washed twice with HBSS and imaged using a fluorescence microscope. Red fluorescent aggregates indicate healthy mitochondria with normal membrane potential, whereas green fluorescent monomers indicate loss of MMP. 2.9 Immunofluorescence Rat brains were fixed with 4% paraformaldehyde for 48 h. They were then equilibrated in a 30% sucrose solution for 48 h before being snap frozen and cryosectioned at 15 μm thickness. Sections were incubated in 0.5% Triton X-100 for 20 min and blocked with PBS containing 3% goat serum for 1 h at room temperature before incubation overnight at 4 °C with primary antibodies. After three washes with PBS (10 min each), sections were incubated with the respective secondary antibodies for 2 h, including goat anti-mouse IgG (AlexaFluor-488, 1:200, Abcam, #ab150117) and goat anti-rabbit IgG (AlexaFluor-594, 1:200, Abcam, #ab150080). Sections were observed with a fluorescence microscope at 400× magnification. We used the following primary antibodies: mouse anti-FoxO3a (1:200, Proteintech, #66428-1-Ig), rabbit anti-GPX4 (1:200, ABclonal, #A11243), rabbit anti-SLC7A11 (1:200, ABclonal, #A2413), and rabbit anti-HIF-1α (1:200, Proteintech, #20960-1-AP). 2.10 Whole-genome ChIP-seq ChIP-seq was performed by Wuhan IGENEBOOK Biotechnology Co., Ltd (Wuhan, China). Briefly, BV-2 cells (2 × 10^7) were crosslinked with formaldehyde (1%) at 26 °C for 10 min. Glycine (0.125 mol/L) was added to stop the crosslinking, and chromatin was sheared into 200–500 bp fragments by sonication. The anti-FoxO3a antibody (PA1-805, Thermo Fisher Scientific Co., Ltd, USA) or IgG (ab231712, Abcam Co., Ltd, Shanghai, China) together with beads was used to pull down the target protein, and proteinase K (345 μg/mL) (1245680100, Merck Co., Ltd, Germany) was used to digest proteins at 45 °C overnight. Immunoprecipitated DNA was used to construct sequencing libraries following the protocol of the NEXTFLEX® ChIP-Seq Library Prep Kit for Illumina® Sequencing (NOVA-5143-02, Bio Scientific, USA) and sequenced on an Illumina X Ten with the PE150 method. 2.11 Immunohistochemistry assay Rat brains were fixed with 4% paraformaldehyde, processed into 5 μm-thick sections and immunostained with a specific antibody against 4-HNE (1:100, Bioss Antibodies, #bs-6313R). The slides were imaged under a light microscope (Olympus BX53). The percentage of positive cells was calculated by counting under high magnification (×400). 2.12 Rat model of transient middle cerebral artery occlusion (MCAO) The rats were anesthetized with isoflurane (5% induction and 2.5% maintenance), and a neck incision was made to expose the right common carotid artery (CCA); the external carotid artery (ECA) and the internal carotid artery (ICA) were isolated and ligated. A 0.36–0.37 mm monofilament nylon suture with a heat-blunted tip was introduced through a small incision in the right CCA into the internal carotid artery (ICA) and gently advanced from the CCA bifurcation to block the origin of the right middle cerebral artery. Reperfusion was allowed after 2 h of MCAO by monofilament removal. Body temperature was maintained during and after surgery until recovery from anesthesia. The suture remained in place until the rats were sacrificed. Sham-operated rats underwent the same procedures except for the transient MCAO. 2.13 TTC staining TTC staining was used to detect infarct volume.
In brief, rats were deeply anesthetized with isoflurane and sacrificed by decapitation. The brain was rapidly removed and cut into six coronal slices with an approximate thickness of 2 mm. These sections were stained with 2% 2,3,5-triphenyltetrazolium chloride (Solarbio Science&Technology Co., Ltd, Beijing, China), dissolved in physiological saline, for 30 min at 37 °C in the dark. Viable tissue stains deep red while infarcts remain unstained. Infarcted areas were measured using ImageJ. 2.14 Mitochondria isolation All steps were carried out in accordance with the instructions of the Cell Mitochondria Isolation Kit (Beyotime, China). First, 100 mg of brain tissue was weighed and washed with cold PBS, then pre-cooled lysis buffer was added and the tissue was ground 20 times at 4 °C in an ice bath. The cold lysis buffer was mixed with the tissue, and the tissue homogenate was transferred to a clean centrifuge tube containing medium buffer, mixed gently, and centrifuged at 1200g for 5 min at 4 °C. The supernatants were transferred to new tubes and centrifuged at 7000g for 10 min at 4 °C. The mitochondria-containing pellet was lysed with mitochondrial lysis buffer for Western blot. 2.15 Glutamate assay The intracellular glutamate concentration was measured with an assay kit from Solarbio Science&Technology (BC1585) according to the manufacturer's instructions, and the reaction product was measured with a microplate reader at 340 nm. 2.16 MDA and Fe2+ detection The MDA concentration in the penumbral brain tissue was measured by the thiobarbituric acid method using an MDA assay kit (BC0025, Solarbio Science&Technology, Beijing, China). MDA was calculated based on protein concentration and expressed as nmol of MDA per gram of protein (nmol/g). The Fe2+ level was measured using a kit from Solarbio Science&Technology (absorbance at 593 nm) according to the manufacturer's instructions. 2.17 OCR and ECAR measurements The oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were measured with an XFp extracellular flux analyzer (Agilent Technologies, Santa Clara, CA, USA). MCF-7 cells transfected with FoxO3a shRNA were seeded at a density of 3.0 × 10^4 cells/well in 8-well plates and incubated overnight to allow adherence. The cells were then treated with erastin (16 μM) in glucose-depletion medium for 6 h. After 6 h of erastin administration, the cells were transferred to Seahorse XF assay medium (Agilent, Santa Clara, CA, USA), pH 7.4, supplemented with 10 mM glucose, 1 mM pyruvate, and 2 mM glutamine. After monitoring 18 min of basal respiration, erastin (16 μM) was added to the system, followed by sequential injection of 1 μM of the mitochondrial inhibitors oligomycin, FCCP, and antimycin A plus rotenone provided by the manufacturer (#101706–100, Agilent Technologies) every 20 min of monitoring. OCR and ECAR were automatically calculated using the Seahorse XFp software; every point represents the average of three different wells. 2.18 Statistical analyses All data are expressed as means ± SEM. Significance was evaluated with PASW software (SPSS Inc., Chicago, IL, USA). Statistical significance among groups was calculated by one-way ANOVA with post-hoc tests; p < 0.05 was considered statistically significant.
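Two calculations in this section are compact enough to express directly: the relative mtDNA copy number from paired qPCR Cts (Section 2.5) and the one-way ANOVA with a post-hoc test (Section 2.18). A minimal Python sketch with hypothetical Ct values and group data (assuming ideal qPCR efficiency of 2; none of these numbers are the study's measurements):

```python
# tukey_hsd requires a recent SciPy release; f_oneway is long-standing
from scipy.stats import f_oneway, tukey_hsd

def relative_mtdna_copy_number(ct_mito, ct_nuclear):
    """Relative mtDNA copy number as the ratio of a mitochondrial gene
    (cytochrome b) to a nuclear gene (beta-actin), assuming 2^dCt scaling."""
    return 2 ** (ct_nuclear - ct_mito)

# Hypothetical Cts: control vs glucose-deprived + erastin
print(relative_mtdna_copy_number(ct_mito=16.2, ct_nuclear=22.8))  # control
print(relative_mtdna_copy_number(ct_mito=17.4, ct_nuclear=22.9))  # treated, lower

# One-way ANOVA across >2 groups, with Tukey HSD as a post-hoc test
control = [1.00, 0.95, 1.08]
erastin = [0.61, 0.66, 0.58]
erastin_no_glucose = [0.85, 0.90, 0.88]
f_stat, p_value = f_oneway(control, erastin, erastin_no_glucose)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(tukey_hsd(control, erastin, erastin_no_glucose))
```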
3 Results 3.1 Energy stress inhibits erastin- and RSL3-induced ferroptosis To study the role of glucose deprivation in ferroptosis, we first investigated the effect of glucose deprivation on erastin- or RSL3-induced ferroptosis in HEK293T cells, mouse embryonic fibroblasts (MEFs) and MCF-7 cells. Cells were treated with different doses of erastin or RSL3 in normal or glucose-depletion medium for 24 h. Consistent with previous findings, we found that erastin and RSL3 dose-dependently induced ferroptotic cell death, whereas glucose deficiency largely reversed erastin- and RSL3-induced ferroptosis ( Fig. 1 A–D). We further explored the effect of glucose depletion on erastin-induced morphological changes in MCF-7 breast cancer cells, and representative images showed that glucose deprivation reduced erastin-induced cell death ( Fig. 1 E). Glutathione peroxidase-4 (GPX4) and SLC7A11 are central regulators of ferroptosis. We therefore used Western blot to determine the expression of GPX4 and SLC7A11 in MCF-7 cells. The results showed that lipid peroxidation and oxidative stress were suppressed under glucose deprivation, along with up-regulation of the ferroptosis-antagonizing marker GPX4 and down-regulation of SLC7A11 ( Fig. 1 F–G). Since the accumulation of lipid peroxidation is a hallmark of ferroptosis, we used the C11-BODIPY581/591 probe to detect lipid peroxidation; green fluorescence indicates the oxidized form and red fluorescence the non-oxidized form. As shown in Fig. 1 H, erastin induced ROS accumulation, whereas treatment of glucose-deprived cells with erastin reduced lipid peroxidation. 3.2 Energy stress inhibits erastin-induced ferroptosis partly through the AMPK/FoxO3a signaling pathway AMPK/FoxO3a signaling plays an important role in metabolic homeostasis under energy stress. To investigate the role of AMPK/FoxO3a signaling in energy-stress-mediated ferroptosis, MEFs and MCF-7 cells were treated with or without erastin in normal or glucose-deprivation medium as indicated for 24 h, followed by Western blot analysis. We found that erastin treatment of MEF cells induced a significant increase in the expression levels of AMPK and p-AMPK(Thr172), and this effect was further enhanced under glucose deprivation ( Fig. 2 A and C). We also found that the ratios of p-AMPK and p-FoxO3a to total AMPK and FoxO3a were significantly increased during energy stress ( Fig. S1 ). This finding was confirmed in MCF-7 breast cancer cells ( Fig. 2 B and D). FoxO3a is a downstream target regulated by AMPK that controls energy balance and stress resistance in cells. AMPK phosphorylates FoxO3a at Serine 413 during glucose deprivation. Therefore, to further investigate whether AMPK-mediated FoxO3a activation plays a role in ferroptosis inhibition by energy stress, Western blot was used to detect the protein levels of FoxO3a and p-FoxO3a (Ser413) under energy stress conditions. The results showed that the protein levels of FoxO3a and p-FoxO3a (Ser413) were significantly increased after erastin treatment, and these effects were further enhanced under energy stress ( Fig. 2 A–D). Next, to investigate whether AMPK is involved in glucose deprivation-mediated ferroptosis inhibition, MCF-7 cells cultured in normal or glucose-free medium were pretreated with compound C (an AMPK inhibitor), and ferroptosis was then induced with erastin.
We found that compound C significantly promoted erastin-induced ferroptosis in normal medium; glucose deprivation inhibited erastin-induced ferroptosis, and compound C treatment reversed these effects ( Fig. 2 E). To further study whether FoxO3a plays a causal role in ferroptosis inhibition in these cancer cells, MCF-7 cells were transfected with FoxO3a shRNA to down-regulate FoxO3a. The knockdown efficiency was confirmed by Western blot analysis. Knocking down FoxO3a in MCF-7 cells significantly restored sensitivity to ferroptosis induced by erastin but not by RSL3 ( Fig. 2 F, G, 2H). We further monitored lipid ROS accumulation using C11 BODIPY 581/591 and found that FoxO3a knockdown in erastin-treated cells reversed the decrease in lipid ROS accumulation under energy stress ( Fig. 2 I–J). Taken together, our findings suggest that FoxO3a is a critical downstream effector of AMPK in ferroptosis inhibition under energy stress. 3.3 FoxO3a activation induced by glucose depletion regulates the expression of antioxidative enzymes Antioxidant genes are important downstream targets of FoxO3a; among them, FoxO3a targets mitochondrial superoxide dismutase (SOD) and cytochrome c (CYCs). We used Western blot and quantitative real-time PCR to investigate the expression of the FoxO3a-regulated antioxidative enzymes SOD2, CYCs and HO-1. The results showed that erastin or glucose deprivation stimulated the mRNA and protein expression of CYCs, and glucose deprivation enhanced the erastin-stimulated expression of CYCs; the mRNA and protein levels of HO-1 increased after erastin treatment ( Fig. 3 A–E). We observed much higher HO-1 mRNA and protein expression in erastin-treated cells, but no significant change under energy stress conditions. In addition, the mRNA and protein expression levels of SOD2 barely changed in erastin-treated cells or under energy stress ( Fig. 3 B–E). These results suggest that highly expressed CYCs may confer resistance to ferroptosis. To further investigate the regulatory relationship between FoxO3a and antioxidant genes, MCF-7 cells were transduced with a FoxO3a shRNA lentivirus to down-regulate FoxO3a. As expected, Western blotting showed that the protein expression level of CYCs was significantly reduced, while the protein levels of HO-1 and SOD2 were unchanged, after FoxO3a knockdown in erastin-treated cells under energy stress ( Fig. 3 F–I). These results suggest that activated FoxO3a regulates the expression of the antioxidant gene CYCs. 3.4 Energy stress inhibits erastin-induced mitochondrial dysfunction by inducing FoxO3a Mitochondria are the main source of ROS within mammalian cells, playing an important role in erastin- or cysteine deprivation-induced ferroptosis. To determine whether mitochondria play a role in energy-stress resistance to ferroptosis, we used MitoTracker and C11-BODIPY581/591 probes to examine the mitochondrial state and lipid peroxidation of cells treated with erastin under glucose deprivation. Images of MitoTracker-stained cells revealed that glucose deprivation altered the erastin-induced mitochondrial appearance from a filamentous network to more punctate structures, and the total area of red fluorescence was reduced ( Fig. 4 A–C), suggesting that glucose deprivation causes a contraction of the mitochondrial network. Immunofluorescence was used to check the co-localization of lipid ROS (green) with mitochondria (red).
We found that the green fluorescence appeared in a distribution that significantly co-localized with mitochondria in erastin-treated MCF-7 cells, while glucose deprivation significantly decreased the level of lipid ROS accumulation compared with erastin-treated cells ( Fig. 4 A–B). These results suggest that energy stress inhibited lipid ROS accumulation in cells with a reduced mitochondrial pool. We further used the JC-1 probe to investigate mitochondrial membrane potential (MMP); our results indicate that erastin induced MMP hyperpolarization, while glucose deprivation blocked erastin-induced MMP hyperpolarization ( Fig. 4 D–E). ATP is produced by mitochondria through the proton electrochemical gradient across the mitochondrial membrane; when mitochondria are reduced or depleted, ATP levels decrease. We examined cellular ATP levels using a commercial kit and found that ATP levels were clearly reduced in the glucose-deprivation and/or erastin-treated groups, whereas FoxO3a knockdown reversed the decline in ATP levels induced in MCF-7 cells by erastin and/or glucose deprivation ( Fig. 4 F), indicating that FoxO3a is responsible for the inhibition of mitochondrial activity. To validate the role of FoxO3a in glucose deprivation-induced changes in energy metabolism, cells were treated with erastin under glucose deprivation. Interestingly, knockdown of FoxO3a appeared to enhance basal oxygen consumption, both under normal conditions and upon erastin administration after 6 h of glucose deprivation ( Fig. 4 G), implying that FoxO3a inhibits mitochondrial oxidation. On the other hand, knockdown of FoxO3a did not affect ECAR ( Fig. 4 H), indicating that FoxO3a does not interfere with the glycolytic machinery in MCF-7 cells. To investigate whether the role of FoxO3a in the regulation of mitochondrial activity confers resistance to erastin-induced ferroptosis, we used immunofluorescence co-localization to detect the expression of FoxO3a in the mitochondria of MCF-7 cells. Immunofluorescence staining showed that FoxO3a expression in both cytoplasm and mitochondria was clearly increased after treatment with erastin and/or glucose deprivation; co-immunofluorescence of mitochondria (red) and FoxO3a (green) was found in the control and the erastin-treated cells. Glucose deprivation reduced the number of mitochondria, and pronounced green fluorescence was observed in the nucleus under glucose deprivation, indicating that a large amount of FoxO3a is activated in the nucleus ( Fig. 5 A). We further measured the expression of FoxO3a in isolated mitochondria, nucleus and cytoplasm, and found that glucose deprivation induced a significant increase in FoxO3a expression in all three compartments, with cytoplasmic FoxO3a further enhanced by erastin treatment ( Fig. 5 B–D). We next investigated whether FoxO3a activation under energy stress would change the levels of mitochondrial proteins, mtDNA copy number and mitochondrial mass. Western blot showed that glucose deprivation slightly decreased TOM20 expression compared with normal conditions but caused a strong reduction of PGC1α and TOM20 expression in the presence of erastin ( Fig. 5 E). We investigated the expression of PGC1α and its target gene ATP5B by qRT-PCR. Our results showed that glucose depletion inhibited the expression of PGC1α and its downstream gene ATP5B in erastin-treated MCF-7 cells.
However, either glucose depletion or erastin treatment alone stimulated PGC1α mRNA expression while inhibiting ATP5B mRNA expression ( Fig. 5 F–G). Moreover, glucose deprivation decreased the amounts of mitochondrial respiratory complexes in MCF-7 cells treated with erastin, indicative of decreased mitochondrial oxidative phosphorylation (OXPHOS) activity ( Fig. 5 H). We further determined the mtDNA copy number by qPCR. Our results indicated that glucose deprivation caused an approximately 50% reduction in mtDNA copy number, and this effect was enhanced after treatment with erastin ( Fig. 5 I). The above results confirmed that glucose deprivation lowered the mitochondrial mass under erastin treatment, as evidenced by downregulation of PGC1α/ATP5B and TOM20. It has been reported that FoxO3a activation inhibits nuclear-encoded genes with mitochondrial functions. To further determine the inhibitory effect of FoxO3a on mitochondria-associated genes, we transduced MCF-7 cells with lentivirus containing FoxO3a shRNA to down-regulate FoxO3a. The qRT-PCR results showed that glucose deprivation significantly inhibited the expression levels of mitochondria-associated genes, including adenylate kinase 2 (AK2), mitochondrial transcription factor A (TFAM), peroxisome proliferator-activated receptor gamma co-activator-1β (PGC1β), PGC-1-related coactivator (PRC), fumarate hydratase (FH), and NADH:ubiquinone oxidoreductase subunit A6 (NDUFA6) ( Fig. 5 J–P). Silencing of FoxO3a significantly increased the expression of all mitochondria-related genes in the erastin and/or glucose-deprivation groups, and knockdown of FoxO3a induced TFAM expression while reducing PRC expression compared with the control group ( Fig. 5 L, O). Our data strongly suggest that glucose starvation inhibits erastin-induced ferroptosis at least partly through FoxO3a-mediated inhibition of mitochondrial gene expression. 3.5 Trifluoperazine, a potent FoxO3a agonist, protects rats from focal cerebral ischemia/reperfusion injury To investigate the function of FoxO3a in ferroptosis-associated pathological conditions in vivo , we identified the FoxO3a activator trifluoperazine (TFP), an FDA-approved antipsychotic agent, and used it in rats exposed to cerebral ischemia/reperfusion (CIR) injury. Rats were subjected to transient middle cerebral artery occlusion (MCAO) for 2 h, followed by reperfusion. Trifluoperazine (5.0 mg kg−1) was injected intraperitoneally 5 min after the induction of ischemia. Neurological scores and infarct volumes were evaluated at 48 h of reperfusion ( Fig. 6 A). 2,3,5-Triphenyltetrazolium chloride (TTC) staining showed that TFP treatment for 24 h from the onset of brain ischemia significantly reduced infarct volume after the induction of CIR in rats ( Fig. 6 B and C). Moreover, TFP improved the neurological behaviors ( Fig. 6 D). Given that AMPK directly phosphorylates FoxO3a at Ser413, leading to enhanced FoxO3a-dependent transcription, we found that TFP significantly increased the CIR-induced expression of AMPK, p-FoxO3a(Ser413) and FoxO3a ( Fig. 6 E–I). These results indicate that TFP exerts a protective effect on CIR injury at least partly through AMPK/FoxO3a activation. FoxO3a activation blocks the increase in ROS levels and inhibits HIF-1α mRNA after exposure to hypoxia. Consistent with this, we found that TFP treatment induced FoxO3a activation while abolishing HIF-1α induction during CIR in rats ( Fig. 6 E–I).
Moreover, histological analysis demonstrated that CIR blocked SLC7A11 expression, and TFP efficiently enhanced the CIR-induced downregulation of SLC7A11; this observation was further confirmed by Western blot and qRT-PCR ( Fig. 6 G–J). To investigate the role of FoxO3a in the regulation of SLC7A11, BV-2 cells were stimulated with erastin at different doses for 24 h. Western blot analysis showed that the expression of FoxO3a increased while SLC7A11 expression decreased in a dose-dependent manner ( Fig. 6 K). We then transfected BV-2 cells with a specific siRNA to inhibit the expression of FoxO3a; interestingly, knockdown of FoxO3a significantly decreased the basal levels of SLC7A11 ( Fig. 6 L). We further conducted ChIP-seq to identify the chromatin DNA bound by FoxO3a. Among the binding regions, 42.41% were in introns, 37.96% in intergenic regions, 11.16% in exons, 5.37% in gene promoters, 1.97% and 1.11% in 5′ and 3′ untranslated regions (UTRs), respectively, and 0.01% in coding sequences (CDS) ( Fig. 3 G). Moreover, we found two peaks (peaks 180924 and 180925) annotated in the promoter region of SLC7A11 ( Fig. 6 M–N). These results indicate that FoxO3a binds to the SLC7A11 promoter and inhibits its expression. 3.6 Pharmacological activation of FoxO3a by TFP restores the iron alterations, glutamate levels and lipid peroxidation in transient focal brain ischemia in rats Deprivation of oxygen and energy triggers an ischemic cascade, and iron overload exaggerates neuronal damage during reperfusion. We showed that TFP reduced the accumulation of Fe2+ induced by CIR ( Fig. 7 A). Lipid peroxidation was analyzed using a commercial MDA kit and 4-hydroxy-2-nonenal (4-HNE, a lipid peroxidation marker) staining; TFP treatment inhibited CIR-induced 4-HNE staining and MDA levels in the penumbral brain tissue ( Fig. 7 B and E). We also measured intracellular glutamate levels using a kit and found that the intracellular glutamate concentration was significantly decreased in the penumbral brain tissue, an effect that was reversed after TFP treatment ( Fig. 7 C). In addition, low expression of GPX4 indicates enhanced ferroptosis; our results showed that TFP treatment reversed the decline in GPX4 levels and enhanced CYCs expression during CIR ( Fig. 7 D, F–I). These results indicate that TFP treatment inhibited CIR-induced neuronal ferroptosis, potentially through an inhibitory effect on lipid peroxidation and glutamate excitotoxicity. To further determine whether AMPK/FoxO3a activation promotes mitochondrial fission, we performed Western blotting, which showed that, compared with the CIR group, TFP significantly increased the expression of mitochondrial FoxO3a and Drp1 in the penumbral brain tissue ( Fig. 8 A and B). Furthermore, qRT-PCR was used to detect the mRNA levels of mitochondria-related genes (LARS2, TFAM, PGC-1β, and PRC) in the penumbral brain tissues of rats in each group. Our results showed that the mRNA levels of LARS2 and PRC were significantly decreased, while the mRNA level of TFAM was significantly increased, in the penumbral brain tissue of rats after CIR compared with the sham group. TFP treatment further deepened the CIR-induced downregulation of LARS2, PGC-1β, and PRC while inhibiting CIR-induced TFAM expression ( Fig. 8 C–F). To investigate whether FoxO3a activation is associated with OXPHOS and PGC1α in response to CIR stress,
we investigated OXPHOS complex and PGC1α levels in the penumbral brain tissues. The results showed that total levels of OXPHOS complexes I and II were up-regulated in adaptation to CIR, while TFP treatment downregulated complex II, III and IV levels compared with the CIR group ( Fig. 8 G). We also found that the expression of PGC1α and TOM20 was decreased by TFP treatment in response to CIR stress ( Fig. 8 H). In effect, these results indicate that TFP modulated the FoxO3a-Drp1-mitochondrial fission pathway and mitochondrial function, which is of great significance for the neuroprotective effect after CIR. 4 Discussion Glucose deprivation can lead to energy stress and markedly increase the intracellular level of ROS [ 16 , 17 ]. Ferroptosis is induced by ROS-mediated lipid damage and is involved in many human diseases, including cancer, ischemia/reperfusion and traumatic brain injury [ 18 , 19 ]. Mitochondria are the main source of ROS and are also vital for regulating ferroptosis [ 4 ]. FoxO3a is involved in the regulation of mitochondrial function and energy homeostasis [ 20 , 21 ]. Previous studies found that mitochondrial dysfunction resulted in insufficient ATP supply and activation of the AMPK/FoxO3a pathway [ 22 , 23 ]. However, whether FoxO3a mediates mitochondrial activity in regulating ferroptosis remains unknown. In this study, we demonstrated that glucose deprivation reduced MMP hyperpolarization, mitochondrial mass, lipid peroxidation and ATP levels, resulting in resistance to ferroptosis induced by erastin. Energy stress stimulated AMPK/FoxO3a signaling, leading to induction of the FoxO3a target gene CYCs, inhibition of mitochondria-related genes and mitochondrial activity, and lowered levels of respiratory complexes. Finally, we show that the protective effect of TFP, an FDA-approved drug, on CIR injury in rats is related to its inhibition of ferroptosis, and the mechanism might involve its regulation of AMPK/FoxO3a/HIF-1α/SLC7A11 signaling and mitochondrial activity. AMPK is a critical sensor of cellular energy status that coordinates an adaptive response under energy stress. A recent study showed that energy stress activated AMPK and inhibited ACC, resulting in restrained fatty acid synthesis (FAS) and ferroptosis inhibition [ 6 ]. Moreover, FoxO3a activity, which is evoked by the AMPK pathway, plays a unique role in the energy stress response and ROS regulation. We provide evidence that AMPK phosphorylates FoxO3a at Serine 413 under energy stress, which enhances FoxO3a-dependent transcription and resists ferroptosis induced by erastin. Glucose deprivation enhances the gene expression of FoxO3a, which targets CYCs, while HO-1 and SOD2 are little regulated. Mitochondrial CYCs plays an anti-oxidative role in the generation and elimination of oxygen (O2) and H2O2 [ 24 ]. Our data suggest that FoxO3a might target mitochondrial CYCs, leading to reduced ROS in MEF and MCF-7 cells. Our data demonstrated that FoxO3a deficiency significantly increased these cells' susceptibility to ferroptosis and decreased erastin-induced CYCs expression. As a result, our findings suggest that energy-stress-mediated AMPK activation inhibits ferroptosis via FoxO3a/CYCs-dependent mechanisms. In this study, knocking down FoxO3a in MCF-7 cells significantly restored the sensitivity to ferroptosis induced by erastin, but not by RSL3, under energy stress. Mitochondria promote erastin-induced or cystine-starvation-induced, but not RSL3-induced, ferroptosis [ 4 ].
This possibility is supported by the fact that FoxO3a had multiple effects on mitochondrial function and regulated the energy-stress-mediated inhibitory effects on lipid peroxidation and ferroptosis. Previous studies showed that FoxO3a activation reduces mitochondrial capacity through inhibition of c-Myc and lowers the entry of pyruvate into the tricarboxylic acid (TCA) cycle by induction of PDK4 [ 8 ]. Given that the TCA cycle is essential for ferroptosis-associated MMP hyperpolarization [ 4 ], we subsequently used the JC-1 probe to determine the role of glucose deprivation in ferroptosis-associated MMP hyperpolarization. Our data suggest that glucose deprivation blocks erastin-induced MMP hyperpolarization. Furthermore, the fact that knocking down FoxO3a reversed the decline in ATP levels in MCF-7 cells caused by erastin and/or glucose deprivation, combined with the Seahorse analyzer results showing that FoxO3a knockdown significantly increased OCR, suggests that FoxO3a is involved in the inhibition of mitochondrial activity. It has been reported that FoxO3a inhibits mitochondrial gene expression by reducing c-Myc stability [ 8 ]. Our data clearly showed that glucose deprivation significantly inhibited the expression levels of mitochondria-associated genes (AK2, TFAM, PGC1β, PRC, FH, and NDUFA6), while silencing of FoxO3a increased the expression of all these mitochondrial genes and made cells more sensitive to ferroptotic death. These results are in line with the data that mitochondria-mediated ferroptosis might contribute to the antitumor function of FH, and that loss of FH function renders cells more resistant to ferroptosis [ 4 ]. Further, these results indicated that glucose deprivation activates FoxO3a, which also reduced the number of mitochondria and caused a contraction of the mitochondrial network. Consistent with previous findings [ 8 ], FoxO3a activation induced by glucose deprivation decreased the total levels of OXPHOS complexes in erastin-treated cells. We measured total levels of PGC1α, an important regulator of mitochondrial biogenesis [ 25 ], and the mitochondrial marker TOM20 in our studies. We found that glucose deprivation decreased the total expression of PGC1α and TOM20, indicating that energy stress decreased mitochondrial biogenesis under erastin treatment. Consistent with this hypothesis, we identified an FDA-approved antipsychotic agent, trifluoperazine (TFP), that was able to activate FoxO3a and inhibited mitochondria-associated genes, OXPHOS complexes and mitochondrial proteins in a CIR model. All these observations support the major role of FoxO3a in mitochondria-associated ferroptosis under energy stress. Ferroptosis is involved in ischemia/reperfusion injury of organs including the brain, heart and kidney, and thus represents a potential therapeutic target [ 26–28 ]. Previous studies showed that FoxO3a can inhibit HIF-1α-dependent gene expression by directly binding to HIF-1α [ 9 ]. In particular, ROS derived from mitochondria are required for the accumulation of HIF-1α in hypoxia [ 11 ]. HIF-1α knockout in the brain reduces hypoxic–ischemic damage [ 29 ]. Because our findings indicate that energy-stress-mediated FoxO3a activation inhibits ferroptosis via mitochondria-dependent mechanisms, we sought to elucidate the relationship between FoxO3a/HIF-1α activity and mitochondria-associated ferroptosis in a CIR model.
TFP may play a neuroprotective role in ischemic stroke through the FoxO3a/HIF-1α pathway, and we further show that triggering this pathway with TFP is associated with reduced 4-HNE staining and MDA levels in the penumbral brain tissue. Although previous studies have confirmed TFP's neuroprotective effect against CIR-induced injury [ 30 ], to our knowledge this is the first study to show that FoxO3a can be activated by TFP and inhibit ferroptosis in CIR injury. In addition, HIF-1α directly binds to the promoter of SLC7A11 and promotes long-lasting glutamate excitotoxicity, and conditional knockout of HIF-1α in rats reduced extracellular glutamate in CIR [ 12 ]. As expected, following ischemic stroke the expression of FoxO3a was significantly increased, while the expression of HIF-1α/SLC7A11 was significantly decreased, in the TFP-treated group compared with the control group. Moreover, we found that FoxO3a binds to the promoter of SLC7A11 and reduces CIR-mediated glutamate excitotoxicity by inhibiting the expression of SLC7A11. Many studies provide direct evidence supporting the hypothesis that inhibition of SLC7A11 induces ferroptosis and aggravates ischemia [ 31 , 32 ]. Cells with high SLC7A11 expression were more resistant to ferroptosis than cells with low SLC7A11 expression in CIR, and we found an inverse correlation between SLC7A11 expression and ferroptosis sensitivity. On the other hand, FoxO3a knockout studies in neural stem cells revealed that FoxO3a is involved in the regulation of hypoxia-dependent genes [ 33 ]. Hence, FoxO3a may take over some of HIF-1α's function in the adaptation to energy stress conditions. Previous studies showed that glucose deprivation induced mitochondrial fragmentation in MEFs [ 34 ]. Mitochondrial fission and the resulting fragmentation are key signals responsible for triggering the AMPK-FoxO3a signaling pathway [ 35 ]. Consistent with previous findings [ 5 ], glucose starvation resulted in rapid mitochondrial fragmentation in MCF-7 cells. Fragmented mitochondria are associated with increased production of ROS [ 36 ]. Previous studies showed that frataxin significantly enhanced erastin-induced ferroptosis, resulting in dramatic mitochondrial morphological changes, including a decreased number of cristae and enhanced fragmentation [ 37 ]. However, other studies showed that STING1 promotes ferroptosis through mitofusin 1/2 (MFN1/2)-dependent mitochondrial fusion [ 38 ]. The opposing conclusions proposed by different studies may be due to different cell lines and different pathological conditions or diseases. The molecular mechanisms of mitochondrial dynamics and function in ferroptosis deserve further investigation. It has been reported that AMPK promotes mitochondrial fission by phosphorylating mitochondrial fission factor (MFF), the mitochondrial receptor of Drp1 [ 39 ]. Glucose deprivation induces mitochondrial fragmentation, which depends on Drp1 [ 40 ]. Mitochondrial fission has been linked to mitochondrial remodeling during FoxO3a-mediated muscle atrophy [ 41 ]. We observed induction of the fission regulators Drp1 and p-MFF upon FoxO3a activation induced by TFP treatment. Furthermore, TFP treatment inhibited mitochondria-related gene expression in the penumbral brain tissue of rats after the induction of CIR. 
We propose that induction of mitochondrial fission regulators and downregulation of mitochondria-related gene expression contribute to the morphological changes following FoxO3a activation, which are responsible for the alterations in mitochondrial activity and the resistance to ferroptosis. In conclusion, our study reveals that energy-stress-mediated activation of AMPK/FoxO3a signaling inhibited ferroptosis through inhibition of mitochondria-related gene and OXPHOS complex expression and promotion of mitochondrial CYCs expression; glucose deprivation reduced MMP hyperpolarization, oxygen consumption, mitochondrial mass, lipid peroxidation and ATP levels, resulting in resistance to ferroptosis induced by erastin in MEFs and MCF-7 cells. Our study also reveals that TFP, a novel FoxO3a activator, reduced ferroptosis-associated CIR injury in rats through repression of HIF-1α/SLC7A11 expression and mitochondrial activity. The regulation of FoxO3a by AMPK may play a crucial role in the mitochondrial gene expression that controls energy balance and confers resistance to mitochondria-associated ferroptosis in vitro and in vivo . Ethics approval and consent to participate The animal experiments were performed according to internationally followed ethical standards and approved by the research ethics committee of Jiangxi University of Chinese Medicine. Availability of data and materials All data generated or analyzed during this study are included in this published article. Declaration of competing interest All the authors declare no conflicts of interest. Acknowledgment This work is supported by the National Natural Science Foundation of China (No. 81803536 ) and the Scientific Research Foundation of Peking University Shen Hospital ( KYQD202100X ; KYQD2023254 ). Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.redox.2023.102760 .
REFERENCES:
1. KIM S (2021)
2. YI J (2020)
3. ZILLE M (2017)
4. GAO M (2019)
5. NEITEMEIER S (2017)
6. LEE H (2020)
7. HERZIG S (2018)
8. FERBER E (2012)
9. EMERLING B (2008)
10. LEE H (2017)
11. CHANDEL N (2000)
12. HSIEH C (2017)
13. DIXON S (2014)
14. OLMOS Y (2009)
15. LI M (2023)
16. NOGUEIRA V (2013)
17. HAY N (2016)
18. SHEN Z (2018)
19. SHEN L (2020)
20. ZHOU L (2017)
21. MANLEY S (2014)
22. WANG M (2021)
23. APOSTOLOVA N (2020)
24. ZHAO Y (2003)
25. CHATTERJEE A (2022)
26. TUO Q (2022)
27. ZHAO Z (2020)
28. CHEN Y (2021)
29. HELTON R (2005)
30. KURODA S (1997)
31. DONG H (2020)
32. WANG Z (2022)
33. RENAULT V (2009)
34. RAMBOLD A (2011)
35. HU R (2020)
36. QI X (2013)
37. DU J (2020)
38. LI C (2021)
39. TOYAMA E (2016)
40. ZHENG F (2019)
41. ROMANELLO V (2010)
|
10.1016_j.rcsop.2022.100188.txt
|
TITLE: The use of special approval medicines among pediatric patients in a tertiary care hospital: A reality check
AUTHORS:
- Balan, Shamala
- Koo, Kaitian
- Muhamad Hamdan, Muhamad Danial
- Lee, Su Vin
ABSTRACT:
Background
Special approval medicines (SAMs) are medicines used with approval from the Director General of Health Malaysia when the therapeutic options within regulatory and formulary boundaries appear unsuitable or ineffective to treat the patients.
Objectives
To examine and characterize the use of SAMs among children in a Malaysian tertiary care hospital.
Methods
The named-patient basis SAM application forms, cover letter, pharmacist review summary and patient monitoring forms available at the Pharmacy Department between 1st January 2019 and 31st December 2020 were reviewed. Unprocessed, unapproved and stock-basis applications were excluded. The outcome measures were categories, scope, off-label use and cost of SAM. Per-patient data were analyzed descriptively.
Results
Overall, 1010 patients (mean age of 8.7 ± 5.6 years) were involved in 328 SAMs applications. The most common SAMs pharmacological groups were nervous system (n = 371, 36.7%) and antineoplastic and immunomodulating agents (n = 332, 32.9%). Top three SAMs were melatonin (11.5%), scopolamine (7.6%) and cholecalciferol (7.1%). A total of 837 (82.9%) and 513 (50.8%) patients were involved in the SAMs applications for non-formulary and unregistered medicines, respectively. Unregistered, non-formulary medicines were applied for 47.3% (n = 478) of the patients. The majority of the scope for SAMs (64.7%) were to substitute the available alternatives in the national formulary which were ineffective or sub-optimal for the patients. Among the 262 patients with repeat applications, 93.8% reported disease or symptom improvement while 1.9% experienced side effects. Up to 17% of SAMs analyzed in this study were used for off-label indications. The total cost of the SAMs was RM8,748,358.38 (USD 2,090,418.86).
Conclusion
The use of SAMs among children in this hospital involved unregistered, non-formulary medicines used to substitute the available alternatives in the formulary. A concerted effort is warranted in exploring supplementary mechanisms to enhance the medicine registration process and formulary system towards facilitating enhanced provision of treatment for children.
BODY:
1 Introduction In most countries, the choice of medicines used to treat a disease is often ‘controlled’ by the registration status of the medicine and the formulary system. However, the use of unregistered and non-formulary medicines is sometimes the cornerstone of the management of certain diseases 1 or populations. 2 , 3 Furthermore, off-label use of registered, unregistered or non-formulary medicines is another spectrum of concern, especially in pediatric patients. 4 Pediatric patients are prescribed an average of 2 to 5 medicines per prescription, 5 , 6 which may increase to 7 and 12 medicines per prescription for terminally ill patients 7 and patients in tertiary care centres, 8 respectively. When treating pediatric patients, prescribers tend to use therapeutic options outside regulatory and formulary boundaries when the existing alternatives are unsuitable or ineffective for treating their patients. 4 , 9 Problems with access to and supply of unregistered or non-formulary medicines are potentially a barrier to providing optimal care for patients. 10 , 11 , 12 It was reported that 22% of pediatric patients admitted for highly specialised inpatient care were prescribed unregistered medicines. Various mechanisms exist in different countries to allow the importation and use of unregistered medicines. In Singapore, the Health Sciences Authority (HSA) allows the importation or supply of unregistered medicines as a named-patient or buffer-stock application via the Special Access Route (SAR). 3 Similarly, in Australia, the Special Access Scheme (SAS) allows the importation and supply of an unregistered medicine for an individual patient under the supervision of a medical practitioner, on a case-by-case basis. 13 On the other hand, a ‘single permit for import of drug’ can be obtained from the Ministry of Public Health of the People's Republic of China for the import of medicines without import registration certificates. 14 Although some variation exists among these mechanisms, the ultimate goal remains the same: to ensure the safety, efficacy and quality of therapeutic options that bypass the regulatory pathways for medicine registration. 15 The use of non-formulary medicines has been reported worldwide in previous studies, 16 whereby up to 20% of hospitalised patients were prescribed non-formulary medicines. 17 In children, non-formulary medicines accounted for 13.4% of total prescriptions, mostly for patients in the general pediatric wards. 18 Non-formulary medicines were prescribed when conventional therapies had failed 4 or patients had developed adverse reactions to medicines within the formulary. 2 However, the provision of non-formulary medicines in the hospital setting has been shown to incur additional pharmacy costs. 16 , 19 In some situations, registered and formulary medicines are used for unapproved indications, commonly known as unlicensed or off-label use of medicines. The use of unlicensed or off-label medicines is common in the pediatric population, especially in oncology and critical care settings. 4 This is attributable to the limited availability of suitable pharmaceutical dosage forms and the lack of scientific evidence to guide medicine choices in the pediatric population. 
5 Furthermore, the lack of clinical trials in pediatric settings, as a result of the high heterogeneity in pharmacokinetic parameters as well as the ethical and legal issues surrounding research, has also contributed to the unlicensed and off-label use of medicines among this group of patients. 4 , 20 Consequently, unlicensed and off-label medicines are used as first-line agents to treat pediatric patients. 21 In Malaysia, the National Pharmaceutical Regulatory Agency (NPRA) is the regulatory body responsible for registering medicines that have fulfilled the registration requirements determined by the Drug Control Authority (DCA). In line with the objective of the Malaysian National Medicines Policy (MNMP), the Ministry of Health Medicines Formulary (MOHMF) serves as a reference for medicine prescribing in Ministry of Health (MOH) facilities by emphasizing the priority of using registered medicines to promote equitable access to safe, effective and good-quality medicines. However, the use of medicines outside the MOHMF (hereinafter referred to as non-formulary medicines) or of unregistered medicines is allowed, in justified circumstances, with special approval from the Director General of Health Malaysia, the Senior Director of Pharmaceutical Services, the Director of Pharmacy Practice and Development, Hospital Directors or Family Medicine Specialists (FMS) in charge of Health Clinics. This group of medicines is known as Special Approval Medicines (SAMs). Generally, SAMs in Malaysia comprise non-formulary medicines, medicines within the MOHMF used for off-label indications, and unregistered medicines. Overall, the approvals for and total cost of SAMs in MOH facilities showed a three-fold increase from 2016 to 2020. 22 Most of the approved SAMs from 2016 to 2020 were non-formulary registered medicines, with about 41% in 2020 attributed to unregistered medicines. 23 Although the use of unregistered, non-formulary and off-label medicines in pediatric patients appears inevitable, it was reported to be inappropriate in some cases, to have a potential for interactions 24 and to carry a significantly higher risk of adverse drug reactions (ADR), 4 urging the need to examine the use of these medicines in hospital settings. To the best of our knowledge, there are no studies examining the use of SAMs in Malaysia, leading to a scarcity of information on the use of SAMs, particularly in pediatric patients. Hence, this study was conducted to characterize the use and identify the scope of SAMs at a tertiary care children's hospital in Malaysia. The data will be of paramount importance to healthcare policy makers for decision-making on the inclusion of medicines in the MOHMF in the future and for exploring alternative mechanisms enabling access to and supply of SAMs in the country. 25 2 Methods 2.1 Ethical consideration The study was approved by the ethics committee for Ministry of Health (MOH) facilities in Malaysia, the Medical Research and Ethics Committee (MREC) (NMRR-21-738-59630). Approval to conduct the study was obtained from the hospital director and the head of department. Patient identifiers were kept confidential. The study was reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations. 26 2.2 Operational definitions a. 
Special approval medicines (SAMs) are medicines used in MOH facilities with special approval from the Director General of Health Malaysia, the Senior Director of Pharmaceutical Services, the Director of Pharmacy Practice and Development, Hospital Directors or FMS in charge of Health Clinics. These medicines include: i. Registered, non-formulary medicines ii. Registered, formulary medicines used for indications outside those approved by the DCA or MOHMF iii. Unregistered, formulary medicines iv. Unregistered, non-formulary medicines. b. A registered medicine is a medicine that is approved by the DCA for sale or use in Malaysia. 27 2.3 Study design and setting The conceptualisation of the study was done through mapping of key terms, definitions and constructs of SAM application ( Fig. 1 ). The cross-sectional retrospective study was conducted in a tertiary care hospital in the central region of Peninsular Malaysia. The 600-bed hospital functions as the national referral centre and Centre of Excellence for women's and children's diseases. 2.4 Eligibility criteria Named-patient basis SAM applications for patients below 19 years old, received by the Pharmacy Department between 1st January 2019 and 31st December 2020, were included. Additionally, SAM applications for patients above 18 years old who were still under the care of pediatricians were also included. Unprocessed, unapproved and stock-basis SAM applications were excluded from the study. 2.5 Sample size estimation The hospital formulary is a subset of the MOHMF. The total number of medicines available in the hospital formulary was 1219 items (according to the list updated on 28th May 2020). Of these, a total of 161 medicines were SAMs. The Pharmacy Department received an average of 150 SAM applications per year. Therefore, the estimated number of SAM applications during the 2-year study duration was 300. 2.6 Data collection The study source documents were the SAM application forms, the cover letter provided by the applicant, the pharmacist review summary, patient monitoring forms and SAM approval documents. A structured and piloted data collection form was used to collect patients' demographic data, SAM details, SAM application details, cost of SAM, and category as well as scope of use of SAM. The SAMs were classified according to the World Health Organization (WHO) Anatomical Therapeutic Chemical (ATC) classification. In the absence of an exact WHO ATC classification for a particular SAM, the pharmacological grouping was based on the route of administration with clearly different therapeutic uses, verified in discussion with the respective pediatric specialists. All SAMs except unregistered, non-formulary medicines were examined for off-label use. A SAM was considered to be used in an off-label manner if the indication of the medicine was outside those approved by the DCA or listed in the MOHMF. The off-label categories were classified as: i. Off-label DCA: SAM used outside the indication approved by the DCA ii. Off-label MOHMF: SAM used outside the indication listed in the MOHMF iii. Off-label DCA and MOHMF: SAM used outside the indication approved by the DCA and listed in the MOHMF The scope of use of each SAM was determined by critically reviewing the SAM application process, the indication for the SAM and the rationale for the SAM application. The classification of the scope of use of SAM is as stated below: i. 
Based on the medicine: • A life-saving medicine that is not intended to continue treatment commenced at a non-MOH facility, or a sample medicine. • A medicine with a specific indication for which no alternative is available in the MOHMF. ii. Based on the patient: • The patient had been treated with all available and suitable alternatives in the MOHMF, but the treatment was ineffective or suboptimal for the patient. • The patient developed an adverse effect or drug interaction, or could not tolerate treatment, with all available and suitable alternatives in the MOHMF. Two researchers reviewed the source documents to determine the classification of the scope of use of SAM. Two other researchers reviewed and validated the classifications. Discrepancies were resolved through discussion and consensus among the researchers. The patient monitoring forms accompanying repeat SAM applications were further examined for the occurrence of side effects and reports of disease or symptom improvement. The costs incurred in purchasing SAMs were primarily funded by the MOH. For cost calculation, the unit cost for each medicine was calculated by dividing the pack price by the pack size to determine the cost per tablet or capsule. The cost of oral liquids was based on the number of bottles needed for the requested duration (up to 12 months). The cost of parenteral medicines was based on the number of ampoules or vials required to administer the prescribed dose for the requested duration. The total cost of a SAM was calculated by multiplying the unit cost by the quantity of medicine required for the requested duration. The unit cost of the medicine was based on the price quotation (lowest quoted price) enclosed with the SAM application form. The quantity of medicine required was extracted from the SAM application form. Manual calculation of the quantity of SAM was based on the dose, dosing frequency, duration of treatment and shelf-life of medicines (for diluted or reconstituted medicines). The manual calculations of the quantity of SAM were randomly checked by the researchers. Discrepancies were resolved through consensus discussion between the researchers. SAMs purchased by the patients (out-of-pocket) or borne by funding sources other than the MOH were excluded from the cost calculation. Costs were expressed in 2022 Malaysian Ringgit (MYR) and United States Dollars (USD). 2.7 Statistical analysis Descriptive analysis of per-patient data was conducted using the Statistical Package for the Social Sciences (SPSS) version 24. Continuous variables were summarized using mean (standard deviation) or median (interquartile range) depending on data normality. Categorical variables were represented using frequencies and percentages. 3 Results In total, 328 applications corresponding to 74 types of SAMs ( Table 1 ) for 1010 patients with a mean age of 8.7 ± 5.6 years were analyzed ( Table 2 ). As per-patient data were used for analysis, the denominator for percentage calculations was the total number of patients, i.e., 1010. The most common SAM dosage forms were tablets or capsules ( n = 556, 55%), injectables ( n = 215, 21.3%) and liquid oral formulations ( n = 97, 9.6%). The profile of SAMs by the Anatomical Therapeutic Chemical (ATC) classification system is shown in Table 3 . The most common pharmacological group was nervous system ( n = 371, 36.7%), followed by antineoplastic and immunomodulating agents ( n = 332, 32.9%). 
The top three SAMs were melatonin ( n = 116, 11.5%), scopolamine ( n = 77, 7.6%) and cholecalciferol ( n = 72, 7.1%). SAMs used for off-label indications were analyzed for 532 (52.7%) patients. The off-label status per the DCA, the MOHMF, and both the DCA and MOHMF was 16.7%, 6.8% and 4.9%, respectively. The category and scope of SAMs are shown in Table 4 . Unregistered, non-formulary medicines were applied for in 47.3% ( n = 478) of the patients, while unregistered, formulary medicines were applied for in 3.5% ( n = 35) of the patients. This resulted in overall SAM applications for unregistered medicines for 513 patients (50.8%). On the other hand, registered and unregistered non-formulary medicines were applied for in 359 (35.5%) and 478 patients (47.3%), respectively. In total, 837 patients (82.9%) were involved in SAM applications for non-formulary medicines. The majority of the SAMs (64.7%) were applied for to substitute available alternatives in the MOHMF that were ineffective or sub-optimal for the patients ( Table 4 ). The proportions of new and repeat SAM applications were 83.8% and 16.2%, respectively. Among the 262 patients with repeat applications, 93.8% reported disease or symptom improvement while 1.9% experienced side effects. The total cost of SAMs was MYR 8,748,358.38 (USD 2,090,418.86) ( Table 3 ). 4 Discussion This study examined the use and characterized the scope of SAMs in a Malaysian tertiary care children's hospital. The findings revealed that drugs acting on the nervous system as well as antineoplastic and immunomodulating agents were the most commonly used SAMs in the study population. Despite efforts to diversify the treatment of cancer and neurological disorders in children, 28 approved or licensed treatment options for these diseases remain inadequate. 29 , 30 Up to 17% of SAMs analyzed in this study were used for off-label indications. This is similar to the proportion of off-label prescriptions due to indication (about 20%) reported by studies on off-label use of medicines in pediatric patients in various healthcare settings. Using a SAM for an off-label indication calls for heightened patient autonomy and places additional responsibilities on the prescriber. Although obtaining informed consent from the patient for off-label use is recommended in Malaysia, 31 other approaches have been suggested in the literature 32 to help prescribers navigate the medico-legal landscape when engaged in off-label prescribing. 33 In the current study, 82.9% and 50.8% of the patients were involved in SAM applications for non-formulary and unregistered medicines, respectively. This parallels the national trend in SAM applications for non-formulary and unregistered medicines of about 83.7% and 40.6%, respectively. A high proportion of SAM applications for non-formulary medicines may signal the need to develop a national pediatric formulary encompassing the best available evidence from registration data, investigator-initiated research, clinical experience and consensus. 23 Besides developing a new pediatric formulary, extension of an existing pediatric formulary with the addition of country-specific information to address country-specific needs has also been shown to be successful. 34 , 35 About 65% of the SAM applications were submitted to substitute alternatives in the formulary that were ineffective or sub-optimal for the patients. 
This finding contradicts data reported by another study conducted in Spain, in which the most common cause of non-formulary prescription was the unavailability of a formulary therapeutic alternative. Side effects of SAMs were reported in about 2% of the patients with repeat applications. An analysis of ADR reports up to the year 2019 at the same study site showed that about 8% of the ADR reports involved SAMs. 18 Given the potential for ADRs in pediatric patients receiving SAMs, the development of ADR reporting forms suitable for reporting cases related to the use of SAMs is warranted. 36 To the best of our knowledge, this was the first study conducted to examine the use of SAMs at a tertiary care children's hospital in Malaysia. The strength of the study lies in the generation of evidence using real-world data. This study signposted that the Malaysian regulatory and formulary boundary could be reengineered to include effective and safe alternatives for children. This study, however, is subject to several limitations. The cost-saving strategies employed at the study site were not taken into account in the drug cost calculation. The occurrence of side effects and reports of disease or symptom improvement were obtained from the study's source documents and lack verification against patients' progress notes or adverse drug reaction reports. To obtain valuable clinical relevance, future research should identify the effectiveness and risks of SAMs in children using a “real-world approach” of effectiveness studies. This could be achieved by conducting Hypothesis Evaluating Treatment Effectiveness (HETE) studies involving the SAMs profiled in this study. First, the most commonly used SAMs with the highest cost implications need to be identified. Following this, the critical clinical parameters and cutoffs that define sufficient efficacy and unacceptable safety for the identified SAMs should be established. Once the mapping of each SAM to its relevant efficacy and safety parameters is established, structured data collection can be implemented as routine practice to evaluate whether a treatment effect observed under a controlled environment gives the same result in the real world. These results may lead to real-world, evidence-based treatment recommendations that can be implemented to benefit healthcare provision. 37 5 Conclusion The SAMs in this tertiary care children's hospital in Malaysia involved unregistered, non-formulary medicines used as substitutes for the available alternatives in the MOHMF. A concerted effort is warranted in exploring supplementary mechanisms to reconstruct the medicine registration process and formulary system towards facilitating enhanced provision of treatment for the pediatric population. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. CRediT authorship contribution statement Shamala Balan: Conceptualization, Formal analysis, Writing – review & editing, Supervision, Project administration. Koo Kaitian: Methodology, Data curation. Muhamad Danial Muhamad Hamdan: Investigation, Writing – original draft. Lee Su Vin: Investigation, Writing – original draft. Declaration of Competing Interest The authors declare no competing interests. Acknowledgement The authors would like to thank the Director General of Health Malaysia for permission to publish this paper.
REFERENCES:
1. HILLOCK N (2020)
2. INGLIS J (2019)
3. KOOBLAL Y (2016)
4. TRAMONTINA M (2013)
5. BALAN S (2018)
6. ALDABAGH A (2022)
7. THIRUTHOPU N (2014)
8. EFRAIM J (2022)
9. FLOTATSBASTARDAS M (2020)
10. RINALDI V (2021)
11. NOHAVICKA L (2021)
12. ELAHI E (2021)
13. HEALTHSCIENCESAUTHORITY
14. DONOVAN P (2017)
15. MINISTRYOFCOMMERCEPEOPLESREPUBLICOFCHINA
16. HER Q (2017)
17. BARCELOVIDAL J (2021)
18. RODRIGUEZCARRERO R (2012)
19. SWEET B (2001)
20. NAPOLEONE E (2010)
21. SANTOS D (2008)
22. PHARMACEUTICALSERVICESP (2017)
23. PHARMACEUTICALSERVICESPROGRAMMEMOH
24. HER Q (2016)
25. PRATICO A (2018)
26. VANDENBROUCKE J (2007)
27. NATIONALPHARMACEUTICALREGULATORYAGENCY
28. BARONE A (2019)
29. MUELLER S (2019)
30. MANJESH P (2021)
31. ALLEN H (2018)
32. PHARMACEUTICALSERVICESPROGRAMMEMOFHM
33. SYED S (2020)
34. VANDERZANDEN T (2017)
35. VANDERZANDEN T (2019)
36. BALAN S (2022)
37. BERGER M (2017)
|
10.1016_j.jtocrr.2022.100316.txt
|
TITLE: Comparison of 2-Weekly Versus 4-Weekly Durvalumab Consolidation for Locally Advanced NSCLC Treated With Chemoradiotherapy: A Brief Report
AUTHORS:
- Denault, Marie-Hélène
- Kuang, Shelley
- Shokoohi, Aria
- Leung, Bonnie
- Liu, Mitchell
- Berthelet, Eric
- Laskin, Janessa
- Sun, Sophie
- Zhang, Tina
- Melosky, Barbara
- Ho, Cheryl
ABSTRACT:
Introduction
Durvalumab 10 mg/kg every 2 weeks for 1 year after chemoradiation has improved overall survival (OS) in unresectable stage III NSCLC. Subsequently, a 20 mg/kg 4-weekly regimen was approved. The study goal was to compare the efficacy and toxicity of the two regimens.
Methods
All patients with NSCLC treated with curative-intent chemoradiation followed by durvalumab from March 1, 2018 to December 31, 2020 at BC Cancer, British Columbia, Canada were included in this retrospective review. Durvalumab dosing schedule, toxicity, progression, and OS were collected. Comparisons between treatment groups were made using chi-square and independent t tests. Kaplan-Meier curves and log-rank test were used to analyze OS.
Results
A total of 152 patients were included in the 2-weekly group and 53 patients in the 4-weekly group. The median follow-up was 19.7 months and 12.0 months, respectively. The median OS was not reached, but 12-month survival rates were 88.4% versus 85.2% (p = 0.55). Toxicity profiles were similar in terms of sites and severity.
Conclusions
There was no significant difference in efficacy or toxicity between the 2-weekly and 4-weekly durvalumab in this cohort of patients with advanced NSCLC previously treated with curative-intent chemoradiation.
BODY:
Introduction Stage III NSCLC is a heterogeneous group characterized by locally invasive tumors or multiple tumor nodules in the same lobe, with or without mediastinal adenopathy. The potential for cure depends on the feasibility of surgical resection and the ability to encompass disease within a radiation field. Despite curative-intent treatment, 5-year survival remains poor, from 36% in stage IIIA to 13% in stage IIIC. 1 Recently, the PACIFIC (A Global Study to Assess the Effects of MEDI4736 Following Concurrent Chemoradiation in Patients With Stage III Unresectable Non-Small Cell Lung Cancer) trial reported that adjuvant durvalumab 10 mg/kg given every 2 weeks for 1 year after chemoradiation was associated with improved overall survival (OS) compared with placebo (5-year OS 42.9% versus 33.4%, hazard ratio [HR] 0.72). 2 , 3 , 4 Durvalumab was initially approved for stage III NSCLC with the 2-weekly dosing in February 2018. The European Medicines Agency and the Food and Drug Administration labels were amended in January and February 2021, respectively, to allow the 4-weekly dosing schedule at a 1500-mg flat dose. 5 In Canada, weight-based dosing was permitted on the basis of a pharmacokinetic model built with individual patient data (n = 1409) from two large trials in NSCLC and other solid tumors. 6 Results revealed that 1500-mg 4-weekly dosing led to similar median steady-state exposure, variability, and incidence of extreme concentration values compared with weight-based or fixed 2-weekly regimens. Moreover, this dosing schedule was successfully used in the CASPIAN (Durvalumab plus platinum–etoposide versus platinum–etoposide in first-line treatment of extensive-stage small-cell lung cancer: a randomised, controlled, open-label, phase 3 trial) trial, which reported improved OS with the addition of durvalumab to platinum-etoposide in the treatment of extensive-stage SCLC. 7 , 8 Whereas the pharmacokinetic data suggest that 2-weekly and 4-weekly administration intervals are equivalent, the clinical impact is unknown. This study aimed to compare the two dosing schedules in terms of efficacy and toxicity in patients with advanced, unresectable NSCLC. Materials and methods BC Cancer is a provincial cancer care program that serves a population of 5.1 million residents in British Columbia (BC). BC Cancer operates within a single-payer health care system and, as a result, has complete records on the billing and prescribing of all cancer therapies in BC. A retrospective chart review of all patients with NSCLC treated with curative-intent chemoradiation between March 1, 2018 and December 31, 2020 was conducted. All patients who received at least one dose of durvalumab were included. Data on demographics, diagnosis, durvalumab dosing schedule, treatment, progression, survival, and toxicity were collected. Patients were divided into two groups, 2-weekly and 4-weekly, according to the dosing schedule that was used for most (>50%) of the treatment. The administration schedule was at the treating physician's discretion, and consent for the treatment plan and schedule was obtained per institutional practice. Crossover was defined as switching from one administration schedule to the other at some point during durvalumab treatment. Crossover rates were collected for both treatment groups; however, crossover patients were not analyzed as a separate group. Dosing was weight-based for both regimens. 
The PACIFIC patient support program for durvalumab was offered by AstraZeneca Canada with the 2-weekly dosing from May 2018 to December 2019. The 2-weekly dosing was launched in February 2020 and the 4-weekly in April 2020 at BC Cancer. The primary outcome was OS, defined as the time from the date of the first durvalumab treatment to the date of death. Secondary outcomes were real-world progression-free survival (PFS), progression pattern, reasons for stopping treatment, and adverse events. Real-world PFS was the time between the date of the first durvalumab treatment and progression identified on imaging, performed at the discretion of the attending physician. Adverse events were graded according to the Common Terminology Criteria for Adverse Events version 5.0 and classified by organ system. Clinically relevant toxicity was defined as toxicity that caused a missed dose, treatment cessation, or hospital admission. Pulmonary toxicity was divided into three categories: immune-mediated, radiation-mediated, and mixed (unclear between the first two mechanisms). 9 Comparisons were made using the chi-square test for categorical variables and independent t tests for continuous variables. Kaplan-Meier curves and the log-rank test were used to analyze OS. A multivariable survival model was built with the demographic, diagnostic, chemotherapy-related, and radiation-related variables that were significantly associated with survival in univariate analyses. The data cutoff date was July 23, 2021. For all the analyses, the statistical significance threshold was a p value less than 0.05. This study received approval from the local institutional research ethics board (University of British Columbia—BC Cancer Research Ethics Board; H19-02361), and approval was given for a waiver of consent to extract and analyze the archival data from the database. Results Between March 1, 2018 and December 31, 2020, a total of 453 patients with NSCLC were treated with chemoradiotherapy at BC Cancer. Of those, 205 patients who had at least one dose of durvalumab were identified. A total of 152 patients belonged to the 2-weekly group and 53 to the 4-weekly group. Patient characteristics were well balanced between groups ( Table 1 ). Crossover between the two regimens was less frequent in the 2-weekly (7.2%) than in the 4-weekly group (52.8%) ( p < 0.001). Programmed death-ligand 1 tumor proportion score was unknown for 48.8% of the study population and was at least 50% for 27.0% and 24.5% of the 2-weekly and 4-weekly groups, respectively. EGFR mutation and ALK and ROS-1 fusion status were unknown. In the 2-weekly versus the 4-weekly group, 90.7% versus 94.3% had two cycles of chemotherapy, and 95.4% versus 96.2% received a minimum of 60 Gy. The median time between radiation completion and durvalumab start was 40 and 43 days, respectively. The durvalumab median (range) cumulative dose was 180 (10–270) mg/kg in the 2-weekly group and 180 (20–270) mg/kg in the 4-weekly group ( p = 0.91). At data cutoff on July 23, 2021, after a median follow-up of 19.7 months (2-weekly) and 12.0 months (4-weekly), 50 patients had died. The median OS was not reached in either group; the HR for death was 1.31 (95% confidence interval [CI]: 0.54–3.15, p = 0.550) ( Fig. 1 ). The 12-month survival rates were similar (88.4% versus 85.9%). We performed univariate analyses with all demographic, diagnostic, chemotherapy-related and radiation-related variables from Table 1 and included significant variables (age, male sex and cisplatin-based chemotherapy) in a multivariable survival model. 
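As an illustration only, the survival comparison just described (Kaplan-Meier estimation, log-rank test, and a multivariable Cox model) could be set up with the Python lifelines package roughly as follows; the file name and column names (time_months, death, schedule_4wk, age, male, cisplatin) are hypothetical placeholders, not the study's actual variables.

```python
# Hedged sketch of the survival analysis described in the Methods; not the authors' code.
# Assumed per-patient columns (hypothetical): time_months, death (1 = died, 0 = censored),
# schedule_4wk (1 = 4-weekly, 0 = 2-weekly), age, male, cisplatin.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("durvalumab_cohort.csv")  # hypothetical file name

# Kaplan-Meier estimates by dosing schedule
groups = {label: sub for label, sub in df.groupby("schedule_4wk")}
km = KaplanMeierFitter()
for label, sub in groups.items():
    km.fit(sub["time_months"], event_observed=sub["death"], label=f"schedule_4wk={label}")
    print(label, "12-month survival:", float(km.predict(12)))

# Log-rank test between the two schedules
res = logrank_test(
    groups[0]["time_months"], groups[1]["time_months"],
    event_observed_A=groups[0]["death"], event_observed_B=groups[1]["death"],
)
print("log-rank p =", res.p_value)

# Multivariable Cox model with the covariates retained from univariate analyses
cph = CoxPHFitter()
cph.fit(df[["time_months", "death", "schedule_4wk", "age", "male", "cisplatin"]],
        duration_col="time_months", event_col="death")
cph.print_summary()
```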
Adjusted for age (HR = 1.02 [95% CI: 0.98–1.06], p = 0.41), male sex (HR = 1.92 [95% CI: 1.03–3.58], p = 0.04), and cisplatin-based chemotherapy (HR = 0.38 [95% CI: 0.18–0.80], p = 0.01), the durvalumab HR for death was 1.49 ([95% CI: 0.62–3.60], p = 0.38). Median real-world PFS was not different between groups (21.3 versus 17.7 months; HR = 1.03 [95% CI: 0.62–1.73], p = 0.90), with 12-month PFS rates of 63.8% versus 66.2%. Progression occurred in 48.0% versus 34.0% of patients ( Table 1 ). The median durvalumab treatment duration was 9.1 versus 8.8 months. The toxicity profiles of the two regimens were similar ( Table 2 ). All-grade adverse events occurred in 58.7% versus 55.6% of patients in the 2-weekly and 4-weekly groups, respectively, with grade 3 or worse events in 12.7% versus 11.6%. Clinically relevant toxicity was observed in 34.0% versus 38.5%. Lung and gastrointestinal adverse effects were the most common grade 3 or higher toxicities in both groups. One case of mixed radiation and immune-related pneumonitis led to death in the 2-weekly group. There were no toxicity-related deaths in the 4-weekly group. Discussion The 4-weekly dosing schedule for stage III NSCLC durvalumab consolidation after chemoradiotherapy is accepted in standard practice on the basis of pharmacokinetic data. Our real-world study compared the 2-weekly and 4-weekly dosing intervals for consolidative durvalumab and identified no differences in OS or toxicity. This supports the pharmacokinetic analysis suggesting that both regimens are equally effective and safe. The findings of the present study reveal that the 12-month survival rates were similar. The HR for death was 1.31 favoring the 4-weekly dosing; however, this is likely attributable to differences in duration of follow-up. In the multivariable analysis incorporating other variables that significantly impacted survival on univariate analyses, the association between durvalumab schedule and OS remained nonsignificant. Median real-world PFS in the 2-weekly and 4-weekly groups was similar. The median real-world PFS in both groups was longer, and the proportion of metastatic recurrences higher, than in the PACIFIC trial, 3 because of the lack of standardized imaging follow-up in this observational study. 4 The immune-related toxicity rates with the 2-weekly and 4-weekly dosing were similar. With other immunotherapy agents, there has been an association between grade 3 or higher toxicity and dosing. Our data do not raise a concerning signal with the higher 4-weekly dose. The most common grade 3 or higher toxicity was pneumonitis, which is unsurprising in the light of recent chemoradiotherapy. 10 The strengths of our study include the real-world population of patients receiving combined-modality chemoradiotherapy followed by durvalumab and the completeness of follow-up owing to the provincial oversight of cancer treatment. The lack of standardized imaging follow-up and adverse event reporting is a limitation inherent to the retrospective design. The small number of patients and the shorter follow-up in the 4-weekly group may also have impacted the rates of immune-related adverse events and the OS analysis. Finally, the high crossover rate in the 4-weekly group might have affected the estimated effect of treatment group on survival. Consolidative durvalumab is now the standard of care in patients with advanced NSCLC treated with chemoradiation. 
This retrospective study did not find statistically significant differences in efficacy or adverse events between the 2-weekly and 4-weekly administration schedules. It provides clinical evidence reinforcing the conclusions of previous pharmacokinetic analyses supporting both administration intervals. Of course, other factors such as logistics and patient preference need to be considered in clinical decision-making. CRediT Authorship Contribution Statement Marie-Hélène Denault: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Validation, Writing - original draft, Writing - review & editing. Shelley Kuang: Conceptualization, Data curation, Investigation, Methodology, Resources, Writing - review & editing. Aria Shokoohi, Bonnie Leung, Mitchell Liu, Eric Berthelet, Janessa Laskin, Sophie Sun, Tina Zhang, Barbara Melosky: Conceptualization, Investigation, Methodology, Resources, Writing - review & editing. Cheryl Ho: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Supervision, Validation, Writing - original draft, Writing - review & editing.
REFERENCES:
1.
2. GOLDSTRAW P (2016)
3. ANTONIA S (2017)
4. SPIGEL D (2022)
5.
6.
7.
8. PAZARES L (2019)
9.
10. HELLMANN M (2017)
|
10.1016_j.imu.2022.101155.txt
|
TITLE: Prediction of chronic liver disease patients using integrated projection based statistical feature extraction with machine learning algorithms
AUTHORS:
- Amin, Ruhul
- Yasmin, Rubia
- Ruhi, Sabba
- Rahman, Md Habibur
- Reza, Md Shamim
ABSTRACT:
The healthy liver performs more than 500 vital functions in the human body, and a malfunction may be dangerous or even deadly. Early diagnosis and treatment of liver disease can improve the likelihood of survival. Machine learning (ML) is a powerful tool that can assist healthcare professionals during the diagnostic process for a hepatic patient. The standard ML system includes the steps of data pre-processing, feature extraction, and classification. In the feature extraction stage, ML researchers frequently use projection-based feature extraction approaches to remove data redundancy, but these do not always produce the desired results. In addition, most statistical projection methods have different purposes when projecting the original features. The Indian liver patient dataset (ILPD) from the University of California, Irvine (UCI) repository is used in this study to classify chronic liver disease. The dataset has 583 patient disease records; 416 patients have liver disease, and 167 do not. Using several projection methods, we propose an integrated feature extraction approach to categorize liver patients. In the pipeline, the proposed method first imputes missing values and treats outliers as pre-processing. Then, integrated feature extraction is applied to the pre-processed data to extract the significant features for classification. A simulation study is also conducted to strengthen the suggested methodology. The proposed approach incorporates several ML algorithms, including logistic regression (LR), random forest (RF), K-nearest neighbor (KNN), support vector machine (SVM), multilayer perceptron (MLP), and an ensemble voting classifier. The proposed system achieves an accuracy of 88.10%, a precision of 85.33%, a recall of 92.30%, an F1 score of 88.68%, and an AUC score of 88.20% in predicting liver diseases. Our proposed technique yielded 0.10–18.5% better results than the latest existing studies. The findings suggest that the recommended system could be used to supplement a physician's diagnosis of liver disease.
BODY:
1 Introduction In today's world, more than a million people are diagnosed with liver disease each year [ 1 ]. Liver cirrhosis, hepatitis (A, B, C), and liver cancer are common liver diseases. Globally, 1.32 million people died of liver cirrhosis in 2017, more than in 1990; 66.7% of these deaths were in men and 33.3% in women. Overall mortality is declining because of improvements in advanced treatment and the maintenance of a sound lifestyle [ 2 ]; nevertheless, liver-related fatalities have accounted for 3.5% of all deaths this century [ 3 ]. Excessive use of drugs and alcohol, obesity, and diabetes are the main causes of liver disease [ 4 ]. These life-threatening diseases are manageable if they are diagnosed in their early stages. Machine learning techniques are widely used in the healthcare sector, in particular for the diagnosis and classification of certain diseases based on characteristic information [ 5 ]. These systems help clinicians make accurate decisions about patients [ 6 ]. The raw input feature space is typically saturated with a significant amount of irrelevant feature information and frequently exhibits high dimensionality when the data are acquired using feature generation techniques in conventional ML systems [ 7 ]. Projection-based statistical methods such as principal component analysis (PCA), factor analysis (FA), and linear discriminant analysis (LDA) work well to reduce dimensionality. PCA reduces the dimensionality of the dataset without losing significant feature information. Factor analysis is an extension of PCA that describes the covariance relationships between variables in terms of some underlying factors [ 8 ]. LDA uses the class labels to compute the between-class and within-class scatter matrices and seeks the directions along which the classes are best separated. However, in projection-based feature extraction, how many components should be retained is still an unsolved issue. The majority of authors in existing research on ILPD data employ a single feature extraction approach [ 9–11 ]. Different tactics are used by the PCA, FA, and LDA algorithms to transform the original features [ 12 ]. The main contribution of this research is therefore an attempt to provide a single feature space that integrates the PCA, FA, and LDA projections. Additionally, medical data frequently reveal an imbalance problem among classes. Projection-based dimension reduction approaches, as well as ML algorithms, do not operate well when the dataset is imbalanced and frequently suffer from overfitting [ 13 ]. Our working ILPD data reveal missing values, outliers, and a strong class imbalance in which the positive class is more than twice as large as the negative class. To achieve better liver patient prediction using a computer-assisted diagnosis process, we account for all of the stated data preprocessing steps. The proposed integrated method enhances liver disease classification accuracy, prevents misdiagnosis of liver disease, and increases patient survival. In this paper, we propose a statistical feature integration approach that aims to improve AUC and accuracy. The complete literature review is covered in Section 2 . Section 3 contains a description of the ILPD and artificial datasets. Section 4 discusses different dimension reduction techniques. The methodology is presented in Section 5 . The evaluation protocols, results, and discussion are presented in Section 6 . The final section presents the conclusion. 
2 Literature review Human diseases are becoming more prevalent today than in past decades. Compared with other serious diseases, the number of people affected by liver diseases is growing all the time [ 14 ]. The majority of liver diseases, however, do not present any significant symptoms in their early stages. In the age of modern databases, it is simple to extract data and gain insights to assist in the treatment of any disease [ 15 ]. Researchers use several strategies to extract insight from a dataset; some are used with ML classifiers for feature selection or extraction, and some are not. Presently, data are generated and stored routinely, and the available data give researchers an accessible path to solving problems in fields such as medical imaging, finance, genomics, transactions, intrusion detection, and so on. A large volume of necessary and unnecessary data can affect an ML algorithm, and various methods exist to select and extract the most relevant feature space to predict any disease or other target. To predict heart disease, Pasha and Mohamed et al. [ 16 ] worked on the Cleveland, Hungarian, Statlog, and Switzerland heart disease datasets. They employed a novel feature reduction (NFR) model in their working methodology to predict cardiac disease. Their methods involved initially processing the dataset, followed by identification of the features that contributed significantly, using a variety of statistical techniques including weighted least squares (WLS), correlation matrices, and others. ML and Data Mining (DM) algorithms were then applied to the reduced feature set, and the AUC and accuracy were measured. With their proposed NFR model, boosted regression trees (BRT) achieved the highest AUC of 96.68% and LR achieved an accuracy of 93.53% on the Cleveland dataset. LR achieved the highest AUC of 92.51%, and BRT, SGB, and SVM achieved an accuracy of 85.06%, on the Hungarian dataset. BRT achieved the highest AUC of 91.79%, and SVM and RF achieved an accuracy of 87.65%, in the Statlog dataset. Lastly, BRT achieved the highest AUC and accuracy of 99.20% and 95.52% in the Switzerland dataset. The comparison between the models with and without NFR demonstrates that the proposed NFR model has greater AUC and accuracy in predicting heart disease. To develop heart disease risk prediction, the same authors [ 17 ] worked on the same datasets using an advanced hybrid ensemble gain ratio (AHEGR) feature selection technique. Four feature selection methods, including ensemble feature selection, gain ratio feature selection, backward feature removal, and area under the curve (AUC), were applied in their proposed ensemble system. Their suggested feature selection method aids in improving the prognosis of heart disease. With the AHEGR-FS technique, among the nine classifiers, AdaBoost and KNN achieved the highest AUC and accuracy of 93.20% and 87.38% in the Cleveland dataset; RF and NB achieved the highest AUC and accuracy of 95.00% and 92.00% in the Hungarian dataset; BRT and NB achieved an AUC and accuracy of 93.77% and 89.13% in the Statlog dataset; and RF and KNN achieved an AUC and accuracy of 99.00% and 97.53% in the Switzerland dataset. To classify liver patients early and precisely, numerous researchers have tried to utilize ML algorithms in various ways. Sreejith et al. 
[ 10 ] evaluated classification performance using the ILPD, Thoracic Surgery (TSD), and Pima Indian Diabetes (PID) datasets in their paper. The main goal of their work was to compare performance before and after feature selection, using the class-balancing Synthetic Minority Over-sampling Technique (SMOTE) and the Chaotic Multi-Verse Optimization (CMVO) evolutionary feature selection approach. They obtained an accuracy of 69.43% on the ILPD dataset using a random forest classifier without SMOTE and CMVO-based feature selection, 82.62% with SMOTE but without CMVO-based feature selection, and 82.46% with SMOTE and CMVO-based feature selection. Kuzhippallil et al. [ 11 ] compared and improved the performance of chronic liver disease classification by employing a variety of data preprocessing strategies (missing value imputation, outlier detection and elimination using isolation forest, duplicate value removal, and so on) and feature selection methods. They used MLP, KNN, LR, DT, RF, Gradient Boosting, AdaBoost, XGBoost, LightGBM, and a Stacking Estimator to classify liver patients and, after applying their proposed method, achieved accuracies of 82%, 79%, 76%, 84%, 88%, 84%, 83%, 86%, 86%, and 85%, respectively. Gan et al. [ 18 ] implemented four classification techniques: AdaC-TANBN, TANBN, BN, and SVM. In their experiments, the TANBN integrated with a cost-sensitive method (AdaC-TANBN) provided an accuracy of 69.03%, which outperformed the other approaches. Abdar et al. [ 19 ] used various classification algorithms, such as the decision tree (C5.0), the classification and regression tree (CART), and the automatic chi-square interaction detector (CHAID), with a boosting technique. Based on their research protocol, they achieved 93.75% accuracy at the first stage using the boosted decision tree (B-C5.0) algorithm, and their proposed method, a combination with a multilayer perceptron neural network (MLPNN) named MLPNNB-C5.0, offered the highest accuracy of 94.13%. Anagaw et al. [ 20 ] proposed the complement naive Bayesian (CNB) classification method and compared it to the naive Bayes classifier and a few other classifiers. The proposed method achieved an accuracy of 71.36%, better than the others. Babu et al. [ 21 ] suggested a K-means clustering strategy for detecting liver patients using various classification models in their work. Following implementation of the classification models, the accuracies of NBC, KNN, and C4.5 were 56%, 64%, and 69%, respectively. P. Kumar et al. [ 22 ] worked on the ILPD dataset to classify liver patients more accurately. To classify the liver patients faultlessly, they used a 10-fold cross-validation technique. The authors used neighbor-weighted K-NN (NWKNN), fuzzy neighbor-weighted K-NN, and variable-neighbor-weighted fuzzy KNN classifiers to diagnose liver patients. Since the dataset is imbalanced, they applied Tomek-link and redundancy-based under-sampling (TR-RUS) to balance the dataset and achieved an accuracy of 72.31% for the NWKNN classifier and 76.61% for the fuzzy NWKNN classifier; finally, their proposed Variable-NWFKNN method achieved an accuracy of 87.71%, above the other two classifiers. I. Straw et al. [ 23 ] diagnosed ILPD liver disease using various ML algorithms based on gender (male and female) stratification. 
They incorporated the SMOTE data balancing technique in their proposed method, and feature selection was performed using Recursive Feature Elimination (RFE) to improve performance and reduce biases. They used RF, LR, SVM, and Gaussian Naive Bayes (GNB) classifiers in their study, both with and without applying data balancing techniques and feature selection, based on sex disparities. Among all classifiers, RF and LR gave higher performance than the others. The results of the experiment show that the greatest false negative rate disparity was −21.02% for RF and −24.04% for LR. 3 Datasets description 3.1 ILPD dataset The ILPD dataset was gathered in the northeast of Andhra Pradesh, India. This dataset contains 583 observations with ten features and one target output. Table 1 provides information on the ILPD dataset in more depth. We retrieved the dataset from the UCI ML repository to evaluate our research [ 24 ]. UCI hosts a vast collection of databases, domain theories, and data generators and serves as a hub for machine learning and information systems. This resource is used by the machine learning community, including students, experts, researchers, instructors, and others, as a primary source for assessing ML problems. Fig. 1 shows that 416 are positive/disease cases and 167 are control cases. In general, ML algorithms assume that the data are evenly distributed between classes; otherwise, the results will be heavily biased in favor of a few classes [ 25 ]. To reduce bias in the ML algorithms, we employed the random oversampling data balancing technique. After that, we obtained 832 cases, of which 416 were disease/positive cases and 416 were non-disease/negative cases. Total bilirubin, direct bilirubin, alkaline phosphatase, alanine aminotransferase, and aspartate aminotransferase show outliers in Fig. 2 . We employed the Q-Q plot and the Shapiro-Wilk test to check the normality of the dataset. Fig. 3 and the Shapiro-Wilk test values show that none of the ILPD dataset's features follow an exact normal distribution, although, based on Fig. 3 , we may argue that the age, total protein, albumin, and albumin/globulin ratio features follow approximate normality. 3.2 Simulation data To validate our strategic plan on the ILPD data, we generated simulation data with the Python scikit-learn library using the make_classification function. The simulated database comprises 1000 sample observations with 10 features, five of which are informative, three of which are redundant, and two of which are repeated features. The number of target classes is two. 4 Dimension reduction methods A dataset can contain thousands or millions of features. When analyzing such a massive dataset, the computational cost, run time, and analytical complexity of a machine-learning algorithm all increase. Feature extraction or selection methods can eliminate these problems. In the feature extraction stage, we extracted the influential features from the ILPD dataset using PCA, FA, and LDA to predict liver patients. 4.1 Principal component analysis (PCA) Principal component analysis (PCA) is one of the most important techniques for feature reduction. It projects features from a higher-dimensional to a lower-dimensional space without losing significant information. To obtain an optimal number of principal components from the given dataset, we adopted a 95% explained-variance criterion, retaining components that together account for 95% of the information in the entire dataset. 
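As a minimal illustration (not the authors' code), the simulated data described in Section 3.2 and the 95% explained-variance rule just mentioned can be reproduced with scikit-learn roughly as follows; the random_state value and the standardization step are assumptions added for reproducibility.

```python
# Hedged sketch: simulated data as described in Section 3.2 and principal components
# retained under the 95% explained-variance criterion (scikit-learn); not the authors' code.
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# 1000 samples, 10 features: 5 informative, 3 redundant, 2 repeated, 2 target classes
X, y = make_classification(
    n_samples=1000, n_features=10,
    n_informative=5, n_redundant=3, n_repeated=2,
    n_classes=2, random_state=0,  # random_state is an arbitrary assumption
)

# Standardize, then keep the fewest components explaining at least 95% of the variance
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)  # a fractional n_components applies the variance rule
Z = pca.fit_transform(X_std)
print("components retained:", pca.n_components_)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```

The formal construction of the principal components retained under this criterion is summarized next.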
4 Dimension reduction methods A dataset can contain thousands or millions of features. Analyzing such a massive dataset increases the computational cost and time, as well as the analytical complexity, of a machine-learning algorithm. Feature extraction or selection methods can eliminate these problems. Using feature extraction, we extracted the influential features from the ILPD dataset with PCA, FA, and LDA to predict liver patients. 4.1 Principal component analysis (PCA) Principal component analysis (PCA) is one of the most important techniques for feature reduction. It maps features from a higher dimension to a lower dimension without losing significant information. To obtain the optimal number of principal components from the given dataset, we adopted a 95% explained-variance criterion, which retains 95% of the information in the entire dataset. Suppose we have a random vector $X = [X_1, X_2, \ldots, X_r]^T$. Calculate the variance-covariance matrix
$$\mathrm{Cov}(X) = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1r} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{r1} & \sigma_{r2} & \cdots & \sigma_r^2 \end{bmatrix}.$$
Calculate the eigenvalues $(\lambda_1, \lambda_2, \ldots, \lambda_r)$ and eigenvectors $(\omega_1, \omega_2, \ldots, \omega_r)$ of the variance-covariance matrix. After sorting the eigenvalues, we choose the number $p$ of principal components, formed from the coefficient vectors $v_1, v_2, \ldots, v_r$, i.e.
$$Y_1 = v_{11} X_1 + v_{12} X_2 + \cdots + v_{1r} X_r$$
$$Y_2 = v_{21} X_1 + v_{22} X_2 + \cdots + v_{2r} X_r$$
$$\vdots$$
$$Y_r = v_{r1} X_1 + v_{r2} X_2 + \cdots + v_{rr} X_r.$$
Here the first principal component represents the maximum variance among the full set of linear combinations [ 26 , 27 ]. The proportion of the total population variance due to the $k$-th principal component is $\lambda_k / \sum_{i=1}^{r} \lambda_i$. 4.2 Factor analysis (FA) Like PCA, factor analysis is a feature reduction method, in which unobserved (latent) variables are sought from the observed (manifest) variables. The method extracts the maximum common variance from the observed variables and collects it into a common score that can be used for further analysis [ 28 ]. There are several methods for extracting the factors from a dataset, including principal component analysis, common factor analysis, image factoring, the maximum likelihood method, and so on. In general, extracting too many factors produces undesirable results, whereas extracting too few factors discards common variance. Consequently, it is important to select the number of factors carefully. The most commonly used techniques for determining the optimal number of factors are the eigenvalue criterion, the scree plot, Kaiser's criterion, and Jolliffe's criterion. We used the scree plot approach, which is based on eigenvalues, to determine the number of expected factors. Suppose we have a random vector $X = [X_1, X_2, \ldots, X_r]^T$ with mean vector $\gamma = [\gamma_1, \gamma_2, \ldots, \gamma_r]^T$. The $q$ common factors collected from the observed variables are $f = [f_1, f_2, \ldots, f_q]^T$ (here $q < r$). Finally, our factor model is a multiple regression model predicting each of the $r$ observed variables from the $q$ factors [ 29 ]:
$$X_1 = \gamma_1 + k_{11} f_1 + k_{12} f_2 + \cdots + k_{1q} f_q + \varepsilon_1$$
$$X_2 = \gamma_2 + k_{21} f_1 + k_{22} f_2 + \cdots + k_{2q} f_q + \varepsilon_2$$
$$\vdots$$
$$X_r = \gamma_r + k_{r1} f_1 + k_{r2} f_2 + \cdots + k_{rq} f_q + \varepsilon_r.$$
The general matrix form is $X = \gamma + KF + \varepsilon$ [ 30 ]. 4.3 Linear discriminant analysis (LDA) Linear discriminant analysis (LDA) is a supervised dimensionality reduction technique used to classify two or more classes. The main goal of this technique is to reduce an $n$-dimensional space to an $m$-dimensional space. Typically, the number of projected axes extracted by LDA is at most one less than the number of classes in a dataset. In this approach, the projected data matrix has a lower dimension that minimizes the within-class variance and maximizes the between-class variance, so that the classes are separated along the discriminant directions. Suppose we have sample groups with class means $\bar{X}_i = \frac{1}{M_i} \sum_{j=1}^{M_i} X_{ij}$, where $X_{ij}$ denotes the $j$-th data point in class $i$ and $M_i$ the number of samples in class $i$.
The sample variance-covariance matrix of class $i$ is defined as
$$S_i = \frac{1}{M_i - 1} \sum_{j=1}^{M_i} (X_{ij} - \bar{X}_i)(X_{ij} - \bar{X}_i)^T.$$
Compute the within-class scatter, which measures the spread of the samples of each class about its mean:
$$R_w = \sum_{i=1}^{n} (M_i - 1) S_i.$$
Compute the between-class scatter, which measures the distance between the means of the different classes:
$$R_b = \sum_{i=1}^{n} M_i (\bar{X}_i - \bar{X})(\bar{X}_i - \bar{X})^T, \qquad \bar{X} = \frac{1}{M} \sum_{i=1}^{n} M_i \bar{X}_i,$$
where $\bar{X}$ is the grand mean and $M = \sum_i M_i$. The lower-dimensional projection space is then created from these within- and between-class scatters. Letting $Q$ denote the projection onto the lower-dimensional space, LDA finds $\arg\max_Q \left| \frac{Q^T R_b Q}{Q^T R_w Q} \right|$, i.e. it maximizes the between-class scatter $R_b$ while minimizing the within-class scatter $R_w$, and at most $C-1$ discriminant features are extracted for any dataset with $C$ classes [ 31 ]. 5 Methodology In this big data era, when a dataset contains a very large number of data points and a feature space that includes meaningless dimensions, ML algorithms face challenges known as the curse of dimensionality [ 32 ]. To deal with this problem, we propose a statistical projection-based (i.e., PCA, FA, and LDA) feature integration strategy that extracts useful features and makes use of all of the suggested approaches. The mathematical formulation of the proposed integration method is as follows. First, using PCA, let $x \in \mathbb{R}^r$ be an $r$-dimensional data vector. We search for a set of basis vectors $v_1, v_2, \ldots, v_{\xi_1}$ with $v_i^T v_j = 0$ for $i \neq j$ and $\|v_j\| = 1$, and summarize the $r$-dimensional feature vector by the $\xi_1$-dimensional vector $h(X)$ ($\xi_1 < r$):
$$y_j = v_j \cdot X, \qquad h(X) = (y_1, y_2, \ldots, y_{\xi_1})^T = V^T (X - \mu_0).$$
The projected new data representation is
$$(1)\quad X \in \mathbb{R}^r \rightarrow V^T x \in \mathbb{R}^{\xi_1}, \qquad \xi_1 \text{ chosen such that } \frac{\sum_{i=1}^{\xi_1} \sigma_i^2}{\sum_{i=1}^{r} \sigma_i^2} > p, \ p = 0.95.$$
In the next step, FA is used to take the error term into account, and the proposed integrated feature space can assume a factor model. Let $\xi_2$ input factors ($\xi_2 < r$) generate the $r$ observables:
$$X_i - \gamma_i = K_{i1} f_1 + K_{i2} f_2 + \cdots + K_{i\xi_2} f_{\xi_2} + \epsilon_i, \qquad \text{i.e.} \quad X - \gamma = KF + \epsilon.$$
We search for a loading matrix $K \in \mathbb{R}^{r \times \xi_2}$ such that $S = KK^T + \psi$, where $S$ is the estimate of the covariance matrix and $\psi_i = E(\varepsilon_i)$; the solution is obtained from the eigenvalues and eigenvectors of $S$, and the factor scores follow as $F = XW = XS^{-1}K$. The reduced representation from the FA model is therefore
$$(2)\quad X \in \mathbb{R}^r \rightarrow F = XW = XS^{-1}K \in \mathbb{R}^{\xi_2}.$$
After applying the FA model, the suggested system employs LDA to make use of the class information. Assume $C$ classes with sample means $\mu_i$, $i = 1, 2, \ldots, C$, class sizes $m_i$, $M = \sum_{i=1}^{C} m_i$, and overall mean $\mu = \frac{1}{C} \sum_{i=1}^{C} \mu_i$. LDA seeks the projection transformation $Y = U^T x$ that maximizes $|R_b| / |R_w|$, where
$$R_w = \sum_{i=1}^{C} \sum_{j=1}^{m_i} (X_{ij} - \mu_i)(X_{ij} - \mu_i)^T, \qquad R_b = \sum_{i=1}^{C} (\mu_i - \mu)(\mu_i - \mu)^T.$$
The columns of the matrix $U$ are the eigenvectors corresponding to the largest eigenvalues of the generalized eigenvalue problem
$$(3)\quad R_b u_{\xi_3} = \lambda_{\xi_3} R_w u_{\xi_3}.$$
The maximum dimensionality of this subspace is $C - 1$, since $R_b$ has rank at most $C - 1$. Finally, the proposed 'augmented' features obtained from (1), (2), and (3) are
$$S = \left[\ Z \in \mathbb{R}^{\xi_1} \ \vdots \ F \in \mathbb{R}^{\xi_2} \ \vdots \ Y = U^T x \in \mathbb{R}^{C-1} \ \right].$$
The ML algorithm is fed with this integrated feature space $S$.
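A minimal sketch of this integration step, assuming scikit-learn's PCA, FactorAnalysis, and LinearDiscriminantAnalysis as the three projections and a simple column-wise concatenation; the factor count of 7 mirrors the scree-plot result reported later for the simulated data and is otherwise an assumption:

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def integrate_features(X, y, var_kept=0.95, n_factors=7):
    # (1) PCA scores retaining 95% of the total variance
    Z = PCA(n_components=var_kept).fit_transform(X)
    # (2) factor scores F from the fitted factor model
    F = FactorAnalysis(n_components=n_factors).fit_transform(X)
    # (3) at most C-1 discriminant directions (one column for two classes)
    Y = LinearDiscriminantAnalysis().fit_transform(X, y)
    # augmented feature space S = [Z : F : Y], fed to the ML classifiers
    return np.hstack([Z, F, Y])

For the two-class ILPD data the LDA block contributes a single column, consistent with the (C-1)-dimensional subspace derived above.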
In the working ILPD dataset, some features contain missing values and outliers, and both affect the classification results of an ML system [ 33 ]. Thereafter, to detect and replace outliers, the standard normal Z-score statistic ($\mu$ and $\sigma$ being the mean and standard deviation, respectively) is used; to reduce the effect of the outliers and missing values, we replace them with the value of the second quartile (the median). Mathematically,
$$(4)\quad |Z| = \left| \frac{X_i - \bar{X}}{\sigma} \right|,$$
and a value is flagged as an outlier when $|Z| > 3$, where 3 is a commonly used cut-off for detecting outliers [ 34 ]. To simplify the programming analysis, the absolute value makes the Z-score unidirectional. After replacing missing values and outliers, a standard scaling transformation removes the differing sizes, units, and ranges of the features. A dataset can be standardized as
$$y = \frac{x - \mathrm{mean}}{\mathrm{standard\ deviation}},$$
where $\mathrm{mean} = \frac{\sum_{i=1}^{N} x_i}{N}$ and $\mathrm{standard\ deviation} = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \mathrm{mean})^2}{N - 1}}$. Furthermore, to avoid bias and overfitting in our experimental results, we apply a random oversampling approach. Fig. 4 depicts the feature integration notion, and details of the proposed method are shown in Fig. 5 . 5.1 Pseudo-code of the proposed feature extraction method Input: pre-processed dataset $X \in \mathbb{R}^r$. Output: extracted feature matrix. Algorithm Step 1: Extract feature vectors $v_1, v_2, \ldots, v_{\xi_1}$ from the processed dataset using principal component analysis (PCA) with 95% variation, then store the resulting reduced feature vectors, that is, $X \in \mathbb{R}^r \rightarrow V^T x \in \mathbb{R}^{\xi_1}$. Step 2: Using factor analysis, select reduced features and store them: $X \in \mathbb{R}^r \rightarrow F = XW = XS^{-1}K \in \mathbb{R}^{\xi_2}$. Step 3: Using LDA, separate the optimal $(C-1)$ discriminant features from the input dataset, then store the feature vectors. Step 4: Integrate all of the stored features into a new matrix space as $S = [\ Z \in \mathbb{R}^{\xi_1} \ \vdots \ F \in \mathbb{R}^{\xi_2} \ \vdots \ Y = U^T x \in \mathbb{R}^{C-1} \ ]$. Step 5: Update the matrix space until the desired data variation is reached. Step 6: Return the extracted feature matrix S.
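The outlier and scaling treatment described at the beginning of this section can be sketched as follows, assuming median imputation for both missing values and |Z| > 3 outliers, as the text specifies (note that scikit-learn's StandardScaler divides by the population standard deviation, a negligible difference from the N - 1 formula above):

import pandas as pd
from sklearn.preprocessing import StandardScaler

def clean_and_scale(df):
    df = df.copy()
    for col in df.columns:
        median = df[col].median()              # second quartile
        df[col] = df[col].fillna(median)       # replace missing values
        z = (df[col] - df[col].mean()).abs() / df[col].std()
        df.loc[z > 3, col] = median            # replace |Z| > 3 outliers
    scaled = StandardScaler().fit_transform(df)  # zero mean, unit variance
    return pd.DataFrame(scaled, columns=df.columns, index=df.index)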
6 Evaluation protocols, results, and discussion To diagnose liver disease accurately and enable a fair comparison with existing approaches, the performance of the proposed method is assessed with both the train-test split and the cross-validation method. In the experimental protocol, the dataset was randomly divided into a 75% training set and a 25% testing set using stratified random sampling. We also perform 10-fold cross-validation to compare against other existing works. In addition, the receiver operating characteristic curve, known simply as the ROC, is used to evaluate and compare the performance of the various classifiers. The ROC curve plots the true positive rate against the false positive rate, and the total area under the curve lies in the range 0.5–1 [ 35 ]. Python programming on the Google cloud computing GPU hardware accelerator platform was used to implement all machine learning algorithms, and R (desktop version x64 3.5.1) was used for graphical visualization. We carried out our research on a workstation with an 11th-generation Intel Core i7 CPU and 8 GB RAM. 6.1 Performance evaluation metrics Classification and prediction are among the subjects that have received the greatest attention in scientific circles globally. To ascertain whether a specified classification algorithm performs well or unsatisfactorily, we must measure its classification performance. Performance evaluation metrics such as accuracy, precision, recall, F1, and AUC score were considered while conducting the proposed research. The metrics used to evaluate the classification algorithms are:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F1\mathrm{-Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
The terms TP, TN, FP, and FN denote true positive, true negative, false positive, and false negative cases, respectively. 6.2 Experimental result and discussion The results were derived from both simulated and real data. Table 2 shows the simulation study results using the proposed feature integration method. On the simulated dataset, 7 PCs explained 95% of the variation of the 10 simulated features, and the ensemble classifier achieved the best accuracy and AUC of 91.00%. In the factor analysis, 7 latent factors were found with the help of a scree plot, and the MLP classifier achieved an accuracy of 91.70% and an AUC of 91.69%. On the other hand, the single (C-1 = 1) LDA component feature provided an accuracy of 77.90% and an AUC of 77.91% with SVM. Our proposed feature integration provided the highest accuracy of 92.00% and an AUC score of 91.99% with KNN. Additionally, the evaluation measures show that the proposed technique considerably enhanced the detection of liver disease, as demonstrated in Table 2 . The ILPD data classification results under the different protocols are shown in Table 3 through Table 5 . We analyzed the ILPD data under two different protocols: Table 3 displays the train-test method and Table 4 the 10-fold cross-validation. Table 3 shows that the MLP classifier provides the highest accuracy of 85.50% and AUC of 84.51%, higher than the other evaluation measures in the train-test split protocol without any feature extraction of the ILPD data. Table 4 shows the 10-fold cross-validated result: the RF classifier gives an accuracy of 87.78% and an AUC score of 87.74%, much better than the train-test split method. We then compared the considered statistical feature integration methods using several classifiers, with the train-test method shown in Table 5 and the 10-fold cross-validation method shown in Table 6 . In our proposed model, the MLP classifier reduced the detection rate of liver disease by 0.89% compared to the train-test scheme without feature extraction, as shown in Table 7 , while precision, recall, F1, and AUC score increased by 2–3%. As can be seen from Table 5 , the RF classifier outperforms all other classification models on all evaluation metrics, exhibiting accuracy, precision, recall, F1, and AUC score values of 88.10%, 85.33%, 92.30%, 88.68%, and 88.20%, respectively. Table 8 compares the proposed method with recent existing studies on the ILPD dataset. The final results were improved through the use of the feature integration technique, achieving an accuracy of 88.10% and an AUC score of 88.20%, which is significantly better for liver patient classification than the existing work.
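A minimal sketch of the two evaluation protocols and the metrics of Section 6.1; the synthetic stand-in data and the RF settings are illustrative assumptions, not the paper's exact configuration:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

X, y = make_classification(n_samples=832, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0)

# Protocol 1: stratified 75/25 train-test split
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
y_hat = clf.fit(X_tr, y_tr).predict(X_te)
print(accuracy_score(y_te, y_hat), precision_score(y_te, y_hat),
      recall_score(y_te, y_hat), f1_score(y_te, y_hat),
      roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Protocol 2: 10-fold cross-validated accuracy
print(cross_val_score(clf, X, y, cv=10, scoring='accuracy').mean())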
In our proposed approach, Fig. 6 shows that the ROC curve for the RF classifier covers more area (over 88%) than the ROC curves of the other classifiers, indicating that the RF classifier is superior to the others. Three factors contribute to the increased performance: (a) proper treatment of outliers and replacement of missing values with the median; (b) the data balancing strategy; and (c) the feature integration approach, which improved liver disease classification accuracy. As a result of the integrated features and the proper management of outliers and missing values, the liver patient recognition performance of the suggested method has improved. 6.3 Run time comparison of the proposed ML model We calculated the time taken by each ML algorithm, another illuminating performance measurement, to contrast the strength of the proposed model with the existing models. Across the two evaluation protocols: with the train-test method and all features, the MLP algorithm achieved the highest accuracy of 85.50% and AUC of 84.5% in 0.018 s of run time, and with the integrated features the MLP algorithm achieved the highest accuracy of 84.61% and AUC of 88.2% in 0.634 s of run time. With the 10-fold cross-validation method and all features, the RF algorithm achieved the highest accuracy of 87.78% and AUC of 87.74% in 0.026 s of run time, and with the integrated features the RF algorithm achieved the highest accuracy of 88.1% and AUC of 88.2% in 3.337 s of run time. Finally, on the simulation data the proposed feature integration method obtained its best accuracy of 92.00% and AUC of 91.99% in 0.016 s of run time. Most of the existing research on this dataset does not report the execution run time of the ML algorithms. Only J. Singh et al. [ 9 ] and Kuzhippallil et al. [ 11 ] considered execution run time in their work: J. Singh et al. used the LR model and achieved their highest accuracy of 74.36% with an execution run time of 0.01 s after feature selection, while Kuzhippallil et al.'s XGBoost and LightGBM achieved the same accuracy of 86.00% with execution run times of 0.191 s and 0.0059 s after feature selection. The framework proposed in this paper relies on feature extraction, which in most cases takes longer than feature selection. The accuracy and AUC of our proposed work are 88.10% and 88.20%, respectively, outperforming the other works; however, the execution time of our algorithm is somewhat longer than that of J. Singh's and Kuzhippallil's work due to the feature extraction technique. 6.4 Benchmarking of the proposed integration model The proposed feature integration model is compared to recent existing work on liver patient identification, both with and without feature selection/extraction, using the ILPD dataset. We consider accuracy as the key criterion for identifying liver patients with our proposed integration model. Another statistic, the AUC, is also considered a crucial assessment in the medical field for detecting liver disease. We applied the proposed statistical feature integration model with the six classification algorithms to the ILPD dataset and contrasted our results with the most recent ILPD research, as shown in Table 8 . K. Gupta et al. [ 36 ] achieved their highest accuracy of 63.00% with the LightGBM and RF classifiers, which is 25.1% lower than our proposed model.
With the help of voting ensemble classifiers, I. Altaf et al. [ 37 ] acquired their greatest performance of 73.56%, which is 14.54% lower than our proposed model. J. Singh et al. [ 9 ] obtained a maximum accuracy of 74.36% with the LR classifier, 13.74% less than our proposed model. Sreejith et al. [ 10 ] proposed a method combining the Chaotic Multi-Verse Optimization (CMVO) algorithm for feature selection with the Orchard-enhanced Synthetic Minority Over-sampling Technique (OSMOTE) for data balancing; applied to the ILPD dataset, their RF classifier achieves an accuracy of 82.46%, which is 5.64% less than our proposed model. To diagnose liver patients, Gan et al. [ 18 ] used both accuracy and AUC metrics and reached a best accuracy of 71.74% and AUC of 69.53%, which are 16.36% and 18.67% lower, respectively, than our proposed model. Using an RF classifier, Kuzhippallil et al. [ 11 ] achieved their greatest accuracy of 88.00%, which is 0.10% less than our proposed strategy. P. Kumar et al. [ 22 ] found that Variable-NWFKNN delivered their best performance for detecting liver disorders, with an accuracy of 87.71% and AUC of 82.41%, a difference of 0.39% and 5.79% below our suggested model. Anagaw et al. [ 20 ] and Babu et al. [ 21 ], who both worked on the ILPD liver disease dataset, obtained best accuracies of 71.36% with the NB classifier and 69.00% with C4.5, respectively; these results are 16.74% and 19.1% lower than the feature integration model we propose. Overall, compared with the results of recent studies, the accuracy gain of our proposed model ranges from 0.10% at the lowest to 34.1% at the highest. 7 Conclusion & future work In this paper we have explored an improved feature extraction system for liver patient classification using statistical machine learning techniques, adopting the dimensionality reduction approaches PCA, FA, and LDA. The system extracts an improved feature space that accounts for the maximum variation in the data, the covariance among the observed variables, and a linear combination of observed variables that maximizes the class separation. Additionally, robust statistical measures were used to handle missing values and outliers, and data balancing was performed to avoid overfitting and bias. We also performed a simulation study to reproduce the results of the proposed approach and achieved an average accuracy of 91.40% with the ensemble classification algorithm. Using the proposed method on the challenging ILPD benchmark dataset, the recognition rate improved by between 1% and 18.5%, reaching nearly 89% accuracy and AUC with the RF ML algorithm under the cross-validation protocol, compared with the cited reference approaches. The proposed method is advantageous when we have a massive amount of data and want to reduce the number of features without losing important information. Due to time limitations, we were not able to investigate whether the proposed method can also reduce the dimensionality of deep transfer-learning features from a pre-trained model, or of features obtained from different layers of a convolutional neural network on image data. Some further directions can be investigated in the future, including non-linear dimensionality reduction methods and evaluating the proposed method within an automated feature extraction system.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Appendix A Supplementary data Supplementary data to this article (Multimedia component 1) can be found online at https://doi.org/10.1016/j.imu.2022.101155 .
REFERENCES:
1. LIN R (2009)
2. COLLABORATORS C (2017)
3. HARSHPREETKAUR G (2021)
4. TAPPER E (2018)
5. JOLOUDARI J (2019)
6. JACOB J (2018)
7. ULLAH S (2018)
8. STONE J (2018)
9. SINGH J (2020)
10. SREEJITH S (2020)
11. KUZHIPPALLIL M (2020)
12. ALI M (2020)
13. JOHNSON J (2019)
14. PASYAR P (2021)
15. HASSANNATAJ J (2019)
16. PASHA S (2020)
17. PASHA S (2022)
18. GAN D (2020)
19. ABDAR M (2018)
20. ANAGAW A (2019)
21. BABU M (2016)
22. KUMAR P (2021)
23. STRAW I (2022)
24.
25. KRAWCZYK B (2016)
26. LI L (2016)
27. THARWAT A (2016)
28. PEDROLI E (2019)
29. BERG R (1972)
30. JOHNSON R (2002)
31. LI K (2014)
32. CAI Z (2022)
33. MANIRUZZAMAN M (2018)
34. NKECHINYERE E (2015)
35. DELEO J (1993)
36. GUPTA K (2022)
37. ALTAF I (2022)
|
10.1016_j.jma.2019.01.003.txt
|
TITLE: Influence of processing route on microstructure and wear resistance of fly ash reinforced AZ31 magnesium matrix composites
AUTHORS:
- Dinaharan, I.
- Vettivel, S.C.
- Balakrishnan, M.
- Akinlabi, E.T.
ABSTRACT:
Utilizing fly ash (FA) as reinforcement for magnesium matrix composites (MMCs) brings down both the production cost and land pollution. Magnesium alloy AZ31 was successfully reinforced with FA particles (10 vol.%) by two different processing methods, namely conventional stir casting and friction stir processing (FSP). The microstructural features were observed using an optical microscope, a scanning electron microscope and electron backscatter diffraction. The sliding wear behavior was tested using a pin-on-disc wear apparatus. The stir cast composite showed inhomogeneous particle dispersion and a coarse grain structure. Some of the FA particles decomposed and reacted with the matrix alloy to produce undesirable compounds. Conversely, the FSP composite showed superior particle dispersion and fine, equiaxed grains formed by dynamic recrystallization. FA particles underwent disintegration, but there was no interfacial reaction. The FSP composite demonstrated higher strengthening and wear resistance than the stir cast composite. The morphology of the worn surface and the wear debris were studied in detail.
BODY:
1 Introduction Various elements such as aluminum, zinc, zirconium and tin are added to pure magnesium to produce magnesium alloys with improved mechanical and tribological properties. In spite of this improvement, magnesium alloys possess poor resistance to elevated temperatures and do not exhibit a considerable strengthening effect compared to existing structural materials. Reinforcing magnesium alloys with ceramic particles to produce magnesium matrix composites (MMCs) provides good resistance to heat and adequate strengthening. MMCs are characterized by several desirable properties, including low density, good castability, higher specific strength, superior damping capacity and improved wear resistance [1–4] . MMCs meet the requirement of the automotive and aerospace industries to reduce the weight of structures in order to improve fuel economy and reduce greenhouse gas emissions. Owing to these desirable properties, MMCs are considered potentially competitive with aluminum composites [5,6] . However, the production cost of MMCs is higher than that of composites based on aluminum [7] . The production cost can be brought down if natural and industrial waste materials are effectively adopted as reinforcement particles for MMCs [8] . Fly ash (FA) is one such material and a potential reinforcement particle. It is an industrial residue originating from the combustion of coal in thermal power plants, captured by suitable filtration methods before the flue gas escapes the chimney. It is available in copious quantities across the globe and can be used to reinforce MMCs, where it would otherwise cause land pollution and adverse environmental effects. It is inexpensive compared to traditional ceramic particles and carbon nanotubes. FA is categorized into solid precipitator and cenosphere types, the latter being a hollow FA particle. Silicon oxide (SiO2), aluminum oxide (Al2O3) and calcium oxide (CaO) constitute the major composition of FA [9–12] . Several studies on FA reinforced MMCs have been reported in the literature [13–20] . Rohatgi et al. [13] demonstrated the feasibility of producing AZ91/(5, 10 and 15 wt.%) FA MMCs using a die casting method and showed improved tensile behavior. Huang et al. [14] produced AZ91/(4, 6, 8 and 10 wt.%) FA MMCs using stir casting and showed improved compressive strength. Huang and Yu [15] fabricated an AZ91/5 wt.% FA MMC using a compocasting method and explored the growth mechanism of intermetallic compounds such as Mg2Si and MgO. Lu et al. [16] produced AZ91/FA MMCs using a compocasting method; they used Ca(OH)2 to modify the surface of the FA and obtained enhanced electromagnetic interference shielding. Sankaranarayanan et al. [17] prepared Mg/(5 and 15 wt.%) FA MMCs using a powder metallurgy method and observed several intermetallic compounds in the matrix. Malik and Kamieniak [18] developed AZ91/60 vol.% FA MMCs through a negative pressure infiltration method, applying a Ni–P coating on the cenospheres to suppress interfacial reactions. Kondaiah et al. [19] fabricated an AZ31/FA MMC using friction stir processing and reported an increase in mechanical and wear properties. Liu et al. [20] prepared an AZ91/6 wt.% FA MMC using a compocasting method and applied isothermal heat treatment to improve the damping capacity. This literature survey suggests that interest in FA reinforced MMCs is constantly growing. Various liquid metallurgy routes were followed to produce those MMCs.
Undesirable features such as porosity, interfacial reaction, coarse grain structure and improper dispersion were observed. Friction stir processing (FSP) has been acknowledged as an alternative solid-state processing method to produce MMCs and to get rid of the issues in casting methods [21] . FSP plasticizes the substrate using a combination of frictional heat and intense deformation and blends the packed powders with the plasticized matrix, resulting in a composite [22] . MMCs reinforced with several ceramic particles and carbon nanotubes have been successfully synthesized using FSP [23–27] . No published literature compares the microstructure and properties of MMCs produced by FSP with those produced by conventional casting. The present work attempts to produce AZ31/FA MMCs using both the stir casting and FSP routes, and compares the microstructure and wear properties of the two processes with respect to particle dispersion and wear resistance. 2 Experimental procedure 2.1 Stir casting A premeasured quantity of magnesium alloy AZ31B was melted in an electrical resistance furnace under an inert atmosphere at a temperature of 720 °C. Table 1 presents the composition of the base metal. FA particles (10 vol.%, average size ∼10 µm), preheated at 300 °C for 2 h, were fed into the molten magnesium, which was mechanically stirred by a rotating stirrer to create a vortex. After complete mixing, the composite melt was immediately transferred to a preheated permanent mold and solidified under a pressure of 20 kN. A detailed casting procedure is available elsewhere [28] . The morphology of the FA particles used is presented in Fig. 1 , and Table 2 documents their composition. 2.2 Friction stir processing A groove was machined at the center of magnesium plates of size 100 mm × 50 mm × 6 mm and packed with FA particles (10 vol.%). FSP was accomplished using a robust computer numerical controlled vertical machining center at a tool rotational speed of 1200 rpm and a traverse speed of 40 mm/min. The processing tool, made of high carbon high chromium (HCHCr) steel, had a shoulder diameter of 18 mm, a pin diameter of 6 mm and a pin length of 5 mm. A detailed FSP procedure and tooling were reported in an earlier work [29] . 2.3 Microstructural characterization Specimens of arbitrary size were sliced from the casting as well as from the friction stir processed plates for microstructural observation. A standard metallographic procedure was adopted to polish the specimens, and etching was done using a solution consisting of 100 ml H2O and 10 g citric acid. The etched specimens were viewed using an optical microscope (OLYMPUS BX51M), a field emission scanning electron microscope (FESEM) (CARL ZEISS-SIGMAHV) and electron backscatter diffraction (EBSD). EBSD was carried out in a FEI Quanta FEG SEM equipped with TSL-OIM software. The microhardness was measured using a microhardness tester at a 500 g load applied for 15 s at various locations in the composite. 2.4 Sliding wear The wear rate was tested using a pin-on-disc wear apparatus (DUCOM TR20-LE) at room temperature according to the ASTM G99-04a standard. Pins of size 6 mm × 6 mm × 40 mm were prepared from the casting as well as from the FSP zone by WEDM. The wear test was run at a sliding velocity of 1.0 m/s, a normal force of 20 N and a sliding distance of 3000 m. The polished surface of the pin was slid on a hardened chromium steel disc. A computer-aided data acquisition system was used to monitor the loss of height. The volumetric loss was calculated by multiplying the cross-sectional area of the pin by its loss of height, and the wear rate was obtained by dividing the volumetric loss by the sliding distance.
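For illustration (individual height-loss readings are not reported here, so the value is chosen only to show the arithmetic): a height loss of 0.35 mm on the 6 mm × 6 mm pin over the 3000 m test gives a volumetric loss of 36 mm^2 × 0.35 mm = 12.6 mm^3 and a wear rate of 12.6 mm^3 / 3000 m = 420 × 10^-5 mm^3/m, which matches the stir cast value reported in Section 3.3.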
The worn surface and the collected wear debris were observed using FESEM. 3 Results and discussion 3.1 XRD and microstructure of AZ31/FA MMCs Fig. 2 presents the XRD patterns of the AZ31/FA MMCs. The diffraction peaks of SiO2, which constitutes the major component of the FA particles, are detected in both composites. A few peaks of SiO2 and Al2O3 overlap with the peaks of magnesium. The SiO2 peaks are taller and better pronounced in the MMC produced by FSP, which can be attributed to the proper dispersion of FA particles, as elaborated subsequently. Figs. 3 and 4 respectively present the optical and SEM micrographs of the AZ31/FA MMCs produced by stir casting. A dispersion of FA particles among a coarse grain structure is observed in the magnesium alloy matrix. The dispersion indicates successful incorporation of FA particles during casting; there was no rejection of particles from the melt, which confirms good wettability between the molten magnesium and the FA particles. However, the dispersion is not homogeneous. Some regions are filled with particles while others have little or no dispersion, and there is no constant interparticle distance, which can be attributed to the characteristics of the stir casting process. The dispersion is influenced at every stage of the process, from particle introduction through holding to solidification. The density difference and the particle size always cause particle movement during the holding period between incorporation and pouring [30] . The density difference is marginal compared to conventional ceramic particles, which may enable the FA particles to remain suspended for a lengthy period without sinking immediately. The final dictating factor is the solidification front. The presence of FA particles in the melt may lead to heterogeneous nucleation of grains and initiate multiple solidification fronts in many directions. There was a large variation in the size of the collected FA particles, as seen in Fig. 1 , which might have caused the random inhomogeneous dispersion. However, there is no clear continuous segregation of FA particles along the grain boundary regions, suggesting that not all particles were pushed by the solidification front. The SEM micrographs in Fig. 4 show that many FA particles decomposed due to interfacial reaction with the molten magnesium and lost their shape. The elevated casting temperature triggers reactions between the magnesium melt and the fed FA particles. Compounds such as MgO, Mg2Si and MgAl2O4 may be formed according to the following equations [15, 16] :
(1) 2Mg (l) + SiO2 (s) = 2MgO (s) + Si (s), ΔG° = -76,500 + 15.4T
(2) 2Mg (l) + Si (s) = Mg2Si (s), ΔG° = -24,000 + 9.4T
(3) 3Mg (l) + Al2O3 (s) = 3MgO (s) + 2Al (l), ΔG° = -35,190 + 6.47T
(4) MgO (s) + Al2O3 (s) = MgAl2O4 (s), ΔG° = -35,600 - 2.09T
The negative free energies of formation indicate the possibility of forming these compounds at the chosen casting temperature. Mg2Si usually exhibits a clear polygonal shape, which is not observed in the micrographs.
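As a quick feasibility check at the casting temperature of 720 °C (T ≈ 993 K), substituting into Eqs. (1)–(4) (in the energy units of the quoted relations) gives ΔG°(1) ≈ -76,500 + 15.4 × 993 ≈ -61,200; ΔG°(2) ≈ -24,000 + 9.4 × 993 ≈ -14,700; ΔG°(3) ≈ -35,190 + 6.47 × 993 ≈ -28,770; and ΔG°(4) ≈ -35,600 - 2.09 × 993 ≈ -37,680. All four values are negative, consistent with the interfacial reaction products observed in the stir cast composite.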
Figs. 5 and 6 respectively show the optical and SEM micrographs of the AZ31/FA MMCs prepared using FSP. The micrographs reveal the state of dispersion of FA particles in the composite: FA particles are evenly scattered in the magnesium alloy matrix, and the nature of the dispersion can be regarded as homogeneous. Every section of the magnesium alloy is reinforced with FA particles; no unreinforced sections, known as particle free zones, are observed in the micrographs. The packed particles in the machined groove are driven deep into the surrounding matrix to produce the composite. This occurs due to the evolving frictional heat, which plasticizes the magnesium alloy. The rotating tool induces a three-dimensional material flow in the space between the cold base metal and the surface of the tool, and the mechanical action of the tool causes intense mixing of the FA particles with the plasticized material flow, resulting in a homogeneous dispersion [31] . Since the entire process occurs below the melting point of the chosen magnesium alloy, displacement of particles by the force of a solidification front is totally absent, and the nature of the dispersion is preserved after forging throughout the period of cooling to ambient temperature. A comparison between Figs. 1 and 6 reveals that the FA particles underwent a change in size and shape during FSP: a reduction in size and a few sharp corners are visible in Fig. 6 . A similar change in size and morphology has been reported in the literature for various kinds of reinforcement particles [32] , with the enormous plastic strain identified as the primary cause. The mechanical strain developed during FSP is several-fold that of conventional deformation techniques. The deformability of the magnesium alloy is superior to that of fly ash; as a result, the matrix alloy deforms plastically but the FA disintegrates due to poor toughness. The continuous motion of the material flow does not allow the disintegrated debris to accumulate at one spot. Fig. 6 (c) and (d) show nanometric debris dispersed in the matrix and in the vicinity of the large disintegrated particles. The interface between the FA particles and the magnesium alloy matrix is sharp, without any kind of reactive layer surrounding the particles. This desirable interface can be attributed to the processing temperature, which is far below the temperature required to trigger any interfacial reaction. The density measurement ( Table 3 ) indicates that the stir cast composite is slightly less dense than the friction stir processed composite. This is possibly due to micropores in the composite inherited from the nature of the casting process; conversely, FSP applies an enormous compressive force which closes micropores and helps to densely consolidate the plasticized composite. Fig. 7 presents the EBSD images of the AZ31/FA MMCs showing the grain structure. The stir cast MMC shows a coarse grain structure ( Fig. 7 (a)) with an average grain size of 145 µm ( Table 3 ); the patches in the image represent the locations of FA particles. The evolution of the coarse grain structure can be attributed to the nature of the casting process. The cooler spots of the mold wall and the undercooled zones around the FA particles may act as grain nucleating sites, and some grains may solidify directly on the FA particles. Grain growth is unrestricted until the temperature drops below the solidus line or the grain intersects the boundary of other growing grains. There was enough time for the growth of nucleating grains due to the slower cooling rate caused by the preheated die; no chill or quenching was applied to aid rapid solidification.
The inhomogeneous distribution might have offset the beneficial effect of heterogeneous nucleation by providing more space for grain growth. On the contrary, fine and equiaxed grains with an average grain size of 4 µm ( Table 3 ) can be seen in the MMC produced by FSP ( Fig. 7 (b)). The fine grains can be related to dynamic recrystallization, which is well acknowledged in the literature [21] : the combined effect of frictional heat and intense deformation results in the formation of fine grains. Since FSP is grouped among hot working processes, the processed material undergoes dynamic recrystallization, which reduces the grain size drastically. Moreover, the broken and unbroken FA particles may contribute a pinning effect which further promotes grain refinement [24] . Fig. 8 depicts the misorientation angle distribution of the AZ31/FA MMCs. The stir cast composite is characterized by a larger number of high angle boundaries compared to the friction stir processed composite. The higher amount of low angle boundaries in the FSP composite is due to: (a) the formation of sub-grain boundaries by rearrangement of dislocations during the recovery process, and (b) the pinning effect of the broken reinforcement particles on the formation of sub-grain boundaries. The microstructural features, such as the nature of dispersion, the interface and the grain structure, are greatly influenced by the two methods used in this investigation. The optical micrographs in Figs. 3 and 5 were recorded at various locations within the casting and the stir zone, respectively. The dispersion varies largely across the casting, while a consistent dispersion is obtained in the stir zone. The FSP method is not influenced by the density gradient between the matrix alloy and the reinforced particles; this feature completely arrests the free motion of particles during processing, which is not possible during casting. The variation in the solidification pattern and the motion of particles due to the density gradient cause inconsistent dispersion in the casting. FA particles suffered a change in shape and size during processing by either method, due to decomposition or disintegration. Decomposition alters the composition of the FA and produces undesirable compounds, whereas disintegration does not change the composition but reduces the size of the FA particles. Disintegration is advantageous for improving the strengthening effect due to the reduction in average interparticle distance [33] ; decomposition contaminates the interface between the matrix and the reinforcement and decreases the load transfer effect. A lower processing temperature is beneficial for producing FA reinforced MMCs without decomposition. The application of pressure during solidification minimized the presence of pores in the casting: micron-sized pores are hardly spotted in Figs. 3 and 4 , since the pressure relieved any trapped gases in the casting. No pores are present in the stir zone, as seen in Figs. 5 and 6 . Pores occasionally form at the interface during FSP if the plasticized material does not cover the surface of the reinforcement particles due to improper selection of process parameters [34] . The microstructural features obtained through FSP are desirable for the enhancement of properties compared to conventional stir cast MMCs. 3.2 Microhardness of AZ31/FA MMCs The microhardness values of the AZ31/FA MMCs are 62 HV and 94 HV for the stir casting and FSP methods of preparation, respectively.
The microhardness of the FSP composite is 51% higher than that of the stir cast composite. The microstructural features are responsible for this large variation in hardness: FSP provided significant strengthening of the composite, while the undesirable features of the stir cast composite, such as inhomogeneous dispersion, interfacial reaction and coarse grain structure, limited its strengthening in spite of the successful reinforcement of FA particles in the magnesium alloy matrix. On the other hand, homogeneous dispersion, a reaction-free interface and a fine grain structure are the ideal conditions for improving the strength of the composite. The homogeneous dispersion activates the Orowan strengthening mechanism, in which the path of dislocations is diverted numerous times [35] . The grain size is inversely related to the strength according to the well-known Hall–Petch relationship. The average interparticle distance is lower in the FSP composite, which provides more interaction between the matrix and the FA particles during hardness testing, resulting in higher hardness [36] . 3.3 Sliding wear behavior of AZ31/FA MMCs The wear rates of the stir cast and FSP composites are 420 and 280 × 10^-5 mm^3/m, respectively; the FSP composite exhibited a 33% lower wear rate than the stir cast composite. The lower wear rate can be attributed to the higher hardness of the FSP composite: Archard's wear law states that the wear rate bears an inverse relationship to the hardness of metallic materials, and the hardened surface effectively resists the cutting action of the counterface asperities. Further, the coefficient of friction (COF) is lower for the FSP composite ( Table 3 ). The contact area between the sliding pin and the counterface is successfully reduced by the protruding particles owing to the homogeneous dispersion, and the applied load is well supported by the FA particles. The reduction in COF decreases the shear stress acting on the sliding surface, leading to an improvement in wear resistance. Fig. 9 shows the progressive height loss of the specimens during the sliding wear test. The curve is not smooth for the stir cast pin; this is due to the inhomogeneous dispersion, which exposes regions of differing resistance to the cutting action. A sudden drop occurs when the counterface asperities remove material from unreinforced regions or when particle clusters debond, and the drop levels off once particle reinforced regions resist the cutting action. SEM micrographs of the worn surfaces ( Fig. 10 ) of both composites display a parallel groove pattern, which is one of the primary characteristics of the abrasive wear mechanism. The morphology of the worn surface rules out the adhesive wear mechanism usually found in unreinforced magnesium alloys [37] . The reinforcement of FA particles by both processing methods restricted the uncontrolled plastic deformation caused by the applied load and assisted by the frictional heat evolved at the sliding surface. However, larger craters due to the plowing action, and piling of deformed material at the ridges of the grooves, are observed on the worn surface of the stir cast composite ( Fig. 10 (a)). This can be related to the several particle free regions observed in the micrographs ( Fig. 3 ): those unreinforced regions are subjected to the frictional heat and the strong cutting action, resulting in deformation and craters, as evidenced by the spikes in the plot in Fig. 9 .
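A rough consistency check of this inverse relationship (an estimate made here, not taken from the source): the hardness ratio of the two composites is 62 HV / 94 HV ≈ 0.66, while the measured wear-rate ratio is (280 × 10^-5) / (420 × 10^-5) ≈ 0.67, so the observed improvement is close to what an inverse-hardness (Archard-type) scaling predicts.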
This process produces larger wear debris ( Fig. 11 (a)), indicating bulk removal of material and causing the higher wear rate. On the contrary, the worn surface of the FSP composite is covered by numerous fine debris particles ( Fig. 10 (b)), and no large craters are observed. This is attributed to the homogeneous dispersion of FA particles, which prevents the cutting asperities from penetrating deep into the subsurface. A layer of FA particles initially supports the normal load, followed by fracture or debonding, after which a fresh layer is exposed. The fine debris ( Fig. 11 (b)) trapped on the sliding track and at the sliding interface provides a rolling action and is responsible for initiating three-body abrasion. This rolling, three-body abrasion reduces the cutting action, leading to a reduction in wear rate; the rate of material removal is retarded, and the interaction between new and existing wear debris further reduces the average debris size, improving the wear resistance further. 4 Conclusion AZ31/10 vol.% FA MMCs were successfully produced using conventional stir casting and the relatively new FSP method. The microstructure, microhardness and sliding wear behavior were characterized using optical microscopy, SEM, EBSD and a pin-on-disc apparatus. The following conclusions were derived from the present research work. • Solid precipitator type FA particles possessed adequate wettability to be reinforced into AZ31 using stir casting, and there was no rejection of particles from the melt. It is, however, difficult to achieve a homogeneous dispersion: several factors caused movement of particles during casting and solidification and produced an inhomogeneous dispersion in the composite, inconsistent across the casting. The casting conditions promoted interfacial reaction and decomposition of FA particles. The application of pressure during solidification reduced the porosity. The grain structure was coarse due to the inhomogeneous dispersion and unrestricted growth. • The FSP composite exhibited a homogeneous dispersion with a smaller interparticle distance, consistent irrespective of the location within the stir zone. The grain structure was extremely fine and equiaxed owing to dynamic recrystallization driven by frictional heat and severe plastic deformation. FA particles experienced a change in shape and size due to the severe strain, which caused disintegration of the particles. The interface was clean, without any kind of reaction, due to the lower processing temperature below the melting point of the matrix alloy. • The microhardness and wear resistance of the stir cast composite were inferior to those of the FSP composite due to the undesirable microstructural features. The worn surface showed plastic deformation at the ridges and large craters, which were absent on the worn surface of the FSP composite. The wear debris of the stir cast composite was larger in size due to exposure to a stronger cutting action. Conflict of interest There are no conflicts between the authors or with the institution. Acknowledgments The authors are grateful to the Thermal Power Station at Tuticorin, the Centre for Metallurgy and Materials Processing at V V College of Engineering, Vigshan Tools and Speed Spark EDM at Coimbatore, the Microscopy Lab at the University of Johannesburg, the FESEM lab at Coimbatore Institute of Technology, and the OIM and Texture Lab at the Indian Institute of Technology Bombay for providing the facilities to carry out this investigation.
REFERENCES:
1. DEY A (2015)
2. KRISHNAN M (2018)
3. KORAYEMA M (2009)
4. ESMAILY M (2016)
5. GUPTA M (2015)
6. GHASALI E (2017)
7. VISWANATH A (2015)
8. BAHRAMI A (2016)
9. ZAHI S (2011)
10. DEY A (2016)
11. LANZERSTORFER C (2018)
12. NGUYEN Q (2016)
13. ROHATGI P (2009)
14. HUANG Z (2010)
15. HUANG Z (2011)
16. LU N (2015)
17. SANKARANARAYANAN S (2016)
18. MALIK K (2017)
19. KONDAIAH V (2017)
20. LIU E (2018)
21. RATNASUNIL B (2016)
22. SHARMA V (2015)
23. LEE C (2006)
24. AZIZIEH M (2011)
25. NASER A (2017)
26. LU D (2013)
27. VEDABOURISWARAN G (2018)
28. MATIN A (2015)
29. BALAKRISHNAN M (2015)
30. WANGA X (2014)
31. NAVAZANI M (2016)
32. FENOEL M (2016)
33. KOUZELI M (2002)
34. SHARIFITABAR M (2016)
35. QUEYREAU S (2010)
36. IZADI H (2013)
37. AHMADKHANIHA D (2016)
|
10.1016_j.asej.2015.07.012.txt
|
TITLE: Chemical reaction and heat source effects on MHD oscillatory flow in an irregular channel
AUTHORS:
- Satya Narayana, P.V.
- Venkateswarlu, B.
- Devika, B.
ABSTRACT:
This paper investigates the effect of heat and mass transfer on MHD oscillatory flow in an asymmetric wavy channel with chemical reaction and heat source. The unsteadiness in the flow is due to an oscillatory pressure gradient across the ends of the channel. A magnetic field of uniform strength is applied in the direction perpendicular to the channel; however, the induced magnetic field is neglected under the assumption of small magnetic Reynolds number. The temperature difference across the channel is assumed high enough to induce radiative heat transfer. The governing equations are solved analytically by a regular perturbation method. The analytical results are evaluated numerically and presented graphically to discuss the effects of the different parameters entering the problem. It is observed that heat transport is enhanced in oscillatory flow compared with ordinary conduction.
BODY:
Nomenclature
a_1, b_1: amplitudes of the wavy walls
a, b: amplitude ratios
B_0: electromagnetic induction
c_p: specific heat at constant pressure
d_1 + d_2: width of the channel
d: mean half width of the channel
D_a: Darcy number
g: gravitational force
Gr: Grashof number
Gc: modified Grashof number
H_0: intensity of the magnetic field
K: porous medium shape factor
Kr: chemical reaction parameter
k*: porous medium permeability coefficient
k: thermal conductivity
M: Hartmann number
Nu_1: Nusselt number at the wall y = h_1
Nu_2: Nusselt number at the wall y = h_2
Pe: Peclet number
p: pressure
q: radiative heat flux
Greek symbols
θ: fluid temperature
β_T: coefficient of thermal expansion
β_C: coefficient of mass expansion
μ_e: magnetic permeability
σ_c: conductivity of the fluid
ρ: fluid density
ν: kinematic viscosity coefficient
λ: wavelength
ω: frequency of the oscillation
α: mean radiation absorption coefficient
τ_1: skin friction at the wall y = h_1
τ_2: skin friction at the wall y = h_2
Q: heat source parameter
Re: Reynolds number
R: radiation parameter
Sc: Schmidt number
t: time
U: mean flow velocity
u: axial velocity
1 Introduction An extensive amount of study [1–3] has been performed in the last few decades to gain an enhanced understanding of flow mixing and heat transfer development in channels with geometrical inhomogeneities, such as asymmetric and symmetric wavy walls, grooved, communicating and corrugated channels, and other geometrical configurations such as backward facing steps, channel expansions, passages with eddy promoters and grooved tubes. Asymmetric and symmetric wavy wall channels are used in industrial and biomedical applications [4–7] , for instance compact heat exchangers, oxygenators and hemodialyzers. These channels are easy to fabricate and can provide significant heat transfer enhancement if operated in an appropriate Reynolds number range. Peristaltic flow and heat transfer of a conducting fluid in an inclined asymmetric channel have been investigated in [8,9] . Unsteady oscillatory free convective flows play a significant role in chemical engineering, in turbomachinery and in aerospace technology. Such flows arise due to either unsteady boundary motion or unsteady boundary temperature; besides, unsteadiness may also be due to an oscillatory free stream velocity or temperature. The physical idea behind MHD oscillatory flow in a channel is that, if a heat source is connected to a heat sink via a fluid and the fluid is oscillated, the convective motion will bring about sharp spikes in the velocity profile, which in turn will enhance the heat transport over pure conduction due to both radial and axial gradients. This research has many applications, including removing heat from outer space modules, reactors, closed cabins, and NASA's long-term manned and unmanned missions. Many parameters can be varied to maximize the convective heat transport, such as pulse amplitude and frequency, pipe radius and length, as well as the transporting fluid and the geometry of the transport zone. It has been established both experimentally and analytically that large quantities of heat are transported axially provided the fluid is oscillated at high frequency with large tidal displacement. It is also confirmed that under laminar conditions the radial variation in velocity and temperature produces an effective axial transport of heat which is several orders of magnitude larger than in the absence of oscillations. The study of MHD flow deals with the interaction of electrically conducting fluids and magnetic fields.
If an electrically conducting fluid moves in a magnetic field, the magnetic field exerts forces which may significantly change the flow. On the other hand, the flow itself gives rise to a second, induced field and thus alters the magnetic field. In most technical applications this interaction reduces to the one-way action that externally applied magnetic fields have on fluid flows that are to be stabilized, stirred, or modified. The magnetic Reynolds number $R_m$ is an important parameter in MHD, defined as
$$R_m = \frac{\text{induced field}}{\text{applied field}} = \mu \sigma u L,$$
where $\mu$ is the permeability of free space, $\sigma$ is the electrical conductivity, and $u$ and $L$ are the characteristic velocity and length scale, respectively. If there is no excess charge density, then $\nabla \cdot E = 0$, where $E$ is the induced electric field. Since the magnetic Reynolds number is very small, the induced magnetic field and consequently $\nabla \times E$ are negligible, in conjunction with the result $\nabla \cdot E = 0$ (see Ref. [10] ). This is frequently known as the "inductionless" approximation of MHD [11] . In most engineering and laboratory flows (in contrast with hydromagnetic dynamo effects [12] ) it is impossible to reach large values of velocity and length scale. Consequently, MHD flows in terrestrial applications typically occur at small magnetic Reynolds number [13–15,32] . In view of these industrial applications, many researchers [16–20] have focused on MHD flows and their varied applications. Hayat et al. [21] reported the effects of radiation and magnetic field on the mixed convection stagnation-point flow over a vertical stretching sheet in a porous medium. The distance a photon travels before interacting is characterized by the optical depth, and radiation analysis can be classified by the limits of this parameter (see Ref. [22] ). Optically thin fluids (long photon travel) and optically thick fluids (short photon travel) are the limiting conditions for radiation. The assumption of non-participating media, and situations in which energy may be emitted from the fluid but not absorbed, are examples of optically thin phenomena. For optically thick cases, the radiation is essentially emitted and/or absorbed at the fluid boundaries; in this case the radiative heat fluxes can be approximated by the Rosseland diffusion approximation [23] , which has been extensively used in many radiation-related studies [24–27] . It should be noted that for both CO2 in the temperature range of 100–650 °F (with the corresponding Prandtl number range 0.76–0.6) and NH3 vapour in the temperature range of 120–400 °F (with the corresponding Prandtl number 0.88–0.84) at 1 atm, the value of the radiation parameter ranges from 0.033 to 0.1, whereas for water vapour in the temperature range of 220–900 °F (with the corresponding Prandtl number Pr = 1.0) the radiation values lie between 0.02 and 0.3 (see Cess [24] ). However, radiation heat transfer has a key impact in the high-temperature regime. Many technological processes occur at high temperature, and a good working knowledge of radiative heat transfer plays an instrumental role in designing the pertinent equipment. In many practical applications, depending on the surface properties and solid geometry, the radiative transport is often comparable with convective heat transfer. In view of these industrial and engineering applications, Makinde [28] analysed the radiation effects on free convection flow in a channel with slowly varying width.
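As an order-of-magnitude check of the small-$R_m$ assumption above (the numbers are typical laboratory values assumed here, not taken from the source): with $\mu = 4\pi \times 10^{-7}$ H/m, $\sigma \approx 10^{6}$ S/m (a liquid metal), $u = 0.1$ m/s and $L = 0.1$ m,
$$R_m = \mu \sigma u L \approx (1.26 \times 10^{-6})(10^{6})(0.1)(0.1) \approx 1.3 \times 10^{-2} \ll 1,$$
so neglecting the induced magnetic field is well justified for such flows.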
Many researchers [29–32] have studied, analytically and numerically, the flow and heat transfer characteristics of Newtonian and non-Newtonian fluids over various geometries. The study of heat transfer with chemical reaction is of great practical significance to engineers and scientists because of its universal incidence in many branches of science and engineering. This phenomenon plays a significant role in the chemical industry, and in the power and cooling industries for drying, evaporation, energy transfer in cooling towers, the flow in a desert cooler, etc. Devika et al. [33] studied the influence of chemical reaction on MHD free convection flow in an irregular channel with a porous medium. Satya Narayana and Sravanthi [34] analysed the influence of variable permeability on unsteady MHD convection flow past a semi-infinite inclined plate with thermal radiation and chemical reaction. Some recent investigations dealing with chemical reaction effects in different flow fields are given in Refs. [35,36] . To the best of our knowledge, the problem of MHD oscillatory flow in an asymmetric wavy channel with chemical reaction, heat source and non-uniform wall temperature under long wavelength and low Reynolds number assumptions has remained unexplored. Hence, the main objective of this study is to investigate the combined effects of chemical reaction and radiative heat transfer in an asymmetric wavy channel filled with a saturated porous medium. The governing equations of the flow are solved analytically, and the effects of various flow parameters on the flow field are discussed. The format of the paper is as follows: Section 2 presents the mathematical model and the non-dimensionalization of the governing equations; Section 3 contains the results and discussion; finally, Section 4 highlights the important conclusions derived from the present study. 2 Mathematical formulation of the problem We consider the unsteady flow of a viscous, incompressible, electrically conducting and chemically reacting optically thin fluid in an asymmetric wavy channel with heat source (see Fig. 1 ), whose walls are given by
$$(1)\quad H_1 = d_1 + a_1 \cos\left(\frac{2\pi x}{\lambda}\right), \qquad H_2 = -d_2 - b_1 \cos\left(\frac{2\pi x}{\lambda} + \varphi\right),$$
where $a_1$, $b_1$, $d_1$, $d_2$ and $\varphi$ satisfy the condition $a_1^2 + b_1^2 + 2 a_1 b_1 \cos\varphi \leq (d_1 + d_2)^2$. The phase difference $\varphi$ varies in the range $0 \leq \varphi \leq \pi$; $\varphi = 0$ corresponds to a symmetric channel with waves out of phase, and for $\varphi = \pi$ the waves are in phase. The walls of the channel are maintained at temperatures $T_1$ and $T_2$, respectively, which differ enough to induce radiative heat transfer. It is assumed that the transversely applied magnetic field and the magnetic Reynolds number are very small, so the induced magnetic field is negligible. The fluid is set into oscillation by (i) an oscillatory pressure gradient across the ends of the channel and (ii) the asymmetric wavy channel. Viscous and Darcy resistance terms are taken into account with constant permeability of the porous medium. With these assumptions the governing equations are (see Refs. [37–39] )
$$(2)\quad \frac{\partial u}{\partial t} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \frac{\partial^2 u}{\partial y^2} + g\beta_T (T - T_2) + g\beta_C (C - C_2) - \frac{\nu}{k^*} u - \frac{\sigma B_0^2 u}{\rho}$$
$$(3)\quad \frac{\partial T}{\partial t} = \frac{k}{\rho c_p} \frac{\partial^2 T}{\partial y^2} - \frac{1}{\rho c_p}\frac{\partial q}{\partial y} + \frac{Q_H}{\rho c_p}(T - T_2)$$
$$(4)\quad \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial y^2} - K_r^* (C - C_2)$$
The relevant boundary conditions are
$$(5)\quad u = 0,\ T = T_1,\ C = C_1 \ \text{on}\ y = H_1; \qquad u = 0,\ T = T_2,\ C = C_2 \ \text{on}\ y = H_2.$$
The radiative heat flux (see Ref. [40]) is given by

(6) ∂q/∂y = 4α²(T₂ − T), where α² = ∫₀^∞ K_λw (∂e_bλ/∂T) dλ,

K_λw is the absorption coefficient and e_bλ is Planck's function (see Ref. [41]). The radiative heat transfer coefficient depends strongly on temperature and is less useful as a concept than the convective heat transfer coefficient. For conduction and convection, energy transfer between two locations depends on their temperature difference to approximately the first power; thermal radiation energy transfer between two bodies, however, depends on the difference between their absolute temperatures, each raised to about the fourth power [42]. The coefficient is nevertheless useful for practical problems involving both convection and radiation. We introduce the following non-dimensional quantities:

(7) x̄ = x/λ, ȳ = y/d₁, ū = u/U, t̄ = tU/d₁, h₁ = H₁/d₁, h₂ = H₂/d₁, d = d₂/d₁, a = a₁/d₁, b = b₁/d₁, Da = k*/d₁², Re = Ud₁/ν, Q = Q_H d₁²/k, K = 1/Da, Sc = D/(Ud₁), Kr = d₁K_r*/U, p̄ = d₁²p/(ρνλU), Pe = Ud₁ρc_p/k, R = 4α²d₁²/k, M = σB₀²d₁²/(ρν), Gr = gβ_T(T₁ − T₂)d₁²/(νU), Gc = gβ_C(C₁ − C₂)d₁²/(νU), θ = (T − T₂)/(T₁ − T₂), ϕ = (C − C₂)/(C₁ − C₂)

In dimensionless form the channel walls become h₁ = 1 + a cos(2πx) and h₂ = −d − b cos(2πx + φ), where a, b, d and φ satisfy the condition a² + b² + 2ab cos φ ⩽ (1 + d)². The dimensionless governing equations together with the appropriate boundary conditions (dropping the bars) can be written as follows:

(8) Re ∂u/∂t = −∂p/∂x + ∂²u/∂y² + Gr θ + Gc ϕ − (M + 1/K)u

(9) Pe ∂θ/∂t = ∂²θ/∂y² + (R + Q)θ

(10) ∂ϕ/∂t = Sc ∂²ϕ/∂y² − Kr ϕ

The corresponding boundary conditions in non-dimensional form are as follows:

(11) u = 0, θ = 1, ϕ = 1 on y = h₁; u = 0, θ = 0, ϕ = 0 on y = h₂

2.1 Method of solution

The system of partial differential Eqs. (8)–(10) can be reduced to a system of ordinary differential equations in dimensionless form by assuming a purely oscillatory pressure gradient and writing the velocity, temperature and concentration as (see Refs. [31,37,39,43,44]):

−∂p/∂x = λ e^{iωt}

(12) u(y, t) = u₀(y) e^{iωt}, θ(y, t) = θ₀(y) e^{iωt}, ϕ(y, t) = ϕ₀(y) e^{iωt}

2.2 Calculation

Substituting Eq. (12) into Eqs. (8)–(10) (the real part of the complex solution is to be evaluated and analysed), we obtain the following set of equations:

(13) d²u₀/dy² − n²u₀ = −λ − Gr θ₀ − Gc ϕ₀

(14) d²θ₀/dy² + m²θ₀ = 0

(15) d²ϕ₀/dy² − l²ϕ₀ = 0

The corresponding boundary conditions can be written as follows:

(16) u₀ = 0, θ₀ = 1, ϕ₀ = 1 on y = h₁; u₀ = 0, θ₀ = 0, ϕ₀ = 0 on y = h₂
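Before turning to the closed-form solution, the reduced boundary-value problem (13)–(16) can be cross-checked with a standard finite-difference solver. The sketch below is illustrative only: the dimensionless parameter values are assumed rather than taken from the paper, and the plain central-difference discretization is not the method used by the authors.

```python
# Numerical cross-check of the reduced boundary-value problem (13)-(16).
# Illustrative sketch with assumed parameter values; the paper itself uses a
# closed-form solution, derived next.
import numpy as np

# assumed dimensionless parameters (for illustration only)
Re, Pe, Sc = 0.1, 1.0, 0.5
M, K, Gr, Gc = 1.0, 1.0, 2.0, 2.0
R, Q, Kr = 1.0, 0.5, 0.5
lam, omega = 0.1, 0.1            # pressure-gradient amplitude and frequency
x, a, b, d, phi = 0.5, 0.2, 1.2, 2.0, 0.0

h1 = 1.0 + a * np.cos(2 * np.pi * x)
h2 = -d - b * np.cos(2 * np.pi * x + phi)

# complex coefficients appearing in Eqs. (13)-(15)
n2 = M + 1.0 / K + 1j * omega * Re
m2 = Q + R - 1j * omega * Pe          # theta0'' + m2*theta0 = 0
l2 = (Kr + 1j * omega) / Sc           # phi0''  - l2*phi0  = 0

def solve_linear_bvp(coeff, rhs, bc_low, bc_high, y):
    """Solve f'' + coeff*f = rhs with f(y[0]) = bc_low, f(y[-1]) = bc_high
    by second-order central differences (complex arithmetic allowed)."""
    N = y.size
    h = y[1] - y[0]
    A = np.zeros((N, N), dtype=complex)
    b_vec = np.zeros(N, dtype=complex)
    A[0, 0] = A[-1, -1] = 1.0
    b_vec[0], b_vec[-1] = bc_low, bc_high
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 + coeff
        b_vec[i] = rhs[i]
    return np.linalg.solve(A, b_vec)

y = np.linspace(h2, h1, 201)
zero = np.zeros_like(y, dtype=complex)
theta0 = solve_linear_bvp(m2, zero, 0.0, 1.0, y)            # Eq. (14) with BC (16)
phi0   = solve_linear_bvp(-l2, zero, 0.0, 1.0, y)           # Eq. (15) with BC (16)
u0     = solve_linear_bvp(-n2, -lam - Gr * theta0 - Gc * phi0, 0.0, 0.0, y)  # Eq. (13)

t = 0.1
u = np.real(u0 * np.exp(1j * omega * t))   # physical velocity profile, Eq. (12)
print("max |u| across the channel:", np.abs(u).max())
```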
Solving Eqs. (13)–(15) under the boundary conditions (16), we obtain the expressions for the velocity, temperature and concentration as follows:

(17) u(y, t) = (λ/n²) e^{iωt} + [λ/n² + Gr/(m² + n²) − Gc/(l² − n²)] [sinh n(h₂ − y)/sinh n(h₁ − h₂)] e^{iωt} + {(λ/n²)[sinh n(y − h₁)/sinh n(h₁ − h₂)] + [Gr/(m² + n²)][sin m(y − h₂)/sin m(h₁ − h₂)] − [Gc/(l² − n²)][sinh l(y − h₂)/sinh l(h₁ − h₂)]} e^{iωt}

(18) θ(y, t) = [sin m(y − h₂)/sin m(h₁ − h₂)] e^{iωt}

(19) ϕ(y, t) = [sinh l(y − h₂)/sinh l(h₁ − h₂)] e^{iωt}

The skin friction coefficient at the channel walls is given by

(20) τ = μ ∂u/∂y |_{y = h₁, h₂} = μ[nGc/(l² − n²) − nGr/(m² + n²) − λ/n][cosh n(y − h₂)/sinh n(h₁ − h₂)] e^{iωt} + μ{(λ/n)[cosh n(y − h₁)/sinh n(h₁ − h₂)] + [mGr/(m² + n²)][cos m(y − h₂)/sin m(h₁ − h₂)] − [lGc/(l² − n²)][cosh l(y − h₂)/sinh l(h₁ − h₂)]} e^{iωt}

The rate of heat transfer at the channel walls is given by

(21) Nu = −∂θ/∂y |_{y = h₁, h₂} = −[m cos m(y − h₂)/sin m(h₁ − h₂)] e^{iωt} ⇒ |J| cos(ωt + ψ)

The rate of mass transfer at the channel walls is given by

(22) Sh = −∂ϕ/∂y |_{y = h₁, h₂} = −[l cosh l(y − h₂)/sinh l(h₁ − h₂)] e^{iωt} ⇒ |F| cos(ωt + ψ)

where J = J_r + iJ_i, |J| = √(J_r² + J_i²), ψ = tan⁻¹(J_i/J_r); F = F_r + iF_i, |F| = √(F_r² + F_i²), ψ = tan⁻¹(F_i/F_r); and l² = (Kr + iω)/Sc, m² = Q + R − iωPe, n² = M + 1/K + iωRe. It is worth mentioning that in the absence of the chemical reaction parameter, Schmidt number, modified Grashof number and heat source parameter (Kr, Sc, Gc, Q → 0) the present problem reduces to that of Ref. [39]. We also note that the governing equations reduce to those of Ref. [20] when a, b and d → 0.
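Profiles of the type discussed in the next section can be generated by evaluating the closed-form expressions (17)–(19) directly. The following sketch uses the representative values of ω, λ, t, x, a, b and d quoted there; the remaining parameters (Pe, Sc, M, K, Gr, Gc, R, Q, Kr and the phase angle) are assumed purely for illustration.

```python
# Evaluation of the closed-form profiles (17)-(19); an illustrative sketch with
# assumed parameter values (not a reproduction of the paper's figures).
import numpy as np
import cmath

Re, Pe, Sc = 0.1, 1.0, 0.5
M, K, Gr, Gc = 1.0, 1.0, 2.0, 2.0
R, Q, Kr = 1.0, 0.5, 0.5
lam, omega, t = 0.1, 0.1, 0.1
x, a, b, d, phase = 0.5, 0.2, 1.2, 2.0, 0.0

h1 = 1.0 + a * np.cos(2 * np.pi * x)
h2 = -d - b * np.cos(2 * np.pi * x + phase)

n = cmath.sqrt(M + 1.0 / K + 1j * omega * Re)
m = cmath.sqrt(Q + R - 1j * omega * Pe)
l = cmath.sqrt((Kr + 1j * omega) / Sc)

y = np.linspace(h2, h1, 201)
e = cmath.exp(1j * omega * t)

theta = np.sin(m * (y - h2)) / np.sin(m * (h1 - h2)) * e          # Eq. (18)
phi_c = np.sinh(l * (y - h2)) / np.sinh(l * (h1 - h2)) * e        # Eq. (19)

A = lam / n**2 + Gr / (m**2 + n**2) - Gc / (l**2 - n**2)
u = (lam / n**2
     + A * np.sinh(n * (h2 - y)) / np.sinh(n * (h1 - h2))
     + (lam / n**2) * np.sinh(n * (y - h1)) / np.sinh(n * (h1 - h2))
     + Gr / (m**2 + n**2) * np.sin(m * (y - h2)) / np.sin(m * (h1 - h2))
     - Gc / (l**2 - n**2) * np.sinh(l * (y - h2)) / np.sinh(l * (h1 - h2))) * e   # Eq. (17)

# physical fields are the real parts of the complex expressions
print("u at the walls (should vanish):", np.real(u[0]), np.real(u[-1]))
print("peak velocity magnitude       :", np.abs(np.real(u)).max())
```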
3 Results and discussion

In this paper we have analysed the MHD oscillatory flow in an asymmetric wavy channel with chemical reaction and heat source. The analytical results of the previous section were evaluated, and a representative set of results is reported graphically in Figs. 2–12 and Table 1. These results illustrate the influence of the various parameters entering the problem on the velocity, temperature and concentration profiles, as well as on the skin friction, Nusselt number and Sherwood number. In these calculations we take ω = 0.1, λ = 0.1, t = 0.1, π = 3.14, x = 0.5, a = 0.2, b = 1.2, d = 2.0 and Re = 0.1, while the other parameters are varied over the ranges listed in the figure captions. Figs. 2(a) and 2(b) illustrate the effect of the permeability parameter K on the velocity distribution for two values of the phase angle (φ = 0 and φ = π/2). It is clear from the figures that as K increases, the velocity also increases. Physically, as K increases, the degree of porosity of the porous medium increases, allowing freer passage of the fluid within the channel (the resistance of the medium may be neglected). In addition, when the fluid motion is induced by oscillations, the fluid velocity is maximum towards the centre of the channel. Further, we observe that the velocity at φ = π/2 is appreciably greater than that at φ = 0, owing to the increase in phase angle. These results are clearly supported from the experimental point of view (see Ref. [19]). Fig. 3 displays the velocity profiles for various values of the magnetic field parameter M; the velocity field decreases with increasing M. This is due to the fact that a transverse magnetic field gives rise to a resistive force, the Lorentz force, which tends to diminish the motion of the fluid. These results agree with those noted in Refs. [20,39]. This result plays a significant role in a large number of industrial applications, particularly in solidification processes such as casting and semiconductor single-crystal growth. In such applications, as the liquid solidifies, fluid flow and turbulence occur in the solidifying liquid pool and have critical consequences for product quality control; applied magnetic fields have been used effectively to control melt convection in solidification systems. Figs. 4(a)–4(c) show the velocity and temperature profiles for different values of the heat generation parameter Q. It is evident from the graphs that both the velocity and temperature distributions increase with increasing Q. A positive sign indicates heat generation (heat source), whereas a negative sign means heat absorption (heat sink). A heat source physically implies generation of heat from the surface (since T_w > T_∞), which increases the temperature in the flow field; therefore, as the heat source parameter is increased, the temperature rises steeply from the surface. The influence of the heat source parameter Q > 0 on the velocity and temperature profiles is closely related to that of the heat sink parameter Q < 0. These results are clearly supported from the physical point of view. Figs. 5(a) and 5(b) respectively show the velocity and temperature profiles for different values of the thermal radiation parameter R. The effect of increasing R is to increase the velocity and temperature in the flow region. Physically, when the amount of heat generated through thermal radiation increases, the bonds holding the fluid particles together are more easily broken and the fluid velocity increases. It follows that radiation should be minimized if the cooling process is to proceed at a faster rate. Figs. 6(a)–6(c) show the velocity and concentration profiles for different values of the chemical reaction parameter Kr at φ = 0 and φ = π/2. It is evident from the figures that the velocity and concentration profiles decrease with increasing Kr. This shows that the buoyancy effects (due to concentration and temperature differences) are important in the channel. Moreover, the fluid motion is retarded on account of the chemical reaction: the destructive reaction (Kr > 0) leads to a fall in the concentration field, which in turn weakens the buoyancy effects due to concentration gradients, and consequently the flow field is retarded. For the generative reaction, i.e. Kr < 0, the reverse effect is observed. We also observe that the velocity for φ = 0 is comparatively lower than that for φ = π/2; this behaviour is in good agreement with the physical situation. The concentration distributions are in good agreement with the results of Rout et al. [45]. Fig. 7 depicts the temperature distribution for various values of the Peclet number Pe; it is clear from this graph that the temperature decreases with an increase of Pe. Fig. 8 shows the effect of the Schmidt number on the concentration profiles.
It is noticed that increasing values of Sc result in decreasing concentration profiles across the boundary layer. Physically, a rise in the value of the Schmidt number means a reduction of molecular diffusion; hence the concentration of the species is higher for smaller values of Sc and lower for larger values of Sc. Fig. 9 displays the variation of the local skin friction coefficient τ with the chemical reaction parameter Kr for various values of Gr at y = h₁ and y = h₂. It is observed that the local skin friction coefficient τ at y = h₁ increases with an increase in the chemical reaction parameter Kr, whereas the reverse effect is observed at y = h₂. Fig. 10 displays the heat transfer coefficient against Q for different values of R. It is evident from the graph that the Nusselt number decreases with an increase in the value of R at y = h₁, and it likewise decreases at y = h₂. Furthermore, the periodic oscillatory behaviour of the stream function characteristic of this problem is damped by the presence of the magnetic field, and this decay of the transient oscillatory behaviour is accelerated by the presence of a heat source. Fig. 11 shows the variation of the rate of mass transfer Sh for different values of Sc and Kr at the walls y = h₁ and y = h₂. We notice from the figure that the Sherwood number decreases with an increase of Sc at both walls. Figs. 12(a)–12(c) show the time series of the skin friction coefficient, Nusselt number and Sherwood number for different values of Gr, R and Kr, respectively. It is seen from the figures that τ, Nu and Sh vary periodically owing to the asymmetric surface motion. The values of the skin-friction coefficient, Nusselt number and local Sherwood number are given in Table 1 for different values of the physical parameters, namely K, M, R, Q and Kr. From this table it is observed that the skin-friction coefficient increases with increasing porous permeability parameter K, magnetic field parameter M, radiation parameter R and chemical reaction parameter Kr. Nu is enhanced with increasing values of Q and R, whereas the reverse effect is observed in the case of K. It is also seen from the table that increasing values of Kr lead to a rise in the values of Sh. Finally, the table shows that M has no effect on Nu and Sh; indeed, from Eqs. (3) and (4) it is clear that these equations are independent of M, so M has no effect on Nu and Sh.
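The amplitude–phase form of the wall transfer rates in Eqs. (21) and (22) can be evaluated directly. The short sketch below computes |J|, |F| and the phase ψ at both walls for one assumed parameter set; the values are illustrative only and are not the entries of Table 1.

```python
# Wall heat and mass transfer rates from the closed-form expressions (21)-(22);
# an illustrative sketch with assumed parameter values.
import cmath
import math

Re, Pe, Sc = 0.1, 1.0, 0.5
M, K = 1.0, 1.0
R, Q, Kr = 1.0, 0.5, 0.5
omega, t = 0.1, 0.1
x, a, b, d, phase = 0.5, 0.2, 1.2, 2.0, 0.0

h1 = 1.0 + a * math.cos(2 * math.pi * x)
h2 = -d - b * math.cos(2 * math.pi * x + phase)

m = cmath.sqrt(Q + R - 1j * omega * Pe)
l = cmath.sqrt((Kr + 1j * omega) / Sc)
e = cmath.exp(1j * omega * t)

def nusselt(y):
    """Nu = -dtheta/dy evaluated at a wall, Eq. (21)."""
    return -m * cmath.cos(m * (y - h2)) / cmath.sin(m * (h1 - h2)) * e

def sherwood(y):
    """Sh = -dphi/dy evaluated at a wall, Eq. (22)."""
    return -l * cmath.cosh(l * (y - h2)) / cmath.sinh(l * (h1 - h2)) * e

for wall, y in (("y = h1", h1), ("y = h2", h2)):
    J = nusselt(y) / e          # complex amplitude: Nu = |J| cos(omega*t + psi)
    F = sherwood(y) / e         # complex amplitude: Sh = |F| cos(omega*t + psi)
    print(wall,
          f"|J| = {abs(J):.4f}, psi_Nu = {math.atan2(J.imag, J.real):.4f} rad,",
          f"|F| = {abs(F):.4f}, psi_Sh = {math.atan2(F.imag, F.real):.4f} rad")
```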
4 Conclusions

In this paper we have studied the effects of chemical reaction and heat source on the MHD oscillatory flow and mass transfer of an incompressible fluid in an asymmetric wavy channel. The nonlinear, coupled governing equations are solved analytically using a regular perturbation method. Numerical results are presented to illustrate the details of the flow, heat and mass transfer characteristics and their dependence on the material parameters. From the present investigation, we draw the following conclusions: 1. The fluid velocity profile is parabolic, with maximum magnitude along the channel centreline and minimum at the walls. It is interesting to note that the magnitude of the fluid velocity increases with an increase in the radiation parameter and decreases with an increase in the Hartmann number; physically, an increase in the strength of the applied uniform magnetic field causes a reduction of the fluid motion in the channel. 2. The effects of the permeability and magnetic parameters on the velocity are opposite. 3. The fluid motion is retarded by the chemical reaction: the consumption of the chemical species causes a fall in the concentration field, which in turn diminishes the buoyancy effects due to concentration gradients. 4. The oscillatory flow increases the heat transport of a system by several orders of magnitude compared with ordinary conduction, because large quantities of heat are transported axially provided the fluid is oscillated at high frequency with a large tidal displacement; this has been established both experimentally and analytically. 5. The skin friction coefficient rises when the chemical reaction parameter is increased at the wall y = h₁, while the opposite effect on the skin friction is observed at the wall y = h₂. 6. The Sherwood number decreases with increasing values of the chemical reaction parameter Kr at both walls. It is hoped that the present investigation will provide useful information for many scientific and industrial applications (such as the cooling of electronic devices, where the assembly of integrated chips should not be disturbed by oscillations) and will also serve as a complement to previous studies.

Acknowledgements

The authors are grateful to the reviewers for their suggestions, which greatly improved the paper. We also thank Prof. G. Sarojamma, SPM University, and Prof. Sreenadh, Department of Mathematics, S.V. University, Tirupati, A.P., India, for their continuous support and encouragement in the preparation of this manuscript.
REFERENCES:
1. GREINER M (1991)
2. GREINER M (1995)
3. GUZMAN A (2009)
4. SRINIVAS S (2009)
5. KODANDAPANI M (2008)
6. ALI N (2007)
7. MARTINEZSUASTEGUI L (2008)
8. RAMIREDDY G (2009)
9. RAMIREDDY G (2010)
10. LI J (2013)
11. DAVIDSON P (2001)
12. MULLER U (2001)
13. RUDIGER G (2004)
14. ROBERTS P (1967)
15.
16. RAJU K (2014)
17.
18. DEVIKA B (2013)
19. KURZWEG U (1984)
20. MAKINDE O (2005)
21. HAYAT T (2010)
22. FARMER R (2009)
23. ROSSELAND S (1936)
24. SARPAC V (1968)
25. CESS R (1966)
26. CHENG E (1972)
27. HOSSAIN M (1996)
28. MAKINDE O (2001)
29. SATYANARAYANA P (2013)
30.
31. MAKINDE O (2007)
32. MAKINDE O (2010)
33. DEVIKA B (2013)
34. SATYANARAYANA P (2012)
35. VENKATESWARLU B (2015)
36. TALUKDAR B (2010)
37. OGULU A (2005)
38. PRAKASH J (2007)
39. MUTHURAJ R (2010)
40. OGULU A (1993)
41. PAL D (2010)
42. SIEGER R (2002)
43. OAHIMIRE J (2014)
44.
45. ROUT B (2014)
|
10.1016_j.net.2017.07.021.txt
|
TITLE: Radioactive effluents released from Korean nuclear power plants and the resulting radiation doses to members of the public
AUTHORS:
- Kong, Tae Young
- Kim, Siyoung
- Lee, Youngju
- Son, Jung Kwon
- Maeng, Sung Jun
ABSTRACT:
Korean nuclear power plants (NPPs) periodically evaluate the radioactive gaseous and liquid effluents released from power reactors to protect the public from radiation exposure. This paper provides a comprehensive overview of the release of radioactive effluents from Korean NPPs and the effects on the annual radiation doses to the public. The amounts of radioactive effluents released to the environment and the resulting radiation doses to members of the public living around NPPs were analyzed for the years 2011–2015 using the Korea Hydro & Nuclear Power Co., Ltd's annual summary reports of the assessment of radiological impact on the environment. The results show that tritium was the primary contributor to the activity in both gaseous and liquid effluents. The averages of effective doses to the public were approximately on the order of 10−3 mSv or 10−2 mSv. Therefore, even though Korean NPPs discharged some radioactive materials into the environment, all effluents were within the regulatory safety limits and the resulting doses were much less than the dose limits.
BODY:
1 Introduction In Korea, 24 nuclear power plants (NPPs) are currently in operation, 20 pressurized water reactors (PWRs) and four pressurized heavy water reactors (PHWRs). With an additional five PWRs under construction and another four PWRs planned to be constructed in the near future, the number of NPPs in Korea is continuously increasing. In addition, owing to the increase in the number of operating NPPs, concerns from the public and the regulatory body for radioactive materials released from NPPs to the environment have increased rapidly. In particular, periodic monitoring and safety management for radioactive materials are conducted thoroughly to achieve radiation protection for members of the public living around NPPs, taking into account the operation of multiple reactors in a single site. In general, radioactive materials created by the operation of NPPs are categorized into two types: gaseous and liquid materials. All these materials are controlled as radioactive wastes, which are defined by the Korean regulation of radiation protection, regardless of their concentration of radioactivity [1] . Gaseous and liquid wastes are discharged to the environment under monitoring after appropriate processes of waste management [2,3] . Radioactive effluents are defined as gaseous and liquid wastes that are released to the environment. Furthermore, the management of radioactive effluents includes the whole process of monitoring and controlling radioactive materials in the effluents. This management of radioactive effluents is normally regarded as a very important factor not only to reduce the radiation risks to both the public and the environment but also to increase public acceptance of nuclear facilities. The goal of this study was to understand the current status of the release of radioactive effluents from Korean NPPs and its characteristics. To achieve this goal, the amounts of radioactive effluents released to the environment and the resulting radiation doses to members of the public living around NPPs were analyzed for the years 2011–2015. The results of the analysis can be used to compare the changes of the release of radioactive effluents and the radiation dose to the public, because the monitoring of carbon-14 at PWRs has been conducted since 2012 and the number of operating NPPs in a single site has increased during the years 2011–2015. 2 Materials and methods 2.1 Radioactive effluents released from NPPs The Korean regulation “Standards for Radiation Protection, etc.” requires the continuous monitoring of gaseous and liquid effluents released from NPPs to prevent radiological hazards to the environment [1] . Radioactive effluents are generally regulated by two criteria: the concentration of radioactivity and the dose to members of the public. First, the concentration of radioactivity in effluents should be less than the effluent control limits at the boundary of the unrestricted area. The values of the effluent control limits are equivalent to the radionuclide concentrations, which—if inhaled or ingested continuously over the course of a year—would produce the annual dose limit for the public, 1 mSv/y [1] . Second, the dose from radioactive effluents to members of the public living around NPPs should not exceed the annual dose standards, which are design objectives for equipment to keep levels of radioactive effluents to unrestricted areas as low as reasonably achievable (ALARA) during the normal operation of NPPs [1] . 
These annual dose standards are used in practice as dose constraints for the public in Korea. Table 1 shows the numerical guidance on annual dose standards for members of the public as applied in the design of Korean NPPs [1] . In particular, dose standards are based on the design objectives of Appendix I to 10 Code of Federal Regulations (CFR) Part 50, part of the US Nuclear Regulatory Commission (USNRC) regulations pertaining to radioactive effluents, and 40 CFR 190 Subpart B, part of the US Environmental Protection Agency (USEPA) regulations pertaining to environmental standards [4,5] . These regulations are based on a common concept that dose estimation for members of the public originates from the release of radioactive effluents from NPPs. Therefore, in this study, we analyzed the amount of radionuclides discharged from NPPs to the environment in gaseous and liquid effluents to understand their effects on maximum annual doses to the public resulting from these effluent releases. The Korea Hydro & Nuclear Power Co., Ltd (KHNP) provides annual summary reports, “Survey of radiation environment and assessment of radiological impact on environment in vicinity of nuclear power facilities,” and the 2011, 2012, 2013, 2014, and 2015 summary reports are currently available [6–10] . Common radionuclides in radioactive effluents are identified through the analysis, and their contributions to the total activities of radioactive effluents are also evaluated. 2.2 Dose estimation for members of the public at NPPs According to the Korean regulation “Survey and assessment of radiological impact to the environment around nuclear facilities,” NPPs must estimate the dose to members of the public living around NPPs and evaluate compliance with the annual dose limits [11] . Dose estimation for the public is calculated as the projected dose prior to the release of radioactive effluents from NPPs per reactor unit. The dose at the boundary of the unrestricted area is calculated as the sum of the doses for all reactor units in a single site. The KDOSE-60 program is currently used to conduct dose calculations for members of the public living around Korean NPPs. This program consists of the following three codes: GAS for dose calculations due to gaseous effluents, LIQ for dose calculations due to liquid effluents, and XQDQWQ for calculation of the atmospheric diffusion parameters [10] . KDOSE-60 calculates the most conservative estimate of the dose received by members of the public, where a hypothetical person receiving this dose is referred to as the maximally exposed individual (MEI). The MEI is defined as the single individual with the highest exposure in a given population [12] . The exposure pathways from gaseous and liquid effluents are based on USNRC Regulatory Guide 1.109 and incorporate some site-specific considerations [13,14] . Other parameters, including population information, meteorology, and effluent release activity around the plants, are taken from the annual reports on radioactive material released from the individual NPPs. Because KDOSE-60 operates under the conditions of routine NPP operation and the normal environment, it is not appropriate for evaluating the dose due to short-term releases of effluents or for assessing accident doses. The simplified process of dose estimation for the public, including exposure pathways and computer codes, is shown in Fig. 1 .
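To make the projected-dose concept concrete, the toy calculation below strings together the three basic ingredients — annual release, atmospheric dispersion, and a pathway dose coefficient — for a single gaseous pathway. It is emphatically not the KDOSE-60 methodology (which treats many pathways, site meteorology, and biokinetics); every number in it (the nuclide releases, the χ/Q value, and the dose coefficients) is a hypothetical placeholder used only to illustrate how such terms combine.

```python
# A deliberately simplified illustration of the projected-dose concept for a
# gaseous release pathway:
#   dose ~= sum over nuclides of (annual activity released)
#           x (atmospheric dispersion factor chi/Q) x (pathway dose coefficient)
# This is NOT the KDOSE-60 methodology; all values below are hypothetical
# placeholders for illustration only.

releases_bq = {                 # assumed annual gaseous releases, Bq
    "H-3": 1.0e13,
    "C-14": 2.0e12,
    "Noble gases": 5.0e11,
}

chi_over_q = 1.0e-6             # assumed long-term dispersion factor, s/m^3

dose_coeff_sv = {               # assumed pathway coefficients, Sv per (Bq*s/m^3)
    "H-3": 1.0e-13,
    "C-14": 1.0e-12,
    "Noble gases": 5.0e-13,
}

total_msv = 0.0
for nuclide, activity in releases_bq.items():
    air_exposure = activity * chi_over_q        # time-integrated concentration, Bq*s/m^3
    dose_msv = air_exposure * dose_coeff_sv[nuclide] * 1.0e3
    total_msv += dose_msv
    print(f"{nuclide:12s} contributes {dose_msv:.2e} mSv/y")

print(f"projected annual dose to the MEI (toy model): {total_msv:.2e} mSv/y")
print("fraction of the 0.25 mSv/y site dose standard:", f"{total_msv / 0.25:.2%}")
```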
3 Results and discussion 3.1 Analysis of radioactive effluents released from NPPs In general, NPP operation produces various radioactive materials. The amount of radioactive materials is expressed as activity with the unit Bq. Most of these radioactive sources originate from the fission of nuclear fuel and activated corrosion products. During the normal operation of NPPs, a small fraction of these radioactive sources is generally discharged to the environment through gaseous and liquid effluents. These radioactive effluents typically originate from several sources: (1) the fission of tramp uranium, which is dissolved from exposed fuel rods and plated out onto the structure of the coolant system; (2) leaks from failed fuel rods; (3) the diffusion of radioactive gases through intact fuel rods; (4) the activation of materials in the reactor cooling water; and (5) corrosion of activated materials from pipes, valves, pumps, and ancillary equipment [15] . According to a series of KHNP annual summary reports of radioactive effluents and public doses, some radionuclides are typically reported in effluent release from NPPs, and these are shown in Table 2 [6–10] . We analyzed the amount of radioactive effluents released from NPPs during the years 2011–2015, including both PWRs and PHWRs in Korea, using the data from the KHNP's annual summary reports. The total annual activities in radioactive effluents discharged from Korean NPPs were 4.87 × 10 14 Bq, 4.87 × 10 14 Bq, 3.80 × 10 14 Bq, 3.96 × 10 14 Bq, and 4.06 × 10 14 Bq in 2011, 2012, 2013, 2014, and 2015, respectively [6–10] . The activities of radionuclides in gaseous and liquid effluents are displayed in Table 3 for each NPP site [6–10] . The activities in effluents released from Wolsong site, which includes four PHWRs and two PWRs, were approximately 2–5 times higher than those from other sites. The percentages of the activities in effluents released from Wolsong site were approximately 47–59% of the total activities from all of the NPP sites. This phenomenon results from the higher production of tritium in PHWRs. In general, heavy water is used in both the PHWR moderator system and coolant system, and tritium is produced primarily from neutron capture by deuterium ( 2 H) in heavy water [16] . Table 3 also shows that the main radionuclide, which primarily contributed to the activity in both gaseous and liquid effluents, was tritium. The percentage of activity from tritium effluents of the total activity was approximately 95% for all NPP sites. In particular, the remarkable difference in effluent monitoring between 2011 and 2012–2015 was the carbon-14 monitoring in gaseous effluents. Korean NPPs have generally monitored the radionuclides that account for greater than about 1% of the total annual activities in radioactive effluents. As noble gases and particulates released to the environment have been decreasing continuously owing to the effluent reduction efforts of Korean NPPs, the contribution by carbon-14 to the effluent activities has relatively increased [17] . Thus, Korean PWRs have been monitoring carbon-14 in effluents since 2012. The percentage of activity from carbon-14 of the total activity in effluents was 0.43–0.72% after 2012. 
3.2 Analysis of the radiation dose to members of the public living around NPPs To ensure compliance with the requirements of the Nuclear Safety and Security Commission's (NSSC) annual dose standards and dose limits for the public, all NPPs regularly estimate the public dose from radioactive effluents released from NPPs [6–10] . This estimation is based on both measurements and theoretical models, including (1) real measurements of radioactive effluents to the environment; (2) models for the dispersion and dilution of radioactive materials in the environment; (3) models for the incorporation of radioactive materials into animals, plants, and soil; and (4) a biokinetic model for the human uptake and metabolism of radioactive materials [13] . These models were developed to estimate the dose to an MEI who may be exposed to the highest activities from effluents. Therefore, the estimated dose is much higher than the actual dose to the residents living around NPPs. All NPPs established procedures for estimating the public dose according to Regulatory Guide 02 of the Korea Institute of Nuclear Safety and USNRC Regulatory Guide 1.109, and these were combined into the offsite dose calculation manual [13,14,18] . The annual effective dose resulting from radioactive effluents released from NPPs was analyzed using the data from the KHNP's annual summary reports. During the years 2011–2015, the effective doses to members of the public due to radioactive effluents released from NPPs met both the NSSC's dose standard and the dose limit. The averages of effective doses to the public for 2011, 2012, 2013, 2014, and 2015 were 3.14 × 10⁻³ mSv, 1.46 × 10⁻² mSv, 1.28 × 10⁻² mSv, 3.55 × 10⁻² mSv, and 2.02 × 10⁻² mSv, respectively. These average values were much lower than the dose standard for a site, 0.25 mSv/y. In comparison with the dose limit, the effective doses to members of the public from 2011 to 2015 were only 0.31–3.55% of the annual dose limit for members of the public, 1 mSv/y. Because the average natural background radiation dose received by an individual in Korea is approximately 2.5 mSv/y, the effective doses to the public due to radioactive effluents from NPPs during the years 2011–2015 were only 0.13–1.42% of what the average person receives each year from natural background radiation [19] . The effective doses to members of the public living around NPPs from 2011 to 2015 are summarized in Table 4 . Gaseous effluents contributed more to the total radiation doses than liquid effluents: more than 92% of the total effective dose resulted from gaseous effluents. Therefore, most of the effective dose from radioactive effluents released from NPPs during the years 2011–2015 was attributable to gaseous effluents. The main radionuclide that contributed to the effective dose to the public was carbon-14 in gaseous effluents. The percentages of dose from gaseous carbon-14 of the total effective dose for 2012, 2013, 2014, and 2015 were 67%, 88%, 74%, and 78%, respectively. Prior to 2012, tritium in gaseous effluents was the main contributor to the public dose. Estimates of the public radiation dose after 2011 were approximately 4–10 times higher than the 2011 dose because of the addition of carbon-14 monitoring of gaseous effluents at PWRs. Although carbon-14 is a minor nuclide in the total effluent release, it is the main contributor to the effective dose around NPPs.
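The percentage comparisons quoted above can be reproduced with simple arithmetic from the reported yearly averages, as in the short check below.

```python
# Quick arithmetic check of the comparisons quoted above, using the reported
# yearly average effective doses (mSv) from the KHNP summary reports.
average_dose_msv = {
    2011: 3.14e-3,
    2012: 1.46e-2,
    2013: 1.28e-2,
    2014: 3.55e-2,
    2015: 2.02e-2,
}
DOSE_LIMIT_MSV = 1.0        # annual dose limit for members of the public
BACKGROUND_MSV = 2.5        # average natural background dose in Korea
SITE_STANDARD_MSV = 0.25    # site dose standard

for year, dose in sorted(average_dose_msv.items()):
    print(f"{year}: {dose:.2e} mSv "
          f"= {dose / DOSE_LIMIT_MSV:.2%} of the 1 mSv/y limit, "
          f"{dose / BACKGROUND_MSV:.2%} of natural background, "
          f"{dose / SITE_STANDARD_MSV:.2%} of the 0.25 mSv/y site standard")
```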
4 Conclusion This analysis confirmed that there were no remarkable differences in the amount of radioactive effluents released from Korean NPPs during the years 2011–2015. Effluents released from Wolsong site including four PHWRs and two PWRs were relatively higher than those from other NPP sites, which have only PWRs. The primary contributor to the activity in both gaseous and liquid effluents was tritium, accounting for approximately 95%. Although carbon-14 monitoring in gaseous effluents in PWRs has been conducted since 2012, the contribution of carbon-14 in gaseous effluents to the total activity from both gaseous and liquid effluents was regarded as minor, accounting for approximately less than 1%. During the years 2011–2015, the effective doses to members of the public due to radioactive effluents released from Korean NPPs met both the NSSC's dose standard and the dose limit. The averages of effective doses to the public from 2011 to 2015 were approximately on the order of 10 −3 mSv or 10 −2 mSv. This analysis indicated that although carbon-14 is a minor nuclide in effluent releases, it is the main contributor to the effective dose to members of the public living around NPPs. As public doses have realistically been kept at very low levels, the annual doses to the public from radioactive effluents released from NPPs under normal operating conditions are considered trivial when compared to the dose limit and even the natural background radiation dose. Conflicts of interest None.
REFERENCES:
1. NUCLEARSAFETYANDSECURITYCOMMISSION (2014)
2. KOREAHYDRONUCLEARPOWERCOLTD (2015)
3. KOREAHYDRONUCLEARPOWERCOLTD (2015)
4. USNUCLEARREGULATORYCOMMISSION (1956)
5. USENVIRONMENTALPROTECTIONAGENCY (1977)
6. KOREAHYDRONUCLEARPOWERCOLTD (2011)
7. KOREAHYDRONUCLEARPOWERCOLTD (2012)
8. KOREAHYDRONUCLEARPOWERCOLTD (2013)
9. KOREAHYDRONUCLEARPOWERCOLTD (2014)
10. KOREAHYDRONUCLEARPOWERCOLTD (2015)
11. NUCLEARSAFETYANDSECURITYCOMMISSION (2014)
12. KOREAHYDRONUCLEARPOWERCOLTD (2006)
13. USENVIRONMENTALPROTECTIONAGENCY (1992)
14. USNUCLEARREGULATORYCOMMISSION (1977)
15. NATIONALRESEARCHCOUNCIL (2012)
16. KIM H (2009)
17. KIM H (2009)
18. KOREAINSTITUTEOFNUCLEARSAFETY (2016)
19. SANDERS C (2010)
|
10.1016_S0960-9776(23)00177-7.txt
|
TITLE: P058 Outcomes of Rural Men With Breast Cancer: A Multicenter Population Based Retrospective Cohort Study
AUTHORS:
- Fisher, L.
- Ahmed, O.
- Chalchal, H.
- Deobald, R.
- El-Gayed, A.
- Graham, P.
- Groot, G.
- Haider, K.
- Iqbal, N.
- Johnson, K.
- Le, D.
- Mahmood, S.
- Manna, M.
- Meiers, P.
- Pauls, M.
- Salim, M.
- Sami, A.
- Wright, P.
- Younis, M.
- Ahmed, S.
ABSTRACT: No abstract available
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.jscs.2024.101879.txt
|
TITLE: Aluminium-based MOF CAU-1 facilitates effective removal of florfenicol via hydrogen bonding
AUTHORS:
- Li, Zhengjie
- Liu, Miao
- Fang, Chunxia
- Zhang, Huanshu
- Liu, Tianyi
- Liu, Yixian
- Tian, Heli
- Han, Jilong
- Zhang, Zhikun
ABSTRACT:
The widespread use and subsequent accumulation of florfenicol (FFC), a common antibiotic, through the food chain poses significant risks to aquatic ecosystems and human health, necessitating effective strategies for its removal from water bodies. To address this challenge, we developed herein a novel aluminum-based metal–organic framework, CAU-1, engineered for the efficient adsorption of FFC. CAU-1 contains plentiful –NH2 and μ-OH groups and carries a positive surface charge. In the adsorption process, CAU-1 forms hydrogen bonds with FFC's functional groups (−F, –OH, −Cl, –NH–, and −SO2–). Furthermore, the positively charged surface of CAU-1 further enhances FFC adsorption via electrostatic attraction. FFC adsorption equilibrium on CAU-1 is attained within 180 min with a monolayer adsorption capacity of 386 mg/g at 303 K, surpassing most of the reported adsorbents. The exothermic FFC adsorption process on CAU-1 remains largely unaffected by coexisting ions. Additionally, CAU-1 can be efficiently regenerated using a mixed solution of 0.1 M HCl and ethanol–water as an eluent. This work highlights CAU-1's potential as an effective adsorbent for FFC removal, emphasizing the importance of tuning or designing surface functional groups on adsorbents to boost their adsorption capabilities.
BODY:
1 Introduction Chloramphenicol antibiotics are broad-spectrum antibiotics that have been widely used in clinical practice, aquaculture, and livestock farming because of their efficient antibacterial properties and low cost [1,2] . Chloramphenicol, thiamphenicol, and florfenicol (FFC) are the three most commonly used chloramphenicol antibiotics. Owing to the excessive use of chloramphenicol antibiotics, chloramphenicol has been detected in wastewater, surface water, rivers, lakes, and groundwater worldwide at concentrations ranging from 1 ng/L to 11,200 ng/L [2–4] . Chloramphenicol present in water may have serious impacts on the environment and human health. For example, it has been reported that chloramphenicol is a contributing factor in aplastic anemia in animals. Moreover, it can lead to increased antibiotic resistance in bacteria and the spread of resistance genes, even at low concentrations. Hence, several countries, including the United States, Canada, Japan, and China, have banned the use of chloramphenicol as an additive in animal feed. FFC, the third-generation chloramphenicol antibiotic, has therefore substituted for chloramphenicol and thiamphenicol, and its global consumption in aquaculture and veterinary medicine reached about thirteen thousand tons in 2013 [5,6] . Because FFC is not metabolized completely by animals, and because conventional wastewater treatment technologies cannot remove it completely, FFC has been detected in breeding wastewater, fish ponds, and rivers. For example, Zheng et al. found that the FFC concentration in aquaculture wastewater in Jiangsu Province reached 151.40 ng/L [7] . According to the literature, residual FFC in aquatic environments can affect the health of farmed animals and pose a potential threat to food safety [3,8,9] . For example, broiler claws can act as a carrier of FFC, transferring it to humans when eaten or mixed with other livestock products, which may have an indirect negative impact on human health [10,11] . Moreover, long-term consumption of water with low concentrations of FFC can lead to obesity in children and the development of antibiotic resistance [12] . Therefore, it is urgent and essential to remove FFC from aqueous solutions efficiently, which will benefit the aquatic environment and human health. Currently, several treatment methods have been explored for removing FFC from aqueous solution, such as advanced oxidation processes [13] , catalysis [14] , and adsorption [15–17] . Among them, adsorption is considered a promising method for the removal of FFC from water owing to its low cost, ease of operation, and reduction of secondary toxic pollutants [18,19] . To the best of our knowledge, only a few types of porous materials have so far been used for FFC removal. For example, Jiang et al. [15] prepared biochar from Sargasso pine and cedar wood trunks and investigated their adsorption performance for FFC. Compared with the original soil sample, the FFC adsorption capacities increased by 266 % (84.75 mg/kg) and 206 % (70.92 mg/kg), respectively. The adsorption mechanism indicated that the enhancement of hydrogen bonding interactions between the biochars and FFC was the main reason for the higher FFC adsorption. Zhao et al. [17] synthesized a magnetic reed biochar (MRBC) to remove FFC from aqueous solution. The prepared MRBC exhibited a pronounced pH-dependent behaviour and could be regenerated using 0.5 mol/L NaOH solution for five cycles.
However, the adsorption capacity of MRBC for FFC was 9.29 mg/g. 3D porous-structured biochar aerogel (3D-PBA) with a large BET surface area was prepared by using a one-step direct carbonization-activation method and its adsorption performance for FFC was investigated [16] . 3D-PBA showed a higher adsorption capacity of FFC, partly due to the hydrogen bonding interactions between –OH and FFC on 3D-PBA. Obviously, both the adsorption capacity and the reusability of the reported adsorbents can hardly meet the current requirement for FFC removal. Hence, it is urgent to develop adsorbents or introduce other porous materials with high adsorption capacity and good reusability for FFC. In recent decades, metal–organic frameworks (MOFs) constructed through coordination bonds between metal ions and organic linkers have been widely applied to remove or eliminate pollutants from aqueous solutions due to their unique structural features of high specific area, devisable pore structure, and abundant adsorption sites [20–28] . Among the several types of water-stable MOFs, CAU-1 is an aluminum-based MOF material synthesized by using non-toxic chemical reagents within several hours and the activation process does not introduce additional harmful substances. Besides, its specific surface area is up to 1700 m 2 /g and there is a large amount of –OH and –NH 2 groups on its framework, which can serve as adsorption sites for organic compounds through molecular-level interactions. These outstanding physicochemical properties endow it with an ecologically harmless liquid-phase adsorbent, which has been widely used for the removal of organic pollutants from water [29–31] . For example, Xie et al. [29] evaluated the adsorption performance of CAU-1 on nitrobenzene, and its nitrobenzene adsorption capacity reached 970 mg/g, which was much higher than that of other reported materials. The adsorption mechanism indicated that the μ 2 -OH groups in the Al-O-Al units can strongly interact with nitrobenzene. Zhao et al. [30] found that CAU-1 exhibits a record-high adsorption capacity towards tinidazole (TNZ). Its high adsorption capacity is due to the following facts: (i) the large number of μ 2 -OH groups in CAU-1 can interact with the –NO 2 and −SO 2 groups of TNZ molecule through hydrogen bonding interactions; (ii) the strongly positively charged surface of CAU-1 can connect with TNZ − through strong electrostatic interaction. What’s more, CAU-1 can serve as a pH-responsive carrier to control TNZ release. Further, Song et al. [31] investigated the adsorption performance of CAU-1 on metronidazole (MNZ), in which the adsorption of MNZ in CAU-1 can reach equilibrium at 360 min with an adsorption capacity of 254.6 mg/g and CAU-1 can be well regenerated through a simple elution method. The strong hydrogen bonding interactions and its suitable pore size together promote the MNZ adsorption of CAU-1. These works suggest that CAU-1 can serve as a potential adsorbent for organic pollutants due to its abundant adsorption sites and high specific surface area. Inspired by the excellent adsorption performance of CAU-1 for kinds of organic pollutants, CAU-1 may be an ideal platform to eliminate or reduce the concentration of FFC in wastewater before discharge. 
There are three aspects supporting this speculation: (i) the abundant polar groups of FFC and CAU-1 make it possible for them to interact with each other through strong hydrogen bonding; (ii) the high specific surface area of CAU-1 can enhance its adsorption ability, and the pore size of CAU-1 is close to the molecular size of FFC (12.7 Å × 3.2 Å × 4.3 Å), which also contributes to the adsorption process [31] ; (iii) the aluminum in CAU-1 has the advantages of low cost, light weight, and non-toxicity, which is beneficial for industrial application. Therefore, the purpose and novelty of this work is to introduce a low-cost MOF to remove FFC from aqueous solution with high efficiency; the preparation of CAU-1 and its adsorption performance are illustrated in Fig. 1 . CAU-1 was synthesized by the solvothermal method and used to adsorb FFC in a highly efficient way ( Fig. 1 a). In order to increase the adsorption capacity, the effect of hydrogen bonding and electrostatic interactions on the adsorption process was studied at different pH values. Besides, the impacts of adsorption time, concentration, ionic strength, and temperature on the adsorption process were evaluated. It was found that the adsorption capacity, which can reach up to 386 mg/g, is mainly determined by hydrogen bonding interactions ( Fig. 1 b). Finally, a further study was conducted to evaluate the regeneration capability of CAU-1. Overall, this study provides a potential adsorbent for the effective removal of FFC from wastewater. 2 Experimental components 2.1 Chemicals Aluminum chloride hexahydrate (AlCl3·6H2O, 97 %) and 2-amino terephthalic acid (C8H7NO4, >98 %) were purchased from Aladdin Reagent Co., Ltd (Shanghai, China). Maclean Biochemical Co., Ltd (Shanghai, China) provided the analytical-grade florfenicol (C12H14Cl2FNO4S). Methanol (CH3OH) and ethanol (C2H5OH) were purchased from Tianjin Yongda Chemical Reagent Co., Ltd (Tianjin, China). The other common chemicals, including NaCl, KCl, MgCl2, Na2SO4, HCl, and NaOH, were supplied by Tianjin Damao Chemical Reagent Factory (Tianjin, China). Deionized water was prepared by a laboratory water purification system. 2.2 Preparation of CAU-1 CAU-1 was synthesized using a solvothermal method with a few modifications based on the literature [31] . Specifically, 2.967 g of AlCl3·6H2O and 0.746 g of 2-amino terephthalic acid (NH2-H2BDC) were evenly dispersed in 30 mL of methanol, and the mixture was then ultrasonically stirred for 20 min until a clear solution was formed. Subsequently, the solution was transferred into a Teflon-lined autoclave reactor and allowed to react at 398 K for 5 h. Upon cooling to room temperature, the resulting suspension was subjected to centrifugation to collect the yellow solids. These solids were then washed three times with CH3OH and four times with deionized water, respectively. Finally, the solid product was dried at 353 K overnight to obtain the activated CAU-1. 2.3 Characterization The physicochemical and structural properties of CAU-1 were assessed through various characterization methods. The powder X-ray diffraction (XRD) patterns of both the original CAU-1 and CAU-1 after the adsorption process were obtained using a D/MAX-2500 X-ray diffractometer with Cu-Kα radiation. Morphological characteristics of CAU-1 were observed via scanning electron microscopy (SEM, S-4800-I).
Fourier transform infrared spectroscopy (FT-IR, Nicolet iS5) was employed to detect the functional groups of the adsorbent before and after the adsorption process. N2 adsorption–desorption isotherms of the adsorbent, pre- and post-adsorption, were recorded at 77 K on an Autosorb iQ surface area analyzer to assess specific surface areas and pore size distribution. X-ray photoelectron spectroscopy (XPS) spectra of the samples were collected using a Thermo Escalab 250Xi to analyze their chemical composition and elemental chemical states. Surface charges of CAU-1 at different pH values were analyzed using a zeta potential analyzer (NanoBrook 90plus). 2.4 Adsorption experiments Adsorption experiments were conducted in brown bottles to mitigate the influence of light. In a typical procedure, 0.01 g of CAU-1 and 50 mL of FFC solution with a known concentration were combined in a brown bottle. The bottle was then placed into a gas-bath thermostatic shaker at a specified temperature and shaken for a predetermined contact time. Subsequently, the mixture underwent filtration using a 0.22 μm polyethersulfone membrane and a syringe, and the FFC concentration in the resulting clear solution was determined by measuring its absorbance at 224 nm using UV–visible spectrophotometry (TU-1900). To investigate the impact of pH values on adsorption, experiments were conducted with a constant FFC concentration of 100 mg/L and a contact time of 24 h. The pH values of the FFC solution were adjusted using HCl and NaOH solutions with a concentration of 0.1 mol/L. For the adsorption kinetics experiments, FFC concentrations of 100 mg/L and 300 mg/L were selected to assess the effect of concentration at the optimal pH value, with an adsorption temperature of 303 K. Adsorption isotherms at temperatures of 303 K, 313 K, and 323 K were examined to elucidate the effect of temperature on the adsorption behavior and mechanism. In these experiments, the contact time was fixed at 5 h, and pH values were adjusted to the optimal level. The adsorption capacity of FFC was calculated using the following equation:

(1) q_e = (C₀ − C_e)V/m

where m (g) represents the mass of the adsorbent, V (L) is the solution volume in the brown plastic bottles, C₀ (mg/L) is the initial FFC concentration, and C_e (mg/L) represents the equilibrium concentration. 2.5 Regeneration experiments Regeneration experiments were carried out to assess the potential industrial application of CAU-1. Typically, the concentration of residual organic drug molecules in wastewater falls within the ppm/ppb range. Therefore, in this study, the regeneration performance of CAU-1 was evaluated using an FFC concentration of 100 mg/L. Initially, the adsorption of FFC on CAU-1 was conducted under optimal conditions: the initial pH value of the FFC solution was set to 5.0, the contact time was 6 h, and the temperature was maintained at 303 K. Subsequently, the CAU-1 samples collected after the adsorption process were obtained via filtration and dried overnight at 373 K in a vacuum oven. The regeneration process involved using a mixture of 0.1 M HCl and ethanol–water solution (4:1) as the eluent. The post-adsorption CAU-1 samples were rinsed three times with the 0.1 M HCl and ethanol–water mixture (4:1). Subsequently, the collected CAU-1 powder was dried at 353 K to activate the adsorbent. Finally, the adsorption capacity of the activated CAU-1 was evaluated under the same adsorption conditions. These procedures were repeated three times.
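As a small worked example of Eq. (1), the sketch below computes the equilibrium uptake for the batch conditions described above (0.01 g of CAU-1 in 50 mL of solution). The equilibrium concentration used is a hypothetical reading chosen for illustration, not a measured value.

```python
# Batch adsorption capacity per Eq. (1): q_e = (C0 - Ce) * V / m.
# The equilibrium concentration below is a hypothetical illustrative value;
# the adsorbent mass and solution volume follow the batch setup in Section 2.4.
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Equilibrium uptake q_e in mg of FFC per g of CAU-1."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

m = 0.010          # g of CAU-1
V = 0.050          # L of FFC solution
C0 = 100.0         # initial FFC concentration, mg/L
Ce = 58.0          # assumed equilibrium concentration, mg/L (illustrative)

qe = adsorption_capacity(C0, Ce, V, m)
removal = (C0 - Ce) / C0
print(f"q_e = {qe:.1f} mg/g, removal efficiency = {removal:.1%}")
```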
2.6 Calculation method A density functional theory (DFT) calculation was conducted to understand the interaction mode between FFC and CAU-1. DFT calculations were carried out using CP2K, and the CP2K input files were generated with the help of the Multiwfn program [32,33] . The structures in the output files were viewed with the help of GaussView 6.0 [34] . All calculations employed a mixed Gaussian and plane-wave basis set. Core electrons were represented with norm-conserving Goedecker–Teter–Hutter pseudopotentials, and the valence electron wavefunction was expanded in a double-zeta basis set with polarization functions, along with an auxiliary plane-wave basis set with an energy cutoff of 400 eV [35–38] . The generalized gradient approximation exchange–correlation functional of Perdew, Burke, and Ernzerhof (PBE) was used [39] . Each configuration was optimized with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm with an SCF convergence criterion of 1.0 × 10⁻⁸ a.u. To compensate for the long-range van der Waals dispersion interaction between the adsorbate and the MOF, the DFT-D3(BJ) dispersion scheme, an empirical damped potential term, was added to the energies obtained from the exchange–correlation functional in all calculations [40] . The value of the binding energy (BE) was calculated as the energy difference between the products and the reactants in the adsorption process, as defined by the following equation:

(2) BE = E_MOF+adsorbate − E_MOF − E_adsorbate

where E_MOF+adsorbate is the total energy of the MOF/adsorbate sorption system in the equilibrium state, and E_MOF and E_adsorbate are the total energies of the adsorbate-free MOF structure and the adsorbate, respectively. 3 Results and discussion 3.1 Characterization of CAU-1 The structural and chemical characteristics of the synthesized CAU-1 were thoroughly investigated, with the results presented in Fig. 2 and Fig. 3 . As depicted in Fig. 2 (a), the XRD diffraction peaks of the as-prepared CAU-1 closely match those of the simulated structure, indicating successful synthesis with high fidelity. The narrow and well-defined peaks further suggest the excellent crystallinity of CAU-1. The stability of the adsorbent under different pH conditions was assessed, with the findings illustrated in Fig. 2 (b) and (c). Notably, Fig. 2 (d) demonstrates that the XRD patterns of CAU-1 samples soaked in water at various pH levels closely resemble those of the simulated CAU-1. However, the intensity of the diffraction peaks diminishes notably when the pH exceeds 7.0. This observation aligns with the trend depicted in Fig. 2 (c), where the concentration of CAU-1 in suspension decreases as pH rises, ultimately leading to dissolution at pH = 11. These results suggest that CAU-1 maintains its crystal structure effectively in acidic solutions, underscoring the importance of maintaining pH < 7 during the adsorption process. Furthermore, the XRD spectra of CAU-1 remain unchanged after the FFC adsorption process, indicating the exceptional stability of CAU-1 throughout the adsorption process. The FT-IR spectra of the as-synthesized CAU-1 and FFC are depicted in Fig. 3 (a). The characteristic stretching peak at 590–625 cm⁻¹ confirms the presence of Al-O(H)-Al bonds in CAU-1, while the peaks at 1577 cm⁻¹ and 1394 cm⁻¹ correspond to the stretching vibration of –COO–, indicating the coordination of amino terephthalic acid with the Al metal [41] .
Additionally, the vibrations of the –NH2 and μ-OH groups are indicated by the peaks at 3300–3450 cm⁻¹, with the signals of the –NH2 and μ-OH groups overlapping. The stretching band at 1253 cm⁻¹ establishes the presence of C–N bonds in the amino groups of the organic ligands. Furthermore, the peaks at 2800–3050 cm⁻¹ are attributed to the C–H bonds in methanol, which was not completely removed from the pores of CAU-1. As for FFC, the stretching vibration of C=O is observed at 1680 cm⁻¹, while the symmetric stretching vibration of −SO2 is represented by the peak at 1142 cm⁻¹. The –NH– group is confirmed by the single peak at 3315 cm⁻¹, while the –OH stretching is evident at 3445 cm⁻¹ and 1080 cm⁻¹. Additionally, the peak at 964 cm⁻¹ is attributed to the C–O bonds, and the asymmetrical stretching vibration of –CH in methyl groups is observed at 2900 cm⁻¹. The SEM image of the as-synthesized CAU-1 is shown in Fig. 3 (b), revealing particles with a regular cube shape but irregular sizes, consistent with previous literature [30,31] . The BET specific surface area of CAU-1 is 1246.56 m²/g, with a pore volume of 0.56 cm³/g, as calculated from the N2 adsorption–desorption isotherm recorded at 77 K. Furthermore, the pore size distribution of CAU-1, depicted in Fig. 3 (d), aligns with values reported in the literature [29–31] . 3.2 Adsorption performance 3.2.1 Effect of pH The initial pH value of the aqueous solution affects both the form of the adsorbate in solution and the surface charges of the adsorbent and adsorbate, subsequently influencing the adsorption process and the adsorption capacity. Therefore, the effect of pH values on CAU-1's adsorption performance was investigated, and the results are presented in Fig. 4 (a). Initially, the FFC adsorption capacity of CAU-1 increases with rising pH values from 3.0 to 5.0, reaching its peak at pH = 5, and then gradually declines as pH values increase further. To elucidate this trend, the zeta potentials of CAU-1 and FFC at different pH values were measured, as shown in Fig. 4 (b). The isoelectric point of FFC is approximately 4.43, indicating that FFC carries a positive charge at pH values below 4.43. Above this value, the –NH and –OH groups begin to deprotonate, resulting in a negatively charged surface. In contrast, CAU-1's surface remains positively charged across the entire pH range (3.0–9.0) due to the protonation of –NH2 groups. Within the pH range of 3.0 to 5.0, the slight increase in FFC adsorption capacity can be attributed to the electrostatic attraction between the positively charged CAU-1 and the deprotonated FFC. However, at pH values above 5.0, the electrostatic interaction does not lead to increased FFC adsorption, which is particularly noticeable when the pH shifts from 7.0 to 9.0. The decline in FFC adsorption capacity under alkaline conditions can be attributed to several factors: (i) weakening of the electrostatic interaction due to a decrease in CAU-1's zeta potential; (ii) strong competition from OH⁻ ions; (iii) structural instability of CAU-1 in an alkaline environment, potentially resulting in negligible adsorption. Despite the strong electrostatic repulsion between the positively charged CAU-1 surface and cationic FFC molecules, relatively high FFC adsorption is observed at pH = 3.0. This suggests the presence of another strong interaction influencing FFC adsorption, besides the electrostatic interaction, contributing to the high adsorption capacity within the pH range of 3.0–5.0.
As FFC is a polar molecule containing various functional groups, including −F, –OH, −Cl, –NH–, and −SO2–, these groups can act as either hydrogen-bond donors or acceptors. Similarly, the –NH2, Al-O-Al, and μ-OH groups of CAU-1 can also serve as H-bond acceptors and donors. Therefore, it is speculated that FFC interacts with the CAU-1 framework via hydrogen bonding. In our previous work, we found that the H proton of the bridging μ-OH groups falls off to form μ-O at pH > 4.0 [31] . Hence, the μ-O groups can act as hydrogen-bond acceptors and form hydrogen bonds with the –OH group of FFC. Besides, the –NH2 groups could interact with the −Cl and −SO2– groups to form hydrogen bonds [41,42] . The ratio of protonated –NH2 groups in CAU-1 decreases as the pH increases from 3.0 to 5.0, contributing to the increased FFC adsorption capacity. In alkaline conditions, the weakened H-bond interaction between the –OH groups of FFC and the μ-O groups of CAU-1, coupled with competition from OH⁻ ions and anionic FFC, results in a rapid decrease in FFC adsorption. To ensure the stability and optimal adsorption performance of the adsorbent material, the pH value for FFC adsorption by CAU-1 was fixed at 5.0 for further exploration of its adsorption performance. 3.2.2 Effect of ion concentration Inorganic salt ions are commonly present in industrial wastewater, affecting the adsorption performance of adsorbents either through competition with ionic adsorbates or through the salting-out effect. Hence, various inorganic salts (NaCl, KCl, MgCl2, and Na2SO4) were selected to investigate their effects on the adsorption process of CAU-1. The concentration of inorganic salts was maintained at 20 mmol/L, and the results are depicted in Fig. 5 . Clearly, the introduced ions, including K⁺, Na⁺, Mg²⁺, and Cl⁻, exert no significant effect on the adsorption capacity of CAU-1. This observation suggests that electrostatic interaction is not the primary mechanism governing the adsorption process. However, in the case of Na2SO4, the FFC adsorption capacity decreases by 50 % because the solution's pH value increases to above 7.5. In such alkaline conditions, FFC adsorption is restrained owing to competition between OH⁻ ions and the anionic FFC, along with the unstable structure of CAU-1. Consequently, CAU-1 demonstrates promise as an adsorbent for removing FFC from acidic or neutral saline wastewater. 3.2.3 Adsorption mechanism Due to the polarity and distribution pattern of FFC in aqueous solution, multiple interactions likely govern FFC adsorption on CAU-1. As discussed in Section 3.2.1 , hydrogen bonding and electrostatic interactions play roles in FFC adsorption within the pH range of 3.0 to 7.0. Additionally, the aromatic benzene rings present in both FFC and CAU-1 may facilitate adsorption through π-π stacking interactions. Therefore, FT-IR and XPS analyses were conducted on CAU-1 before and after FFC adsorption to investigate these interactions. The results are shown in Fig. 6 . Firstly, the abundant polar groups of FFC, including −F, –OH, −Cl, –NH–, and −SO2–, can act as both hydrogen-bond donors and acceptors, forming strong hydrogen-bond interactions with CAU-1. As discussed in Section 3.2.1 , the H proton of the bridging μ-OH groups in CAU-1's framework falls off to form μ-O, which can serve as a hydrogen-bond acceptor to interact with FFC through hydrogen bonds. Besides, the –NH2 groups of CAU-1 could interact with the −Cl and −SO2– groups of FFC to form hydrogen bonds.
This is supported by the shifts in the peaks observed in the FT-IR spectra ( Fig. 6 (a)). The Al-O(H)-Al peak of CAU-1 shifts from 610.42 cm −1 to 613.73 cm −1 , and the C–N band shifts from 1252.09 cm −1 to 1261.80 cm −1 after FFC adsorption, indicating the involvement of hydrogen bonding interactions in the FFC adsorption process [43] . Furthermore, the changes in the XPS spectra of O 1s and N 1s provide additional evidence for the proposed mechanism of FFC adsorption on CAU-1. From Fig. 6 (c) and (d), it can be seen that the binding energies of O–H, Al–O, N–H and C–N decrease from 533.33 eV, 531.04 eV, 401.24 eV, and 399.47 eV to 532.95 eV, 530.96 eV, 400.54 eV, and 399.33 eV, respectively, which indicates the participation of hydrogen bonding interactions [44] . These results clearly indicate hydrogen bonding between the Al-O-Al groups of CAU-1 and the –OH group of FFC, as well as between the –NH 2 groups of CAU-1 and the −Cl and −SO 2 – groups of FFC. The calculation results shown in Fig. 7 provide visual support for the formation of hydrogen bonds. As depicted in Fig. 7 (a), the distance between the H atom of the –OH group of FFC and the O atom of Al-O-Al of CAU-1 is 2.070 Å, and the distances between the two H atoms of the –NH 2 groups of CAU-1 and the −Cl atom of FFC are 2.396 Å and 2.909 Å, respectively. The binding energy between FFC and CAU-1 is −50.42 kcal/mol, suggesting strong hydrogen bonding [42] . From Fig. 7 (b) and (c), the distances between the H atoms of the –NH 2 groups of CAU-1 and the H-bond acceptors of FFC range from 2.489 Å to 3.338 Å, while the binding energies also suggest that hydrogen bonding is the main mechanism governing the FFC adsorption process. In addition, the binding energy of O–C=O decreases by 0.29 eV, and the −C=O peak also shows a small decrease of 0.24 eV, as shown in Fig. 6 (c). Moreover, the percentage of O–C=O on CAU-1 after adsorption increases from 15.45 % to 17.31 % ( Fig. 6 (b)), while the percentage of −C=O on CAU-1 after adsorption decreases from 62.19 % to 58.10 % ( Fig. 6 (c)). These observations suggest that the carboxylic groups of CAU-1 can also interact with FFC molecules through H-bond interactions [45,46] . This is corroborated by the calculated distance of 3.086 Å between the H atom of the –OH group of FFC and the O atom of the carboxylic groups of CAU-1, suggesting that the carboxylic O atoms of CAU-1 can serve as H-bond acceptors that promote FFC adsorption. As demonstrated in Fig. 6 (b), although the binding energy of C=C bonds shows only a slight shift (0.05 eV) after FFC adsorption, the percentage of C=C on CAU-1 after adsorption decreases from 62.19 % to 58.10 %. This suggests that the benzene rings of CAU-1 and FFC are involved in the adsorption process through π-π stacking interactions [46,47] . In summary, the adsorption of FFC on CAU-1 is primarily controlled by hydrogen bonding interactions, supplemented by electrostatic and π-π interactions. 3.2.4 Adsorption kinetics Fig. 8 (a) illustrates the impact of adsorption time on the adsorption process of CAU-1. It is evident that the adsorption capacity undergoes a significant increase within the first 60 min. This rapid rise can be attributed to the higher mass-transfer driving force and the abundance of adsorption sites on the surface of the adsorbent.
Subsequently, at both low and high FFC concentrations, adsorption reaches equilibrium after approximately 180 min, indicating that the initial FFC concentration does not significantly affect the adsorption kinetic behavior of CAU-1. The three-dimensional framework of CAU-1 may introduce some steric resistance for FFC molecules at the pore windows, leading to a relatively slower adsorption rate between 60 and 180 min. To gain deeper insights into the adsorption mechanism of CAU-1 towards FFC, the pseudo-first-order model, pseudo-second-order model, and intraparticle diffusion model were employed to fit the obtained kinetic data. The corresponding fitting equations are as follows: (1) pseudo-first-order model: ln(qe − qt) = ln qe − k1·t; (2) pseudo-second-order model: t/qt = 1/(k2·qe^2) + t/qe; (3) intra-particle diffusion model: qt = kd·t^0.5 + c, where qt (mg·g −1 ) is the adsorption capacity at time t , t (min) is the adsorption time, qe (mg·g −1 ) is the adsorption capacity at equilibrium, and k1 (min −1 ), k2 (g·(mg·min) −1 ) and kd (mg·g −1 ·min −0.5 ) are the rate constants of the respective kinetic models. The fitting results are illustrated in Fig. 8 (a) and the fitted parameters are listed in Table 1 . According to the fitted results, the pseudo-second-order model gives a higher correlation coefficient ( R 2 = 0.9935), and the experimental adsorption capacity (210 mg/g) is close to the calculated one (227.65 mg/g). These findings imply that the pseudo-second-order model better describes the kinetics of FFC adsorption on CAU-1 and that the adsorption behavior may be controlled by chemisorption. The kinetic data were also analyzed using the intraparticle diffusion model to elucidate the rate-controlling stage of the adsorption process. Fig. 8 (b) depicts the fitting results, while Table 2 presents the corresponding parameters. From Fig. 8 (b), the adsorption of FFC on CAU-1 can be delineated into three distinct stages at both low and high initial concentrations. These stages signify the influence of different diffusion mechanisms: the first stage suggests boundary-layer diffusion as the controlling factor; the second stage indicates intra-particle diffusion; and the third stage represents the attainment of equilibrium. Notably, none of the fitted lines pass through the origin, implying the involvement of both boundary-layer diffusion and intra-particle diffusion in the adsorption process.
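For concreteness, the kinetic fitting described above can be sketched using the models' equivalent nonlinear forms; the (t, qt) points below are synthetic placeholders rather than the measured CAU-1 data.

```python
# Hedged sketch: nonlinear fits of the pseudo-first-order and pseudo-second-order
# models with SciPy; the data points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 15, 30, 60, 120, 180, 240, 300], dtype=float)       # min
q = np.array([60, 110, 150, 185, 200, 208, 209, 210], dtype=float)   # mg/g

def pfo(t, qe, k1):
    # q_t = qe * (1 - exp(-k1 t)); equivalent to ln(qe - qt) = ln qe - k1 t
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # q_t = k2 qe^2 t / (1 + k2 qe t); equivalent to t/qt = 1/(k2 qe^2) + t/qe
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

for name, model in [("pseudo-first-order", pfo), ("pseudo-second-order", pso)]:
    popt, _ = curve_fit(model, t, q, p0=[q.max(), 0.01], maxfev=10000)
    ss_res = np.sum((q - model(t, *popt)) ** 2)
    r2 = 1.0 - ss_res / np.sum((q - q.mean()) ** 2)
    print(f"{name}: qe = {popt[0]:.1f} mg/g, k = {popt[1]:.4f}, R2 = {r2:.4f}")
```

In practice, the model with the higher R2 and the more physically consistent qe would be preferred, mirroring the comparison reported in Table 1.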
3.2.5 Adsorption isotherms and thermodynamics The adsorption isotherms of FFC on CAU-1 were determined at temperatures of 303 K, 313 K, and 323 K to assess the impact of temperature on FFC adsorption and to explore the adsorption mechanism. As depicted in Fig. 9 (a), the FFC adsorption capacities initially increase with rising FFC concentration from 100 mg·L −1 to 200 mg·L −1 . The adsorption capacity then plateaus at around 300 mg·L −1 , indicating saturation of the active adsorption sites in CAU-1. Furthermore, the adsorption capacity diminishes with increasing temperature, suggesting an exothermic adsorption process. To analyze the distribution of FFC molecules on the surface of CAU-1, two common isotherm models, the Langmuir and Freundlich models, were employed to fit the isotherm data. The equations of these models are as follows: (4) Langmuir isotherm model: Ce/qe = Ce/qm + 1/(qm·kL); (5) Freundlich isotherm model: ln qe = (1/n)·ln Ce + ln kF, where Ce (mg·L −1 ) is the equilibrium concentration of FFC; kL (L·mg −1 ) is the Langmuir adsorption constant; qe (mg·g −1 ) represents the equilibrium adsorption capacity; 1/n and kF ((mg·g −1 )(L·mg −1 ) 1/n ) represent the adsorption intensity and the Freundlich adsorption constant, respectively; and qm (mg·g −1 ) represents the calculated maximum adsorption capacity. The Langmuir and Freundlich fitting curves are presented in Fig. 9 (b) and (c), respectively. Table 3 provides the obtained correlation coefficients and fitting parameters. The FFC adsorption process on CAU-1 aligns more closely with the Langmuir model, as indicated by a comparison of the correlation coefficients of the two models. This suggests that the surface of CAU-1 is homogeneous and that FFC molecules are adsorbed onto the CAU-1 surface as a monolayer. A comparison of the adsorption performance of CAU-1 with other adsorbents reported in the literature is given in Table 4 . The FFC adsorption capacity of CAU-1 is 386 mg/g at 303 K, significantly surpassing most of the reported adsorbents. Although the 3D porous-structured biochar aerogel exhibits a two-fold higher FFC adsorption capacity than CAU-1, its ideal pH value of 3.0 is substantially lower than the original FFC solution’s pH value. In contrast, CAU-1’s optimal pH value is closer to that of the original FFC solution, making it a suitable adsorbent with favorable FFC adsorption properties that do not require pH adjustment. In addition, the adsorption equilibrium time of CAU-1 is shorter than that of most reported adsorbents, indicating fast adsorption kinetics. Therefore, CAU-1 can be regarded as a promising candidate for the efficient removal of FFC from aqueous solution owing to its high adsorption capacity and fast kinetics. Additionally, the values of the parameter 1/ n of the Freundlich model are all less than 1, indicating that the adsorption process is favorable [48] . The adsorption thermodynamic parameters, including the Gibbs free energy change (Δ G 0 ), enthalpy change (Δ H 0 ), and entropy change (Δ S 0 ), were determined based on the adsorption isotherms at different temperatures using the van’t Hoff equation. The calculated thermodynamic parameters are listed in Table 5 . Since the values of Δ G 0 are negative, it can be concluded that the adsorption of FFC on CAU-1 is feasible and spontaneous. The values of Δ G 0 fall in the range of −20 to 0 kJ/mol, suggesting that physisorption predominates in the FFC adsorption process [55,56] . Regarding Δ H 0 , its negative value indicates the exothermic nature of the adsorption process, which is consistent with the results from the adsorption isotherms. The absolute value of Δ H 0 falls in the range of 2 to 40 kJ/mol, indicating significant H-bond interactions between CAU-1 and FFC [56] . This finding aligns with the analysis in Section 3.2.1 . The negative value of Δ S 0 suggests that the adsorption process is entropy-decreasing, indicating an increase in the degree of ordering at the solid–liquid interface.
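Similarly, the linearised Langmuir and Freundlich fits described above reduce to simple linear regressions; a minimal sketch with placeholder (Ce, qe) data is given below. A van't Hoff analysis would then use the temperature dependence of the fitted equilibrium constant to derive the ΔG0, ΔH0 and ΔS0 values summarised above.

```python
# Hedged sketch: linear-regression fits of the linearised isotherm models; the
# (Ce, qe) pairs are illustrative placeholders, not the measured 303 K isotherm.
import numpy as np

ce = np.array([10.0, 25.0, 60.0, 110.0, 170.0, 240.0])     # mg/L
qe = np.array([150.0, 240.0, 310.0, 350.0, 370.0, 380.0])  # mg/g

# Langmuir (linearised): Ce/qe = Ce/qm + 1/(qm*kL)
slope_l, intercept_l = np.polyfit(ce, ce / qe, 1)
q_m = 1.0 / slope_l
k_l = slope_l / intercept_l
print(f"Langmuir: qm = {q_m:.1f} mg/g, kL = {k_l:.4f} L/mg")

# Freundlich (linearised): ln qe = (1/n) ln Ce + ln kF
slope_f, intercept_f = np.polyfit(np.log(ce), np.log(qe), 1)
print(f"Freundlich: 1/n = {slope_f:.3f}, kF = {np.exp(intercept_f):.1f}")
```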
3.3 Regeneration evaluation The recycling performance of an adsorbent is crucial for practical applications in industrial adsorption processes. The regeneration ability of CAU-1 was therefore examined using a mixed solution of 0.1 M HCl and ethanol–water (4:1) as the eluent. The cyclic adsorption performance of CAU-1 for FFC is illustrated in Fig. 10 . As previously discussed, FFC molecules were adsorbed onto CAU-1 in a monolayer fashion through physisorption. Consequently, the bound FFC molecules should be readily released to achieve complete regeneration. However, compared with fresh CAU-1, the FFC adsorption capacity gradually decreased with an increasing number of recycling cycles. After three cycles, the FFC adsorption capacity of CAU-1 was approximately 66 % of its initial value. This suggests that complete regeneration of CAU-1 is challenging. It is hypothesized that the pore structure provides a reasonable explanation. Although the window size of CAU-1 (∼4 Å) is smaller than the FFC molecule (∼5 Å), the three-dimensional FFC molecule can still pass through the window by molecular rotation, especially under the strong driving force arising from the concentration difference [31] . As depicted in Fig. 10 (b), the specific surface area of CAU-1 decreased from 1246.56 m 2 /g to 365.94 m 2 /g after FFC adsorption, indicating that FFC molecules became trapped within the pores of CAU-1. From Table 6 , it is evident that after FFC adsorption, CAU-1 experiences a much larger decrease in specific surface area (880.62 m 2 /g) than in external surface area (73.75 m 2 /g). Additionally, the pore volume was reduced from 0.56 cm 3 /g to 0.19 cm 3 /g, indicating that most FFC molecules are adsorbed within the cages of CAU-1. Hence, the similarity in size between the CAU-1 windows and the FFC molecule may favor a high loading of FFC but, for the same reason, largely limits the release of FFC from the cages, which accounts for the considerable decrease in FFC adsorption capacity during recycling. In addition, the stability of CAU-1 during the adsorption–desorption process was confirmed by comparing the XRD patterns and SEM images of CAU-1 before and after regeneration [57–60] . As shown in Fig. 10 (c), the diffraction peaks of CAU-1 after adsorption and regeneration appear at the same positions as those of the as-synthesized CAU-1. This indicates that CAU-1 maintains its structure during the adsorption process and after three regeneration cycles. Moreover, as illustrated in Fig. 10 (d), the crystal morphology of CAU-1 after regeneration retains the same regular cubic shape as the as-synthesized material, suggesting excellent tolerance towards the selected eluent. Hence, CAU-1 showed excellent stability during the adsorption and elution process. 4 Conclusions In this study, an aluminum-based MOF with –NH 2 groups, CAU-1, was successfully synthesized, and its FFC adsorption performance was systematically explored. CAU-1 exhibited excellent stability in FFC solution under acidic conditions, with an equilibrium adsorption capacity of 386 mg/g at 303 K and pH 5.0, surpassing that of most adsorbents reported in the literature. Coexisting ions in the aqueous solution had a negligible effect on the adsorption process. The kinetic data of FFC adsorption fitted well to the pseudo-second-order model, with both boundary-layer and intra-particle diffusion playing roles. Additionally, the Langmuir isotherm model better described the FFC adsorption process, indicating monolayer adsorption of FFC molecules onto CAU-1. The adsorption process was found to be exothermic and entropy-decreasing.
Comparison of the FT-IR and XPS spectra of CAU-1 before and after FFC adsorption revealed hydrogen bonding interactions as the primary adsorption mechanism, with electrostatic and π-π stacking interactions also contributing to FFC adsorption. Moreover, CAU-1 could be effectively regenerated using a mixed solution of 0.1 M HCl and ethanol–water (4:1) as the eluent, with the adsorption capacity maintained at 66 % of the initial value after three cycles. Overall, this study introduces a promising adsorbent for removing FFC from antibiotic wastewater. The elucidated adsorption mechanism suggests that the functional groups of both adsorbates and adsorbents can enhance adsorption performance by serving as H-bond donors and acceptors, facilitating the preliminary screening of suitable porous adsorbents. Funding This study was supported by the Youth Fund of the Education Department of Hebei Province (No. QN2022142 ). We would also like to express our gratitude to Qinghai Chaidamu Xinghua Lithium Salt Co., Ltd. for its financial support. CRediT authorship contribution statement Zhengjie Li: Writing – review & editing, Writing – original draft, Validation, Investigation. Miao Liu: Investigation. Chunxia Fang: Writing – original draft. Huanshu Zhang: Writing – review & editing. Tianyi Liu: Writing – review & editing. Yixian Liu: Investigation. Heli Tian: Writing – review & editing. Jilong Han: Writing – review & editing, Writing – original draft, Supervision, Project administration. Zhikun Zhang: Writing – review & editing, Supervision, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. COTILLAS S (2018)
2. NGUYEN L (2022)
3. WEI R (2012)
4. ZHOU Z (2023)
5. ZHANG Q (2015)
6. VANBOECKEL T (2017)
7. ZHENG T (2023)
8. REYNS T (2016)
9. GUO X (2024)
10. GUO S (2021)
11. POKRANT E (2018)
12. LI R (2017)
13. KHAN W (2020)
14. CAO Z (2017)
15. JIANG C (2016)
16. LIU H (2019)
17. ZHAO H (2018)
18. TIJANI J (2015)
19. ALMAHRI A (2023)
20. ALQADAMI A (2017)
21. HASAN Z (2015)
22. HUANG L (2016)
23. WEI X (2020)
24. MINHTHANH H (2018)
25. DELPIANO G (2021)
26. JALALZAEI F (2022)
27. ALREFAEE S (2023)
28. ALHAZMI G (2022)
29. XIE L (2014)
30. ZHAO H (2019)
31. SONG S (2022)
32. VANDEVONDELE J (2005)
33. LU T (2012)
34.
35. GOEDECKER S (1996)
36. HARTWIGSEN C (1998)
37. KRACK M (2000)
38. VANDEVONDELE J (2007)
39. PERDEW J (1996)
40. GRIMME S (2010)
41. GUMBER N (2023)
42. AHMED I (2022)
43. KAVAK S (2021)
44. SOMPORNPAILIN D (2020)
45. LV Y (2021)
46. HUANG D (2024)
47. LV Y (2018)
48. CHEN C (2012)
49. ZHAO H (2016)
50. WEI S (2016)
51. ZHANG J (2023)
52. XU J (2019)
53. FENG Z (2023)
54. TANG Z (2023)
55. LI Z (2022)
56. WANG W (2020)
57. ELDESOUKY M (2024)
58. ALSUHAIBANI A (2024)
59. ALTALHI T (2022)
60. ALHAZMI G (2022)
|
10.1254_jphs.fp0030382.txt
|
TITLE: Failure of Repeated Electroconvulsive Shock Treatment on 5-HT4-Receptor-Mediated Depolarization Due To Protein Kinase A System in Young Rat Hippocampal CA1 Neurons
AUTHORS:
- Ishihara, Kumatoshi
- Sasa, Masashi
ABSTRACT:
We previously demonstrated that repeated electroconvulsive shock (ECS) treatment enhanced serotonin (5-HT)1A- and 5-HT3-receptor-mediated responses in hippocampal CA1 pyramidal neurons. The electrophysiological studies were performed to elucidate the effects of ECS treatment on depolarization, which was an additional response induced by 5-HT, and the second messenger system involved in this depolarization of hippocampal CA1 neurons. Bath application of 5-HT (100 μM) induced depolarization of the membrane potential in the presence of 5-HT1A-receptor antagonists. This depolarization was mimicked by 5-HT4-receptor agonists, RS 67506 (1 – 30 μM) and RS 67333 (0.1 – 30 μM), in a concentration-dependent manner. 5-HT- and RS 67333-induced depolarization was attenuated by concomitant application of RS 39604, a 5-HT4-receptor antagonist. H-89, a protein kinase A (PKA) inhibitor, inhibited 5-HT-, RS 67506-, and RS 67333-induced depolarizations, while forskolin (10 μM), an activator of adenylate cyclase, induced depolarization. Furthermore, RS 67333-induced depolarization was not significantly different between hippocampal slices prepared from rats administered ECS once a day for 14 days and those from sham-treated rats. These findings suggest that 5-HT4-receptor-mediated depolarization is caused via the cAMP-PKA system. In addition, repeated ECS-treatment did not modify 5-HT4-receptor functions in contrast to 5-HT1A- and 5-HT3-receptor functions.
BODY: No body content available
REFERENCES:
|
10.1016_j.ahjo.2024.100448.txt
|
TITLE: Sustainability and cost of typical and heart-healthy dietary patterns in Australia
AUTHORS:
- Cobben, Rachel E.
- Collins, Clare E.
- Charlton, Karen E.
- Bucher, Tamara
- Stanford, Jordan
ABSTRACT:
Study objective
The aim was to quantify and compare the environmental and financial impact of two diets: a heart-healthy Australian diet (HAD) and the typical Australian diet (TAD).
Design
The study involved a secondary analysis of two modelled dietary patterns used in a cross-over feeding trial.
Setting
The evaluation focused on two-week (7-day cyclic) meal plans designed to meet the nutritional requirements for a reference 71-year-old male (9000 kJ) for each dietary pattern.
Main outcome measures
The environmental footprint of each dietary pattern was calculated using the Global Warming Potential (GWP*) metric, taking into account single foods, multi-ingredient foods, and mixed dishes. Prices were obtained from a large Australian supermarket.
Results
The HAD produced 23.8 % less CO2 equivalents (CO2e) per day (2.16 kg CO2e) compared to the TAD (2.83 kg CO2e per day). Meat and discretionary foods were the primary contributors to the environmental footprint of the TAD, whereas dairy and vegetables constituted the largest contributors to the HAD footprint. However, the HAD was 51 % more expensive than the TAD.
Conclusion
Transitioning from a TAD to a HAD could significantly reduce CO2 emissions, with benefits for both human health and the environment. Affordability will be a major barrier. Strategies to reduce the cost of convenient healthy food are needed. Future studies should expand the GWP* database and consider additional environmental dimensions to comprehensively assess the impact of dietary patterns. The current findings have implications for menu planning within feeding trials and for individuals seeking to reduce their carbon footprint while adhering to heart-healthy eating guidelines.
BODY:
1 Introduction Climate change stands as one of the biggest global issues of our time [ 1 ]. Without decisive action, the Intergovernmental Panel on Climate Change (IPCC) projects that temperatures exceeding 1.5 °C and 2 °C above pre-industrial levels will become a reality in the 21st century [ 2 ]. Simultaneously, current food systems contribute >30 % of total greenhouse gas (GHG) emissions, up to 80 % of biodiversity loss and 70 % of freshwater losses [ 3 , 4 ]. A major challenge is for food production to meet the nutritional needs of a predicted 10 billion people by 2050 [ 5 ]. A shift towards more sustainable healthy diets will be needed to feed the population while decreasing GHG emissions and broadening climate change adaptation options [ 3 ]. As defined by the Food and Agriculture Organisation (FAO), sustainable healthy diets promote individual health and wellbeing, exhibit low environmental impact, and are accessible, affordable, safe, equitable, and culturally acceptable [ 6 ]. Australia grapples with the tangible impacts of climate change, having already experienced an average warming of 1.47 °C since 1910 [ 7 ]. This warming trend has manifested in increasingly severe heatwaves, droughts, acidified oceans and rising sea levels [ 7 ]. Notably, Australia's agricultural sector accounts for 80.7 million metric tons of CO 2 e, making food production the fourth-highest contributor after electricity, energy and transport [ 8 ]. However, this figure fails to account for additional emissions from food processing, transport, retail, consumption, and waste. The Global Warming Potential over 100 years (GWP 100 ) has been a widely employed metric in previous studies to measure the footprint of dietary patterns [ 9 , 10 ]. This metric evaluates the cumulative radiative contribution, expressed as CO 2 e, over a century-long timeframe [ 11 ]. However, its application becomes problematic when short-lived climate pollutants are factored in, as it fails to adjust for their varying atmospheric lifetimes and impacts on the climate system over time [ 12 ]. Highlighting this limitation, the IPCC [ 13 ] and the Paris Agreement [ 14 ] have indicated that the GWP 100 metric lacks particular significance, in that it cannot effectively gauge alignment with climate stabilisation goals. In contrast, the GWP* metric is a relatively novel approach that assesses the warming effect of short-lived GHGs relative to CO 2 . Short-lived GHGs from farming and livestock production, such as methane, are responsible for 35 % of food-system GHG emissions and are much more potent than CO 2 [ 15 ]. While methane is the dominant contributor, it breaks down in about 12 years, unlike CO 2 , which can persist for centuries [ 16 ]. Consequently, relying solely on the GWP 100 can lead to a substantial overestimation, up to three to four times, of the observed global warming effect [ 17 ]. Numerous factors influence food choices and dietary habits, including convenience, affordability, taste preferences, nutritional value, accessibility, culinary proficiency, and sociocultural norms [ 18–22 ]. The growing demand for convenience has transformed the food landscape, with the availability of ready-to-eat meals and prepackaged products still increasing. The market for ready meals (including ambient, chilled, and frozen products) is growing rapidly, with the number of products increasing by an average of 13 % each year [ 23 ].
However, if not thoughtfully selected or integrated into a well-balanced menu plan, an increased reliance on convenience foods compromises overall dietary quality. Higher consumption of products rich in added fat, sugar, salt, and additives needed for preservation and extended shelf life may contribute to or exacerbate health issues such as obesity and non-communicable diseases. On the other hand, not all ready-to-eat meals or prepackaged products are unhealthy. Techniques such as freezing for preservation allow consumers to purchase healthy foods and meals [ 24 ], including frozen vegetables and fruits, year-round according to their preferences. Additionally, for individuals with limited culinary skills or time constraints, these convenient food choices can play a vital role in meeting dietary needs and maintaining overall well-being. Therefore, the current study evaluates two meal plans: a heart-healthy Australian diet (HAD) that aligns with the Australian Dietary Guidelines [ 25 ] and heart-healthy dietary targets [ 26 ], and a typical Australian diet (TAD), which reflects population-level intakes (less healthy) consistent with the 2020–21 Australian Apparent Consumption report [ 27 ]. These meal plans had been used in a randomised, cross-over feeding trial [ 27 ]. Both meal plans were intentionally designed for convenience, requiring minimal cooking skills. They each included ready-to-eat meals available from a large Australian supermarket chain and meals needing minimal preparation (sandwiches or wraps) to ensure adherence and consistent intake across participants. This study demonstrates the feasibility of achieving national dietary guidelines for individuals with limited time or food preparation skills, while aiming to explore the financial and environmental impacts of these meal plans. Specifically, the objectives of this study were to (1) quantify the carbon footprint (GWP*) of the two dietary patterns (HAD and TAD) and (2) assess the affordability (financial cost) of both diets. This study provides insight into balancing health, cost, and environmental considerations in dietary choices, which has not been thoroughly examined before. 2 Methods 2.1 Source of dietary data The paper utilises two dietary patterns, which were employed in a randomised, cross-over feeding trial involving 34 healthy Australian adults [ 27 ]. In the feeding trial, a 7-day menu cycle for each diet was repeated over two weeks, and all meals, snacks and selected beverages were provided to volunteers [ 27 ]. The TAD was designed to reflect the common food and nutrient intake patterns of Australian adults at the time of study inception, derived from the Australian Apparent Consumption report [ 27 ]. This report comprises the quantity of purchased food and non-alcoholic beverages from food and retail sectors from July 2020 to June 2021 [ 28 ]. The HAD meal plans align with the Australian Dietary Guidelines [ 29 ], Acceptable Macronutrient Distribution Ranges (AMDR) and key nutrient intake recommendations for adults [ 25 ]. Additionally, these recommendations conform to the heart-healthy eating guidelines [ 26 ]. The meal plans selected were designed to meet the nutritional requirements of a 71-year-old male, aiming to meet an estimated energy requirement (EER) of 9000 kJ/day [ 27 ]. These specific targets were based on the mean EER among participants who completed the feeding study, and aligned with the reference age and sex used in similar modelled studies [ 9 , 30 ].
Nutritional data was first generated using FoodWorks (Professional version 10; Xyris Pty Ltd., Brisbane, Australia). Subsequently, the list of individual food and beverage items was exported from FoodWorks and managed in Microsoft Excel (Version 16.0; Microsoft Corporation, Redmond, WA) to assign GWP* values and calculate financial costs. 2.2 Climate impact assessment The methodology for identifying and calculating GWP* values of the foods and beverages that made up the two dietary patterns relied on a published database of Australian food and beverages [ 32 ], which contains GWP* values for 232 Australian food and beverage products. This database utilises a Life Cycle Analysis (LCA) approach, considering land and water use as well as gases produced throughout the food production lifecycle [ 33 ]. However, emissions from food packaging, kitchen storage, and preparation were not factored into the database due to a lack of valid data. To apply the database [ 32 ], a systematic approach, outlined below and illustrated in Fig. S1, was followed. This method, informed by published research [ 34 , 35 ], allowed for calculation of each ingredient's GWP* for all products provided in the meal plans. This included multi-ingredient foods, beverages and mixed dishes, which were then aggregated to the food group level. While not without limitations, this approach improves the accuracy of the estimates, which is important given the limited number of foods in the GWP* database. Further, it enabled the inclusion of all the diverse food products provided and allowed a more comprehensive and representative comparison of the two diets. First, food or beverage products that could be directly classified and calculated using existing items in the database were assigned the corresponding GWP* values. For example, full cream cow's milk could be coded as ‘Whole Milk’, with a corresponding value of 1.23 kg CO 2 /kg. However, due to the database's limited size ( n = 232), certain assumptions had to be made for missing food items or ingredients; for example, quinoa was not available in the database, so brown rice, the closest alternative, was used. To assess multi-ingredient products or mixed dishes, we initially estimated the proportions of ingredients using the nutrition information panels on food packaging. This data was gathered from specific commercial products acquired from local grocery stores or manufacturer websites. This information was essential for determining the GWP* values for each ingredient. For example, in the case of the product ‘frozen mixed berries’, the composition consisted of 37 % blueberries, 33 % strawberries, and 30 % raspberries. Utilising these proportions, we calculated the total GWP* value by multiplying each ingredient's proportion by its corresponding value in the database [ 32 ] and then summing these to obtain the total GWP* value for that product. When none of the above methods were feasible, we determined the GWP* value by either referencing comparable products already available within the database [ 32 ], utilising standard recipes sourced from the AUSNUT 2011–2013 food recipe file [ 36 ], or consulting Australian websites. These sources were selected based on the professional judgment of research dietitians.
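To make the ingredient-disaggregation step concrete, a minimal sketch is shown below; the berry GWP* values are placeholders, whereas the whole-milk value (1.23) and the 37/33/30 % berry composition are the figures quoted above.

```python
# Hedged sketch: a multi-ingredient product's GWP* as the ingredient-fraction-
# weighted sum of database values (berry values are placeholders).
gwp_db = {"blueberries": 1.10, "strawberries": 0.90, "raspberries": 1.20,
          "whole_milk": 1.23}                      # kg CO2(e) per kg of food

def product_gwp(ingredient_fractions, db):
    """Fraction-weighted GWP* (per kg of product); fractions should sum to ~1."""
    return sum(frac * db[name] for name, frac in ingredient_fractions.items())

mixed_berries = {"blueberries": 0.37, "strawberries": 0.33, "raspberries": 0.30}
print(f"Frozen mixed berries: {product_gwp(mixed_berries, gwp_db):.2f} kg CO2e/kg")
```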
2.3 Financial cost analysis The financial cost of each diet was determined using prices from the Coles online supermarket, with data updated as of April 4th, 2024. This ensured consistency in pricing across both dietary patterns and accounted for any product price variations over time. The total price of both diets was calculated over a two-week period. Special promotional prices available on that day were used, as these were considered to accurately reflect the true costs incurred. Even for bulk items that might not be fully consumed within the two-week period, such as a single tub of margarine or a jar of sauce, the total price for that product was accounted for and captured in the total cost for each diet. This approach ensured that the smallest quantities that would need to be purchased to meet the meal-plan serving sizes were represented in the total price, regardless of any leftover portions after the two weeks. 2.4 Data synthesis To calculate the GWP* values, individual food and beverage items from the seven-day meal plans for both dietary patterns were combined, resulting in total GWP* values per week. These weekly values were then divided by seven to derive the average GWP* values per day. To project these values over a year, we multiplied the average daily GWP* value and daily costs by 365. Given our consideration of the true costs of foods over two weeks, which included bulk items needing only one purchase (like a jar of peanut butter), we divided the two-week costs by two to determine weekly expenses. Additionally, the cost per gram of each food or beverage item was calculated to provide a more detailed breakdown of expenditure per 1000 kJ. For each dietary pattern, data was categorised according to eight food groups: fruit, vegetables, grains, dairy, meat, meat alternatives, discretionary foods, and oils. To ensure the total cost of mixed dishes or multi-ingredient products was accurately captured, an additional category labelled ‘water’ was added to cover the proportion of water they contained. These food groupings align with those outlined in the Australian Guide to Healthy Eating [ 37 ], which includes discretionary foods and oils. Additionally, a distinct category for legumes, nuts, and seeds, labelled ‘meat alternatives’, was created to better represent the trend towards vegetarian and vegan diets, which often rely on these plant-based protein sources with their smaller carbon footprint. 3 Results The nutritional profiles of the two dietary patterns are summarised in Table 1 . The HAD, which focuses on incorporating whole grains, vegetables, fruits, and legumes, provides almost 300 % more dietary fibre than the TAD, exceeding the recommended intake of at least 25 g per day [ 38 ]. The HAD aligns more closely with established nutrient targets [ 39 ] that support lower risk for cardiovascular disease [ 40–44 ]. This includes staying below the recommended targets for trans-fat, saturated fat, added sugars, and sodium/potassium ratio, while exceeding the recommendations for beneficial nutrients such as eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA) and α-linolenic acid (ALA) [ 38 , 45 , 46 ]. However, the HAD did exceed the recommended sodium intake targets [ 38 ], which is not surprising given the reliance on convenience and ready-made meals in the meal plans used for the feeding trial. Nevertheless, its sodium content remained nearly 25 % lower than that of the TAD. A dietary pattern similar to the HAD can be modified to reduce sodium content to recommended levels through targeted strategies.
These include carefully scrutinising healthy products and selecting those with the lowest sodium content, advocating for manufacturers to further reduce sodium in their offerings, minimising the consumption of ready-made meals, and increasing the intake of fresh vegetables, legumes, fruits, and whole grain products. In contrast, the TAD contained higher levels of saturated fat and trans-fat, a higher sodium/potassium ratio and more added sugars compared with the HAD, while falling short of ALA, EPA and DHA. Specifically, the HAD contained 87 % more ALA, and 2.4 % more EPA/DHA, than the TAD. The TAD had a higher climate footprint, with a total GWP* of 2.83 kg CO 2 e produced per day, compared to the HAD, which had a 23.8 % lower CO 2 emission of 2.16 kg CO 2 e per day ( Table 2 ). In terms of cost, the HAD was 51.3 % more expensive than the TAD ( Table 2 ), representing an additional $52.98 per week. Over the course of a year, this would translate to a difference in food costs of $2762.65. The food groups contributing the most to the climate footprint and financial cost varied across the two dietary patterns ( Fig. 1 ). Discretionary foods, which are energy-dense, nutrient-poor foods, emerged as one of the highest contributors to CO 2 e in the TAD ( Fig. 1 A), while in the HAD they were one of the lowest contributors given the very low amounts included. Similarly, vegetables and meat alternatives had a low impact within the TAD (0.53 and 0.20 kg CO 2 e, respectively), whereas these were substantially higher (2.93 and 1.26 kg CO 2 e, respectively) in the HAD. Dairy, vegetables, and meat collectively accounted for over two-thirds of the CO 2 e in the HAD, whereas meat and discretionary foods contributed over 50 % of the total CO 2 e footprint for the TAD. Additionally, oils and grains consistently exhibited low emissions for both diets. Financial costs also differed across the two dietary patterns based on food groups ( Fig. 1 B). For the HAD, the main contributors to the total weekly expense were vegetables and fruit, accounting for 48.6 % of the overall cost. Conversely, in the TAD, discretionary foods made up 28.8 % of the cost, while grains constituted 19.2 % of total expenditure. The top five food items contributing to the climate footprint in the two dietary patterns are presented in Table 3 . Across both diets, the primary contributors were animal-based products, predominantly from the meat and dairy food groups. Processed beef, cheese, meat pie, processed pig meat, and whole milk emerged as the top contributors for the TAD. Conversely, cheese, yogurt, processed beef, processed chicken meat, and orange juice were the key contributors to the climate footprint for the HAD. 4 Discussion To our knowledge, this is the first study to investigate the nutritional quality, sustainability, and affordability of two modelled dietary patterns, namely a heart-healthy diet and a typical Australian diet. While neither diet achieved climate neutrality (CO 2 e < 0), the heart-healthy dietary pattern demonstrated a notable environmental advantage, with 23.8 % lower CO 2 emissions compared to the TAD. The potential impact of transitioning from a TAD to a HAD on a population-wide scale is important. For instance, if half of the adult population were to adopt the HAD, not only would it meet nutritional targets [ 25 ], but it would also lead to a substantial reduction in CO 2 emissions, estimated at approximately 2.6 billion kg annually [ 47 ].
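A hedged back-of-envelope check of this population-scale figure is sketched below, using the reported daily footprints; the size of the Australian adult population is an assumption here, not a number given in the paper.

```python
# Rough check of the ~2.6 billion kg CO2e/year estimate. The ~21 million adult
# population figure is an assumption for illustration only.
daily_saving_kg = 2.83 - 2.16              # kg CO2e per person per day (reported)
adopters = 0.5 * 21_000_000                # half the adult population (assumed)
annual_saving_kg = daily_saving_kg * 365 * adopters
print(f"~{annual_saving_kg / 1e9:.1f} billion kg CO2e per year")  # -> ~2.6
```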
To put this into perspective, this reduction is equivalent to the emissions of around 1.2 million passenger cars per year and would require over 256 million trees to offset the amount of CO 2 e produced [ 48–50 ] (Item S1, Supplementary Materials). However, total costs for the HAD were 51.3 % higher than those for the TAD, suggesting that if convenient, ready-made options are prioritised, as they were in the present feeding trial, financial burden may be a significant barrier. These findings are not only relevant for future menu planning in clinical trials that provide food, but also have broader implications for individuals seeking to reduce their carbon footprint while adhering to current guidelines for heart-healthy eating. This is especially important for those with limited food preparation skills who rely on convenient dietary options. In the current study, we observed that meat and discretionary foods made the largest contributions to CO 2 e emissions in the TAD, accounting for over 50 % of the overall footprint. Conversely, in the HAD, discretionary foods were among the lowest contributors, while dairy and vegetables emerged as the primary contributors due to the high recommended quantities. In the analysis of individual food products, we found that cheese and beef products consistently ranked among the top three contributors to CO 2 e across both dietary patterns. Conversely, oils and grains made minimal contributions to the overall CO 2 e footprint for both patterns. These findings align with previous research, underscoring that diets rich in meat and dairy tend to have a higher carbon footprint than those rich in vegetables and legumes [ 51–56 ]. Similarly, other studies have also found that grains, despite being a staple in Australian diets and a major component of national dietary recommendations [ 29 ], contributed relatively little to carbon emissions [ 30 ]. This is likely influenced by the specific foods included in greater quantities in the HAD, such as rice, and by how rice is produced in Australia, where high-yielding varieties require less water and, unlike other varieties, contribute a negative climate impact [ 30 , 32 ]. The present study suggests that a climate-neutral diet is not currently achievable for the Australian population without compromising nutrient quality, at least in diets that rely on convenience options. The current findings contrast with previous studies [ 57 , 58 ], which suggested that less-healthy diets containing more non-core foods such as sweets, snacks, fat, and oils had a lower environmental impact. This difference is likely due to the various methodological approaches used to calculate the environmental impact, including the use of the GWP100 metric [ 57 ]. Some studies have used the GWP* metric, specifically within the Australian context, which supports the lower climate footprint of recommended/healthier Australian diets compared to typical diets [ 9 , 32 , 59 ]. However, slight differences even among these studies are likely due to variations in food and beverage selection within each food group, aimed at meeting the serving and nutrition recommendations, as well as to the reference person and estimated nutritional requirements. For instance, the current study found that the current Australian diet for a 71-year-old male produced 2.83 kg CO 2 e per day (with an EER of 2143 kcal), whereas Clay et al. [ 30 ] reported 2.38 kg CO 2 e per day (EER of 2129 kcal), and Ridoutt et al.
[ 32 ] calculated 3.1 kg CO 2 e per day (EER of 2276 kcal) for a male aged 71 years or above, all using the GWP* metric. These data illustrate that choosing specific foods within each food group can achieve small reductions in environmental impact. However, major reductions in the climate impact of diets will require substantial efforts from the agricultural and food processing industries [ 32 ]. Additional factors influencing consumer food choices, such as financial cost and convenience, are often overlooked in studies on dietary sustainability [ 60 , 61 ]. In Australia, there are marked inequalities in the affordability of a healthy and sustainable diet [ 62 ], with low-income households being more susceptible to diet-related chronic diseases [ 63 ]. Consequently, the recommended diet is often financially out of reach for those from lower socioeconomic groups [ 64–66 ]. Simultaneously, convenient options play a pivotal role in food selection in today's fast-paced world and amidst the evolving landscape of the contemporary food supply [ 67–69 ]. The current study demonstrates that it is possible to reduce the carbon footprint relative to typical dietary intakes while maintaining nutrient density and adherence to national dietary guidelines and accommodating the needs of individuals with limited cooking skills who may also be time-poor. However, this reduction comes with higher costs. In our analysis, the largest expenses in the HAD were associated with vegetables and fruits, as they comprised the largest portion of the diet, consistent with existing research [ 63 , 70 , 71 ]. Conversely, for the TAD, discretionary foods and grains remained significant expenses, collectively contributing a substantial portion of the overall cost. Research suggests that if convenience and reliance on ready-made meals were deprioritised, it would be feasible to achieve a healthy Australian diet up to 20 % cheaper than a typical Australian diet, depending on the geographical area [ 64–66 ]. Therefore, strategies aimed at lowering the costs of nutritious foods should be considered. Greater attention should be directed towards the long-term societal benefits and potential cost reductions associated with improved health outcomes and environmental preservation. This may involve implementing policies, or embedding systems in the food supply through innovative processes, that incentivise cost reductions for healthy foods that also have a lower carbon footprint. For example, if agriculture shifted towards more circular approaches and reduced its reliance on fossil fuels, it could significantly contribute to these goals. The use of precision agriculture may lead to more efficient planting, watering, and fertilisation, reducing waste and increasing yield, thereby lowering production costs. Encouraging local sourcing could reduce transportation costs and emissions, support local economies, and ensure fresher produce. Finally, food recovery programs could also reduce food waste and make healthy foods more affordable and accessible. There are several strengths to this study. Firstly, it addresses a gap by examining the climate footprint of two distinct diets using the GWP* metric while also considering convenience, cost, and adherence to healthy eating guidelines. This holistic approach caters to individuals with limited cooking skills or those seeking ready-made meals, ensuring a broader applicability of the findings.
All multi-ingredient foods and mixed dishes in the meal plans were dissected into individual ingredients, allowing a more precise assessment of their environmental footprint. However, limitations arise from the database's limited food options, requiring substitutions that could affect the accuracy of CO 2 emission estimates. Disaggregating foods into basic ingredient components also introduces subjectivity, potentially resulting in under- or over-estimation of the true CO 2 emission impact. However, the same standardised approach was applied to both diets, providing a consistent basis for comparison. Our study modelled diets for 34 healthy Australian adults, encompassing diverse demographics, despite the small sample size. However, the resulting diet plans may not be applicable to individuals with specific medical conditions, unique food preferences, or dietary restrictions, given that our participants were generally healthy and willing to consume the foods provided, potentially reducing the external applicability of the findings. This study also employed the GWP* metric, which is considered the best metric available for assessing the dietary footprint owing to its accurate treatment of short-lived pollutants such as methane [ 12 , 16 , 72 ]. The metric captures the effect of both long- and short-lived GHGs on temperature, providing a more comprehensive assessment of dietary footprints. However, this choice limits direct comparisons with other studies, which have often relied on the GWP 100 metric [ 9 , 10 ]. Nevertheless, the GWP* calculator does not include factors such as land-use change, food loss, waste, or CO 2 emissions from packaging and food preparation, potentially underestimating total CO 2 emissions. Likewise, although the food groupings primarily align with the Australian Dietary Guidelines [ 29 ], clearer handling of alternative protein sources such as legumes, which can be grouped with either vegetables or meat and alternatives, may improve the representation of their varying environmental impacts. However, this may also affect direct comparisons with other studies. Finally, affordability was assessed through a fortnightly food basket from a major supermarket, adjusted for promotional prices to reflect real-world grocery shopping practices. However, these costs would fluctuate depending on price promotions and the seasonal availability of products. Therefore, direct comparability of the results to other dietary patterns is limited, as this analysis represents one snapshot in time. Thus, interpretation of these results must be approached with caution. 5 Conclusion The current study indicates that a HAD is not only better for human health from a nutrition and disease prevention perspective, but also has a lower environmental footprint. A population shift from the TAD to a HAD dietary pattern could have substantial benefits. The major barrier is likely the affordability of this type of dietary pattern, particularly for people trying to achieve it using convenience foods. Therefore, incentives, policies or systems that can reduce these costs would have widespread benefit. Additionally, the current study presents valuable information for future trials where a HAD can easily be implemented by individuals. Even though climate neutrality of dietary patterns remains elusive in the current Australian food systems, the footprint can be reduced substantially through the promotion of healthier food choices.
In future studies, expansion of the GWP* database is needed to accurately and comprehensively assess CO 2 e impact. Moreover, additional environmental dimensions such as water scarcity, land use, food losses and biodiversity should be considered. Funding statement This research did not receive direct funding. However, the data used in this secondary analysis was supported by the National Health and Medical Research Council of Australia's Research Fellowship awarded to CEC ( L3 App2009340 ). CRediT authorship contribution statement Rachel E. Cobben: Writing – original draft, Visualization, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Clare E. Collins: Writing – review & editing, Conceptualization. Karen E. Charlton: Writing – review & editing, Methodology, Investigation. Tamara Bucher: Writing – review & editing, Methodology, Investigation. Jordan Stanford: Writing – review & editing, Writing – original draft, Visualization, Supervision, Project administration, Methodology, Investigation. Declaration of competing interest All authors declare no conflicts of interest with regard to the research, authorship and publication of this article. Appendix A Supplementary data Supplementary material Image 1 Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.ahjo.2024.100448 .
REFERENCES:
1. SWINBURN B (2019)
2. PORTNER H (2022)
3. ROSENZWEIG C (2020)
4. UN
5. RANGANATHAN J (2018)
6. FAO (2019)
7. CSIRO (2022)
8. STATISTA
9. HENDRIE G (2014)
10. BALTER K (2017)
11. RIDOUTT B (2017)
12. ALLEN M (2018)
13. (2014)
14. RIDOUTT B (2019)
15. CRIPPA M (2021)
16. LYNCH J (2020)
17. COSTA C (2021)
18. JANSSEN H (2018)
19. LIEM D (2019)
20. ZORBAS C (2018)
21. KAMPHUIS C (2015)
22. GRUNERT K (2011)
23. DAVIES A (2022)
24. HASANI A (2022)
25. NATIONALHEALTHANDMEDICALRESEARCHCOUNCIL
26. NATIONALHEARTFOUNDATIONOFAUSTRALIA (2019)
27. FERGUSON J (2023)
28. AUSTRALIANBUREAUOFSTATISTICS
29. NATIONALHEALTHANDMEDICALRESEARCHCOUNCIL (2013)
30. CLAY N (2023)
31. RIDOUTT B (2021)
32. CUCURACHI S (2019)
33. STANFORD J (2023)
34. NIKODIJEVIC C (2019)
35. ZEALANDFSAN (2014)
36. AUSTRALIANGOVERNMENT
37. NHMRC (2017)
38. HEARTFOUNDATION (2019)
39. JACKSON S (2018)
40. JUNG S (2019)
41. YANG Q (2011)
42. ODONNELL M (2014)
43. PEREZ V (2014)
44. C P (2017)
45. NESTEL P (2015)
46. AUSTRALIANBUREAUOFSTATISTICS (2023)
47. AUSTRALIANBUREAUOFSTATISTICS (2020)
48. DEPARTMENTOFINFRASTRUCTURET
49. BERNAL B (2018)
50. GONZALEZGARCIA S (2018)
51. NABIPOURAFROUZI H (2023)
52. ALEKSANDROWICZ L (2016)
53. CLUNE S (2017)
54. BOEHM R (2019)
55. TUKKER A (2011)
56. VEERAMANI A (2017)
57. VIEUX F (2013)
58. HENDRIE G (2022)
59. GLANZ K (1998)
60. TURRELL G (2002)
61. BAROSH L (2014)
62. WARD P (2013)
63. LEE A (2021)
64. LEE A (2020)
65. LEE A (2016)
66. BRUNNER T (2010)
67. ELLIOTT P (2024)
68. YEH M (2008)
69. RAO M (2013)
70. LEWIS M (2021)
71. CAIN M (2019)
72. AMERICANHEARTASSOCIATION
73. WHOGUIDELINESAPPROVEDBYTHEGUIDELINESREVIEWCOMMITTEE (2012)
|
10.1016_j.jocmr.2024.100853.txt
|
TITLE: Kiosk 7R-TC-11 LVOT Obstruction in Hypertrophic Cardiomyopathy: Mechanistic Insights Alternative to Venturi
AUTHORS:
- Vora, Keyur
- Patel, Vyom
- Surana, Deval
- Bhatt, Kinjal
- Christopher, Johann
ABSTRACT: No abstract available
BODY:
Background: Hypertrophic cardiomyopathy (HCM) is a genetic cardiovascular disorder characterized by left ventricular hypertrophy and an array of clinical manifestations. One of the prominent features of HCM is left ventricular outflow tract (LVOT) obstruction, often attributed to the Venturi effect. LVOT obstruction can lead to debilitating symptoms like syncope and, in severe cases, sudden cardiac death. Recent studies have proposed that LVOT obstruction in HCM may involve mechanisms beyond the Venturi effect. This study aims to explore these alternative mechanisms, providing insights into the pathophysiology of LVOT obstruction. Methods: A retrospective analysis was conducted on a cohort of 16 patients diagnosed with HCM. Clinical data, including patient demographics, medical history, and symptoms, were collected. Transthoracic echocardiography (TTE) provided information on LVOT velocity and the presence of systolic anterior motion (SAM) of the mitral valve. Cardiac magnetic resonance (CMR) imaging was utilized to assess left ventricular mass, ejection fraction (LVEF), LVOT diameter, myocardial fibrosis, and the thickness of the mitral valve leaflets. Comorbidities like hypertension, diabetes, and dyslipidemia were also evaluated. The data were analyzed to unravel the potential mechanisms driving LVOT obstruction beyond the traditional Venturi effect. A representative case is shown in Figure 1 and mechanistic insights are illustrated in Figure 2. Results: The study cohort had an average age of 53 ± 10 years, with HCM subtypes distributed as follows: asymmetric septal (43.8%), symmetric (25%), and apical (31.3%). Notably, comorbidities such as hypertension (56.3%) and dyslipidemia (56%) were prevalent. Symptoms ranged from angina (18.8% ± 8.8%) and dyspnea (25% ± 12.1%) to syncope (12.5% ± 7.9%). CMR imaging provided insights into the structural and functional aspects of the myocardium and valvular mass. An average LV mass of 270 ± 24 grams and an LVEF of 62 ± 5% were observed. Myocardial fibrosis, manifesting as a focal area of late gadolinium enhancement (LGE) in the basal septum, was evident in 9 patients. Interestingly, valvular features included an average AMVL and PMVL thickness of 6.04 mm (± 2.03 mm) and 2.94 mm (± 0.96 mm), respectively, with all cases showing varying degrees of mitral regurgitation and the presence of SAM, further highlighting the significance of valvular abnormalities in LVOT obstruction. Conclusion: Our findings shed light on alternative mechanisms that challenge the conventional understanding of LVOT obstruction in HCM, going beyond the Venturi effect. The consistent presence of LVOT obstruction in our cohort underscores the complex interplay between the thickened AMVL and the septum during systole. Notably, SAM primarily arises from a robust vertical pushing force applied to the thickened AMVL, rather than the relatively weaker Venturi force. These findings redefine our understanding of LVOT obstruction in HCM, emphasizing a multifaceted approach to improve outcomes. Author Disclosure: K VORA : Nothing to disclose; V Patel : N/A; D Surana : N/A; K Bhatt : N/A; J Christopher : N/A
REFERENCES:
No references available
|
10.1016_j.chbr.2025.100761.txt
|
TITLE: Development of pixel-based facial thermal image analysis for emotion sensing
AUTHORS:
- Tang, Budu
- Sato, Wataru
- Shimokawa, Koh
- Hsu, Chun-Ting
- Kochiyama, Takanori
ABSTRACT:
Thermal imaging technology, known for its noncontact and noninvasive nature, offers distinct advantages in computerized emotion sensing. In the literature, a decrease in nose-tip temperature has been associated with dynamic subjective arousal. However, these studies were limited by their focus on a few regions of interest, neglecting a comprehensive analysis of the entire face, and not accounting for the temporal dynamics of thermal changes. To overcome these limitations, we propose an analytical method for facial thermal images using statistical parametric mapping (SPM), which was developed for functional brain image analysis. We developed semiautomated preprocessing protocols to effectively realign and standardize facial thermal images. To validate these analyses, we recorded the thermal images of participants’ faces and assessed dynamic valence and arousal ratings while they observed emotional films. The proposed SPM analyses revealed significant negative associations with dynamic arousal ratings at the nose tip and forehead. The analyses incorporating temporal disparity revealed more forehead clusters than the analyses assuming no delay. These findings validate the proposed pixel-based facial thermal image analysis method using SPM. The results suggest that computerized pixel-based analysis of facial thermal images can be used to estimate dynamic emotional states, with potential applications in various human behavioral fields, including mental health diagnosis and marketing research.
BODY:
1 Introduction Thermal imaging technology has significant potential for the computerized and objective sensing of emotional states. Noninvasive and noncontact diagnostic methods are highly valued in medicine ( Lahiri et al., 2012 ) and human–computer interaction ( Hossain & Assiri, 2020a ). Thermal imaging can monitor emotional states without direct skin contact by capturing and analyzing infrared radiation emitted from the body. Thermal imaging also does not depend on ambient lighting, thereby enhancing its versatility in various environments. The underlying physiological mechanism involves the autonomic nervous system, particularly its sympathetic nervous system (SNS) branch, which is activated during emotional arousal ( Wang et al., 2018 ). SNS activation primarily induces vasoconstriction, narrowing small arteries and blood vessels in peripheral areas such as the skin and thereby reducing blood flow to these regions ( Alba et al., 2019 ). The reduced blood flow then lowers the skin surface temperature, which can be detected using thermal imaging techniques. Imaging reveals detailed patterns of thermal changes across different body regions, providing insights into the complex interplay between emotional stimuli and physiological responses. In particular, facial thermal imaging is useful for this purpose because several facial regions exhibit SNS activation effects ( Hossain & Assiri, 2020b ; Kastberger & Stachl, 2003 ; Zhao et al., 2011 ). Previous studies have reported that facial thermal signals are associated with subjective emotional states ( Kosonogov et al., 2017 ; Salazar-López et al., 2015 ; Sato et al., 2020 ). These studies primarily used region of interest (ROI) analysis to investigate the relationship between facial thermal signals and emotional states and assessed their correlations. Several studies have consistently shown that, among facial regions, the temperature of the nose tip is negatively associated with subjective arousal states ( Eom & Sohn, 2012 ; Jian et al., 2019 ; Sato et al., 2020 ). For example, Sato et al. (2020) measured facial thermal changes and dynamic (second-by-second) ratings of valence and arousal while participants watched emotional films. ROI analysis revealed a negative correlation between nose-tip temperature and arousal ratings. Furthermore, other regions demonstrated thermal responses associated with emotional experiences, although such results are scarce and mixed. For instance, some studies have observed a decrease in forehead temperature in response to heightened emotional arousal ( Mostafa et al., 2013 ; Love, 1980 ). Another study reported that exposure to a startling sound decreased the temperature of the cheek and increased that of the periorbital region ( Pavlidis et al., 2001 ). In summary, these findings suggest that temperature changes in various facial regions, including the nose tip, forehead, cheek, and periorbital area, can reflect emotional experiences.
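For orientation, the ROI analyses summarised above typically amount to a simple time-series correlation between a regional temperature trace and the dynamic ratings; a brief, hedged sketch with synthetic data follows.

```python
# Hedged sketch of an ROI-style analysis: correlating a second-by-second
# nose-tip temperature series with dynamic arousal ratings. Both series are
# synthetic placeholders constructed to co-vary, not recorded data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
arousal = np.clip(5 + np.cumsum(rng.normal(0, 0.1, 300)), 1, 9)     # 300 s of ratings
nose_tip = 34.0 - 0.05 * arousal + rng.normal(0, 0.02, 300)         # deg C

r, p = pearsonr(nose_tip, arousal)
print(f"r = {r:.2f}, p = {p:.3g}")   # strongly negative for this toy example
```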
By not capturing the full spectrum of thermal variations, researchers may overlook nuanced changes that contribute to accurate emotion detection, potentially reducing the effectiveness of their models in real-world applications. To overcome this limitation, researchers must perform pixel-level analysis ( Ordun et al., 2020 ). Pixel-level analysis provides a more comprehensive view of the thermal patterns across the entire face and captures subtle changes that may be overlooked when focusing on ROIs. However, this approach also introduces statistical challenges, particularly the increased risk of type I errors (false positives) due to multiple comparisons across several pixels. To address this limitation, we propose using statistical parametric mapping (SPM), which was developed for neuroimaging data analysis ( Friston et al., 1994a ). SPM provides pixel-level analysis capabilities that allow for a detailed examination across the entire facial area, rather than just predefined ROIs. This enables a more accurate and comprehensive mapping of thermal changes associated with emotional responses. Furthermore, SPM uses random field theory (RFT) ( Worsley et al., 1996 ) to correct the familywise error (FWE) rate, ensuring that conclusions regarding the correlations between thermal signatures and emotional states are robust and reliable. As the secondary objective of this study, we explored the effectiveness of incorporating the hemodynamic response function (HRF) into our SPM analysis of facial thermal images. In previous studies that analyzed facial thermal images for emotion sensing, temporal disparities were disregarded (e.g., Sato et al., 2020 ). Thermal changes associated with emotional states reflect blood flow alterations associated with SNS activation. Several physiological studies have demonstrated that these changes occur a few seconds after the underlying nervous system activity ( Love, 1980 ). Functional magnetic resonance imaging (fMRI) studies have addressed this by applying the HRF, which uses Gamma functions to model the delay and undershoot associated with blood flow changes ( Friston et al., 1994b; Lindquist et al., 2009 ). A previous fMRI study reported that blood flow changes in the forehead could be accurately detected by applying the HRF ( Kirilina et al., 2012 ). Some previous studies also reported that peripheral skin-temperature changes measured using thermography exhibited rise times of approximately 3–10 s ( Sagaidachnyi et al., 2017 ; Sonkusare et al., 2019 ), comparable to the ∼6-s peak latency of the HRF. Based on these data, although relevant information remains scarce, we expected that incorporating the HRF would enhance the temporal modeling of facial thermal signals. To analyze facial thermal images using SPM software, we developed a comprehensive preprocessing protocol. First, a custom software tool was created to identify and track an arbitrary number of facial landmarks in thermal imaging videos. This software employs tracking algorithms to monitor the landmarks' positions across frames, converting these coordinates into a format compatible with Blender ( Soni et al., 2023 ) for further processing. This step corresponds to the realignment process in SPM, ensuring that the facial features are consistently aligned across all frames. Next, we performed nonlinear alignment using Blender to accurately align the thermal images.
This step involves UV mapping, a common process in computer graphics where 2D thermal images are projected onto a 3D facial model for texture mapping ( Villanueva & Villanueva, 2022 , pp. 117–149). This process is analogous to the coregistration step in SPM, allowing for precise alignment of thermal data with the anatomical structure of the face, accounting for complex geometries and variations in facial expressions. Once aligned, the images were converted into the Neuroimaging Informatics Technology Initiative (NIfTI) format, which is recognized by the SPM software. Subsequently, the aligned images were smoothed using SPM's inbuilt functions to prepare them for statistical analysis. Furthermore, we incorporated the HRF into the analysis. The HRF is used to model the temporal dynamics of the thermal signal changes, similar to its application in fMRI studies. By convolving the thermal data with the HRF, we can account for the delayed and sustained physiological response to emotional stimuli. The preprocessing is followed by a two-stage random-effects analysis ( Holmes & Friston, 1998 ): a first-stage analysis for individual data, where each participant's thermal data is analyzed separately, followed by a second-stage group analysis to identify common patterns across participants. To validate our analyses, we conducted an experiment involving 21 participants, each of whom watched a series of five videos designed to elicit distinct emotional states. During the experiment, thermal video data were collected, and the participants provided dynamic self-report measures of their emotional valence and arousal using a dynamic reporting tool, offering a moment-to-moment account of their subjective experiences. We utilized the dynamic self-reports to conduct regression analyses on the facial temperature data, aiming to identify pixel-wise correlations between facial thermal signals and valence or arousal levels. Based on prior findings, we predicted that nose-tip temperature would show a negative correlation with arousal. To evaluate this prediction, we conducted an ROI analysis with small volume correction (SVC) to focus on specific facial areas. In addition, we performed whole-face, pixel-level analyses while controlling for the FWE rate to ensure the statistical robustness of the results. Finally, we compared the outcomes of models with and without the inclusion of the HRF to assess its impact on the thermal–emotional associations. Overall, our study is the first to perform pixel-level analysis of facial thermal responses to emotion, offering a novel and detailed perspective on the thermal signatures of affective states. 2 Methods 2.1 Participants This study recruited 23 Japanese volunteers. Data from two participants were excluded due to severe head motion artifacts during recording, resulting in a final sample of 21 participants (11 females; mean age = 22.3 years, SD = 3.17). The sample size was determined based on previous research that examined the relationship between nose-tip temperature changes and dynamic arousal ratings ( Kosonogov et al., 2017 ). The inclusion criteria were as follows: willingness to participate in subjective and physiological measurements; normal or corrected-to-normal vision without the use of glasses; Japanese as the first language; and no neurological or psychiatric issues. The exclusion criterion was previous participation in experiments using the emotional film clips employed in our study.
All participants provided written informed consent after being fully informed about the experimental procedures. This study received approval from the local ethics committee of the Unit for Advanced Studies of the Human Mind, Kyoto University and was conducted in accordance with institutional ethical standards and the Declaration of Helsinki. This study is part of a broader investigation into the relationship between subjective experiences and physiological signals, with additional findings regarding dynamic ratings and electrophysiological measures reported elsewhere ( Sato et al., 2020 ). 2.2 Apparatus The experiments were controlled using Presentation software (Neurobehavioral Systems, Berkeley, CA, USA) running on an HP Z200 SFF Windows computer (Hewlett-Packard Japan, Tokyo, Japan). Visual stimuli were displayed on a 19-inch cathode ray tube monitor (HM903D-A; Iiyama, Tokyo, Japan) with a resolution of 1024 × 768 pixels. An additional Windows laptop (CF-SV8; Panasonic, Tokyo, Japan) and a wired optical mouse (MS116, Dell, Round Rock, TX, USA) were used for cued-recall dynamic ratings. An A655sc infrared thermal imaging camera and Research IR Max software (v4.40; FLIR Systems, Wilsonville, OR, USA) were used to acquire facial thermal images. The camera was placed next to the monitor and was set to capture the participants’ entire faces at a spatial resolution of 640 horizontal × 480 vertical pixels at a sampling rate of 50 Hz. 2.3 Stimuli Five films were used to evoke a spectrum of emotions: “Cry Freedom” (highly negative, anger), “The Champ” (moderately negative, sadness), “Abstract Shapes” (neutral), “Wild Birds Of Japan” (moderately positive, contentment), and “M-1 Grand Prix The Best 2007–2009” (highly positive, amusement). Gross & Levenson (1995) and Sato et al. (2020) developed the first three and the latter two film stimuli, respectively. Their effectiveness in eliciting target emotions was validated in previous studies using Japanese samples ( Sato et al., 2007; Sato & Kochiyama, 2022 ). The mean ± SD duration of these film stimuli was 168.4 ± 17.8 s (range: 150–196 s). Note that the emotional categories (e.g., anger) were used only to label the films because the films could elicit responses from multiple emotional categories, as reported in previous studies using ratings for multiple emotional categories ( Gross & Levenson, 1995 ; Sato et al., 2007 ). We were interested in individual-specific associations between the dynamic subjective valence/arousal ratings and facial thermal responses. An additional film, “Silence of the Lambs” (from Gross & Levenson, 1995 ), was used for the practice trial. The stimulus resolution was 640 × 480 pixels, corresponding to visual angles of approximately 25.5° and 11.0°. 2.4 Procedure The experiments were performed in a soundproof, electrically shielded chamber. The temperature was maintained at 23.5–24.5 °C and monitored using a TR-76Ui data logger (T&D Corp., Matsumoto, Japan). The participants were informed that the purpose of the study was to measure subjective emotional experiences and physiological signals in the skin while viewing films. The participants were instructed to rate their emotional experiences in terms of emotional valence and arousal.
To facilitate their understanding of these ratings, we showed the affect grid ( Russell et al., 1989 ) and explained that the valence, which ranges from positive to negative, represents the qualitative component, whereas arousal, which ranges from low to high, reflects the intensity or energy of either positive or negative emotions. The participants were provided approximately 10 min to acclimate to the chamber. Afterward, the physiological data acquisition session was conducted first, followed by the dynamic subjective rating session ( Fig. 1 ). During the physiological data acquisition session, five test films were presented after one practice film. The film presentation order was pseudorandomized. For each trial, after a 1-s fixation point and 10-s white screen (pre-trial baseline), each film was presented (148–206 s). After another 10-s white screen (post-trial baseline), the affect grid ( Russell et al., 1989 ) was presented to assess emotional valence and arousal on a 9-point scale. The participants were instructed to focus on the fixation point, watch the film, and rate their overall subjective experience (valence and arousal) while watching the film by pressing the keys. After the responses were entered, the screen went black during the inter-trial interval, which was controlled to vary randomly from 24 to 30 s. This inter-trial interval was used based on the notion that the 24-s interval is standard for recording skin conductance responses, which measure SNS activation ( Breska et al., 2011 ); however, see the Discussion for possible carry-over effects. Physiological data were continuously recorded during all trials. During the dynamic subjective rating session, all stimuli were presented on the monitor twice more, and the 9-point scales for valence or arousal were simultaneously displayed on another laptop. The participants were instructed to recall their subjective emotional experiences during the initial viewing and continuously rate their experiences in terms of the valence or arousal dimensions by moving the mouse. The coordinates of the mouse were sampled at 10 Hz. The participants first rated valence, followed by arousal. This cued-recall procedure was used to acquire continuous ratings for valence and arousal, which are difficult to assess simultaneously during the initial viewing. Previous studies have reported that cued-recall continuous ratings were strongly positively correlated with continuous ratings for emotional films ( Mauss et al., 2005 ; Sato et al., 2020 ). 2.5 Data analysis 2.5.1 Workflow In this study, we propose a workflow for analyzing facial thermal images to investigate emotional responses, as illustrated in Fig. 2 . The process begins by acquiring original thermal images, followed by facial registration via UV mapping. The UV map of the initial frame serves as the basis for automatically rendering subsequent frames. These images were then preprocessed by cropping the facial region, converting the 2D thermal images to the NIfTI format by adding an empty dimension, and subsequently applying Gaussian smoothing. After aligning all facial images to a common template via UV mapping, a fixed rectangular region was defined to encompass the facial area. This region was then used to crop all images uniformly, as illustrated in Fig. 2 . The NIfTI format, commonly used in neuroimaging, supports efficient storage and analysis of multi-dimensional image data. 
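As a rough illustration of the cropping-to-NIfTI step described above, the sketch below wraps a single cropped 2D thermal frame into a 3D NIfTI volume by adding an empty dimension and applies an equivalent Gaussian smoothing. This is our own minimal reconstruction, not the authors' pipeline: the nibabel/scipy stack, the file name, and the 0.291-mm pixel size written into the affine are assumptions for illustration (the authors' actual scripts are in their Supplementary Material, and smoothing was performed with SPM's inbuilt functions).

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

PIXEL_MM = 0.291   # assumed in-plane pixel size (mm), from the facial-width calibration
FWHM_MM = 2.3      # assumed smoothing kernel, matching the SPM preprocessing description

def smooth_frame(frame_2d: np.ndarray) -> np.ndarray:
    """Gaussian smoothing with an isotropic FWHM kernel (SPM itself uses its own routine)."""
    sigma_px = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / PIXEL_MM  # FWHM -> sigma, in pixels
    return gaussian_filter(frame_2d.astype(np.float32), sigma=sigma_px)

def frame_to_nifti(frame_2d: np.ndarray, out_path: str) -> None:
    """Wrap a cropped 2D thermal frame into a 3D NIfTI volume with a singleton z-axis."""
    vol = frame_2d.astype(np.float32)[:, :, np.newaxis]       # add the empty dimension
    affine = np.diag([PIXEL_MM, PIXEL_MM, PIXEL_MM, 1.0])     # isotropic voxel size in mm
    nib.save(nib.Nifti1Image(vol, affine), out_path)

# Example with a synthetic placeholder frame (hypothetical size and file name)
frame = np.random.rand(1080, 1920).astype(np.float32)
frame_to_nifti(smooth_frame(frame), "sub01_frame0001.nii")
```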
This conversion, performed using Python, was necessary for subsequent analysis using SPM, which requires input images in the NIfTI format. The preprocessed images were subjected to individual analysis, in which the thermal image sequences of each subject were compared against their corresponding self-reported data. The accompanying subfigure shows the design matrix used in this analysis, where each row corresponds to an image. The three columns, from left to right, represent the HRF-convolved main effect of all images, the HRF-convolved arousal parametric modulator, and the session-mean constant term. The participants' moment-to-moment arousal ratings were used to construct the parametric modulator to examine how the thermal signal varies with subjective emotional intensity. This approach enables the modeling of graded responses to continuous behavioral variables by modulating the amplitude of event-related regressors ( Zhou et al., 2021 ). Brighter grayscale values in the figure denote larger regressor amplitudes. Contrast images were computed for the regressors of interest. Finally, a group analysis, using summary statistics such as one-sample t -tests, was performed on the first-level contrast images, and the results were visualized through SPM and effect size plots. The subfigure shows the second-level design matrix for the group analysis: each row corresponds to one of the 21 participants, and the single column of ones implements a one-sample t -test that determines whether the mean contrast value across participants differs from zero. This workflow ensures consistency and validity in the analysis of data from individual subjects to the group level. 2.5.2 Preprocessing: facial feature tracking software development Initially, the data were visually inspected, and two participants whose recordings were heavily affected by large head motion artifacts were excluded. Furthermore, a few frames affected by significant occlusions or excessive motion were excluded during preprocessing. Minor head movements typically affected only the peripheral areas of the face and were considered acceptable. We developed thermal imaging facial landmark tracking software ( Fig. 3 ) that allows the precise identification and automatic tracking of facial points across video frames. The system allows users to set and manually mark landmarks in the first frame, which are then followed automatically in subsequent frames. In our implementation, two landmarks located in the glabella region were selected for tracking. The black bar shown in the image was added to protect participant privacy. This region is relatively stable and is minimally affected by facial expression changes; thus, it is suitable for consistent tracking of thermal images. Moreover, the use of only two landmarks helps avoid potential tracking errors and confusion that may arise from placing too many points. The coordinate data of these landmarks were saved and converted into a format tailored for subsequent processing needs. The software outputs normalized coordinates with inverted y-values, preparing the data for efficient integration into subsequent rendering and analysis workflows. The primary reason for using this software is to facilitate semiautomated alignment of facial features in Blender. Our codes are presented in the Supplementary Material.
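To make the tracking step concrete, a minimal sketch is given below: two manually seeded glabella landmarks are followed across frames with pyramidal Lucas–Kanade optical flow, and the coordinates are normalized with inverted y-values, matching the Blender-ready output format described above. The use of OpenCV, the tracker parameters, and the seed coordinates are our assumptions; the authors' own tool and its exact algorithm are available only in their Supplementary Material.

```python
import cv2
import numpy as np

def track_landmarks(video_path: str, seeds_xy: list) -> list:
    """Track manually seeded landmarks across a thermal video; return normalized coordinates."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = np.array(seeds_xy, dtype=np.float32).reshape(-1, 1, 2)
    h, w = prev_gray.shape
    coords = [pts.reshape(-1, 2).copy()]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade optical flow; lost points (status == 0) would need handling
        pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None,
            winSize=(21, 21), maxLevel=3,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        coords.append(pts.reshape(-1, 2).copy())
        prev_gray = gray
    cap.release()
    # Normalize to [0, 1] and invert y, matching the Blender-ready output format
    return [[(x / w, 1.0 - y / h) for x, y in frame_pts] for frame_pts in coords]

# Example: two glabella seeds marked on the first frame (hypothetical pixel coordinates)
tracks = track_landmarks("participant01_thermal.avi", [(315.0, 180.0), (330.0, 182.0)])
```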
2.5.3 Preprocessing: facial registration via UV mapping In our study, the process of facial registration includes mapping 2D thermal imaging data onto a 3D facial template. This is crucial for achieving precise facial alignment, and it was facilitated by Blender, a comprehensive open-source 3D computer graphics software toolset that supports the intricate tasks of 3D modeling and rendering. To reconstruct the facial model in three dimensions, FaceBuilder, a Blender add-on, was used. This add-on generates a standardized 3D facial template, providing a consistent baseline for facial reconstructions. This consistency is particularly advantageous for comparative analyses of the subjects captured in our thermal videos. To initiate alignment similar to the realignment process in SPM, we began by selecting the facial mesh region of the 3D model and performing UV mapping from the current camera view. UV mapping unwraps the 3D surface of a mesh and flattens it into a 2D coordinate system, allowing each vertex on the 3D model to be assigned a corresponding location on a 2D image ( Flavell, 2010 ). The coordinates U and V define this 2D texture space and are analogous to the X, Y, and Z axes in 3D space, whereas W is typically reserved for quaternion-based rotation representations. By projecting the visible surface of the 3D mesh onto this UV space, a pixel-wise correspondence was established between the geometry of the 3D facial model and the thermal image captured from the current view. The mesh was then manually adjusted, through translation, scaling, and rotation, so that it aligned with the facial contours in the first frame of each video. After the overall shape was roughly aligned, we refined key facial features, such as the eyes, nose, and mouth, using Blender's proportional editing function, which facilitates smooth adjustments by moving neighboring vertices together. This preparation step is critical because it establishes a standardized frontal view of the face ( Fig. 4 ), serving as the foundation for rendering all subsequent frames. Much like the normalization procedures in SPM, this standardization reduces perspective distortions inherent in the original 2D thermal recordings and preserves accurate spatial relationships across participants and sessions. Once the initial UV map was established, analogous to fitting a mask over the face, we used the facial landmark coordinates obtained from the tracking software to automatically render the remaining frames. The manually created UV map dynamically adapted to the movement of facial landmarks across time, allowing us to apply frame-by-frame rotations and translations while maintaining a consistent facial orientation. These rotation and translation steps were performed to align each frame to the UV-mapped first frame, not for facial recognition purposes; a sketch of this two-point rigid alignment is given below. This alignment ensures that all thermal images share a common spatial reference, which is essential for voxel-wise analysis using SPM. This approach ensures that the “mask” remains properly aligned throughout the rendering process, enhancing both the spatial accuracy and computational efficiency.
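The frame-by-frame rotations and translations mentioned above can be illustrated with a small sketch: given the two tracked glabella landmarks, a rigid transform (rotation plus translation) mapping each frame's landmark pair onto the first frame's pair can be estimated in closed form. The function and coordinate values below are hypothetical illustrations of the idea, not the authors' Blender implementation.

```python
import numpy as np

def rigid_from_two_points(p_ref: np.ndarray, p_cur: np.ndarray):
    """Rotation and translation mapping the current landmark pair onto the reference pair."""
    v_ref = p_ref[1] - p_ref[0]
    v_cur = p_cur[1] - p_cur[0]
    theta = np.arctan2(v_ref[1], v_ref[0]) - np.arctan2(v_cur[1], v_cur[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = p_ref[0] - R @ p_cur[0]
    return R, t

# Glabella landmarks: reference pair from frame 1, tracked pair from frame k (hypothetical values)
p_ref = np.array([[0.49, 0.62], [0.52, 0.62]])
p_cur = np.array([[0.47, 0.60], [0.50, 0.61]])
R, t = rigid_from_two_points(p_ref, p_cur)
aligned = (R @ p_cur.T).T + t   # the same transform realigns the whole UV-mapped frame
```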
2.5.4 Preprocessing: SPM analysis SPM analyses were performed using SPM12 revision 7487 ( http://www.fil.ion.ucl.ac.uk/spm ), implemented in MATLAB R2020a (MathWorks, Natick, MA, USA). The proposed SPM preprocessing begins with the cropping of aligned 2D thermal images to exclusively feature the facial region. This preliminary preprocessing step was necessary for directing the focus of our analysis to the regions of interest by eliminating extraneous background data and reducing computational demand. Following this, the cropped images were converted to NIfTI format to ensure compatibility with the SPM software, thereby enabling their effective incorporation into the SPM analytical workflow. The final image resolution was 1920 × 1080 pixels and the facial width spanned 539 pixels. Considering that the average facial width of Japanese individuals is approximately 157 mm, this corresponds to a pixel representing approximately 0.291 mm. To enhance the quality of the images for statistical examination, we used a smoothing technique with a 2.3-mm full width at half-maximum isotropic Gaussian kernel (the default setting for the analysis of functional magnetic resonance images), aiming to diminish noise and augment the signal-to-noise ratio ( Chen & Calhoun, 2018 ). 2.6 Statistical analysis with SPM Two-stage random-effects analyses were performed to identify significantly activated pixels at the population level, similar to prototypical SPM analyses for functional studies ( Holmes & Friston, 1998 ). First, we performed a single-subject analysis using general linear modeling ( Friston et al., 1994c ). The second-by-second valence or arousal rating-related regressor was modeled by convolution with a canonical HRF ( Lindquist et al., 2009 ). The HRF-convolved regressor was obtained as T_conv(t) = ∫₀^∞ T_raw(t − τ) h(τ) dτ, where T_raw(t) is the raw rating time series and h(τ) is the canonical HRF. We used a high-pass filter composed of a discrete cosine basis function with a cutoff period of 128 s to eliminate artifactual low-frequency trends. For comparison, we constructed a model without the canonical HRF. The contrast images of each effect (the positive or negative effects of valence or arousal) from the first-level single-subject analysis were entered into a one-sample t -test model for the second-stage random-effects analysis. Initially, the contrast of the negative association with the arousal ratings was examined. Pixels were identified as significantly activated when they reached the extent threshold of p < 0.05 corrected for multiple comparisons, with a height threshold of p < 0.01 (uncorrected). For the analysis of the nose tip, which was our ROI, we used small volume correction. The region was anatomically defined by 0.87-mm-radius spheres centered on the tip of the nose, which was comparable with that reported in a previous study ( Kosonogov et al., 2017 ). Effects in other areas were then explored using the extent threshold, and the FWE rate was corrected over the entire face. In particular, we examined additional positive and negative associations with valence and arousal across the entire facial image using the same voxel-wise and cluster-level thresholds. To account for multiple comparisons in this exploratory analysis, an FWE correction was applied across the entire facial surface. The activation area was superimposed on a representative normalized face image of the participants in this study to illustrate the anatomical regions in the figures. We conducted preliminary analyses examining the effect of participant sex by constructing the above models with covariates for the main effect of sex and the interaction between the rating and sex. We found that all reported clusters remained significant. This factor was disregarded based on these results and our small sample size, from which sex differences could not be concluded.
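To illustrate the regressor construction defined above, the sketch below convolves a raw 10-Hz rating trace T_raw with a double-gamma canonical HRF h and removes slow drifts with an SPM-style discrete cosine basis set using the 128-s cutoff. The gamma parameterization (response peak near 6 s, undershoot near 16 s, ratio 1:6) mirrors common SPM defaults; the function names and the synthetic trace are our assumptions, since the actual analysis used SPM12's built-in routines.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt: float, duration: float = 32.0) -> np.ndarray:
    """Double-gamma HRF h(tau): peak ~6 s, undershoot ~16 s, peak/undershoot ratio 1:6."""
    t = np.arange(0.0, duration, dt)
    h = gamma.pdf(t, 6.0) - gamma.pdf(t, 16.0) / 6.0
    return h / h.sum()

def dct_highpass(y: np.ndarray, dt: float, cutoff_s: float = 128.0) -> np.ndarray:
    """Regress out a low-frequency discrete cosine basis set (SPM-style high-pass filter)."""
    n = len(y)
    order = int(np.floor(2.0 * n * dt / cutoff_s))   # number of low-frequency regressors
    if order < 1:
        return y
    k = np.arange(1, order + 1)
    X = np.cos(np.pi * np.outer((np.arange(n) + 0.5) / n, k))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

dt = 0.1                                # ratings sampled at 10 Hz
T_raw = np.random.rand(1800)            # placeholder 3-min arousal trace
T_conv = np.convolve(T_raw, canonical_hrf(dt))[: len(T_raw)]  # T_conv = T_raw convolved with h
T_filt = dct_highpass(T_conv, dt)       # drift-corrected regressor for the first-level GLM
```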
In addition, we conducted preliminary analyses using the canonical HRF with its first and second temporal derivatives (i.e., delay and dispersion) to assess the appropriateness of the HRF fitting ( Friston et al., 1998 ). The results showed the same significant patterns as the analysis using the canonical HRF alone, with no significant effects of either delay or dispersion. Therefore, we report the results of the analysis using the canonical HRF to accommodate temporal disparities in facial thermal changes. 3 Results 3.1 Subjective ratings Fig. 5 shows the group mean time courses of dynamic valence and arousal ratings. The figures indicate that the emotional film clips elicited dynamic changes in subjective valence and arousal experiences. 3.2 SPM analysis for subjective–thermal associations First, we performed the analyses including the HRF. To test our prediction of the association between dynamic arousal ratings and nose-tip temperature, we initially performed an ROI analysis. As predicted, a significant negative association with dynamic arousal ratings was observed for the nose tip ( Table 1 and Fig. 6 ). Next, we searched for other regions associated with dynamic arousal ratings after correcting for the entire face. Two clusters in the forehead had significant negative associations with dynamic arousal ratings ( Table 1 ). No other significant activation areas were detected. For the analysis of associations with dynamic valence ratings, no significant activation areas were observed. To test the effect of the HRF, we analyzed the negative association with arousal ratings without the HRF. The findings were similar to those of the analysis with the HRF, except that only one significant cluster was found in the forehead ( Table 2 ). 4 Discussion Our ROI analysis using pixel-level SPM with FWE rate correction revealed a negative correlation between emotional arousal and nose-tip temperature. These results agree with those of previous studies showing a negative association between nose-tip temperature and subjective arousal ratings, independent of valence ( Pavlidis et al., 2001 ; Ordun et al., 2020 ; Love, 1980 ). These results are also compatible with anatomical and physiological data indicating the presence of arteriovenous anastomoses in the nose, where cutaneous vasoconstriction via SNS activation reduces skin blood flow and, in turn, skin temperature ( Alba et al., 2019 ; Kosonogov et al., 2017 ). These results indicate the essential role of nose-tip temperature changes in estimating subjective emotional arousal dynamics and validate the SPM analysis of facial thermal images. More importantly, our pixel-level SPM analysis of the entire face revealed a negative correlation between emotional arousal and forehead temperature. This finding agrees with previous findings showing that forehead temperature decreases during emotionally arousing situations ( Pavlidis et al., 2001 ; Ordun et al., 2020 ). This result is consistent with anatomical findings of rich SNS innervation in the forehead ( Simon et al., 2009 ) and psychophysiological studies showing electrodermal activity in the forehead during emotional arousal ( Mazloumi Gavgani et al., 2017 ; van Dooren et al., 2012 ), reflecting SNS activity.
However, to the best of our knowledge, no studies have investigated the association between forehead temperature and dynamic changes in subjective emotional arousal; a previous study that measured facial thermal images and investigated their association with dynamic ratings of emotional arousal did not include the forehead in the ROIs ( Sato et al., 2020 ). Our results provide the first evidence that forehead temperature changes are negatively associated with dynamic changes in subjective emotional arousal. The association between forehead temperature changes and subjective arousal dynamics, observed in our entire-face analysis, highlights the advantage of using SPM pixel-level analysis to explore the entire face rather than relying solely on predefined ROIs, as in previous studies ( Hossain & Assiri, 2020b ; Jian et al., 2019 ). This methodological advantage is important because the relationships between facial regions and emotional experiences or other psychological activities remain incompletely understood ( de Aguiar Neto & Rosa, 2019 ; Kalaganis et al., 2021 ). Our method allows us to comprehensively explore temperature changes across the entire facial region. A further advantage of our proposed method is the use of RFT ( Worsley et al., 1996 ) within our SPM framework to effectively control the FWE rate, which inevitably arises when numerous pixels across the entire face are explored. RFT provides a robust statistical model suitable for analyzing spatial dependencies among pixels in thermal images, which are treated as part of a continuous field. By applying FWE corrections, SPM adjusts the significance thresholds for all pixels, which maintains the integrity of the results by controlling the likelihood of false positives. This statistical rigor enhances the reliability of our analyses and strengthens our confidence in the correlations between thermal signatures and emotional states, thus improving the scientific validity of our findings. Our analysis incorporating the HRF detected more clusters in the forehead associated with subjective arousal than the analysis without the HRF. The results suggest that the analysis of facial thermal images can be made more effective by applying the HRF, which models the onset delay and undershoot of blood flow associated with neural activity ( Friston et al., 1994b; Lindquist et al., 2009 ). The results are consistent with the theoretical model that blood flow changes show a temporal disparity from the underlying nervous system activity ( Tovar-Lopez, 2023 ) and the empirical finding that blood flow changes in the forehead were appropriately detected in an analysis with the HRF ( Kirilina et al., 2012 ). Collectively, these data suggest that applying the HRF allows a more detailed understanding of how emotions influence facial temperatures through vascular changes, capturing subtle yet critical shifts in blood flow that accompany emotional responses, features often overlooked by conventional thermal imaging analyses. However, although it is beneficial for modeling blood flow delays, the application of the HRF to facial thermal data raises concerns that warrant further investigation. For instance, the HRF is influenced by non-neural factors and can vary by region and participant. Its applicability to facial blood flow analysis, based solely on evidence from a limited number of studies, remains an open question.
Future research should investigate the specifics of the HRF in facial thermography and potentially develop a specialized response function that accounts for the unique physiological properties of facial blood flow. Our method has significant practical implications for various human behavior applications. First, for medical applications, our pixel-level thermal analysis method can effectively assess atypical emotional experiences in individuals with clinical conditions, such as autism spectrum disorder (ASD) and Parkinson's disease. For individuals with ASD, emotional expressions may be subtle or atypical, complicating accurate assessment with conventional methods ( de Aguiar Neto & Rosa, 2019 ; Kita et al., 2011 ). Similarly, Parkinson's disease often reduces facial expressivity, known as facial masking, which further complicates emotional recognition and assessment. Furthermore, the potential of our thermal imaging method to diagnose and monitor conditions associated with emotional dysregulation, such as anxiety and depression ( Bercea, 2012 ; Barrett et al., 2019 ), marks a significant advancement. Traditional diagnostic methods often rely heavily on subjective self-reporting, which can be influenced by various factors, including the stigma associated with mental health issues. Our objective and noninvasive technique can concretely measure emotional states and can serve as a complementary tool for clinicians assessing and managing these conditions. By facilitating early detection and monitoring of treatment responses, our method can lead to more effective and personalized therapeutic strategies, thus improving the quality of life of individuals with these conditions. Second, the proposed thermal imaging technique can be useful in nonmedical applications, including consumer behavior research. Previous research has provided insights into understanding emotion-related consumer responses and decision-making processes by incorporating physiological measures, such as electrodermal activity ( Rawnaque et al., 2020 ; Kalaganis et al., 2021 ; Caruelle et al., 2019 ; Bercea, 2012 ). Thermal imaging can further enhance this field by adding a new dimension of emotional and physiological data. Unlike traditional physiological recording, thermal imaging does not require direct contact with the subject, thereby offering a noninvasive, less obtrusive approach to measuring responses that could be more comfortable for participants, particularly in long-duration studies, and could be used in real-life situations. By integrating our thermal imaging technique, consumer behavior research can gain a more comprehensive understanding of the intricate dynamics of consumer emotions and behaviors. This comprehensive approach captures the subtle complexities of consumer behavior more effectively and refines marketing strategies, leading to more targeted and effective consumer engagement. Several limitations of this study should be acknowledged. First, our proposed preprocessing protocol is not fully automated. In the UV mapping of facial thermal images, the initial frame requires manual alignment to approximately match the facial structure to a 3D model. This step provides the reference geometry for aligning subsequent frames using simple rotations and translations in Blender.
Although the procedure may appear straightforward, accurate UV mapping involves resolving complex spatial correspondences between the 3D model and the 2D thermal image and is sensitive to variations in head pose, facial geometry, and camera perspective. Therefore, automating this step remains technically challenging. To address this limitation, we plan to develop a standardized protocol for initial UV mapping to improve consistency across participants. In addition, since the alignment process may introduce geometric transformations that alter the original thermal data, future work should also consider strategies to minimize potential distortions and assess their impact on downstream analyses. These improvements are particularly important for future real-world applications, which will require not only greater automation and robustness across diverse populations and environments, but also standardization across devices and validation beyond controlled laboratory settings. Second, head movements during film viewing occasionally resulted in partial facial occlusion, which may affect the accuracy of thermal signal extraction. Although minor movements were tolerated and interpolated when necessary, substantial motion or occlusion can obscure important thermal features. To address this, future work should consider using multiple thermal cameras from different angles to ensure complete facial coverage and improve the robustness of thermal analysis under naturalistic conditions. Third, electrodes were placed on the foreheads of all participants as part of a concurrent physiological data collection procedure. Although we could detect arousal-related thermal responses in the unobstructed regions of the forehead, the electrodes inevitably occluded part of this area. As a result, any thermal changes occurring directly beneath the electrodes could not be measured. Future studies should consider using experimental setups that avoid obstructing facial regions of interest when full-face thermal mapping is required. Fourth, although we used an inter-trial interval of 24–30 s, as is typical in psychophysiological studies recording skin conductance responses ( Breska et al., 2011 ), this may have produced carryover effects. A previous study reported that subjective and physiological emotional responses elicited by films persisted for several minutes ( Kuijsters et al., 2016 ). Although we believe that our pseudorandomization of film presentation did not produce systematic carryover effects on our results, further investigation of this issue is an important matter for future facial thermal imaging studies. Fifth, although our results suggested that the HRF can model skin blood-flow responses, it was originally developed for modeling cerebral hemodynamics. To enhance physiological specificity, future work should aim to establish a response function directly derived from the dynamics of facial vasomotor activity. Such a specific kernel may better capture the temporal properties of facial temperature alterations, thereby improving the accuracy of emotion sensing using facial thermal images. Finally, our sample was small; hence, the results should be interpreted with caution. Although our preliminary analyses did not reveal modulatory effects of sex, these null effects may be attributable to a lack of statistical power. In fact, prior thermographic studies have suggested that men and women exhibit distinct thermal profiles ( Christensen et al., 2012 ).
Future work with larger samples is warranted to clarify sex and individual differences in thermal responses to emotionally arousing stimuli. 5 Conclusion In conclusion, our study confirmed the effectiveness of facial thermal imaging with SPM for emotion analysis. Analysis of dynamic self-reports from 21 participants who watched emotionally evocative videos revealed significant negative correlations between emotional arousal and temperature changes at the nose tip and forehead. These findings are consistent with those of previous studies and physiological evidence. Our detailed pixel-level analysis across the entire face, rather than relying on predefined ROIs, has significant implications for both medical and nonmedical applications. Despite the limitations of our manual preprocessing method, our study emphasizes the potential of full-face analysis using facial thermal imaging for more accurate and detailed emotion analysis. CRediT authorship contribution statement Budu Tang: Writing – original draft, Software, Investigation, Conceptualization. Wataru Sato: Writing – original draft, Software, Investigation, Data curation, Conceptualization. Koh Shimokawa: Software, Investigation. Chun-Ting Hsu: Writing – original draft, Investigation. Takanori Kochiyama: Writing – original draft, Investigation, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This study was supported by funds from the Japan Science and Technology Agency-Mirai Program ( JPMJMI20D7 ). The authors thank Masaru Usami for his technical support. Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia components 1–6. Supplementary data to this article can be found online at https://doi.org/10.1016/j.chbr.2025.100761 .
REFERENCES:
1. ALBA B (2019)
2. BARRETT L (2019)
3. BERCEA M (2012)
4. BRESKA A (2011)
5. CARUELLE D (2019)
6. CHEN Z (2018)
7. CHRISTENSEN J (2012)
8. DEAGUIARNETO F (2019)
9. EOM J (2012)
10. FLAVELL L (2010)
11. FRISTON K (1998)
12. FRISTON K (1994)
13. FRISTON K (1994)
14. FRISTON K (1994)
15. GROSS J (1995)
16. HOLMES A (1998)
17. HOSSAIN M (2020)
18. HOSSAIN M (2020)
19. JIAN B (2019)
20. KALAGANIS F (2021)
21. KASTBERGER G (2003)
22. KIRILINA E (2012)
23. KITA Y (2011)
24. KOSONOGOV V (2017)
25. KUIJSTERS A (2016)
26. LAHIRI B (2012)
27. LINDQUIST M (2009)
28. LOVE T (1980)
29. MAUSS I (2005)
30. MAZLOUMIGAVGANI A (2017)
31. MOSTAFA E (2013)
32. ORDUN C (2020)
33. PAVLIDIS I (2001)
34. RAWNAQUE F (2020)
35. RUSSELL J (1989)
36. SAGAIDACHNYI A (2017)
37. SALAZARLOPEZ E (2015)
38. SATO W (2022)
39. SATO W (2020)
40. SATO W (2007)
41. SIMON R (2009)
42. SONI L (2023)
43. SONKUSARE S (2019)
44. TOVARLOPEZ F (2023)
45. VANDOOREN M (2012)
46. VILLANUEVA N (2022)
47. WANG C (2018)
48. WORSLEY K (1996)
49. ZHAO G (2011)
50. ZHOU F (2021)
|
10.1016_j.inat.2020.101011.txt
|
TITLE: Bulbo-medullary abscess’s management
AUTHORS:
- Lakhdar, F.
- Benzagmout, M.
- Chakour, K.
- Chaoui, M.E.
ABSTRACT:
Background/introduction
Intramedullary spinal cord and brainstem abscesses are rare pathological entities and are usually associated with other infections. Bulbo-medullary abscesses are even rarer, with a potentially good outcome if treated adequately.
Case description
We present the case of a 34-year-old woman who presented with severe neck and shoulder pain, which progressed over several days to tetraparesis without any systemic infection. Emergent spinal magnetic resonance imaging was performed and revealed an intramedullary cervical lesion extending to the medulla oblongata, suggesting a bulbo-medullary abscess. The decision was made for urgent microsurgical drainage of the purulent material to avoid further decline, followed by a course of tailored antibiotics. The patient showed total recovery of her neurologic deficit, with postoperative MRI demonstrating an intramedullary cervical lesion suggestive of a dermoid cyst.
Conclusion
Bulbo-medullary abscess is a rare, complex, life-threatening infection. The diagnosis may be challenging due to its rarity, and a multidisciplinary approach including medical therapy and surgical intervention leads to a favorable outcome in this potentially lethal condition.
BODY:
1 Introduction Although widely publicized, brainstem and intramedullary spinal cord abscesses are extremely rare neurosurgical conditions. They are potentially devastating lesions, historically associated with significant mortality and morbidity, but with a potentially good outcome if treated adequately. Brainstem abscesses account for just 0.5% of brain abscesses, and their pathogenesis is not well defined. Different surgical techniques have been recommended in the literature for the treatment of spinal cord and brainstem abscesses. Hereby, we present a unique case of a cervical intramedullary abscess extending to the medulla oblongata, which later revealed an underlying intramedullary lesion suggestive of a dermoid cyst. The management depends on the symptoms, the underlying microorganism, and the disease course. 2 Case presentation A 34-year-old woman with severe neck and shoulder pain was admitted to our hospital with progressive weakness and sensory loss in both hands and legs, and urinary incontinence of two weeks' duration. There was no history of fever, rigors, or recent illness; she denied any drug use, trauma, or weight loss, had no risk factors for infection, and had no personal or family history of immunosuppressive conditions. On neurologic examination, mental functions were undisturbed; she was alert and afebrile, with tetraparesis (2/5), and nuchal rigidity was obviously present. The cranial nerves were not affected, and there were clonus and Babinski's sign. No skin abnormalities were observed in the cervical midline region of the neck, and no primary site of infection was found. She had no upper respiratory tract infection or fistulous connection to the central nervous system. Laboratory studies were significant for a white blood cell count (WBC) of 16,000/mm 3 and a C-reactive protein level of 10 mg/l; the erythrocyte sedimentation rate was slightly elevated at 21 mm/h. Brain and spine MRI ( Fig. 1 ) showed an intramedullary heterogeneous lesion at the cervical level C1-C2 extending to the bulbo-medullary junction, isointense to hypointense on T1-weighted images with peripheral enhancement on post-gadolinium series, hyperintense on T2-weighted images, and hyperintense on diffusion-weighted imaging, with associated high signal intensity (signs of edema) extending to C3-C4. The rapidly worsening neurological condition and the imaging features, with extensive edema surrounding the cervical lesion, made us suspect an infectious process suggesting a spinal cord abscess, particularly given the restricted diffusion pattern. No disco-vertebral lesion suggesting spondylodiscitis associated with the abscess was identified. Microbiological testing for viruses (including HIV), bacteria, and mycobacteria was negative, and an echocardiographic examination performed to rule out bacterial endocarditis yielded no abnormality. The differential diagnoses were spinal cord tumor or infarction. A midline incision was performed from the occiput to C3 with exposure of the occipital bone and the cervical spine laminae from C1 to C3. A C1-C2 laminectomy was then performed with opening of the foramen magnum, to expose and open the dura mater and access the cervical spinal cord, which showed a yellowish and expanded appearance ( Figs. 2a and 2b ). A small needle was introduced into the abscess, and after aspiration puncture the pus was evacuated completely; the cavity was drained and inspected under the microscope, and the spinal cord appeared relaxed ( Figs. 2a and 2b ). Pus samples submitted to microbiology for culture revealed Staphylococcus aureus.
Vancomycin and amikacin were started after the surgical procedure and maintained for six weeks. After a few weeks of treatment, she had complete resolution of her neurological symptoms, with decreasing infectious parameters in blood biochemistry under the antibiotic regimen. Follow-up MRI six months after surgery demonstrated a cervical intramedullary lesion hyperintense on T1 and T2, suggesting a dermoid cyst ( Fig. 3 ). 3 Discussion A Medline search of the world's literature was conducted using the keywords “bulbo-medullar” AND “abscess” AND “intramedullar” AND “brainstem” AND “spinal cord”. We did not find any previously published case similar to ours, with a cervical intramedullary abscess extending to the medulla oblongata in a patient without any congenital abnormalities. 3.1 Epidemiology and pathogenesis Intramedullary abscess of the spinal cord (IMASC), originally described by Hart in 1830 [10] , may be confused with an intramedullary tumor. The infection may spread to the cord from a contiguous vertebral body or disc space, from sepsis, or through a dermal sinus. About 43% of cases are considered cryptogenic in origin, occurring after transient bacteremia originating from mucosal surfaces or from clinically unrecognized extraspinal sites of infection [18] . Brainstem abscesses are also rare, accounting for just 0.5% of brain abscesses [17,20] . Although the pathophysiology of brainstem abscess is not well defined, three known routes of inoculation are recognized: direct spread from infection of contiguous structures, retrograde thrombophlebitis, and hematogenous spread from a distant focus of infection [14] . The ear is the most common site of primary infection responsible for the local route of spread to the brainstem, followed by the orbit and the cervical spine, as in our case. The risk factors include endocarditis, cyanotic congenital heart disease, and a compromised immune system [5] . Although 30% of cases are microbiologically sterile, diverse organisms have been isolated. The most common causative microorganisms identified are Streptococcus spp. and Staphylococcus aureus [9] . Anaerobic infections are rare and often polymicrobial; monomicrobial anaerobic infection is found more rarely. 3.2 Clinical presentation The clinical presentation of intramedullary abscess can vary depending on the level of the spinal cord involved, the abscess size, and the stage of infection, ranging from only a headache or pain with fever to rapidly deteriorating neurological conditions, and the course can be acute, subacute, or chronic [9,19] . Classically, patients have a progressive clinical presentation without a history of fever episodes; WBC counts and C-reactive protein may be elevated, as well as the erythrocyte sedimentation rate, regardless of the clinical presentation [1] . In infants, the clinical diagnosis may be difficult, and the single most objective indicator is the finding of paralysis, seen in 58% of cases [2] . In brainstem abscesses, however, the symptoms are usually a rapidly progressing neurological deficit involving cranial nerves and long tracts, without localizing symptoms in the early clinical course [9,19] . 3.3 Imaging The radiological findings can vary from an irregular, enhancing low-signal lesion in the early phases of abscess development to a ring-enhancing lesion in the late stages due to the vascularized capsule. Using diffusion-weighted imaging, a restricted pattern can be observed, indicating the absence of water movement due to the high viscosity of inflammatory cells in pus.
However, some tumors, such as glioblastoma multiforme or lymphoma with high cellularity, may also show areas of restricted diffusion, although patchier than that found in abscesses. Dermoid cysts show varied and heterogeneous intensity, as described in our case, due to the variable contents of the cyst, including fat tissue, hair, skin glands, and calcifications [4] . Furthermore, low-to-intermediate T1 intensity, high T2 intensity, and ring enhancement allow for the exclusion of necrotic tumor or infarct [2] . Meticulous study of MRI images therefore has a key role in the diagnosis of bulbo-medullary abscess, and spectroscopy can also be useful for identifying a causative microorganism [9] . 4 Management The best treatment for intramedullary abscesses has yet to be determined, but it currently includes conservative management with systemic antibiotics, surgical drainage of the abscess cavity, or stereotactic aspiration in the case of a solitary brainstem abscess. If patients do not respond to medical treatment, surgical decompression should be performed; it is considered an important option for abscesses that are large or multiloculated, close to the surface, or of greater viscosity [7] . In chronic presentations of the disease, good results have also been reported with medical management alone [8] . Different surgical techniques have been recommended in the literature for the treatment of spinal cord abscess, and many authors adopt the conventional method of draining the abscess without attempting complete removal of the capsule, since it is often indistinct and its removal risks further damage to the cord and delayed neurological improvement [18] . Needle aspiration and suction of the purulent content, as used in our case, seems to be a good option to reduce the bulging, achieve abscess decompression, confirm the diagnosis, and choose appropriate antibiotics [21] . However, it may evacuate only a small quantity of pus from the cavity and involves a higher probability of recurrence [18] . Most published reports ( Table 1 ) continued the treatment for six to eight weeks or more [19] . 5 Conclusion Bulbo-medullary abscess is a rare, complex, life-threatening infection. Meticulous study of MRI images has a key role in diagnosis, despite the paucity of symptoms initially. A multidisciplinary approach including adequate antibiotics and surgical drainage is warranted and leads to a favorable outcome in this potentially lethal condition. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1.
2. ALSUBAIE S (2019)
3. ARZOGLOU V (2011)
4. BURAK K (2016)
5. CHEN M (2013)
6. DAKE M (1986)
7. DHO Y (2015)
8. DORFLINGERHEJLEK E (2010)
9. HAMAMOTOFILHO P (2014)
10. HART J (1830)
11. KEPES J (1965)
12. KNISS M (2006)
13. KOZIK M (1976)
14. PATEL K (2014)
15. PAYNE S (2006)
16. RIMALOVSKI A (1968)
17. SCLAR G (2007)
18.
19. VERDIER E (2018)
20. WAIT S (2009)
21. WALKDEN A (2013)
|
10.1016_j.ecolind.2025.113602.txt
|
TITLE: Spatial-temporal pattern and supply-demand balance of land use carbon sequestration from a low-carbon perspective: A case study of Guizhou Province
AUTHORS:
- Luo, Yuanhong
- Zhang, Yi
- Ma, Song
- Hou, Chao
- Zhang, Limin
ABSTRACT:
In the pursuit of “carbon neutrality” and sustainable development, understanding the supply–demand dynamics and sustainability of regional carbon sequestration is vital for balancing ecological and economic growth. This study focuses on Guizhou Province, a typical karst region in southwestern China characterized by ecological fragility and significant carbon sequestration potential, to analyze the spatial–temporal evolution of carbon sequestration supply and demand from 2000 to 2020. Using the InVEST model, the carbon emission factor method, the gravity center migration model, and spatial autocorrelation analysis, we integrated multi-temporal land use and socio-economic data to assess the carbon sequestration supply–demand relationship. Key findings include: (1) Guizhou's carbon sequestration supply increased by 465 million tons, primarily from forestland and cultivated land, with a notable northwestward shift in supply. (2) Demand surged by 408 million tons, primarily driven by urban expansion in the northeast. (3) The carbon service supply and demand ratio (CSSDR) declined from 0.53 to 0.44, revealing a shrinking surplus and a significant spatial mismatch, particularly in urbanizing areas. These results provide a scientific basis for spatially differentiated land use policies and ecological compensation mechanisms, offering a replicable framework for other ecologically vulnerable regions pursuing carbon neutrality.
BODY:
1 Introduction The escalating impacts of global warming, including environmental degradation, have raised significant global concerns ( Cai and Li, 2024; Pechanec et al., 2018; Wang et al., 2014 ). Studies suggest that by 2050, the global average surface temperature may rise by over 1.5 °C compared to the 1850–1900 baseline, potentially causing sea-level rise, species extinction, and severe threats to socio-economic and environmental sustainability ( Defries et al., 2002; Jiang and Hao, 2022 ). Greenhouse gases, particularly carbon dioxide (CO 2 ), are key drivers of this warming ( Zhang et al., 2022 ). As the world's second-largest historical emitter of CO 2 , China faces urgent pressure to regulate emissions ( Zhang and Cheng, 2009; Zheng et al., 2019 ). In response, China has implemented a dual-carbon strategy to peak emissions and achieve carbon neutrality, aligning with national low-carbon development goals and global environmental protection efforts ( Fu et al., 2022 ). Land use and land cover (LULC) is a critical driver of carbon sequestration supply–demand dynamics, influencing both ecosystem carbon sequestration supply and demand ( Li et al., 2023; Zhang et al., 2018 ). Theoretical frameworks such as the Ecosystem Services Cascade Model emphasize the interplay between LULC and carbon sequestration, where supply reflects the ecosystem's capacity to capture carbon, while demand stems from anthropogenic emissions requiring offsetting ( Burkhard et al., 2014; Costanza et al., 2017 ). In particular, LULC change is a major contributor to CO 2 emissions, responsible for over 30 % of such emissions ( Redlin and Gries, 2021 ), by altering biomass, disrupting biogeochemical cycles, and modifying microclimates ( Rong et al., 2022 ). Meanwhile, heterogeneous LULC patterns, fragmented governance, and divergent socio-economic priorities often cause spatial mismatches between supply and demand ( Cui et al., 2021 ). Contrasting cases illustrate this complexity: Amazon deforestation reduces carbon stocks, while European agroforestry enhances sequestration through integrated management ( Nobre et al., 2016; Schulze et al., 2020 ). Similarly, afforestation on China's Loess Plateau boosts carbon storage but triggers ecological-livelihood tradeoffs ( Feng et al., 2023 ). These disparities underscore the necessity of context-specific strategies to reconcile supply–demand imbalances while addressing socio-ecological synergies ( Zhang et al., 2018; Cai and Li, 2024 ). While substantial research examines LULC impacts on carbon sequestration, critical gaps remain in understanding their interplay. Thematically, most studies focus exclusively on either carbon sequestration supply or demand ( Tordoni et al., 2020 ), with a meta-analysis by Li et al. (2024) revealing that fewer than 15 % of investigations in ecologically fragile zones integrate supply–demand synergies. This unilateral approach risks incomplete ecological and economic assessments, potentially hindering the development of effective carbon neutrality strategies, which require a comprehensive understanding of both supply and demand dynamics in regional carbon cycles. Geographically, despite multiscale analyses, few investigations examine the synergistic effects of LULC on carbon supply–demand dynamics in fragile karst ecosystems characterized by complex topography and unique geological constraints.
The Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) model has become prevalent for multiscale carbon assessments due to its cost-effectiveness, parameter flexibility, and spatial visualization capabilities ( He et al., 2016; Zhao et al., 2019 ), particularly in capturing landscape heterogeneity ( Sharp et al., 2020 ). However, its dependence on static carbon density coefficients may underestimate dynamic feedbacks in rapidly changing karst systems, with Xu et al. (2021) demonstrating 12–18 % prediction deviations in the karst of Southwest China when compared to eddy covariance measurements. Meanwhile, the carbon emission factor method's simplicity and data accessibility facilitate cross-scale applications ( Hu et al., 2022; Liu et al., 2025; Wang et al., 2020 ), yet its coarse spatial resolution (typically 1–10 km grids) obscures microscale variations in karst sink-source relationships, with Smith et al. (2020) reporting 20–30 % accuracy loss in heterogeneous landscapes compared to process-based models. Furthermore, because assessments rely on indices such as the CSSDR ( Wang et al., 2021 ) and carbon neutrality metrics ( Bai et al., 2023 ), these model limitations may introduce uncertainties into the results. Crucially, spatial heterogeneity in carbon sequestration, which is particularly understudied in karst mountains where LULC patterns create distinct spatial carbon patterns ( Allaire et al., 2012 ), demands prioritized attention, as spatial accounting is vital for accuracy in heterogeneous landscapes ( Sun et al., 2020 ). Karst ecosystems, covering 15 % of the global land area, play a disproportionate role in terrestrial carbon sequestration but face unique vulnerabilities ( Yuan, 2024 ). Karst regions in China, shaped by monsoons, tectonic activity, and intense dissolution, exhibit a dual surface-underground structure with rugged terrain, steep slopes, and severe soil erosion, hallmarks of fragile ecosystems ( Qiu et al., 2021 ). Shallow, calcium-rich soils with low organic content constrain vegetation productivity and carbon supply ( Wang et al., 2019 ), while rapid water infiltration through fractured bedrock exacerbates nutrient leaching and carbon loss ( Liu et al., 2016 ). This creates a “leaky” carbon sink vulnerable to destabilization, where surface vulnerability and subterranean drainage synergistically amplify carbon flux ( Chen et al., 2022 ). Comparatively, in non-karst tropical regions such as the Congo Basin, deep lateritic soils and stable hydrological conditions support higher carbon retention capacity, with soil organic carbon stocks exceeding 150 Mg C ha −1 in undisturbed forests ( Ciais et al., 2011; Lewis et al., 2013 ). Such disparities underscore the need for tailored carbon management strategies in karst zones. Guizhou Province, a vital ecological barrier in the upper Yangtze and Pearl River basins and a National Ecological Civilization Pilot Zone, faces dual challenges in sustainable development. Its karst-dominated terrain hosts complex ecosystems with significant carbon storage capacity ( Wang et al., 2019; Luo et al., 2022 ), yet severe rocky desertification and fragile soils create a “supply-demand trap” in which ecosystem carbon supply struggles to offset urbanization-driven demand ( Liu et al., 2021 ). Hydrological dynamics in fractured karst landscapes exacerbate carbon flux instability through rapid nutrient leaching ( Qiu et al., 2021 ). Since 2000, national poverty alleviation programs have spurred remarkable economic growth (a 287 % increase in GDP by 2020) but have also intensified infrastructure expansion.
This development paradox highlights the critical need to reconcile ecological conservation with economic priorities, particularly as Guizhou transitions into a regional economic hub while pursuing carbon neutrality. The province's unique biogeochemical constraints, coupled with competing land-use pressures, demand innovative strategies to balance carbon sink preservation with sustainable growth ( Sun et al., 2020 ). This study aims to clarify the dynamics and spatial–temporal distribution of carbon sequestration supply–demand in Guizhou, contextualized by land use changes amidst rapid economic development and extensive ecological restoration. The objectives are twofold: (1) to analyze the spatial–temporal attributes of carbon sequestration supply–demand driven by land use changes in Guizhou from 2000 to 2020 and (2) to evaluate the alignment of carbon sequestration service supply–demand at the grid level within the province, providing informed guidance for strategic land use planning and sustainable carbon sequestration policy formulation in Guizhou and similar ecologically sensitive areas. 2 Materials and methods 2.1 Study route The assessment of the carbon sequestration supply–demand relationship in Guizhou Province from 2000 to 2020 followed these steps: (1) LULC data from 2000 to 2020 were prepared; (2) the supply and demand of carbon sequestration were analyzed using the InVEST model and the carbon emission factor method, respectively; (3) the supply–demand relationship in the study area over the past 20 years was quantified with the CSSDR index; (4) to further analyze the spatial movement trends of carbon supply and demand, the gravity center migration model was applied to simulate spatial shifts; and (5) given Guizhou's complex topography, spatial autocorrelation analysis was employed to investigate the spatial distribution patterns of the supply–demand relationship (see Fig. 1 ). 2.2 Study area Guizhou Province, situated on the Yunnan-Guizhou Plateau in southwestern China, comprises six prefecture-level cities (Guiyang, Zunyi, Bijie, Tongren, Anshun, and Liupanshui) and three autonomous prefectures (Qiandongnan, Qianxinan, and Qiannan). The province covers an area of approximately 176,200 km 2 , with geographical coordinates extending from 103°36′ to 109°35′E and from 24°37′ to 29°13′N ( Fig. 2 ). Characterized by a typical low-latitude plateau climate, the region receives an average annual precipitation of 1100–1300 mm, maintains an average temperature of around 15 °C, and benefits from 1100 to 1400 h of sunshine annually ( Jiao et al., 2022 ). The terrain generally slopes from the higher northwest to the lower southeast, with an average elevation of approximately 1100 m. Over 90 % of the province is mountainous or hilly. The predominant soils are zonal, including yellow soil, yellow–brown soil, and red soil, whereas the landscape features extensive karst formations, comprising over 60 % of the area ( Niu and Shao, 2020 ). The forest coverage rate in Guizhou is close to 60 %, and the province has an extremely high potential for carbon sequestration ( Sun et al., 2020 ). Guizhou, historically underdeveloped with fragile ecosystems, has experienced sustained high economic growth. Since the 2000 “Western Development” strategy, its urbanization rate rose from 23.87 % to 53.15 %, and its GDP surged from 102.99 billion to 1.78 trillion yuan by 2020 ( Jiao et al., 2022 ).
As an "ecological civilization construction demonstration zone", Guizhou must balance ecological preservation with economic growth. 2.3 Data sources and preprocessing The data for this study were sourced from several key datasets: (1) Land use data from the China Land Cover Dataset ( https://irsip.whu.edu.cn ) published by Jie Yang and Xin Huang ( Yang and Huang, 2021 ) from Wuhan University, covering the years 2000, 2005, 2010, 2015, and 2020. Following the current land use classification standard (GB/T 21010-2017) and the specifics of the study area, these data were reclassified into seven categories: forestland, grassland, shrub land, cultivated land, water body, unused land, and construction land. The missing values of land use in the CLCD dataset were processed using the SetNull function. All data were resampled and projected into the CGCS2000_3_Degree_GK_Zone_36 projected coordinate system with a spatial resolution of 30 m × 30 m ( Fig. 3 ). (2) Fossil energy data used for calculating carbon emissions from construction land were obtained from the Guizhou Provincial Statistical Yearbook. (3) Socio-economic data, including population and GDP metrics, were sourced from the Center for Resource and Environmental Science and Data of the Chinese Academy of Sciences ( https://www.resdc.cn/ ). 2.4 Carbon sequestration supply The InVEST model estimates carbon storage in each spatial unit by combining land use maps with carbon density values for each land cover type ( Deng et al., 2021a ). This approach quantifies carbon stocks across four key pools: aboveground biomass, belowground biomass, dead organic matter, and soil organic matter ( Hu et al., 2020 ). The carbon density data were obtained from empirical studies conducted in the research area ( Deng et al., 2021a; Wu, 2021 ) ( Table 1 ). The assessment is conducted using the following formula ( Deng et al., 2021a ): where (1) C = C a b o v e + C b e l o w + C s o i l + C d e a d C denotes total ecosystem carbon sequestration (t), separately represents the aboveground carbon density of vegetation (t·hm C a b o v e , C b e l o w , C s o i l , C d e a d −2 ), belowground carbon density (t·hm −2 ), soil organic carbon density (t·hm −2 ), and carbon density of dead organic matter (t·hm −2 ). Referring to the measured carbon density data of related scholars and related studies ( Deng et al., 2021a ) to obtain the carbon density of Guizhou Province ( Table 1 ). To better capture dynamic changes in carbon sequestration, this study adjusted carbon density values using annual meteorological data (mean annual temperature and precipitation), following methodologies established by Giardina and Ryan (2000) and Alam et al. (2013) . The final adjusted carbon density values for different land use types across study years are presented in Table 2 . The carbon density correction method is as follows: (2) C SP = 3.3968 × M A P + 3396.1 ( R 2 = 0.11 ) (3) C BP = 6.798 × e 0.0054 × M A P ( R 2 = 0.70 ) (4) C BT = 28 × M A T + 398 ( R 2 = 0.47 ) (5) K B = K BP × K BT = C B P i C B P mean × C B T i C B T mean where (6) K S = C S P i C S P mean is the soil carbon density (t·hm C SP −2 ) corrected for mean annual precipitation, and C BP are the biomass carbon density (t·hm C BT −2 ) corrected for mean annual precipitation and mean annual temperature in the study area; MAP and MAT represent mean annual precipitation (mm) and mean annual temperature (°C). 
and K B are the correction coefficients for biomass carbon density and soil carbon density, K S and K BP are the correction coefficients for biomass carbon density corrected based on mean annual rainfall and temperature; K BT C and i C represent the mean annual precipitation and the mean annual rainfall for year mean i . 2.5 Carbon sequestration demand Carbon sequestration demand is determined by the total carbon emissions ( Yuan et al., 2023 ). The carbon emission factor method can calculate the carbon emissions from regional land use through the emission coefficient and standard coal conversion coefficients. These emissions from LULC are categorized into direct and indirect types. Direct carbon emissions specifically refer to those directly resulting from LULC activities, whereas indirect emissions arise from human production and living activities conducted on the land ( Cai and Li, 2024 ). The disparity in carbon sources and sinks among different land cover types is critical in calculating direct carbon emissions. For Guizhou Province, direct carbon emissions were computed for forestland, cultivated land, grassland, water bodies, unused land, and shrub land, whereas indirect emissions were estimated from construction land, particularly focusing on energy consumption derived from fossil fuels. 2.5.1 Direct carbon emission accounting The carbon emission coefficients method was employed to calculate direct carbon emissions from LULC. The formula used to summarize these emissions is as follows: where (7) E k = ∑ e i = ∑ A i × μ i is the direct carbon emission; E k is each land use type; i indicates the carbon emission/absorption of the e i land type; i indicates the area of the land of the A i land type; and i is the carbon emission coefficient for each land type. According to the findings of μ i Li et al. (2023) and considering the specific conditions of Guizhou Province, the carbon emission coefficients for each land use type were detailed in Table 3 . 2.5.2 Indirect carbon emission accounting Indirect carbon emissions are not calculated solely based on the land area but influenced by the socio-economic activities occurring on construction land. These emissions include those from fossil energy consumption, electricity usage, and population respiration, as identified in the methodology of Cai and Li (2024) . The calculation formula is as follows: (8) C f = ∑ n i × ε i × φ i (9) C p = P × β where (10) C b = C f + C p represents the indirect carbon emissions, C b represents the carbon emission s from fossil fuel consumption, C f represents carbon emissions from human respiration, C p indicates the annual consumption of different energy sources, n i and ε i represent the different energy conversion factors for standard coal and carbon emissions, φ i P represents the permanent population of Guizhou Province, represents the carbon emission coefficient for human respiration in Guizhou Province. Previous research shows that its value is 79 kg/person ( β Zhou, 2011 ). The selected energy sources include raw coal, coke, fuel oil, gasoline, kerosene, diesel, natural gas, and electricity, with specific coefficients detailed in Table 4 , based on the IPCC 2006 inventory and “China Energy Statistical Yearbook” for standard coal conversion. 
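To make the accounting in Sections 2.4–2.5 concrete, the sketch below outlines how Eqs. (1)–(10) could be applied to reclassified LULC data. It is a minimal illustration under stated assumptions: the carbon densities, emission coefficients, class names, and energy factors are placeholders rather than the values in Tables 1–4, and applying the biomass correction coefficient only to the living biomass pools (leaving the dead organic matter pool uncorrected) is a simplifying assumption.

```python
# Minimal sketch of the carbon supply and demand accounting (Eqs. 1-10).
# All coefficients below are placeholders, not the values from Tables 1-4.
import numpy as np

# Placeholder carbon densities (t*hm^-2) per pool: (above, below, soil, dead).
BASE_DENSITY = {
    "forestland": (35.0, 8.0, 90.0, 2.5),
    "cultivated land": (5.0, 1.0, 80.0, 1.0),
}

def climate_correction(map_i, map_mean, mat_i, mat_mean):
    """Correction coefficients K_B (biomass) and K_S (soil), Eqs. (2)-(6)."""
    c_sp = lambda p: 3.3968 * p + 3396.1          # soil density vs. precipitation
    c_bp = lambda p: 6.798 * np.exp(0.0054 * p)   # biomass density vs. precipitation
    c_bt = lambda t: 28.0 * t + 398.0             # biomass density vs. temperature
    k_b = (c_bp(map_i) / c_bp(map_mean)) * (c_bt(mat_i) / c_bt(mat_mean))
    k_s = c_sp(map_i) / c_sp(map_mean)
    return k_b, k_s

def carbon_supply(class_areas_hm2, k_b, k_s):
    """Total carbon sequestration supply (t), Eq. (1), with corrected densities."""
    total = 0.0
    for name, (above, below, soil, dead) in BASE_DENSITY.items():
        # Assumption: K_B is applied to living biomass only; the dead pool is left as-is.
        density = k_b * (above + below) + k_s * soil + dead
        total += class_areas_hm2.get(name, 0.0) * density
    return total

def direct_emissions(class_areas_hm2, emission_coeff):
    """Direct emissions E_k = sum(A_i * mu_i), Eq. (7); sinks use negative mu_i."""
    return sum(class_areas_hm2.get(k, 0.0) * mu for k, mu in emission_coeff.items())

def indirect_emissions(energy_use, coal_factor, carbon_factor, population, beta=0.079):
    """Indirect emissions C_b = C_f + C_p, Eqs. (8)-(10); beta = 79 kg/person = 0.079 t."""
    c_f = sum(energy_use[e] * coal_factor[e] * carbon_factor[e] for e in energy_use)
    c_p = population * beta
    return c_f + c_p
```

With the 30 m × 30 m grid used here, each cell covers 0.09 hm², so per-class areas can be obtained by multiplying the cell counts of each class by that factor before calling the functions above.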
2.6 Supply and demand of carbon sequestration services Due to the significant spatial heterogeneity in carbon sequestration capacities among various LULC types, mismatches frequently occur between the supply and demand of carbon sequestration services across different geospatial locations ( Syrbe and Grunewald, 2017 ). The CSSDR was selected to quantify the surpluses and deficits in the carbon sequestration supply–demand relationship in Guizhou Province from 2000 to 2020. This indicator effectively characterizes the spatiotemporal heterogeneity of the supply–demand balance for carbon sequestration at individual locations or regional scales ( Wang et al., 2021 ). The calculation formula is as follows: (11) CSSDR_i = 2(S_i − D_i) / (S_max − D_max), where CSSDR_i denotes the supply–demand ratio of carbon sequestration services for grid i, S_i and D_i represent the supply and demand of carbon sequestration services for grid i, and S_max and D_max indicate the maximum supply and demand values of carbon sequestration services across the grids. Positive, zero, and negative CSSDR values indicate surplus, equilibrium, and deficit states, respectively, reflecting whether carbon sequestration service supply exceeds, matches, or falls below regional demand. 2.7 Gravity center migration model We employed the gravity center migration model on the ArcGIS software platform to examine the spatial–temporal dynamics of carbon sequestration supply and demand in Guizhou Province from 2000 to 2020 ( Cao et al., 2018; Liu et al., 2022 ). This model incorporates the 1-fold Standard Deviational Ellipse (SDE) to analyze the direction and concentration trends of spatial changes in carbon sequestration ( Liu et al., 2022 ). The SDE is an analytical tool used to delineate the spatial distribution patterns of geographic elements and comprises four key parameters: the center of gravity coordinates, the rotation angle, and the lengths of the long (Y) and short (X) axes. The size of the ellipse indicates the concentration of the spatial data; the Y-axis represents the dominant direction of the data distribution, the X-axis describes the data's spatial range, and the ellipse's flatness provides insight into the clarity of the directional trend and the intensity of the centripetal force. The model is specified as follows. Center of gravity coordinates: (12) Lon_t = Σ_{i=1..n} (U_ti × Lon_i) / Σ_{i=1..n} U_ti; (13) Lat_t = Σ_{i=1..n} (U_ti × Lat_i) / Σ_{i=1..n} U_ti. Azimuth: (14) tan θ = [(Σ x_i′² − Σ y_i′²) + √((Σ x_i′² − Σ y_i′²)² + 4(Σ x_i′ y_i′)²)] / (2 Σ x_i′ y_i′). Axis standard deviations: (15) δ_x = √(Σ_{i=1..n} (x_i′ cos θ − y_i′ sin θ)² / n); (16) δ_y = √(Σ_{i=1..n} (x_i′ sin θ + y_i′ cos θ)² / n), where Lon_t and Lat_t are the longitude and latitude of the gravity center in year t, Lon_i and Lat_i are the longitude and latitude of the centroid of the i-th carbon sequestration patch, U_ti is the carbon sequestration amount of patch i in year t and serves as the weight, θ is the azimuth angle, x_i′ and y_i′ are the coordinates of each point relative to the regional center, and δ_x and δ_y are the standard deviations along the X-axis and Y-axis, respectively.
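A compact sketch of how the CSSDR grid (Eq. 11) and the gravity center and standard deviational ellipse statistics (Eqs. 12–16) could be computed is given below. The array inputs are illustrative placeholders for the patch centroids and gridded supply and demand values, and the study itself performed these steps on the ArcGIS platform.

```python
# Minimal sketch of the CSSDR (Eq. 11) and the gravity center / standard
# deviational ellipse statistics (Eqs. 12-16) on illustrative arrays.
import numpy as np

def cssdr(supply, demand):
    """Per-grid supply-demand ratio following Eq. (11) as stated."""
    return 2.0 * (supply - demand) / (supply.max() - demand.max())

def gravity_center(weights, lon, lat):
    """Carbon-weighted centroid for one year, Eqs. (12)-(13)."""
    w = weights / weights.sum()
    return float((w * lon).sum()), float((w * lat).sum())

def standard_deviational_ellipse(weights, lon, lat):
    """Azimuth and axis standard deviations of the 1-fold SDE, Eqs. (14)-(16)."""
    cx, cy = gravity_center(weights, lon, lat)
    x, y = lon - cx, lat - cy                     # coordinates relative to the center
    a = (x ** 2).sum() - (y ** 2).sum()
    b = (x * y).sum()
    theta = 0.0 if b == 0 else np.arctan((a + np.sqrt(a ** 2 + 4.0 * b ** 2)) / (2.0 * b))
    n = x.size
    delta_x = np.sqrt(((x * np.cos(theta) - y * np.sin(theta)) ** 2).sum() / n)
    delta_y = np.sqrt(((x * np.sin(theta) + y * np.cos(theta)) ** 2).sum() / n)
    return float(theta), float(delta_x), float(delta_y)

# Toy inputs: patch centroids with carbon amounts, and gridded supply/demand values.
lon = np.array([106.1, 106.7, 107.3, 108.0])
lat = np.array([26.2, 26.9, 27.1, 27.6])
carbon = np.array([120.0, 340.0, 280.0, 90.0])
supply = np.array([5.2, 8.1, 2.4, 6.3])
demand = np.array([1.0, 3.5, 4.2, 0.8])
print(gravity_center(carbon, lon, lat))
print(standard_deviational_ellipse(carbon, lon, lat))
print(cssdr(supply, demand))
```

The toy coordinates and carbon amounts stand in for the patch centroids that would be extracted from the supply and demand rasters for each study year.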
2.8 Spatial autocorrelation Spatial autocorrelation quantifies the distribution patterns and interrelationships of spatial data, revealing the clustering characteristics of the CSSDR at both global and local scales through visualization ( Chen et al., 2023 ). This study analyzes Guizhou Province's CSSDR using GeoDa to perform global autocorrelation on the carbon sequestration supply–demand vector data, identifying clustering patterns across different periods. Local autocorrelation further clarifies the spatial distribution of cluster types. On this basis, a comprehensive assessment of the carbon sequestration supply–demand relationship in Guizhou was conducted. 2.8.1 Global spatial autocorrelation analysis Moran's I index quantifies the degree of correlation, or similarity, between the values of a specific attribute in one region and the same attribute in neighboring regions ( Zhang and Zhang, 2007 ). This global spatial autocorrelation is crucial for identifying the overall distribution pattern of an attribute across space, characterized as aggregated, discrete, or random. The global Moran's I index was employed in our analysis to elucidate the spatial correlations in carbon sequestration supply and demand across Guizhou Province. The calculation formula is as follows: (17) I = Σ_{i=1..n} Σ_{j=1..n} w_ij (x_i − x̄)(x_j − x̄) / (S² Σ_i Σ_j w_ij), where I represents the Moran's I index, n represents the total number of grid cells, x_i (x_j) represents the measured value of grid cell i (j), (x_i − x̄) represents the deviation of the measured value of grid cell i from the mean value, w_ij indicates the standardized spatial weight matrix, and S² represents the variance. 2.8.2 Local spatial autocorrelation analysis The inherent randomness of the data often leads to local imbalances, necessitating the introduction of a local spatial autocorrelation index to assess relationships within specific areas. Local spatial autocorrelation decomposes the global spatial autocorrelation to each spatial unit. This process aims to capture the spatial correlation between the attribute value of an element in a localized area and those of adjacent elements, thereby revealing spatial heterogeneity across the region. The calculation formula is: (18) I_i = [(x_i − x̄) / S²] Σ_{j=1..n} w_ij (x_j − x̄), with (19) S² = (1/n) Σ_{i=1..n} (x_i − x̄)², where the symbols are defined as in Eq. (17).
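Before turning to the results, the following sketch illustrates how the global and local Moran's I statistics (Eqs. 17–19) could be computed with an explicit spatial weight matrix. In this study the computation was carried out in GeoDa, so the toy grid and the row-standardized rook-contiguity weights below are purely illustrative assumptions.

```python
# Minimal sketch of global and local Moran's I (Eqs. 17-19) on a toy dataset;
# the actual analysis was performed in GeoDa on the CSSDR grids.
import numpy as np

def global_morans_i(values, weights):
    """Global Moran's I, Eq. (17); `weights` is an n x n spatial weight matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    dev = x - x.mean()
    s2 = (dev ** 2).sum() / x.size                # variance, Eq. (19)
    return (w * np.outer(dev, dev)).sum() / (s2 * w.sum())

def local_morans_i(values, weights):
    """Local Moran's I per cell, Eq. (18); positive values mark like-with-like clusters."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    dev = x - x.mean()
    s2 = (dev ** 2).sum() / x.size
    return (dev / s2) * (w @ dev)

# Toy example: four cells in a row, row-standardized rook contiguity.
cssdr_values = [0.6, 0.5, -0.2, -0.3]
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
print(global_morans_i(cssdr_values, W))
print(local_morans_i(cssdr_values, W))
```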
3 Results and analysis 3.1 Spatial-temporal changes in carbon sequestration supply From 2000 to 2020, Guizhou Province experienced a fluctuating yet overall increase in carbon sequestration. The total sequestered carbon was 6220 million tons in 2000 and reached 6684 million tons by 2020, a net increase of 465 million tons over two decades, which represents a growth rate of 7.46 %. Analysis of carbon sequestration supply and carbon density across the different LULC types revealed that forestland was the primary contributor, accounting for approximately 69.00 % of the total capacity. Cultivated land was the second largest contributor, providing 25.04 % of total sequestration. Both grassland and shrub land showed a decline, with reductions totaling 342 million tons and 728 million tons, respectively. Unused land contributed the least, limited by its spatial distribution ( Fig. 4 a). Notably, the carbon density of forestland consistently exhibited the highest values, reflecting its efficiency in carbon sequestration per unit area. Except for grassland and shrub land, all other LULC types in Guizhou showed an increasing trend in carbon density from 2000 to 2020 ( Fig. 4 b–g). The spatial distribution of carbon sequestration services was predominantly higher in the eastern regions of the province and lower in the west ( Fig. 5 ). Lower values were typically observed in highly urbanized areas such as Guiyang City and near water bodies. The highest carbon sequestration supply was found in Qiandongnan. The gravity center analysis highlighted that while the overall spatial pattern of carbon sequestration supply in Guizhou remained relatively stable through the years, there was a slight contraction towards the center, with the long axis increasing slightly from 205.47 km to 205.72 km and the short axis decreasing from 149.95 km to 149.34 km over the two decades ( Table 5 and Fig. 6 ). The gravity center of carbon sequestration has always been located within Guiyang City, which remains the core area for carbon sequestration. However, a noticeable shift towards the northwest suggests a higher-than-average growth rate of carbon sequestration supply in the northwestern parts of the province. 3.2 Spatial-temporal changes of carbon sequestration demand As urbanization progressed, accompanied by increased consumption of fossil fuels and changes in LULC types, the demand for carbon sequestration in Guizhou Province showed a notable increase. Over the two decades, the demand rose by 407 million tons, reflecting a substantial growth rate of 424.27 % ( Fig. 7 f). Construction land emerged as the primary driver of this increased demand, with its carbon sequestration demand escalating from 100 million tons to 207 million tons and exhibiting a consistent upward trajectory. Conversely, the demand for carbon sequestration from cultivated land declined significantly, with its share of total demand dropping from 2.77 % in 2000 to just 0.49 % in 2020 ( Fig. 7 a-e). Electricity and coal were the predominant contributors to carbon demand from construction land, accounting for 40.98 % to 83.74 % and 13.01 % to 50.52 % of the total, respectively ( Fig. 7 g). From 2000 to 2020, the geographical distribution of carbon sequestration demand in Guizhou Province displayed significant shifts, characterized by a pattern of “low in the surroundings and high in the middle” ( Fig. 8 ). High-demand areas were predominantly in urban centers such as Guiyang City and Zunyi City, whereas regions like Tongren and Qiandongnan Prefecture exhibited lower demand. The overall spatial pattern of carbon sequestration demand expanded outward from these central built-up areas. During this period, the spatial distribution of carbon sequestration demand in Guizhou Province maintained a consistent “northeast-southwest” orientation, with notable changes in the axis lengths of the standard deviation ellipse ( Table 6 , Fig. 9 ). The long axis extended from 178.01 km in 2000 to 180.30 km in 2020, indicating a centripetal accumulation of demand in the northwest. Simultaneously, the short axis expanded from 111.32 km to 114.12 km, reflecting a divergence in the “southeast-northwest” direction. Despite these shifts, the gravity center of carbon sequestration demand remained in Guiyang City, underscoring its role as the core of carbon emissions in the province. However, there was a noticeable shift of demand towards the northeast, suggesting a higher growth rate in northeastern Guizhou compared to the average.
3.3 Spatial-temporal changes in carbon sequestration supply and demand from land use In Guizhou Province, the carbon sequestration supply consistently exceeded demand. The average values of the CSSDR for the years 2000, 2005, 2010, 2015, and 2020 were 0.53, 0.49, 0.56, 0.55, and 0.44, respectively, indicating a fluctuating but generally decreasing trend and notable changes in supply relative to demand. A clear spatial imbalance was observed: areas showing a supply–demand deficit were primarily built-up regions with intense anthropogenic activities, whereas regions with a CSSDR surplus were typically areas with high vegetation cover, such as the eastern and northern parts of Guizhou ( Fig. 10 ). Moran's I for the CSSDR from 2000 to 2020 was greater than 0, indicating a positive spatial correlation between carbon sequestration supply and demand, and passed the significance test at the 95 % confidence level (p < 0.05), showing significant aggregation patterns across the province ( Fig. 11 ). Spatial clusters with high-high aggregation were relatively concentrated, mainly in the east and north of Guizhou, particularly in Qiandongnan Prefecture, which benefits from extensive forest cover that maintains a consistently high carbon supply, although with a declining trend. The low-low aggregation clusters were more dispersed, predominantly in the western part of the province, where continued ecological initiatives such as natural forest protection have influenced this trend. Notably, despite annual increases in carbon emissions, built-up areas did not form a pattern of low-low spatial aggregation, indicating that carbon sequestration supply and demand have remained broadly balanced amidst economic development. 4 Discussion 4.1 Characteristics of the spatial and temporal evolution of land use carbon sequestration supply Human intervention in and utilization of natural ecosystems directly impact the structure and pattern of land use, reflecting the extent of human disturbance and its effects on ecosystem carbon sequestration functions and services ( Hasan et al., 2020; Wang et al., 2022 ). Adjustments to land use structures have proven effective in enhancing carbon stocks ( Yu and Wu, 2011 ). The spatiotemporal dynamics of carbon sequestration supply in Guizhou Province were driven by a combination of human interventions and natural mechanisms, as evidenced by the fluctuating yet overall upward trend in total carbon stock ( Fig. 4 h). The initial decline (a loss of 977 million t C from 2000 to 2005) coincided with rapid urbanization, which reduced forestland and cultivated land ( Fig. 4 a), land types that collectively account for 94.04 % of the total carbon sequestration capacity. This loss reflects the direct impact of land use conversion on ecosystem services, particularly through the fragmentation of high-carbon-density ecosystems ( Gao and Wang, 2019 ). However, the post-2005 recovery was driven by ecological restoration policies, including programs for returning cultivated land to forestland and grassland, natural forest protection, and protection projects in the middle and upper reaches of the Yangtze River ( Qiu et al., 2021 ). These programs increased forestland area by 2871 km 2 and enhanced soil carbon storage via reduced erosion ( Deng et al., 2021b ), not only reversing deforestation trends but also amplifying forestland's carbon density advantage ( Fig. 4 b-g), where biomass-driven sequestration remained dominant ( Chuai et al., 2013 ).
Forestland and cultivated land exhibited the strongest carbon sequestration capacities in Guizhou Province ( Fig. 4 ), consistent with prior studies ( Xu et al., 2016; Yang et al., 2024 ). Forestland expanded its carbon supply through preservation and growth, particularly in karst-dominated regions like the Dalou Mountains, where steep slopes and natural forests enhance biomass accumulation ( Wang et al., 2022; Zhang et al., 2015 ). Cultivated land, however, showed fluctuations: declines in 2000–2005 were driven by urban encroachment, while subsequent gains (2005–2015) stemmed from intensive farming practices. The reductions in 2015–2020, in turn, highlighted the unsustainable balance between land loss and carbon sink efficiency ( Fig. 4 d). Unused land and construction land contributed minimally due to low vegetation cover and urban carbon density ( Wang et al., 2022 ). Topography and land-use interactions further shaped these patterns. Karst landscapes, prevalent in Guizhou, enhance carbon storage via high biomass retention ( Guo et al., 2019; Danardono et al., 2019 ). However, steep-slope croplands and degraded woodlands in karst regions are prone to soil erosion, indirectly reducing carbon sequestration ( Plangoen et al., 2013 ). Provincial soil erosion reports (2016–2020) and Green et al. (2019) identified these areas as hotspots, where land use modifications exacerbated carbon loss. Spatially, carbon sequestration supply exhibited pronounced heterogeneity, with high-value clusters concentrated in the eastern and northern regions (e.g., Qiandongnan) ( Fig. 5 ). These areas benefited from dense vegetation cover and karst topography, where high rainfall and complex terrain sustained elevated carbon densities ( Zhou et al., 2022 ). In contrast, western Guizhou and urban cores (e.g., Guiyang) showed lower values due to anthropogenic pressures. The stability of the carbon sequestration gravity center within Guiyang City ( Table 5 , Fig. 6 ) reflects the resilience of central forest reserves, while its slight northwestward shift indicates faster carbon stock growth in the northwestern regions, likely linked to targeted afforestation in karst degradation zones ( Wang et al., 2022 ). Spatial correlations further clarified these patterns. The positive global Moran's I (p < 0.05) ( Fig. 11 ) confirmed clustering of high-high carbon supply areas in eastern Guizhou, driven by synergistic effects of policy-driven forest expansion and karst-enhanced carbon storage ( Guo et al., 2019 ). Low-low clusters in the west correlated with steep-slope croplands and soil erosion hotspots (Soil and Water Conservation Bulletin of Guizhou, 2016–2020), where land degradation offset vegetation gains. Notably, despite rising carbon demand in urban centers ( Fig. 7 f), the absence of low-low clusters in built-up areas suggests localized balancing of emissions through green infrastructure, aligning with China's ecological civilization goals ( Qiu et al., 2021 ). Guizhou's carbon sequestration trajectory reflects a policy-nature feedback loop: while human-driven land use changes initially degraded carbon stocks, targeted restoration leveraged the province's karst biome advantages to achieve net gains. Spatial autocorrelation analysis links these mechanisms to geographical outcomes, emphasizing the need for region-specific strategies in heterogeneous landscapes.
4.2 Analysis of spatio-temporal evolution characteristics of carbon demand from land use Changes in LULC, driven by both natural phenomena and human activities, significantly affect the demand for ecosystem carbon sequestration. The escalating carbon demand in Guizhou Province (an increase of 407 million t C) ( Fig. 7 f) was mechanistically driven by urbanization-induced land use transitions and energy consumption patterns. Rapid urban expansion directly amplified carbon emissions through fossil fuel reliance (electricity: 40.98–83.74 %, coal: 13.01–50.52 %) ( Fig. 7 g) and reduced carbon sinks via vegetation loss ( Deng et al., 2021b ). This aligns with the dominance of construction land in carbon demand ( Fig. 7 a–e), where urban sprawl fragmented high-sequestration ecosystems, accelerating soil organic carbon decomposition ( Cai and Li, 2024 ). Concurrently, intensive agricultural practices offset the declining cultivated land area, sustaining per-unit emissions despite the overall reduction in cultivated land: a paradox highlighting the limits of land-intensive carbon management ( Li et al., 2023 ). Spatially, carbon demand exhibited a “high-central, low-peripheral” pattern ( Fig. 8 ), concentrated in urban hubs like Guiyang and Zunyi. The SDE revealed a northeast-southwest orientation ( Table 6 ), reflecting centripetal demand accumulation in northwestern Guizhou. This spatial expansion correlates with infrastructure development under the 10th Five-Year Plan, which prioritized energy projects (e.g., coal-fired power plants) in resource-rich northeastern zones ( Jiao et al., 2023 ). The persistent gravity center within Guiyang ( Fig. 9 ) underscores its role as an emissions hotspot, while its northeastward shift signals faster demand growth in less regulated areas, contrasting with neighboring provinces like Sichuan and Yunnan that leverage hydropower for lower emissions ( Zhu et al., 2024 ). Spatial correlations further contextualize these trends. The absence of low-low clusters in urban cores ( Fig. 11 ) suggests localized mitigation through green infrastructure, yet high-high clusters in eastern Guizhou correlate with forest conservation policies buffering demand pressures. Conversely, western regions, marked by steep-slope croplands and soil erosion, faced compounded carbon losses from land degradation and limited policy intervention. Guizhou's carbon demand trajectory reflects a dual-force dynamic: urban-industrial growth drives emissions, while regional energy policies and topography modulate spatial inequities. Targeted strategies, such as reducing coal dependency in northeastern hotspots and scaling up erosion control in the west, are critical to balancing development with carbon neutrality goals. 4.3 Carbon sequestration supply and demand dynamics in Guizhou province The pursuit of carbon neutrality in the face of global climate change makes it essential to accurately quantify the balance between carbon sequestration supply and demand ( Wang et al., 2022 ). The spatiotemporal mismatch between carbon sequestration supply and demand in Guizhou Province arises from divergent LULC dynamics and energy consumption patterns ( Fig. 10 ). Forestland and cultivated land dominate carbon sinks ( Fig. 4 a), while construction land expansion and coal-dominated energy systems drive demand growth ( Fig. 7 g). After 2005, ecological restoration policies reversed carbon losses by expanding forest area and enhancing soil carbon storage ( Deng et al., 2021b ).
However, aging forests in eastern regions like Qiandongnan risk transitioning from sinks to sources due to declining photosynthetic activity ( Zhu et al., 2024 ), revealing vulnerabilities in natural carbon retention under prolonged stress. High CSSDR clusters in southeastern Guizhou ( Fig. 10 ) correlate with dense vegetation and karst topography, where abundant rainfall and limited human activity sustain high carbon densities. In contrast, urban cores like Guiyang and Zunyi exhibit deficits due to construction land expansion and fossil fuel reliance ( Fig. 8 ). The standard deviational ellipse analysis further reveals a northeast-southwest orientation of demand ( Table 6 ), expanding towards resource-rich northeastern zones under energy infrastructure projects ( Jiao et al., 2023 ). Moran's I results ( Fig. 11 ) confirm spatial aggregation of high-high supply–demand clusters in policy-protected eastern regions, while low-low clusters in the west align with soil erosion hotspots (Soil and Water Conservation Bulletin of Guizhou, 2016–2020), emphasizing topography-mediated vulnerabilities. The current sequestration supply–demand relationship in Guizhou Province is generally stable and self-sufficient; however, uncertainties loom over the future relationship amidst rapid socio-economic development, growing contradictions between human activity and land, and climate change ( Lu et al., 2023 ). This study indicates a declining trend in the supply–demand ratio of carbon sequestration services in Guizhou Province, signifying that the supply of carbon sequestration is growing more slowly than demand. Socio-economic development and urban expansion have heightened human demand for carbon sequestration, adversely affecting the balance between supply and demand and the sustainable provision of ecosystem services. Without corrective measures, demand is likely to exceed supply in the future, as exemplified by the current deficits in the built-up areas of various cities in Guizhou Province. In response, it is recommended to enhance management for intensive and efficient land use or to implement engineering measures that direct the flow of ecosystem services ( Yu et al., 2021 ), which could mitigate the worsening supply–demand contradiction for carbon sequestration. 4.4 Ways to balance carbon sequestration supply and demand for ecosystem services The carbon sequestration supply–demand dynamics in Guizhou Province highlight a critical tension between ecological preservation and urban expansion. To address expanding deficits in built-up areas such as Guiyang and Zunyi while maintaining regional surpluses in eastern high-vegetation zones like Qiandongnan, ecological and urban governance strategies must be strategically decoupled. Ecological strategies should focus on stabilizing carbon stocks in vulnerable karst landscapes. Targeted afforestation in northwestern Guizhou, including regions like Bijie and Liupanshui, could transform degraded shrublands and eroded slopes into mixed-species forests. This approach capitalizes on high rainfall and karst soil adaptability, building on successes from the Grain-for-Green Program, which expanded forest area by 2871 km 2 between 2000 and 2020 ( Fig. 4 a) ( Deng et al., 2021b ). However, challenges include soil erosion during reforestation on steep slopes, requiring terracing or bioengineering interventions ( Plangoen et al., 2013 ).
Additionally, aging monoculture plantations in eastern Guizhou risk carbon saturation ( Zhu et al., 2024 ), necessitating diversification with native species to ensure long-term carbon retention. Urban governance strategies must prioritize decoupling demand growth from coal dependency. In high-emission hubs like Guiyang, strict land-use zoning could limit construction land expansion, which grew by 175.78 % from 2001 to 2017, while incentivizing renewable energy adoption. Retrofitting coal-fired power plants with carbon capture technologies or subsidizing rooftop solar installations in new developments could reduce emissions from electricity, which accounts for 40.98–83.74 % of construction land demand ( Fig. 7 g). However, economic reliance on coal mining in areas like Liupanshui creates political and social barriers, as transitioning to renewables may disrupt local livelihoods ( Jiao et al., 2023 ). Phased subsidies for green job training and small-scale hydropower investments in river-rich zones such as Qiandongnan could align with regional resource advantages. Integrating Moran's I clusters ( Fig. 11 ) into policy design is essential. High-high supply–demand zones in eastern Guizhou should enforce ecological redlines to prevent urban encroachment, while low-low clusters in the west require erosion control combined with agroforestry to stabilize carbon flows. Fragmented land tenure in rural areas and competing provincial economic targets may hinder centralized implementation ( Lin et al., 2022 ). A pilot carbon credit trading system between deficit and surplus municipalities, supported by real-time GIS monitoring ( Fig. 5 and Fig. 8 ), could foster cross-regional collaboration while addressing equity concerns. Balancing carbon sequestration in Guizhou requires precision governance that tailors ecological restoration to karst biome constraints and urban policies to energy transition realities, while navigating the socio-political complexities inherent in regional development. 4.5 Limitations and prospects This study has limitations that warrant consideration. The InVEST model's reliance on static carbon density values, adjusted using meteorological data ( Table 2 ), may oversimplify the dynamic interactions between vegetation growth, soil carbon turnover, and climate variability, particularly under extreme weather events or abrupt land use changes. The regional carbon density parameters ( Table 1 ) carry uncertainties due to localized variations in karst soil properties and forest biomass. Furthermore, model validation relied on statistical yearbooks and prior studies, lacking independent field measurements (e.g., LiDAR) to verify absolute carbon stock estimates. While the 30 m × 30 m LULC data ( Fig. 3 ) offer reasonable resolution, finer-scale data could better resolve microtopographic effects on carbon storage in Guizhou's heterogeneous karst landscapes. The CSSDR index quantifies supply–demand mismatches but does not link imbalances to ecosystem resilience thresholds. Additionally, although the spatial autocorrelation analysis ( Fig. 11 ) identified clustering patterns, it remains necessary to analyze the driving factors that affect the CSSDR index (such as policy implementation, the natural environment, and changes in land structure) and to rank their contributions. Future research should integrate process-based models with high-resolution remote sensing to simulate dynamic carbon-nutrient feedbacks in karst soils and validate spatial patterns.
Machine learning approaches could rank the impacts of urbanization, policies, and climate on CSSDR trends, while scenario analyses should evaluate strategies such as afforestation in degraded zones ( Fig. 4 a) or renewable energy transitions ( Table 4 ) for closing supply–demand gaps. Addressing these gaps will enhance the mechanistic understanding of driver contributions and inform spatially targeted governance in ecologically fragile, rapidly developing regions. 5 Conclusion This study examined the spatial–temporal dynamics of land use carbon sequestration supply and demand in Guizhou Province from 2000 to 2020, revealing the following key findings: (1) Carbon sequestration supply increased by 465 million t C over 20 years, driven primarily by forestland, which contributed 443 million t C. While supply was concentrated in the southeast, the supply gravity center shifted towards the northwest over the study period. (2) Carbon sequestration demand rose significantly by 408 million t C, displaying a “low in surrounding areas, high in the center” pattern, with central regions like Guiyang City as hotspots and the northeast emerging as a potential future emissions hub. (3) The supply–demand ratios in 2000, 2005, 2010, 2015, and 2020 were 0.53, 0.49, 0.56, 0.55, and 0.44, respectively, indicating an overall surplus. However, urban expansion and coal dependency threaten to destabilize this balance, particularly in built-up zones. In the future, the carbon sequestration capacity of Guizhou Province can be enhanced in three respects: optimizing the land use structure, accelerating the energy transition, and strengthening cross-regional governance. By integrating ecological restoration with targeted urban decarbonization, Guizhou can balance its unique karst biome resilience with socio-economic development, offering a blueprint for fragile yet rapidly developing regions. CRediT authorship contribution statement Yuanhong Luo: Conceptualization, Investigation, Formal analysis, Methodology, Visualization, Writing – review & editing. Yi Zhang: Data curation, Writing – review & editing. Song Ma: Software. Chao Hou: Software. Limin Zhang: Supervision, Writing – review & editing, Funding acquisition. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work was supported by the Guizhou Provincial Major Scientific and Technological Program (Grant number ZK[2024]important 092 ); and the Doctoral research project of Guizhou Science Academy (Grant number [R (2023)03] )
REFERENCES:
1. ALAM S (2013)
2. ALLAIRE S (2012)
3. BAI X (2023)
4. BURKHARD B (2014)
5. CAI Y (2024)
6. CAO S (2018)
7. CHEN Y (2022)
8. CHUAI X (2013)
9. CIAIS P (2011)
10. COSTANZA R (2017)
11. CUI X (2021)
12. DANARDONO H (2019)
13. DEFRIES R (2002)
14. DENG C (2021)
15. DENG C (2021)
16. FENG X (2023)
17. FU J (2022)
18. GAO J (2019)
19. GIARDINA C (2000)
20. GREEN S (2019)
21. GUO C (2019)
22. HASAN S (2020)
23. HE C (2016)
25. HU X (2022)
26. JIANG M (2022)
27. JIAO L (2023)
28. JIAO L (2022)
29. LEWIS S (2013)
30. LI W (2024)
32. LIN T (2022)
33. LIU G (2022)
34. LIU H (2025)
35. LIU J (2021)
36. LIU X (2016)
37. LU H (2023)
38. LUO Y (2022)
39. NIU L (2020)
40. NOBRE C (2016)
41. PECHANEC V (2018)
42. PLANGOEN P (2013)
43. QIU S (2021)
44. REDLIN M (2021)
45. RONG T (2022)
46. SCHULZE E (2020)
48. SMITH P (2020)
49. SUN L (2020)
50. SYRBE R (2017)
51. TORDONI E (2020)
52. WANG S (2014)
53. WANG S (2019)
54. WANG Y (2020)
55. WANG Z (2022)
56. WANG Z (2021)
57. WU W (2021)
58. XU X (2021)
59. XU X (2016)
60. YANG J (2021)
61. YANG S (2024)
62. YU D (2011)
63. YU H (2021)
64. YUAN Y (2023)
65. ZHANG C (2022)
66. ZHANG J (2015)
67. ZHANG L (2018)
68. ZHANG S (2007)
69. ZHANG X (2009)
70. ZHAO M (2019)
71. ZHENG J (2019)
72. ZHOU J (2011)
73. ZHOU X (2022)
74. ZHU M (2024)
|
10.1016_j.heliyon.2023.e16827.txt
|
TITLE: RETRACTED: SqueezeNet for the forecasting of the energy demand using a combined version of the sewing training-based optimization algorithm
AUTHORS:
- Ghadimi, Noradin
- Yasoubi, Elnazossadat
- Akbari, Ehsan
- Sabzalian, Mohammad Hosein
- Alkhazaleh, Hamzah Ali
- Ghadamyari, Mojtaba
ABSTRACT:
This article has been retracted: please see Elsevier policy on article withdrawal (https://www.elsevier.com/about/policies-and-standards/article-withdrawal).
This article has been retracted at the request of the Editor.
Post-publication, an investigation conducted on behalf of the journal by Elsevier's Research Integrity & Publishing Ethics team discovered substantial changes in authorship between the original submission and the revised version of this paper. During revision, Ehsan Akbari, Elnazossadat Yasoubi, Hamzah Ali Alkhazaleh, Mohammad Hosein Sabzalian, and Mojtaba Ghadamyari were added to the author list without adequate explanation. The Editor does not have confidence that all of the stated authors of the article qualify for authorship and has therefore lost confidence in the validity/integrity of the article and made the decision to retract.
The authors have not responded to the retraction notice.
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.asjsur.2022.11.029.txt
|
TITLE: Effect of da Vinci robot-assisted versus traditional thoracoscopic bronchial sleeve lobectomy
AUTHORS:
- Jin, Dacheng
- Dai, Qiang
- Han, Songchen
- Wang, Kai
- Bai, Qizhou
- Gou, Yunjiu
ABSTRACT:
Objective
To analyze the short-term effect of Da Vinci robot-assisted thoracoscopic (RATS) bronchial sleeve lobectomy, so as to summarize its safety and effectiveness.
Methods
It was a retrospective single-center study with the inclusion of 22 cases receiving RATS lobectomy and 49 cases of traditional thoracoscopic surgery. Further comparison was performed focusing on the baseline characteristics and perioperative performance of the two groups.
Results
Compared with the traditional thoracoscopic surgery group, the RATS group had advantages in the number of lymph nodes dissected (P = 0.003), a shorter postoperative length of stay in the hospital (P = 0.040), a shorter drainage time (P = 0.022), and a reduced drainage volume (P = 0.001). Moreover, this study found for the first time that the operation time of sleeve lobectomy was shortened by using the Da Vinci robot-assisted surgical system (P = 0.001). The operation cost of the RATS group was higher (96000 ± 9100.782 vs 63000 ± 5102.563 yuan; P < 0.001).
Conclusion
Compared with traditional thoracoscopic bronchial sleeve lobectomy, RATS lobectomy shows the advantages of greater operative flexibility, shorter operation time, faster postoperative recovery, and more lymph nodes dissected. Collectively, RATS bronchial sleeve lobectomy is safe and effective.
BODY:
1 Background At present, surgical treatment is still one of the preferred therapeutic options for early non-small cell lung cancer (NSCLC). Thoracotomy is the common choice of most surgeons for tumors that invade the bronchus and hilar vessels because of their narrow spatial structure and the great difficulty of surgery. Significantly, sleeve lobectomy for the treatment of central NSCLC can not only radically remove the tumor but also maximize the protection of patients' lung function, leading to its extensive application in the clinical setting. 1,2,3 There have been reports on totally video-assisted thoracoscopic bronchial sleeve resection of central-type lung cancers. 4 Its advantages lie in less trauma and less pain than open surgery, a shorter postoperative length of stay in the hospital, etc., while it has the disadvantage of great operative difficulty, especially in key steps such as bronchial anastomosis and reconstruction. 5,6 In view of the clear vision and flexible operation of the Da Vinci robot-assisted surgical system, our treatment center carried out Da Vinci robot-assisted thoracoscopic (RATS) bronchial sleeve resection for non-small-cell lung cancer and compared the data with those of patients who underwent video-assisted thoracoscopic bronchial sleeve lobectomy performed by the same operator in the past. The findings of our study are expected to help evaluate whether robot-assisted surgery can produce more clinical benefits. 7 2 Clinical data and methods 2.1 Subjects of study Our study retrospectively analyzed the patients who underwent sleeve resection of lung cancer in our treatment center from March 2017 to March 2022. Enrolled patients were divided into two groups according to the different surgical methods. Patients without NSCLC, those who underwent R1 resection or lobectomy without sleeve resection, and those lost to follow-up were excluded from this study. The clinical data were reviewed and approved by the Ethics Committee of Gansu Provincial Hospital. 2.2 Data collection Baseline characteristics and perioperative parameters of the enrolled patients were obtained from electronic medical records. All tumors were re-staged according to the Eighth Edition of the TNM Classification of Lung Cancer. All patients were followed up through outpatient review or telephone calls. Overall survival (OS) was defined as the time interval from the date of surgery to the date of death or the last follow-up. In this study, all patients were followed up until the last follow-up date of March 30, 2022. 2.3 Preoperative evaluation All patients were comprehensively evaluated before the operation, including chest X-ray, high-resolution chest computed tomography (CT), abdominal ultrasound, bronchoscopy, pulmonary and cardiac function examination and evaluation, blood gas analysis, and basic laboratory examination. The enrolled patients also received positron emission tomography (PET). Bone imaging and MRI were recommended in case of potential distant metastasis. 2.4 Surgical procedure The enrolled patients were placed in the right or left lateral decubitus position and were anesthetized by double-lumen endotracheal intubation under general anesthesia. Then, after positioning of the incisions, a 12 mm port was placed in the incision at the 8th intercostal space on the posterior axillary line of the affected side and served as the camera port.
The other ports, with a radius of 4 mm, were placed in the incisions at the 5th intercostal space on the anterior axillary line and at the 8th intercostal space on the infrascapular line; both served as instrument ports and were connected to the instrument arms. The incision at the 7th intercostal space on the midaxillary line was 3–4 cm in length. First, the No. 1 and No. 2 robot manipulators were connected with the unipolar electrocoagulation hook and bipolar electrocoagulation forceps, respectively. The assistant then used oval forceps to compress the diaphragm and pull the lower lobe. The affected pulmonary artery and vein were dissociated and divided with a cutting stapler, from the left side of the azygos vein up to the level of the main pulmonary artery window; the lymph nodes in groups 7, 8, and 9 were dissected, as well as the group 6 lymph nodes and, on the right side, the group 2, 3, 4, and 10 lymph nodes. The distal pulmonary veins were exposed by upturning the lung, and the bronchus was fully dissociated after dissection of the group 11 lymph nodes. After the bronchus was divided, part of the bronchial stump was collected and sent for pathological examination, and sleeve anastomosis was performed once the pathological examination confirmed no residual tumor. The end-to-end anastomosis of the bronchus was performed with a 3-0 prolene suture, using a double-ended needle for an end-to-end continuous suture ( Fig. 1 ). Meanwhile, an additional suture was placed for further fixation at the junction of the membranous and cartilaginous parts of the bronchus. Afterwards, an air-leak test was performed under water. A chest tube was placed, with close observation of the drainage volume and a postoperative re-examination by CT scan for every patient. The position of the operating ports and the surgical method of the thoracoscopic surgery group were the same as those of the RATS group. 2.5 Statistical analysis Categorical variables were expressed as frequency and percentage and were analyzed by the Pearson χ 2 test and Fisher's exact test. Normally distributed continuous variables were described as mean ± standard deviation and were compared by Student's t-test. OS was analyzed using the Kaplan–Meier method and the log-rank test, and univariable and multivariable analyses were also conducted. A P-value < 0.05 was considered statistically significant. All analyses were performed with SPSS software version 23.0 (IBM, Armonk, NY, USA).
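As an illustration of the analytical workflow just described, the following sketch shows Python equivalents of the SPSS procedures (t-test, chi-square/Fisher tests, and Kaplan–Meier curves with a log-rank test). The file name, column names, and DataFrame layout are assumptions for demonstration only, not the study's actual dataset.

```python
# Illustrative sketch of the comparisons in Section 2.5 using open-source tools.
# The cohort file and its columns are hypothetical stand-ins for the study data.
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("sleeve_lobectomy_cohort.csv")   # hypothetical file
rats, vats = df[df.group == "RATS"], df[df.group == "VATS"]

# Normally distributed continuous variables: Student's t-test.
t_stat, p_time = stats.ttest_ind(rats.operation_time, vats.operation_time)

# Categorical variables: Pearson chi-square (Fisher's exact test for small counts).
table = pd.crosstab(df.group, df.complication)
chi2, p_comp, _, _ = stats.chi2_contingency(table)

# Overall survival: Kaplan-Meier estimate and log-rank comparison of the groups.
km = KaplanMeierFitter()
km.fit(rats.os_months, event_observed=rats.death)
lr = logrank_test(rats.os_months, vats.os_months,
                  event_observed_A=rats.death, event_observed_B=vats.death)
print(p_time, p_comp, lr.p_value)
```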
3 Results 3.1 Subjects of study From March 2019 to March 2022, a total of 110 consecutive patients underwent bronchial sleeve lobectomy in Gansu Provincial People's Hospital. Among them, 5 patients underwent R1 resection (all in the thoracoscopic surgery group), 16 patients did not have NSCLC, and 18 patients were lost to follow-up. Finally, there were 22 cases in the RATS group and 49 cases in the thoracoscopic surgery group. There was no significant difference in the baseline characteristics between the two groups ( Table 1 ). 3.2 Perioperative results No case in either group was converted to thoracotomy. The operation time was 194.86 ± 25.9 min in the RATS group and 214.98 ± 20.63 min in the thoracoscopic surgery group, a statistically significant difference (P = 0.001). Notably, docking time was not included for the RATS group. Meanwhile, the RATS group had advantages in the number of lymph nodes dissected (12.91 ± 1.26 vs 11.69 ± 1.91; P = 0.003), a shorter postoperative length of stay in the hospital (5.73 ± 0.827 vs 6.20 ± 0.912; P = 0.040), and a smaller volume of drainage (1409.9 ± 336.522 vs 1726.53 ± 385.56; P = 0.001), yet without statistical significance in blood loss (163.64 ± 50.67 vs 180.00 ± 67.18 ml; P = 0.312). The operation cost of the RATS group was higher (96000 ± 9100.782 vs 63000 ± 5102.563 yuan; P < 0.001). In addition, no statistically significant difference was observed in complications between the two groups ( Table 2 ). There was no significant difference between the two groups in 3-year OS ( Fig. 2 ). 4 Discussion Regarding the rationale for selecting bronchial sleeve resection to treat central NSCLC, the key consideration is that, for patients with poor lung function, total pneumonectomy will undoubtedly lead to a significant postoperative decrease in lung function and consequently a sharp reduction in quality of life. Significantly, bronchial sleeve resection can preserve as much lung tissue as possible, thereby maximally preserving lung function and playing a positive role in patients' postoperative recovery. 8,9 Indeed, open surgery provides a wider field of vision and facilitates more flexible operation, but it may result in obvious surgical trauma, poor postoperative recovery, and so on. 10 Furthermore, totally video-assisted thoracoscopic bronchial sleeve resection has advantages such as less trauma, less pain, and good short-term postoperative quality of life. Nevertheless, this procedure requires bronchial anastomosis and reconstruction under endoscopy, which greatly increases the difficulty of surgery, especially when suturing the membranous part of the bronchus, and in turn places extremely high technical demands on surgeons. 11 The emergence of the Da Vinci robot-assisted surgical system provides an opportunity to address this difficult surgery. The visual system of the Da Vinci robot is realized by three-dimensional imaging technology. The handling of shadow and smoke can provide surgeons with a clear picture of the anatomic structures, and the field of vision can be enlarged 10–15 times. This helps to reduce the visual fatigue of surgeons during the operation and can avoid operational failure caused by confusion of the anatomic structures. Moreover, its flexible robot manipulators can complete precise sutures and knotting in a narrow anatomical space, reducing the difficulty of surgery. 12,13 Currently, reports of RATS bronchial sleeve resection are rare. The first robot-assisted bronchial sleeve resection was not performed in lung cancer patients; rather, Ishikawa et al completed bronchial sleeve resection, anastomosis, and reconstruction on cadavers in 2006. The first true totally RATS sleeve lobectomy was reported by Nakamura et al 14 in 2013. Furthermore, Schmid et al 15 completed sleeve lobectomy of the upper lobe of the right lung by combining the traditional thoracoscopic procedure and the robot-assisted system, starting with hilar lymph node dissection and vascular dissociation through VATS and then performing bronchial sleeve anastomosis with the robot-assisted system.
Meanwhile, Zhao et al. 16 documented quite prominent advantages of robot-assisted sleeve suturing with the application of three robot manipulators in a case of bronchial sleeve resection of the lower lobe of the left lung. Jo et al. 17 used four robot manipulators to complete sleeve resection, believing that this approach could effectively reduce the operation time otherwise extended by the replacement of instruments by assistants; this was inspired by the four-manipulator lobectomy reported by Cerfolio et al. 18 However, unlike the latter report, Jo et al. 19 created an auxiliary incision (3.5 cm) in the 8th intercostal space, forming a triangle with the camera port and the hole of the No. 1 robot manipulator rather than placing them on the same horizontal level. This is consistent with our selection of operating holes. In our opinion, the triangular arrangement of the operating holes achieves optimal cooperation among the manipulators. 18 Central-type lung cancers can invade not only the bronchi but also the pulmonary arteries, which calls for combined bronchoplasty and pulmonary angioplasty, also known as double sleeve lobectomy. Even with traditional thoracotomy, double sleeve lobectomy is a complex operation. There have been increasingly more reports of this surgery performed by VATS in recent decades, yet few reports of its performance by RATS. The method of pulmonary artery blockage is similar to that in thoracoscopic double sleeve lobectomy. For instance, He et al. 10 reported making an incision in the 2nd intercostal space of the anterior chest wall, whereas in our operation the incision was made in a lower and more posterior position, in the 3rd intercostal space near the midaxillary line, which could avoid interference with the No. 2 robot manipulator. 20 During lymph node dissection, it is recommended to thoroughly dissect the mediastinal lymph nodes first, especially the 4th and 7th groups, so as to achieve good exposure of the principal bronchus. Moreover, the difficulty and duration of the operation depend primarily on the anatomy and exposure of the hilar structures; the operation is difficult when the hilar and peribronchial lymph nodes show invasion or adhesion. Therefore, in our experience, dissecting the mediastinal lymph nodes in advance also promotes the smooth completion of surgery. Bronchopleural fistula is one of the serious complications of sleeve resection. To prevent this complication, it is necessary to ensure a smooth incisal margin of the bronchus and moderate anastomotic tension. In our operation we used an end-to-end, full-thickness continuous suture with 3-0 Prolene, with a needle spacing of 2–3 mm and an edge distance of 3–5 mm, which ensured exact end-to-end alignment of the airway and preserved sufficient local blood supply. At the same time, completing a continuous suture under the endoscope is difficult; in our study, this problem was addressed by the flexible operation of the robot manipulators, which also saved suturing time. In addition, the remaining lung, including the lower pulmonary ligament and the hilar or subcarinal attachments, can be released to reduce the tension on the anastomosis. Our results show that the RATS group had an advantage in the number of lymph nodes dissected while producing a smaller postoperative drainage volume, which differs from the conclusions of Tao et al.
They concluded that the RATS group had a higher number of dissected lymph nodes than the thoracoscopic group but a significantly increased postoperative drainage volume. We analyzed the possible causes: our surgical technique has matured, there was little intraoperative bleeding, and the patients did not have other conditions such as diabetes or hypoproteinemia. Therefore, although the number of lymph nodes dissected increased, the drainage volume did not. A further reason for the difference may be that the number of samples included in our study was small and that this was not a randomized controlled study, so the results may deviate somewhat; we will include more samples in subsequent studies or carry out relevant randomized controlled studies. 21 To sum up, RATS bronchial sleeve resection and anastomosis can be a safe and effective surgical option, which reduces, to a certain extent, the operational difficulties caused by the small operative space. At the same time, its application marks the development of minimally invasive surgery in a more humanized and modern direction. Multiple large-scale studies are expected to uncover and verify additional advantages of the Da Vinci robot in sleeve resection for lung cancer. 5 Limitations First, an inevitable limitation of this study is selection bias, since it was designed as a single-center retrospective study. Meanwhile, the main confounding factors in this study were surgical indications and surgeons: differences in surgical expertise and the influence of the learning curve may also introduce heterogeneity in surgeons' intraoperative performance. Therefore, multicenter randomized controlled trials are expected to confirm the findings of this study. In addition, our study was based on a relatively short period of follow-up. Thus, more patients undergoing robot-assisted surgery are planned to be included in later work to support our conclusions. 6 Conclusions Robot-assisted bronchial sleeve lobectomy was shown to have better perioperative outcomes, with no significant difference in overall survival. We look forward to the publication of higher-quality randomized controlled studies. Funding This work was supported by the Gansu Clinical Medical Research Center ( 21JR7RA673 ); National Health Commission Key Laboratory of Diagnosis and Treatment of Gastrointestinal Tumors ( NLDTG2020023 ); Gansu Health Industry Scientific Research Management Project ( GSWSKY2020-50 ); and the Natural Science Foundation of Gansu Province ( 20JR10RA388 ). Author contributions All authors searched the literature, designed the study, interpreted the findings and revised the manuscript. Yunjiu Gou, Dacheng Jin, and Qiang Dai carried out data management and statistical analysis and drafted the manuscript. Songchen Han, Kai Wang, and Qizhou Bai conducted patients' follow-up. Declaration of competing interest The authors have no conflicts of interest to declare.
REFERENCES:
1. ROSSI D (2017)
2. LIU L (2014)
3. HUANG J (2015)
4. ROBERT J (2006)
5. YU S (2016)
6. LIU L (2014)
7. KAUR M (2018)
8. ETTINGER D (2006)
9. LOCOCO F (2012)
10. TAPIAS L (2015)
11. LYSCOV A (2016)
12. GERACI T (2021)
13. DIANA M (2015)
14. ISHIKAWA N (2006)
15. NAKAMURA H (2013)
16. SCHMID T (2011)
17. ZHAO Y (2016)
18. JO M (2017)
19. CERFOLIO R (2011)
20. HE H (2018)
21. SHAOLIN T (2021)
10.1016_j.dcan.2024.12.005.txt
TITLE: A lightweight dual authentication scheme for V2V communication in 6G-based VANETs
AUTHORS:
- Feng, Xia
- Wang, Yaru
- Cui, Kaiping
- Wang, Liangmin
ABSTRACT:
The advancement of 6G wireless communication technology has facilitated the integration of Vehicular Ad-hoc Networks (VANETs). However, the messages transmitted over the public channel in open and dynamic VANETs are vulnerable to malicious attacks. Although numerous researchers have proposed authentication schemes to enhance the security of Vehicle-to-Vehicle (V2V) communication, most existing methodologies face two significant challenges: (1) the majority of the schemes are not lightweight enough to support real-time message interaction among vehicles; (2) sensitive information such as identity and position is at risk of being compromised. To tackle these issues, we propose a lightweight dual authentication protocol for V2V communication based on the Physical Unclonable Function (PUF). The proposed scheme accomplishes dual authentication between vehicles through the combination of Zero-Knowledge Proof (ZKP) and the MASK function. The security analysis proves that our scheme provides both anonymous authentication and information unlinkability. Additionally, the performance analysis demonstrates that the computation overhead of our scheme is reduced by approximately 23.4% compared to the state-of-the-art schemes. The practical simulation conducted in a 6G network environment demonstrates the feasibility of 6G-based VANETs and their potential for future advancements.
BODY:
1 Introduction Vehicular Ad-hoc Networks (VANETs) have emerged as a pivotal technology for enhancing the efficiency and safety of the intelligent transport systems [1] . Specifically, VANETs connect vehicles, Roadside Units (RSUs), and other traffic infrastructures through the wireless communication technology, establishing a dynamic network for the real time interaction [2] . The emergence of 6G wireless communication technology enables ultra-high speed and low latency communication, which can effectively enhance the functionality of VANETs and meet the real-time interaction requirements [3] . As a standout aspect of 6G wireless communication technology, the Ultra-Reliable Low-Latency Communication (URLLC+) is well-suited for facilitating real-time V2V communication [4] . In addition, the 6G-based VANETs are built with an improved architecture, which enables communicating not only on the ground, but also in space, ocean, and underwater [5] . The deployment of 6G wireless communication technology holds the potential to enhance intelligence, security, and efficiency, rendering the exploration of the 6G-based VANETs a topic deserving attention in research. The 6G-based VANETs are undergoing extensive research from both academic and commercial viewpoints. Fig. 1 illustrates the participants engaged in the 6G-based VANETs and the scenarios that necessitate communication. The vehicles are connected to the RSUs and other vehicles via the wireless connection, whereas the RSUs are connected to the Trust Authority (TA) and other RSUs via the wired connection. In the 6G-based VANETs, the RSUs act as 6G base stations, providing the 6G communication environment. In the event of a rockfall on the roadway, the vehicle detecting the hazard will transmit a warning message to other vehicles within the VANETs, allowing them to promptly adjust their routes to avoid the obstruction. However, the messages are transmitted over a public channel, which makes the messages vulnerable to interception and manipulation by malicious adversaries. Many security issues are associated with the V2V communication within the VANETs [6] , including the risk of leaking sensitive information and susceptibility to various malicious attacks. Adversaries may execute diverse attacks to disrupt communication between vehicles and expose private information, such as impersonation attacks, modification attacks, message link attacks, etc. Furthermore, it is imperative to safeguard the identity of the vehicle from being inadvertently revealed during message transmission. Hence, it is crucial to implement a privacy-preserving authentication scheme for V2V communication. Researchers have proposed various identity verification protocols to enhance security in VANETs. The majority of current protocols rely on a centralized Public Key Infrastructure (PKI) and TA to issue certificates for validating vehicle identities [7] . In the registration phase, the TA distributes public and private keys, as well as certificates, to individual vehicles and stores these messages in its memory. Nevertheless, this solution lacks adequate flexibility and incurs significant computation and communication overheads. Tzeng et al. [8] introduced an advanced identity-based batch authentication approach aimed at improving the efficiency of the protocol. 
Although the batch authentication process necessitates only a constant number of bilinear pairing and elliptic curve point multiplication operations, it still results in high communication and computation overhead, along with significant memory costs. Tiwari et al. [9] proposed an identity-based authentication mechanism utilizing the elliptic curve digital signature algorithm for VANETs. This signature scheme enables anonymous authentication of vehicles and identification of malicious vehicles. However, the ID-based scheme is susceptible to replay attacks and lacks non-repudiability. The two main problems in the current schemes can be summarized as follows: (1) the existing schemes are not lightweight enough to support real-time message interaction among vehicles within the dynamic network; (2) sensitive information, such as identity and position, is at risk of being compromised when the vehicles frequently transmit messages. The existing approaches consistently face trade-offs between safety and efficiency, highlighting the pressing necessity of developing a scheme to address these prevailing issues. In response to these problems, we propose a lightweight dual authentication scheme for V2V communication based on PUF, with the combination of Zero-Knowledge Proof (ZKP) and the MASK function. Identity verification protocols based on the Physical Unclonable Function (PUF) have attracted widespread attention from a large number of scholars; PUF-based protocols are effective in dealing with physical security threats by preventing adversaries from gaining unauthorized access to on-board memory. Our scheme effectively strikes a balance between safety and efficiency, addressing the critical concerns associated with both aspects. The manifold contributions of the proposed scheme are highlighted below:
• The proposed scheme exploits the MASK function to guarantee that the authentication mechanism remains lightweight while meeting the stringent real-time interaction requirements of V2V communication, owing to the minimal computation time required by the MASK function.
• The proposed scheme enables bidirectional authentication by incorporating the ZKP authentication method. This scheme can offer resilience against malicious attacks, thereby safeguarding the confidentiality and unlinkability of sensitive information. In addition, the PUF-based scheme can withstand physical attacks and provide layered protection for On Board Units (OBUs). The signature generation mechanisms ensure the integrity and accuracy of the messages transferred during identity verification.
• The performance analysis proves that the computation overhead and communication overhead of our scheme are dramatically reduced compared to the state-of-the-art schemes. Moreover, simulation experiments indicate that the computation overhead of the proposed scheme is reduced by approximately 23.4% compared to the other schemes. The practical simulation conducted in a 6G network environment demonstrates the feasibility of 6G-based VANETs and their potential for future advancements.
Our paper is organized as follows. The related works are presented in Section 2. The preliminaries are illustrated in Section 3. Section 4 introduces the proposed scheme. Section 5 presents the verifiable security analysis of our scheme. The performance analysis is provided in Section 6, and Section 7 describes the simulation experiment of the proposed scheme. Finally, the paper is concluded in Section 8.
2 Related work VANETs are an expansion and enhancement of the Internet of Things (IoT) in the transportation system, and they have been widely used to accelerate communication among vehicles and other entities [10]. The vehicles in VANETs are able to establish communication with other vehicles, RSUs, and cloud servers, facilitating seamless message sharing among all the interconnected vehicles. However, messages transmitted directly on the public channel are susceptible to tampering and deletion, and false messages can disrupt vehicles. Therefore, authentication plays a crucial role as the primary defense against malicious attacks, ensuring the security of identity verification, information integrity, and privacy. Numerous researchers have devoted their efforts to authentication schemes and have achieved notable outcomes. In this section, we introduce the traditional authentication schemes and the PUF-based authentication schemes, respectively. 2.1 Traditional authentication schemes A large number of scholars have presented authentication schemes that implement attribute-based signatures, blockchain technology, and cryptographic functions to enhance the security of V2V communication. While authentication schemes incorporating encryption methods aim to address security issues during the communication process, they may incur significant communication overhead; on the other hand, lightweight authentication schemes often cannot meet the required security standards. Therefore, the main issue for current authentication schemes is to strike a balance between security and efficiency. Liu et al. [11] proposed a dual authentication and key negotiation approach, which can potentially enhance the security of the system without the need for additional key management. However, the scheme exhibits notable weaknesses, including the requirement for TA involvement during the identity verification phase and the relatively high computation overhead stemming from the utilization of bilinear pairing. Imghoure et al. [12] proposed an efficient Conditional Privacy-Preserving Authentication (CPPA) scheme that operates without the need for certificates and resolves the custody issue in the authentication process. Additionally, the scheme supports multi-signature and batch verification. Liu et al. [13] presented a scheme, abbreviated PTAP, that uses semi-honest RSUs to achieve interaction between vehicles without the help of a trusted third party, allowing vehicles to enjoy remote services with identity anonymity protection and location privacy. Vasudev et al. [14] presented a cryptographic method for key exchange and dual authentication. Their protocol employs only one-way hash functions and bitwise Exclusive-OR (XOR) functions to minimize the computation overhead and communication overhead compared to the scheme in [11]. While this method does not require bilinear pairing, it still necessitates the involvement of the TA during the identity verification phase. Liu et al. [15] suggested an authentication solution leveraging the Elliptic Curve Discrete Logarithm Problem (ECDLP) and blockchain technology. Vehicles can generate a dynamic pseudonym key by using a Tamper-Proof Device (TPD) for offline anonymous authentication, which, however, introduces a dependency on the TPD. Javaid et al. [16] designed a blockchain-based scheme that uses smart contracts, authentication, and consensus algorithms.
The blockchain combined with smart contracts could offer a robust security architecture for enrolling legitimate vehicles and disrupting malicious ones, and each vehicle obtained a certification issued by the RSU for authentication. However, the management of the certification imposed a storage burden on both vehicles and RSUs. Rasheed et al. [17] proposed an enhanced certificate-less aggregated signature encryption method, which can solve the public key substitution attack and the key escrow problem. However, the generation and transmission of pseudo-identities between vehicles and RSUs are time-consuming, which hinders support for real-time communication in VANETs. Feng et al. [18] introduced a batch authentication approach in bilinear groups. The vehicles sent the traffic-relevant messages and their blinded certificates to the surrounding RSUs, and the RSUs then checked the validity of the messages utilizing non-interactive ZKP. Liu et al. [19] presented a reputation-updating method for cloud-supported VANETs exploiting the Elliptic Curve Cryptography (ECC) and Paillier algorithms. This proposed scheme provides robust privacy protection and reputation administration with tolerable computation overhead and communication overhead. 2.2 PUF-based authentication schemes In addition to the traditional encryption-based authentication methods, various other approaches are employed in authentication protocols. The concept of PUF was originally introduced in the security field of digital integrated circuits. With the development of this technology, it has gradually attracted attention in the fields of IoT and VANETs [20]. PUFs are expected to become key hardware primitives that can supply distinct identities to potentially tens of thousands of associated devices in the IoT, and can also help authentication systems get rid of dependence on TPD devices. Several schemes have utilized PUF for authentication in IoT scenarios. Chatterjee et al. [21] developed an authentication and key exchange scheme by integrating Identity Encryption (IDE) and PUFs. This proposed scheme addressed the outstanding concern of how to design a scheme that eliminates the need for storing the private Challenge-Response Pairs (CRPs) on the verifier end. Gope et al. [22] developed a lightweight practical anonymous authentication mechanism employing one-time PUF for the IoT, which is resistant to modeling and machine learning attacks. Since the PUF behaves differently in each session, it is difficult for adversaries to speculate on past or future Challenge-Response Pairs. In order to implement lightweight, straightforward Peer-to-Peer (P2P) two-way authentication and key exchange between peers in the IoT, Zheng et al. [23] suggested a dual authentication and key exchange method utilizing PUFs. It enables two endpoint devices with limited resources to directly verify each other's identity, eliminating the requirement of storing the CRPs locally, while simultaneously establishing a session key for secure data exchange. In the VANETs scenario, researchers are considering how to use unique hardware identifiers generated by PUF to ensure security during Vehicle-to-Everything (V2X) communication. Jiang et al. [24] introduced the PUF as a physical fingerprint generator that provides identity authentication between two parties for the VANETs system.
These researchers utilized the combination of a password and PUF as the cryptographic primitives to prevent the vehicle from being taken over by anyone other than the legitimate owner, who can still authenticate successfully. Furthermore, this method can be used to protect user privacy and resist desynchronization attacks. Based on this study, Jiang et al. [25] proposed a methodology combining user bioinformation with vehicular PUF responses to improve the precision of characteristic inspection and identification. In addition, this scheme can withstand the noise of PUF responses by employing fuzzy commitment. Aman et al. [26] presented a PUF-based authentication mechanism for VANETs, which consists of a three-layered structural framework including the RSU, RSU Gateway (RG), and the Trusted Authority (TA). This scheme only employed lightweight hash functions and symmetric encryption functions, resulting in minimal computation overhead. Nevertheless, the entire authentication process necessitated the continuous online presence of the TA, which could potentially lead to a single point of failure. Xie et al. [27] introduced a batch verification mechanism that employs ECC for V2I and V2V identity authentication. Vehicles and RSUs can mutually authenticate and securely exchange messages without necessarily requiring the involvement of the TA. When multiple vehicles need to send messages and authenticate simultaneously, batch authentication is enabled for efficient processing. In addition, the protocol utilized PUF and biometric keys to avoid RSU acquisition attacks and OBU penetration attacks, and demonstrated efficient performance through the batch authentication approach and the low computation overhead of PUFs. The previous discussion outlined a range of schemes proposed to address the security challenges currently impeding the development of VANETs. Despite these efforts, many of these authentication schemes have proven insufficient in terms of both efficiency and stability. The complexity of balancing security with the need for rapid, real-time communication, coupled with the resource constraints of vehicular networks, further exacerbates this issue. The proposed scheme creates a PUF-based authentication mechanism with the help of ZKP and the MASK function, which provides lightweight authentication while ensuring privacy protection at the same time. The MASK function is lightweight enough to support real-time communication between vehicles, thereby enhancing the feasibility of deploying this scheme in practice. In addition, we validate the feasibility of this scheme through simulation experiments in Section 7. 3 Preliminaries This section introduces the preliminary elements of the proposed protocol, encompassing the physically unclonable function, the design of the PRNG, the MASK function, and zero-knowledge proof. Additionally, this section introduces the adversary model and design goals of the proposed scheme. 3.1 Physically unclonable function A Physically Unclonable Function (PUF) is a physically implemented electrical circuit with uniformity and stochasticity, whose output depends on the fabrication process and chip characteristics. The unique correspondence between challenge and response is generated through parameter deviations in the chip fabrication process [28]. When attackers attempt to analyze the PUF-encrypted chip, the response of the PUF alters, making it impossible to retrieve the encrypted data.
The PUF is regarded as a component of the integrated circuit that accepts a random string, referred to as a challenge ($C_i$), as input and generates a corresponding string output, known as a response ($R_i$), through internal circuitry, which can be expressed as $R_i = \mathrm{PUF}(C_i)$. The proposed scheme adopts the characteristics of PUF hardware safety to develop a physically privacy-preserving protocol that prevents attackers from stealing private information stored in OBUs. In addition, the computation time of a single PUF operation is typically less than 1 nanosecond (ns), rendering it negligible in practical terms. More specifically, PUFs possess unique properties that render them distinct. It is impossible to generate two identical PUFs because of the manufacturing process of PUFs. In the practical production process, the discrepancy between the mapping of inputs and outputs is stationary and impossible to predict. In addition, any attempt to manipulate the OBUs containing the PUF will affect the internal structure of the PUF, resulting in a significant deviation in the output and ultimately causing its invalidation. 3.2 Design of PRNG In the proposed scheme, PRNG serves as the pseudo-random number generator, which is utilized to create the sets of integers used for bit shuffling in the MASK function. We use a PRNG based on a Linear Feedback Shift Register (LFSR) for its low-overhead hardware and almost identical computational performance. The LFSR is a type of shift register used in digital circuits, which is capable of generating a sequence of binary numbers through a linear feedback function. The feedback function is defined by a primitive polynomial $p(x)$ over a finite field; it XORs specific bits within the shift register and inserts the XOR result at the leftmost end of the LFSR. The seed of the PRNG during the MASK phase is $s_0$, and the real input is defined as $I_p = s_0 \| y'$. Afterward, the PRNG function can be expressed as $\mathrm{PRNG}(y) = \mathrm{LFSR}(s_0 \| y')$, where the length of the input binary vector $y$ to the PRNG function is $m$ bits. Thus, the length of $I_p = s_0 \| y'$ is also $m$ bits, and both $s_0$ and $y'$ are $m/2$ bits in length. We can randomly choose $m/2$ bits from $y$ to generate $y'$ during the design phase and use it as part of the seed to the LFSR. It is worth noting that both the TA and the vehicles have to agree on a matching value for $y'$, or else the masked outcomes would be inconsistent.
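As a concrete illustration of the PRNG just described, the following is a minimal Python sketch of an LFSR seeded with $I_p = s_0 \| y'$. The register width and the feedback-tap positions are illustrative assumptions: the paper specifies only that a primitive polynomial $p(x)$ drives the feedback, not a concrete width or tap set.

```python
# A minimal sketch of the LFSR-based PRNG, under assumed width and taps.
def lfsr_prng(s0: int, y_prime: int, count: int, width: int = 16) -> list[int]:
    """Seed the LFSR with I_p = s0 || y', then emit `count` integers."""
    taps = (0, 2, 3, 5)                      # illustrative feedback taps
    state = (s0 << (width // 2)) | y_prime   # concatenate the m/2-bit halves
    out = []
    for _ in range(count):
        for _ in range(width):               # clock the register once per bit
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1       # XOR the tapped bits together
            # Shift right, inserting the feedback bit at the leftmost end.
            state = ((state >> 1) | (fb << (width - 1))) & ((1 << width) - 1)
        out.append(state)
    return out

print(lfsr_prng(s0=0xBE, y_prime=0x5A, count=4))
```

Because the stream is fully determined by the seed, a TA and a vehicle that agree on $s_0$ and $y'$ derive identical integer sets, which is the property the MASK function relies on.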
3.3 MASK function The MASK function is utilized to encrypt sensitive information necessary for authentication. This approach has previously been employed for point-to-point identity authentication in the IoT [29]. Consequently, it is incorporated into the proposed scheme for vehicle-to-vehicle identity authentication within the VANETs. The MASK function includes three parts: 1) Integer Set Generation; 2) Range Adjustment; 3) Bit Shuffling. Integer set generation is accomplished using the PRNG described in the previous subsection: given an n-bit binary vector as input, the PRNG produces a continuous output through its internal linear feedback shift register, resulting in the creation of an integer set. The RANGE function is a linear mapping transformation defined as follows: given an $m$-bit integer $k$ belonging to the integer set $K$, with its value in the interval $[0, 2^m - 1]$, it generates a new integer set $U$ of $n$-bit integers in the interval $[0, 2^n - 1]$, where $n \le m$. The following equation governs the linear interval mapping:

(1) $N_{new} = \dfrac{(N_{old} - N_{oldMin}) \times (N_{newMax} - N_{newMin})}{N_{oldMax} - N_{oldMin}}$

where $N_{old} \in K$ is the input of the RANGE function, $N_{oldMin}$ and $N_{oldMax}$ are the minimum and maximum values in the old interval $[0, 2^m - 1]$, which are $0$ and $2^m - 1$, correspondingly; $N_{newMin}$ and $N_{newMax}$ are the minimum and maximum values in the new interval $[0, 2^l - 1]$, where $N_{newMin}$ is the constant $0$ and $N_{newMax}$ is a variant that can serve as the second input of the RANGE function. By taking these variations into account, the interval-updating formula turns out to be:

(2) $N_{new} = N_{old} \times \dfrac{N_{newMax}}{N_{oldMax}}$

In the proposed scheme, the bit shuffling is essentially a variant of the Durstenfeld version of the Fisher-Yates shuffler [30]. For a given set of $n$ different elements, this operation can generate $n!$ permutations, and all permutations are equally likely. The detailed implementation process of the MASK function is summarized as follows. We first generate an integer set $K$ consisting of $m$ integers through the PRNG as

(3) $K: \{k_1, k_2, \ldots, k_m\} \leftarrow \mathrm{PRNG}(y)$

where $y$ is an $n$-bit binary vector. Then, we calculate through the RANGE function the position of the first bit to be replaced as

(4) $N_{new} \leftarrow \mathrm{RANGE}(k_i, m + 1 - i)$

where $i$ represents the sequence number of each integer in the integer set $K$. Finally, the Durstenfeld shuffler swaps the corresponding bits as

(5) $x_1 \leftarrow \mathrm{SWAP}(x, x_{N_{new}}, x_{m-i+1})$

Following the aforementioned steps, the Durstenfeld shuffler can shuffle the $m$-bit binary vector $x$ in place and produce an output $m$-bit binary vector $x_m$, which can be expressed as $x_m \leftarrow \mathrm{MASK}(x, y)$. In addition, the output $m$-bit binary vector $x_m$ can be converted back into $x$ by executing the UNMASK function, denoted as $x \leftarrow \mathrm{UNMASK}(x_m, y)$. The practical application of the MASK function in our scheme involves utilizing the generated integer set $k_j$ to encrypt the unique response $r_j$ produced by the vehicle, which can be expressed as $M_j = \mathrm{MASK}(r_j, k_j)$. Then, the vehicle $AV_j$ sends the masked message $M_j$ to another vehicle $AV_i$; the vehicle $AV_i$ utilizes the information it has acquired to unmask this message as $\bar{r}_j = \mathrm{UNMASK}(M_j, k_j)$. Finally, the vehicle $AV_i$ sends $\bar{r}_j$ back to $AV_j$, and vehicle $AV_j$ verifies the identity of vehicle $AV_i$ by checking whether the equation $\bar{r}_j = r_j$ holds true.
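The following is a minimal, self-contained Python sketch of the MASK/UNMASK pair under stated assumptions: random.Random stands in for the shared LFSR-based PRNG above (both parties would seed it with the same agreed value), bit vectors are plain lists, and all sizes are toy values.

```python
# A minimal sketch of the MASK/UNMASK bit-shuffling pair (Eqs. (2)-(5)).
import random

def range_map(k: int, new_max: int, old_max: int = 2**16 - 1) -> int:
    """Eq. (2): linearly map k from [0, old_max] into [0, new_max]."""
    return (k * new_max) // old_max

def swap_positions(m: int, seed: int) -> list[int]:
    """Integer set K from the shared PRNG, turned into swap targets (Eq. (4))."""
    k_set = [random.Random(seed + i).getrandbits(16) for i in range(m)]
    return [range_map(k_set[i - 1], m - i) for i in range(1, m + 1)]

def mask(bits: list[int], seed: int) -> list[int]:
    """Durstenfeld/Fisher-Yates shuffle of an m-bit vector (Eq. (5))."""
    m, out = len(bits), bits[:]
    for i, j in enumerate(swap_positions(m, seed), start=1):
        out[j], out[m - i] = out[m - i], out[j]
    return out

def unmask(masked: list[int], seed: int) -> list[int]:
    """Invert MASK by replaying the same swaps in reverse order."""
    m, out = len(masked), masked[:]
    pos = swap_positions(m, seed)
    for i in range(m, 0, -1):
        j = pos[i - 1]
        out[j], out[m - i] = out[m - i], out[j]
    return out

r_j = [1, 0, 1, 1, 0, 0, 1, 0]        # toy 8-bit PUF response
assert unmask(mask(r_j, seed=0x5A), seed=0x5A) == r_j
```

Because each swap is its own inverse, replaying the same swap sequence in reverse order restores the original vector, which is exactly why the receiver only needs the shared seed to unmask.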
3.4 Zero-knowledge proof Zero-Knowledge Proof (ZKP) is an interactive identification protocol allowing a prover $P$ to demonstrate to a verifier $V$ that they possess the necessary information without disclosing any hidden messages. In our protocol, ZKP is utilized to complete the initial step of dual authentication, making sure that vehicle $AV_j$ is legitimate and holds its own unique response $r_j$. Alternatively, as defined in [31], ZKP is a cryptographic protocol that verifies the correctness of a proposition without revealing any information beyond the fact that the proposition is indeed true. ZKP exploits hard mathematical problems to guarantee safety against the disclosure of information within the proof, a property known as 'zero knowledge'. ZKP is an extremely prevalent and broadly adopted concept in cryptosystems, involving both the prover and the verifier. ZKP enables the prover to show that they have the relevant information and credentials with no need to reveal the true information to the verifier. In addition, ZKP is extensively employed in authentication due to the following characteristics:
• Reliability: If the statement is false, the prover cannot deceive the verifier into accepting the false assertion, except for low-probability events.
• Integrity: If the statement is correct, then the verifier can confirm that the statement is correct at any time.
• Zero knowledge: If the statement is true, the verifier learns nothing beyond the fact that it is true. This is proved by a simple simulation argument showing that the statement is correct while the verifier obtains no further results.
Given a group $G$ and a generator $g$, suppose the prover $P$ intends to prove to the verifier $V$ that he knows the hidden information $x = h(password)$. The basic process of implementing zero-knowledge authentication by means of ZKP is as follows:
• The prover $P$: He first performs an encryption operation on $x$ as $y = g^x$. Then, the prover $P$ picks an arbitrary number $v \in Z_q^*$ and computes $t = g^v$. Finally, the prover $P$ computes $c = h(y \| t \| g)$ and $r = v - cx$, then sends the message $\{y, t, r\}$ to the verifier $V$.
• The verifier $V$: Upon receipt of the message, the verifier $V$ calculates $c = h(y \| t \| g)$ and checks whether $t = g^r y^c$:

(6) $g^r y^c = g^{v - cx} (g^x)^c = g^{v - cx} g^{cx} = g^v = t$
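A minimal Python sketch of this Schnorr-style exchange is given below, instantiated in a toy multiplicative group ($p = 23$, $q = 11$, $g = 2$) rather than the elliptic curve group the scheme actually uses; the password and the truncation of the hash into $Z_q$ are likewise illustrative.

```python
# A minimal sketch of the prover/verifier exchange behind Eq. (6).
import hashlib, secrets

p, q, g = 23, 11, 2                      # toy group: g has prime order q mod p

def H(*parts) -> int:
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover P: secret x = h(password), public y = g^x.
x = H("correct horse battery staple")    # illustrative password
y = pow(g, x, p)

v = secrets.randbelow(q - 1) + 1         # ephemeral v in Z_q*
t = pow(g, v, p)
c = H(y, t, g)                           # c = h(y || t || g)
r = (v - c * x) % q                      # response r = v - c*x; send {y, t, r}

# Verifier V: recompute c and check t == g^r * y^c, i.e. Eq. (6).
c_check = H(y, t, g)
assert t == (pow(g, r, p) * pow(y, c_check, p)) % p
```

The check succeeds precisely because the exponents satisfy $r + cx \equiv v \pmod q$, while $r$ alone reveals nothing about $x$ without the ephemeral $v$.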
3.5 Adversarial model We utilize the Canetti and Krawczyk adversary model proposed in [32] (CK-adversary model), which has widely emerged as the present standard model for modeling authenticated key exchange protocols. In the CK-adversary model, adversaries have the ability not only to inject counterfeit messages, but also to tamper with the signatures of messages and session assertions. Based on the assumptions of the CK-adversary model, the properties of an adversary denoted as $A$ are as follows:
• $A$ may be either internal or external. An external adversary could target messages transmitted on the public channel, and an internal adversary could collude with malicious vehicles.
• $A$ is presumed to be able to retrieve messages transmitted on the public channels and may be able to eavesdrop on, resend, intercept, or modify the shared messages.
• $A$ could carry out a physical layer attack, which refers to obtaining sensitive traffic information or controlling the vehicle by directly manipulating the hardware of the vehicle or hacking its software.
• $A$ may potentially collude with malicious vehicles and disrupt regular communications among other vehicles.
3.6 Design goals The proposed scheme is designed to safeguard user privacy and guarantee anonymity in V2V communication through the use of pseudonymous identities, which can only be revealed by the TA. In addition, the proposed scheme is capable of defending against a range of common attacks and offers several benefits, such as ensuring the anonymity and traceability of vehicle identity, as well as the unlinkability of information. The PUF-based scheme designed in this paper is expected to meet the following security constraints:
• Anonymity: Adversaries and legitimate vehicles are both unable to extrapolate the real identity of other vehicles in VANETs. Furthermore, no third party can analyze the intercepted messages to disclose the real identity of the vehicle.
• Traceability: The TA can recover the real identity of a vehicle by calculating over the content of the messages broadcast by the vehicle during the registration phase. For instance, the TA could identify the real identity of a malicious vehicle using the aforementioned method and prevent the vehicle from disseminating false messages that could mislead other vehicles.
• Un-linkability: Adversaries and malicious entities are unable to infer or track the activities of legitimate vehicles based on the messages transmitted on the public channel.
• Resilience to malicious attacks: Our scheme demonstrates robust security measures that effectively protect against various malicious attacks, including physical layer attacks, impersonation attacks, modification attacks, man-in-the-middle attacks, etc.
4 The proposed scheme In this section, we present a lightweight dual authentication scheme based on PUF. This scheme integrates both ZKP and the MASK function to ensure enhanced security and efficiency in the authentication process. The proposed scheme is designed for V2V communication in VANETs. There are five phases in the proposed scheme: (1) system initialization; (2) vehicle registration; (3) vehicle identity verification; (4) malicious vehicle identification marker; (5) key and vehicle identity update. The notations employed in the proposed scheme are succinctly outlined in Table 1 for clarity and reference. Fig. 2 and Fig. 3 illustrate the process of vehicle registration and identity verification, respectively. 4.1 System model In the proposed scheme, we only discuss the authentication of V2V communication. The following are the three basic types of entities employed for the transmission of messages in V2V communication in the proposed system model:
• Trusted Authority (TA): A trusted registration authority provides registration services for all vehicles in the system over a secure communication channel. The TA is a trusted third party that would not collaborate with adversaries. It is mandatory for vehicles to register as legitimate entities through the TA in order to participate in the system. In addition, the TA plays a crucial role in system initialization, encompassing the generation of the elliptic curve, the derivation of its pertinent parameters, and the subsequent issuance of these system parameters to the vehicles.
• Autonomous Vehicle (AV): In our scheme, the autonomous vehicle is equipped with an OBU and a unique PUF, which enables the vehicle to authenticate and establish secure communication with other vehicles. Autonomous vehicles are responsible for transmitting and receiving traffic messages within the VANETs.
• Roadside Unit (RSU): In our scheme, RSUs serve as 6G base stations installed on the roadside and play a pivotal role as the primary communication channel facilitating interactions between vehicles. RSUs function autonomously, strengthening system security so that a potential breach of one RSU does not compromise the overall integrity of the system.
4.2 System initialization In this phase, the TA generates a series of public and private parameters, subsequently disseminating the public parameters to the other entities in VANETs. The following are the operations executed by the TA: 1) TA selects an elliptic curve $E$ denoted by the mathematical formula $y^2 = x^3 + ax + b \pmod{p}$, where $a, b \in F_p$ and $p$ is a large prime number. Then, TA picks a point on the elliptic curve $E$, known as the generator $g$, to establish a multiplicative cyclic group $G$ of order $q$. 2) TA chooses a random number $s \in Z_q^*$ as the secret key of the system and calculates the public key $P_{pub} = g^s$. Then, TA broadcasts the public parameters $\{E, G, p, q, g, P_{pub}\}$ to the other entities in VANETs.
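As a small sketch of this initialization step, and under the same toy-group assumption as the ZKP sketch above (a multiplicative group standing in for the elliptic curve group), the TA-side computation might look as follows; every constant is illustrative.

```python
# A minimal sketch of TA-side system initialization (Section 4.2).
import secrets

E_params = dict(a=1, b=6, p=23)        # toy curve coefficients, unused below
p, q, g = 23, 11, 2                    # group modulus, order, generator

s = secrets.randbelow(q - 1) + 1       # system secret key, s in Z_q*
P_pub = pow(g, s, p)                   # system public key, P_pub = g^s

# Broadcast {E, G, p, q, g, P_pub} to vehicles and RSUs; s never leaves the TA.
public_params = {"E": E_params, "p": p, "q": q, "g": g, "P_pub": P_pub}
print(public_params)
```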
4.3 Vehicle registration 1) Vehicle $AV_i$ transfers its real identity $ID_i$ to the TA. Upon receiving the $ID_i$, the TA verifies the legitimacy of the vehicle identity. Subsequently, the TA generates a primitive polynomial $p(x)$ used in the design of the PRNG, ensuring that the primitive polynomial is suitable for all vehicles. 2) TA selects an $m$-bit random number $N_i$, then uses it as the seed for the PRNG to generate the challenge $c_i$. Then, the challenge $c_i$ is transmitted to the vehicle $AV_i$. 3) $AV_i$ generates the corresponding unique response $r_i$ through the PUF component, $r_i \leftarrow \mathrm{PUF}(c_i)$, and sends it back to the TA. 4) TA selects a random number $\alpha_i \in Z_q^*$ as the secret key for the vehicle $AV_i$ and calculates the corresponding public key $Q_i = g^{\alpha_i}$. Then, TA generates the timestamp $T_1$ and computes the pseudonym identity of $AV_i$ as $PID_i = E_{P_{pub}}(h(T_1 \| r_i) \oplus ID_i)$. TA stores the messages $\{\alpha_i, Q_i, c_i, r_i, p(x), PID_i, T_1\}$ in its storage. 5) TA sends the messages $\{r_i, \alpha_i, Q_i, p(x), PID_i, T_1\}$ to the vehicle $AV_i$ for the subsequent authentication. Subsequently, the vehicle $AV_i$ saves these messages in its local memory. Finally, the vehicle $AV_i$ broadcasts the messages $\{PID_i, Q_i, T_1\}$ to the other entities within the system. 4.4 Vehicle identity verification 1) Upon receipt of the message $\{PID_i, Q_i, T_1\}$, the vehicle $AV_j$ verifies the freshness of the timestamp $T_1$. Subsequently, vehicle $AV_j$ selects a random $m$-bit number $n_j$ and produces a set of $m$ integers utilizing the random number $n_j$ as the seed for the PRNG, represented as $k_j \leftarrow \mathrm{PRNG}(n_j)$. Vehicle $AV_j$ then masks its own response $r_j$ by applying the masking function MASK, defined as $M_j = \mathrm{MASK}(r_j, k_j)$, where $k_j$ is the set of integers generated by the PRNG. 2) Vehicle $AV_j$ encrypts the input $n_j$ of the PRNG using the hash function and XOR operation, denoted as $N_j = n_j \oplus h(Q_i^{\alpha_j})$, where $\alpha_j$ is the secret key of vehicle $AV_j$ and $Q_i$ is the public key of vehicle $AV_i$, respectively. Then, vehicle $AV_j$ calculates $y = g^{N_j}$, $t = g^{M_j}$, $c = h(y \| t \| g)$, and $x = M_j - c \cdot N_j$ for the subsequent verification. Finally, vehicle $AV_j$ generates a timestamp $T_2$ and a digital signature $\sigma_1 = h(y \| t \| x \| c \| N_j \| PID_j \| T_2)$ for the message to be sent. Vehicle $AV_j$ then sends the message $Msg_1 = \{\sigma_1, y, t, x, N_j, PID_j, T_2\}$ to the vehicle $AV_i$. 3) Upon receipt of the message $Msg_1$, the vehicle $AV_i$ validates the freshness of the timestamp $T_2$ and the correctness of the signature $\sigma_1$. Vehicle $AV_i$ then calculates $y = g^{N_j}$ and $c = h(y \| t \| g)$ to verify whether $t$ satisfies the equation $t = g^x y^c$. If $t = g^x y^c$, it indicates that vehicle $AV_i$ authenticates vehicle $AV_j$. Otherwise, vehicle $AV_i$ broadcasts $PID_j$ of $AV_j$ as a malicious vehicle. 4) Afterwards, vehicle $AV_i$ calculates $n_j = N_j \oplus h(Q_j^{\alpha_i})$, where $\alpha_i$ is the secret key of vehicle $AV_i$ and $Q_j$ is the public key of vehicle $AV_j$. Next, vehicle $AV_i$ calculates $M_j$ as $M_j = x + c \cdot N_j$. Subsequently, vehicle $AV_i$ unmasks the masked response $M_j$ through the UNMASK function, expressed as $\bar{r}_j = \mathrm{UNMASK}(M_j, k_j)$, where $k_j$ is obtained by $k_j \leftarrow \mathrm{PRNG}(n_j)$. Finally, vehicle $AV_i$ generates a timestamp $T_3$ and a digital signature $\sigma_2$ for the message to be sent, where $\sigma_2 = h(\bar{r}_j \| c \| T_3)$.
Vehicle $AV_i$ then sends the message $Msg_2 = \{\sigma_2, \bar{r}_j, T_3\}$ to the vehicle $AV_j$. 5) Vehicle $AV_j$ validates the freshness of the timestamp $T_3$ and the accuracy of the signature $\sigma_2$. Then vehicle $AV_j$ checks whether $\bar{r}_j$ satisfies the equation $\bar{r}_j = r_j$. If this condition holds true, it indicates that vehicle $AV_j$ has successfully authenticated vehicle $AV_i$. Otherwise, vehicle $AV_j$ broadcasts the pseudonym identity $PID_i$ of the vehicle $AV_i$ as the malicious vehicle. 4.5 Malicious vehicle identification marker Upon receiving the broadcast containing the pseudonym identity $PID_i$ of the malicious vehicle, the TA will access its internal memory to retrieve the information related to the vehicle $AV_i$. Subsequently, the TA marks the challenge $c_i$ corresponding to the vehicle $AV_i$ as $c_i = invalidity$. Then, the TA calculates the real identity of the vehicle $AV_i$ utilizing the stored information, represented as $ID_i = D_s(PID_i) \oplus h(T_1 \| r_i)$. In order to prevent deceptive activities by the malicious vehicle $AV_i$, the TA marks the identity $ID_i$ of the vehicle $AV_i$ as $ID_i = invalidity$, thereby inhibiting potential malicious attacks through re-registration. 4.6 Key and vehicle's identity update When the vehicle's pseudonym identity expires, the TA will update the pseudonym identity and regenerate a new pair of system keys for pseudonym generation. TA chooses a random number $s' \in Z_q^*$ as the new secret key of the system and calculates the new public key $P'_{pub} = g^{s'}$. Then, TA generates a new timestamp $T_4$ and calculates the new $PID'_i$ of the vehicle $AV_i$ as $PID'_i = E_{P'_{pub}}(h(T_4 \| r_i) \oplus ID_i)$. TA stores the messages $\{PID'_i, T_4\}$ in its memory and replaces the original messages $\{PID_i, T_1\}$. Finally, TA sends the messages $\{PID'_i, T_4\}$ to vehicle $AV_i$; vehicle $AV_i$ stores them in its storage and broadcasts them to the other entities within the VANETs.
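The core of this exchange can be condensed into a short sketch. The code below, again in the toy group used earlier, shows the two properties that make steps 2–4 work: the ZKP check $t = g^x y^c$ and the Diffie–Hellman symmetry $Q_i^{\alpha_j} = Q_j^{\alpha_i}$ that lets $AV_i$ recover $n_j$ and then $M_j$. Signatures, timestamps, and the MASK itself are elided, and all values are illustrative.

```python
# A minimal sketch of the core of the identity verification exchange (4.4).
import hashlib, secrets

p, q, g = 23, 11, 2

def h(*parts) -> int:
    return int.from_bytes(hashlib.sha256(
        "|".join(map(str, parts)).encode()).digest(), "big")

# Long-term keys issued by the TA during registration.
a_j = secrets.randbelow(q - 1) + 1; Q_j = pow(g, a_j, p)   # AV_j's pair
a_i = secrets.randbelow(q - 1) + 1; Q_i = pow(g, a_i, p)   # AV_i's pair

# AV_j, step 2: hide the PRNG seed n_j and commit to M_j.
n_j = secrets.randbelow(2**16)
M_j = secrets.randbelow(q)              # stands in for MASK(r_j, k_j)
N_j = n_j ^ h(pow(Q_i, a_j, p))         # N_j = n_j XOR h(Q_i^a_j)
y, t = pow(g, N_j, p), pow(g, M_j, p)
c = h(y, t, g) % q
x = (M_j - c * N_j) % q                 # Msg1 carries {sigma1, y, t, x, N_j, ...}

# AV_i, steps 3-4: ZKP check, then recovery of n_j and M_j.
assert t == (pow(g, x, p) * pow(y, c, p)) % p   # t = g^x * y^c
n_rec = N_j ^ h(pow(Q_j, a_i, p))       # DH symmetry: Q_j^a_i = Q_i^a_j
M_rec = (x + c * N_j) % q               # M_j = x + c * N_j (mod q)
assert n_rec == n_j and M_rec == M_j
```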
5 Security analysis In this section, we describe the formal and informal security analysis of the designed scheme. 5.1 Formal security analysis We use the Random Oracle Model (ROM) to demonstrate the formal security of the designed scheme. The ROM is commonly defined as below:
Definition 1 (Participants and partnering). The participants represent the vehicles in VANETs. In the $i$-th communication, the participants can be referred to as $InV_i^i (InV_j^i)$. The Accept state indicates that an oracle has received a compliant message. If two oracles are in the Accept state and the session keys have been negotiated, the oracles will receive the session identities. Then, the two oracles can be regarded as partners if their session identities and keys are the same.
Definition 2 (Query). The query represents the abilities of the attacker. $\mathrm{Execute}(InV_i^i, InV_j^i)$: the messages transmitted on the public channel can be intercepted by the adversary $A$. $\mathrm{Send}(InV_i^i, InV_j^i, m)$: an adversary $A$ falsifies and transmits the message $m$ to $InV_i^i$ or $InV_j^i$; if $m$ is correct, $InV_i^i$ or $InV_j^i$ responds. $\mathrm{Reveal}(InV_i^i, InV_j^i)$: adversary $A$ can get the session keys between $InV_i^i$ and $InV_j^i$. $\mathrm{Test}(InV_i^i, InV_j^i, r)$: this query can be executed no more than once; it randomly produces a bit $r$. If $r = 1$, the true session key will be retrieved; otherwise a random number will be returned. $\mathrm{Corrupt}(InV_i^i, InV_j^i)$: this emulates the side-channel attack on the smart card of the OBUs, and reveals the stored information $\{\alpha_i, Q_i, p_i, PID_i, T_1\}$.
Definition 3 (Freshness). An instance can be considered fresh when it conforms to the following:
• $InV_i^i$ and $InV_j^i$ are in the Accept state.
• The query $\mathrm{Reveal}(InV_i^i, InV_j^i)$ has not been carried out.
• The query $\mathrm{Corrupt}(InV_i^i, InV_j^i)$ has been performed no more than once.
Then, the proof of the formal security analysis is as below:
Theorem 1 The advantage of recovering the session key in finite time by $A$ is

(7) $Adv_P^A \le \dfrac{q_{HA}^2}{2^{l_{HA}}} + \dfrac{(q_{SE}+q_{EX})^2}{2^{n}} + \dfrac{q_{SE}}{2^{l_{SK}-1}} + 2 q_{SE} \cdot Adv_{PUF}^A + 2 \cdot Adv_{ECDLP}^A$

where $q_{HA}$, $q_{SE}$, and $q_{EX}$ indicate the numbers of Hash, Send, and Execute queries, separately; $l_{HA}$, $l_{SK}$, and $n$ denote the lengths of the hash output, the secret key of the vehicle, and the transcripts, separately. The advantages of $A$ in breaking the PUF and the ECDLP are denoted as $Adv_{PUF}^A$ and $Adv_{ECDLP}^A$, separately.
Proof The games $Game_i (0 \le i \le 4)$ are configured to emulate the attacking behavior launched by $A$. $Win_i (0 \le i \le 4)$ denotes the event that $A$ correctly estimates the arbitrary bit $r$ in $Game_i$. The games are described as:
$Game_0$: This game is designed to replicate the original attack performed by $A$. Following the description, we get:

(8) $Adv_P^A = |2 \Pr[Win_0] - 1|$

$Game_1$: This game is meant to emulate the interception attack. $A$ captures all messages transferred on the public channel and subsequently estimates the arbitrary bit $r$. However, because of the ECDLP, the attacker $A$ is unable to establish the correspondence between the intercepted messages and the session keys. Hence, we get:

(9) $\Pr[Win_0] = \Pr[Win_1]$

$Game_2$: According to the description of the birthday paradox, this game is designed to emulate the collision attack on transcripts and hash results. The probability of a hash collision is less than $q_{HA}^2 / 2^{l_{HA}+1}$, and the collision probability of the other transcripts is less than $(q_{SE}+q_{EX})^2 / 2^{n+1}$. Therefore, we have:

(10) $\Pr[Win_2] - \Pr[Win_1] \le \dfrac{q_{HA}^2}{2^{l_{HA}+1}} + \dfrac{(q_{SE}+q_{EX})^2}{2^{n+1}}$

$Game_3$: This game simulates that $A$ executes $\mathrm{Corrupt}(InV_i^i, InV_j^i)$ to obtain the stored information $\{Help_i, \alpha_i, Q_i, p_i, s, PID_i, T_1\}$ in the smart card. If $A$ intends to retrieve the valuable parameters, he/she will have to approximate $\alpha_i$ or crack the PUF. Assume that the probability of $A$ cracking the PUF is $Adv_{PUF}^A$. Thereby we get:

(11) $\Pr[Win_3] - \Pr[Win_2] \le q_{SE} \left( \dfrac{1}{2^{l_{SK}}} + Adv_{PUF}^A \right)$

$Game_4$: This game imitates that $A$ recalculates the session keys depending on the transcripts. We get:

(12) $\Pr[Win_4] - \Pr[Win_3] \le Adv_{ECDLP}^A$

The session keys are created individually in a random order; therefore, the advantage of estimating $r$ is the same as that of estimating the session key, so $\Pr[Win_4] = \frac{1}{2}$. By combining and simplifying the above equations, we get:

(13) $\frac{1}{2} Adv_P^A = \left| \Pr[Win_0] - \frac{1}{2} \right| \le \dfrac{q_{HA}^2}{2^{l_{HA}+1}} + \dfrac{(q_{SE}+q_{EX})^2}{2^{n+1}} + \dfrac{q_{SE}}{2^{l_{SK}}} + q_{SE} \cdot Adv_{PUF}^A + Adv_{ECDLP}^A$

(14) $Adv_P^A \le \dfrac{q_{HA}^2}{2^{l_{HA}}} + \dfrac{(q_{SE}+q_{EX})^2}{2^{n}} + \dfrac{q_{SE}}{2^{l_{SK}-1}} + 2 q_{SE} \cdot Adv_{PUF}^A + 2 \cdot Adv_{ECDLP}^A$

5.2 Informal security analysis This section presents an informal security analysis of the proposed scheme.
We evaluate multiple attack strategies that an adversary could employ to undermine the proposed scheme. 5.2.1 Physical layer attack An adversary would attempt to obtain sensitive information or take control of a vehicle by directly manipulating its OBU. In the proposed scheme, if $A$ attempts to manipulate the OBU of the vehicle $AV_i$, such interference could affect the operational accuracy of the PUF. The PUF would then fail to generate accurate outputs, rendering it ineffective and irrelevant for its designated purpose. The TA can promptly identify such malicious activities by the adversary $A$ and prevent the compromised vehicles from establishing communication with other vehicles within the VANETs. Moreover, the real identity of $AV_i$ is securely stored within the OBU in the form of ciphertext, represented by $PID_i$. This approach ensures that the adversary $A$ cannot retrieve the confidential personal information of the vehicle, even in the event of successfully infiltrating the vehicle's OBU. In addition, the adversary $A$ would try to compromise the vehicle $AV_i$ by compromising its software in order to access sensitive information. The proposed authentication scheme incorporates a blend of ZKP and the MASK function, which can effectively counter the adversary $A$'s endeavors to breach the software of vehicle $AV_i$, consequently protecting its privacy. Furthermore, owing to the characteristics of ZKP, no entity other than the vehicle itself can access the private information, while identity authentication can still be carried out. In conclusion, the proposed scheme demonstrates robust resilience against physical layer attacks, affirming its effectiveness in safeguarding the security of V2V communication. 5.2.2 Impersonation attack Assume an adversary intends to masquerade as a legitimate vehicle in order to establish communication with other vehicles; he/she then needs to forge $x = M_j - c \cdot N_j$. Here, $M_j = \mathrm{MASK}(r_j, k_j)$ represents the masking of $r_j$, $c = h(y \| t \| g)$, and $N_j = n_j \oplus h(Q_i^{\alpha_j})$. The message $Msg_1$ transmitted on the public channel does not contain $c$ and $M_j$, information exclusive to legitimate vehicles. The value of $c$ is computed as $c = h(y \| t \| g)$, where the generator $g$ is issued by the TA to each legitimate vehicle during the initialization phase. In addition, the verification process is carried out by means of ZKP, ensuring that $r_j$ remains undisclosed to anyone except the vehicle $AV_j$. Consequently, the adversary $A$ is unable to execute the impersonation attack. By leveraging ZKP for secure verification, the protocol ensures that no sensitive information, such as $r_j$, is leaked, making it computationally infeasible for the adversary to forge the values needed for masquerading as a legitimate vehicle. Furthermore, the utilization of masking and nonces, combined with cryptographic hashes, reinforces the security, effectively thwarting any attempt by the adversary to impersonate a vehicle. Consequently, without access to the private parameters and the information exchanged in the initialization phase, the adversary is unable to execute a successful impersonation attack, safeguarding the integrity of V2V communication. 5.2.3 Man-in-the-middle attack In the proposed scheme, the messages $Msg_1 = \{\sigma_1, y, t, x, N_j, PID_j, T_2\}$ and $Msg_2 = \{\sigma_2, \bar{r}_j, T_3\}$ transmitted on the public channel can be accessed by anyone, including potential adversaries.
However, if a malicious adversary attempts to tamper with these messages and use the altered data to exploit the scheme, they would need to obtain critical information such as $r_j$ and the generator $g$ to succeed. Despite having access to the transmitted messages, the adversary is unable to acquire this essential information. The values of $r_j$ and $g$ are securely embedded within the system and are either masked or protected by cryptographic mechanisms. Without these vital components, any attempt by the adversary to manipulate the scheme is rendered ineffective. Furthermore, the scheme employs cryptographic protections and secure verification mechanisms like zero-knowledge proofs to ensure that any tampering with the messages is detectable, thus preventing the adversary from successfully executing a Man-In-the-Middle (MIM) attack. Even if the adversary intercepts or alters the public messages, they cannot reconstruct or modify the necessary parameters to impersonate a legitimate participant or compromise the system. 5.2.4 Replay attack Replay attacks occur when an attacker intercepts a legitimate communication between vehicles and retransmits it at a later time in an attempt to trick the system into accepting it as a valid message. In such attacks, the intercepted data (such as authentication tokens or vehicle identity information) is reused to gain unauthorized access or to spoof the identity of the original sender. To defend against replay attacks, our proposed scheme incorporates a timestamp as a critical element in the authentication process, ensuring that each message is tied to a specific point in time. In the vehicle identity verification phase, vehicle $AV_j$ generates the signature $\sigma_1$ for the message being sent. This signature is constructed as $\sigma_1 = h(y \| t \| x \| c \| N_j \| PID_j \| T_2)$, where $h$ is a cryptographic hash function and $T_2$ is the current timestamp. The signature incorporates various parameters, such as the vehicle's pseudonym $PID_j$, the random nonce $N_j$, and other session-specific information. Upon receiving the message, the vehicle $AV_i$ calculates the signature $\sigma_2$ as $\sigma_2 = h(\bar{r}_j \| c \| T_3)$, where $T_3$ is the current timestamp on $AV_i$'s side. These parameters $\sigma_i$ change with the timestamps $T_i$, which invalidates outdated messages and resists the replay attack.
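A minimal sketch of this freshness check is shown below: a message is accepted only if its timestamp falls inside an acceptance window and the recomputed $\sigma_1$ matches. The 60-second window and the field values are illustrative assumptions; the paper does not specify a concrete window.

```python
# A minimal sketch of the timestamp-based replay defence.
import hashlib, time

FRESH_WINDOW = 60.0                           # illustrative window, seconds

def sig(*parts) -> str:
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

def accept(msg: dict, now: float) -> bool:
    if abs(now - msg["T2"]) > FRESH_WINDOW:   # stale timestamp: likely replay
        return False
    expected = sig(msg["y"], msg["t"], msg["x"], msg["c"],
                   msg["N_j"], msg["PID_j"], msg["T2"])
    return expected == msg["sigma1"]          # signature binds the timestamp

T2 = time.time()
msg1 = {"y": 4, "t": 8, "x": 3, "c": 9, "N_j": 99, "PID_j": "PID-07", "T2": T2}
msg1["sigma1"] = sig(msg1["y"], msg1["t"], msg1["x"], msg1["c"],
                     msg1["N_j"], msg1["PID_j"], msg1["T2"])

assert accept(msg1, now=T2 + 5)               # fresh message is accepted
assert not accept(msg1, now=T2 + 3600)        # the same message replayed later fails
```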
In order to overcome these problems, a variation of the APUF, called the m-n DAPUF, has been proposed. This PUF construction has strong uniqueness characteristics and is a suitable contender for the implementation of authentication mechanisms. 5.2.6 Vehicle anonymity and traceability The ability to trace vehicles is advantageous for trusted party administrators, as it allows them to effectively identify and address malfunctioning vehicles. During the vehicle identity verification phase, the message {σ1, y, t, x, Nj, PIDj, T2} is transmitted to the vehicle AVi and includes the pseudonym identity PIDj of AVj in place of the real identity IDj. The pseudo-identity PIDj is encrypted using the secret key of the TA, represented as PIDj ← E_Ppub(h(T1 || rj) ⊕ IDj). As only the TA holds the secret key, the adversary A cannot retrieve the IDj of AVj from PIDj, which guarantees that only the TA can trace malicious vehicles and ensures the security of the overall system. The proposed scheme ensures vehicle anonymity by utilizing pseudonyms in the communication and authentication process. Only the TA is capable of deriving the real identity IDj, by computing IDj = Ds(PIDj) ⊕ h(T1 || rj) after receiving the PIDj of a malicious vehicle broadcast by other vehicles. Subsequently, the TA marks the IDj of vehicle AVj as IDj = invalid to prevent the malicious vehicle AVj from re-registering and continuing to carry out malicious attacks. Furthermore, the delivered messages are appended with timestamps and arbitrary random numbers, ensuring that the vehicles remain untraceable by the adversary A. This security measure bolsters the anonymity and integrity of the communication. 6 Performance analysis This section introduces the performance analysis of the proposed scheme, alongside a comparison with other related schemes [33] , [34] and [27] . We evaluate the effectiveness of our scheme and other relevant schemes by analyzing both computation overhead and communication overhead. 6.1 Computation overhead In our authentication system, the computation overhead is represented by the cumulative time required to perform cryptographic functions. To calculate the computation overhead of these schemes, we have taken into account the following cryptographic functions: • TH: the time required to perform a one-way hash function. • TSE: the time required to execute symmetric encryption or decryption. • TSM: the time required to accomplish scalar multiplication in ECC. • TPRNG: the time required to pseudo-randomly generate a set of integers. • TMASK: the time required to accomplish the MASK cryptographic operation. To obtain the precise timing required for executing these cryptographic functions, we conducted simulations on the following hardware platform, which includes an Intel i5-12500H 2.50 GHz CPU, 16.0 GB of memory, and a 64-bit Windows 11 operating system. We performed 100 random simulations to obtain accurate results and compared the computational performance of the proposed scheme with several other related schemes. From our simulations, the times of TH, TSE, TSM, TPRNG and TMASK are 0.5471 ms, 0.4113 ms, 0.3012 ms, 0.0204 ms and 0.014 ms, respectively. In an authentication protocol, it is important to note that the registration phase precedes the initiation of a vehicle's authentication; it should therefore be excluded from the computation overhead of the V2V communication. 
Consequently, the crucial aspect in calculating the computation overhead is the vehicle identity verification phase. In this phase, the vehicle performs the hash function six times, so the computational time is (6 × 0.5471) = 3.2826 ms. Moreover, the vehicle executes the PRNG and MASK functions twice each. The overall computation overhead at the vehicle end is 3.2826 + (2 × 0.0204) + (2 × 0.014) = 3.3514 ms. As the proposed system does not depend on the involvement of the TA in the vehicle identity verification phase, the total computation overhead of this scheme amounts to 3.3514 ms. The computation overhead of other relevant schemes is calculated using the identical methodology. The overall computation time of the LPPSA scheme mentioned in [33] is calculated as 11 TH + 14 TSM = (11 × 0.5471) + (14 × 0.3012) = 10.2349 ms. In the PSIB scheme mentioned in [34] , the total computational time is 13 TH + 2 TSE = (13 × 0.5471) + (2 × 0.4113) = 7.9349 ms. In the PSAV scheme mentioned in [27] , the required computational time is 6 TH + 6 TSE = (6 × 0.5471) + (6 × 0.4113) = 5.7504 ms. The overall computation overhead of these schemes is illustrated in Table 2 . In addition, we use figures to show the computation overhead of the different schemes more intuitively. Fig. 4 indicates that the computation overhead increases linearly with the number of vehicles, and that the computation overhead of our scheme remains significantly lower than that of the other relevant schemes. 6.2 Communication overhead In this subsection, we evaluate the communication overhead of the proposed scheme and other relevant schemes. The communication overhead represents the amount of data transferred by the entities involved in executing the authentication procedure. We take the sizes of the identity, timestamp, and random number to be 64 bits, 32 bits, and 128 bits, respectively. The output lengths of the hash (SHA-256), one block of symmetric encryption (AES-128), and the feedback function are 256 bits, 128 bits, and 160 bits, respectively. The communication overhead of the vehicle end consists of the messages transmitted by vehicles. The messages Msg1 = {σ1, y, t, x, Nj, PIDj, T2} and Msg2 = {σ2, r̄j, T3} are 928 bits and 416 bits, respectively. Thus, the total communication overhead for the vehicle side is (928 + 416) = 1344 bits. Furthermore, the communication overhead of the TA is generated by sending the message {αi, Qi, px, PIDi, T1} to vehicles, where this message is 576 bits. Hence, the overall communication overhead of our designed scheme is (1344 + 576) = 1920 bits, which is the sum of the vehicle end and the TA end. The communication overheads of the state-of-the-art schemes are calculated in the same manner, as indicated in Table 3 and Fig. 5 . This diagram illustrates the communication overhead at various system endpoints during the authentication process. The uncolored column corresponds to the RSU's communication overhead. Since our scheme does not require the RSU's involvement in the authentication process, the communication overhead for the RSU is effectively zero. By excluding the RSU from the authentication, our approach reduces overall system complexity and minimizes unnecessary communication costs. 
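To make the overhead arithmetic above easy to reproduce, the following is a small, illustrative Python sketch that tallies the computation and communication totals from the measured primitive timings; the scheme names and operation counts simply restate the figures reported in this section and the sketch is not an artifact of any of the cited works.

```python
# Reproduce the computation-overhead totals (in ms) from the measured primitive timings.
T_H, T_SE, T_SM, T_PRNG, T_MASK = 0.5471, 0.4113, 0.3012, 0.0204, 0.014

schemes = {
    "Proposed": 6 * T_H + 2 * T_PRNG + 2 * T_MASK,   # vehicle side only; the TA is not involved
    "LPPSA [33]": 11 * T_H + 14 * T_SM,
    "PSIB [34]": 13 * T_H + 2 * T_SE,
    "PSAV [27]": 6 * T_H + 6 * T_SE,
}
for name, ms in schemes.items():
    print(f"{name}: {ms:.4f} ms")

# Communication overhead (bits): vehicle side plus TA side.
vehicle_bits = 928 + 416     # Msg1 + Msg2
ta_bits = 576                # message sent by the TA
print("total communication overhead:", vehicle_bits + ta_bits, "bits")  # 1920
```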
This design choice not only enhances the efficiency of the authentication process but also demonstrates the streamlined nature of our solution in comparison to other schemes that rely on RSU participation. The previous analysis and comparison demonstrate that our scheme incurs lower computational and communication overhead compared to the latest existing approaches. This highlights the exceptional communication efficiency of our method. Not only does it optimize communication by reducing overhead, but it also achieves this without compromising on security. By balancing performance and security effectively, our solution provides a robust framework that ensures secure interactions while maintaining minimal resource consumption. This makes it highly suitable for scenarios requiring both efficient communication and stringent security measures, offering a significant improvement over competing solutions. 7 Practical simulation on NS-3 We conducted the simulation experiments on NS-3, which is an open-source environment designed for discrete event network simulations [35] , to compare our scheme with the state-of-the-art schemes. NS-3 supports multiple models, such as the UDP protocol, LTE (Long-Term Evolution) networks, the WiFi module, etc. NS-3 contains several libraries that provide various operations in multiple types of networks and protocols. Users can also leverage other tools such as Python/C++ to obtain simulation results. 7.1 Simulation environment The simulation was performed on the Ubuntu 20.04 LTS platform with the NS-3 3.35 simulation tool. Our scheme is specifically tailored for implementation in a 6G environment, thus necessitating that the simulation experiment be conducted within a 6G communication framework as well. The routing protocol utilized in this simulation experiment is the commonly employed OLSR (Optimized Link State Routing) protocol, paired with the 802.11p wireless protocol. The data rate and delay of the communication protocol are 100 Mbps and 0.5 ms, respectively. A more specific explanation of the parameters employed in the NS-3 simulation is given in Table 4 . In the proposed scheme, our primary focus is on V2V communication. The entities simulated within our topology primarily consist of vehicles and the TA. The RSUs function exclusively as 6G base stations, facilitating a 6G wireless communication network. We suppose that each vehicle keeps a gap of 50 meters from the others and moves in a seemingly random direction. Assume that the road network is divided into hexagonal zones. The vehicle AVi broadcasts the message {PIDi, Qi, T1} to the zone it currently occupies as well as to its six neighboring zones, covering a total of seven zones. During the vehicle identity verification phase, the sizes of the messages exchanged between two authenticated vehicles, Msg1 = {σ1, y, t, x, Nj, PIDj, T2} and Msg2 = {σ2, r̄j, T3}, are 928 bits and 416 bits, respectively. Here we consider three performance indicators to measure the effectiveness of the proposed scheme: Packet Delivery Rate (PDR), End-to-End Delay (EED), and Throughput (T). For the simulation scenarios, we take 3 RSUs and 30 vehicles divided into three clusters. Each cluster consists of ten vehicles, where one of them is designated as the Cluster Head (CH) and the remaining nine vehicles serve as members within the cluster. In addition, each stage adds 30 vehicles until the total number of vehicles reaches 180. 
7.2 End-to-end delay (EED) End-to-end delay represents the average time required for messages to be transmitted from a specific source to a destination. EED can be calculated using the formula D = Σ_{i=1}^{n} (Tri − Tsi) / n, where Tsi is the packet sending time, Tri is the packet receiving time, and n is the total number of packets. The value of EED changes with the number of vehicles, the distance traveled, and the presence of traffic congestion. The increased delays result from a combination of queuing and processing delays associated with traffic congestion and propagation delays related to distance. Fig. 6 (a) plots the EED values of the proposed scheme and the other related schemes. The results indicate that the value of EED escalates with the increase in the number of vehicles. The increase in the number of vehicles leads to a higher volume of messages that require transmission, consequently resulting in increased delays. As depicted in Fig. 6 (a), the proposed scheme exhibits significantly lower EED values in comparison to alternative schemes. Furthermore, this diminished delay is maintained even with an escalation in the quantity of vehicles. 7.3 Throughput Throughput is a performance metric that quantifies the amount of data transmitted within a communication network. It is typically measured as the volume of data successfully transmitted per unit of time. We calculate it using the formula T = Σ (Ni × Leni) / Tm, where Ni is the number of packets, Leni is the size of the packet, and Tm is the total time. Therefore, throughput is typically measured in bits per second (bps). The evaluation of throughput can help researchers understand the performance of the network under different conditions in VANETs. Fig. 6 (b) clearly demonstrates that with an increasing number of vehicles, there is a corresponding rise in throughput, as indicated by the throughput formula. Additionally, as the throughput is primarily related to the length of the transmitted messages, our protocol has lower throughput compared to the state-of-the-art protocols. 7.4 Packet delivery rate (PDR) Packet Delivery Rate (PDR) is a crucial metric for assessing network communication performance. It represents the proportion of successfully transmitted packets out of the total number sent. PDR provides an effective method of evaluating performance under diverse conditions, including congestion, interference, and other factors that may affect communication quality. In Fig. 6 (c), it is demonstrated that the PDR decreases with the increase in the number of vehicles. This outcome is largely attributed to increased congestion caused by the addition of more vehicles to the system. The NS-3 simulator transmits data packets and energy through a wireless medium, and when communication failure occurs due to vehicle congestion, data packets may be lost at the receiving end. In addition, it can be seen from the results that our solution is more stable and its PDR is significantly higher than that of the other solutions. 7.5 Simulation summary This experiment simulates different traffic flows on the road by dividing the vehicles and RSUs into control groups of different sizes. The simulation is set up with a varying number of vehicles and RSUs in order to closely replicate real-world road traffic conditions. By comparing with state-of-the-art schemes, our scheme demonstrates superior overall performance and the highest stability. 
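For reference, the three indicators used in this comparison (EED, throughput, and PDR) can be computed from a per-packet trace as in the following minimal Python sketch; the trace format (send time, receive time, packet size) is hypothetical and is not tied to NS-3's actual output files.

```python
# Compute EED, throughput and PDR from a simple packet trace.
# Each sent packet: (t_send, t_recv_or_None, size_bits); t_recv is None if the packet was lost.
def summarize(trace, sim_time_s):
    delivered = [(ts, tr, sz) for ts, tr, sz in trace if tr is not None]
    n = len(delivered)
    eed = sum(tr - ts for ts, tr, _ in delivered) / n            # D = sum(Tri - Tsi) / n
    throughput = sum(sz for _, _, sz in delivered) / sim_time_s  # delivered bits per second
    pdr = n / len(trace)                                         # delivered / sent
    return eed, throughput, pdr

trace = [(0.10, 0.12, 928), (0.20, 0.23, 416), (0.30, None, 928)]  # toy example
eed, tput, pdr = summarize(trace, sim_time_s=1.0)
print(f"EED={eed*1000:.1f} ms, throughput={tput:.0f} bps, PDR={pdr:.2f}")
```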
The experimental results showcase different variations in the performance of the individual properties as the volume of traffic on the road fluctuates. Furthermore, the experiments have shown that the scheme can satisfy the demands of practical deployment and is indeed feasible in real-world scenarios. 8 Conclusion In this paper, we present a lightweight dual authentication protocol for V2V communication based on PUF. The inherent properties of the PUF mean that it can resist physical attacks and provide layered protection for physical devices. Our scheme enables dual authentication between vehicles through the combination of ZKP and the MASK function, which can avoid communication with malicious vehicles and provide protection against malicious attacks. According to the security analysis, the proposed scheme can not only ensure anonymity for vehicles but also effectively mitigate major security threats. The performance comparisons indicate that the proposed scheme is more lightweight and efficient than state-of-the-art schemes. Finally, the simulation experiments conducted in a 6G network using NS-3 show that the proposed scheme outperforms state-of-the-art schemes on all three performance indicators. Anticipated advancements in 6G technology are poised to introduce remarkable modifications in VANETs, enabling them to deliver better performance than preceding technologies through higher data rates and increased network capacity. Through experiments and performance analysis, we can see that the proposed scheme is superior to other schemes in many aspects. In future applications, the proposed scheme will achieve low-latency and high-efficiency communication while ensuring the privacy and security of user information. The characteristics of the PUF allow it to resist most future physical attacks. Additionally, we considered various potential attacks in our security analysis, ensuring that our scheme is well-equipped to resist future threats. The simulation conducted in a 6G environment further demonstrates the significant potential for the future development of our scheme. In the simulation experiment, we modeled vehicle communication under optimal network conditions. Consequently, the delay and other performance metrics obtained are idealized, reflecting the best-case scenario. However, it is important to acknowledge that in real-world environments, the actual delay may be significantly higher due to a variety of unpredictable factors such as network congestion, interference, and varying hardware capabilities. These elements were not fully accounted for in the controlled conditions of our simulation. Additionally, our simulation focused on a relatively straightforward vehicle communication scenario, which may not fully capture the complexity of real-world vehicular networks. Future research will aim to address these limitations by modeling more intricate and diverse communication networks, potentially incorporating multi-hop communication, varying vehicle densities, and more advanced mobility models. CRediT authorship contribution statement Xia Feng: Writing – review & editing. Yaru Wang: Writing – original draft. Kaiping Cui: Writing – review & editing. Liangmin Wang: Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. VIJAYAKUMAR P (2021)
2. CHENG G (2024)
3. XU H (2020)
4. RONG B (2021)
5. SHAH K (2022)
6. WANG M (2020)
7. BRECHT B (2018)
8. TZENG S (2015)
9. TIWARI D (2016)
10. MA M (2019)
11. LIU Y (2017)
12. IMGHOURE A (2023)
13. LIU X (2023)
14. VASUDEV H (2020)
15. LIU Y (2019)
16. JAVAID U (2020)
17. RASHEED I (2020)
18. FENG X (2021)
19. LIU Z (2023)
20. LIANG Y (2023)
21. CHATTERJEE U (2018)
22. GOPE P (2021)
23. ZHENG Y (2022)
24. JIANG Q (2019)
25. JIANG Q (2021)
26. AMAN M (2020)
27. XIE Q (2023)
28. SATAMRAJU K (2020)
29. QURESHI M (2021)
30. HAZRA T (2015)
31. MOHR A (2007)
32. CANETTI R (2002)
33. YADAV K (2022)
34. UMAR M (2021)
35. COUDRON M (2017)
|
10.1016_j.fsisyn.2024.100462.txt
|
TITLE: Mass fatality and disaster response preparedness across medical examiner and coroner offices in the United States
AUTHORS:
- Ascolese, Micaela A.
- Keyes, Kelly A.
- Ropero-Miller, Jeri D.
- Wire, Sean E.
- Smiley-McDonald, Hope M.
ABSTRACT:
With the rise of mass fatalities and disasters, access to mass fatality and disaster planning trainings and resources available to medical examiners and coroners (MECs) in the United States should be reviewed. This paper provides a necessary update on the extent of access to these resources by analyzing data from the 2018 Census for Medical Examiner and Coroner Offices (CMEC). Results show that a high percentage of respondents have access to mass fatality and disaster planning trainings/resources; however, the access is disproportionate. Respondents in the Midwest and South—and those with smaller populations—have less access to resources, while agencies with larger budgets and more full-time staff have more access to resources. This paper discusses potential contributing factors for these disparities, but the data only begin to elucidate gaps in access to mass fatality and disaster planning trainings/resources for MECs and where further research should be conducted.
BODY:
1 Introduction In mass fatality and disaster response, medical examiners and coroners' (MECs') primary role includes identifying decedents, determining cause and manner of death, and certifying death certificates [ 1 ]. Recent research has focused on MEC access to mass fatality and disaster planning trainings/resources by examining factors such as MEC office/jurisdictional demographics (e.g., type of office, rural vs. urban), but the true level and scope of MEC access to these resources on a national scale is unknown [ 2–4 ]. One cross-sectional study found that emergency preparedness levels did not differ based on the type of office (medical examiner vs. coroner), jurisdictional characteristics (size of population and rural vs. urban), or the number of staff [ 4 ]. Other literature shows differences in access to resources between medical examiner versus coroner systems depending on the qualifications and training of those leading each system, with rural and smaller MEC jurisdictions, many of which are coroner systems, having less access to operational resources and training compared with those serving larger, urban jurisdictions [ 5–7 ]. Gershon et al. notably found that 42% of MEC offices reported they would not be able to respond adequately if there were more than 25 fatalities in 48 h [ 4 ]. Common lessons learned emerging from research and after action reports following actual mass casualty events—including shootings (e.g., the 2007 shooting at Virginia Polytechnic Institute and State University [33 fatalities]; the 2015 Inland Regional Center event in San Bernardino, California [14 fatalities]; Las Vegas's October 2017 event [68 fatalities]; the 2022 Uvalde Robb Elementary School event [19 fatalities]); natural disasters (e.g., Butte County, California, Camp Fire wildfire in November 2018 [84 fatalities]; Hurricanes Irma [>129 fatalities], Harvey [103 fatalities], and Maria [2975 fatalities]); and the COVID-19 pandemic, which caused 7 million deaths worldwide as of January 21, 2024 [ 8 ]—underscore the critical importance of pre-incident planning, preparation, and training exercises that lay out clear roles, communication plans, and cross-agency protocols among all first responders, including the MEC community [ 9–16 ]. Thus, there has been a push for increased preparedness capacity across all systems [ 7 , 17 ]. Levels of access to mass fatality and disaster planning trainings/resources could be attributed, at least in part, to the lack of standardization of those resources. Table 1 summarizes selected standards, best practice and guidance documents, and other resources for mass fatality incidents (MFIs) that are published or currently in development. Gershon et al. noted that although some MECs have an MFI plan in place, it may lack completeness [ 4 ]. A 2013 national review of states' established MFI plans revealed many were inadequate or not actionable [ 18 ]. This points to the importance of training: MECs that provide staff training on mass fatality plans have higher levels of emergency preparedness than those reporting no staff training [ 4 ]. Merrill et al.'s 2016 study showed MECs had the greatest proportion of participation in jurisdiction-wide drills compared with other response sectors (e.g., death care industry, health departments, faith-based organizations, offices of emergency management) [ 19 ]. 
CDC: Centers for Disease Control and Prevention; DHS: Department of Homeland Security; FEMA: Federal Emergency Management Agency; IACME: International Association of Coroners and Medical Examiners; INTERPOL: The International Criminal Police Organization; OSAC: Organization of Scientific Area Committees for Forensic Science; NAME: National Association of Medical Examiners; NIJ: National Institute of Justice. Even with more standardization and resources, MECs' access level to mass fatality and disaster planning training/resources could reflect their lack of knowledge about these resources or overall willingness to use them. It is also important to note that with more than 2200 MEC offices nationally, less than one in five MEC offices are accredited and 3 MEC offices have self-declared their use of standards on the OSAC Registry [ 5 , 37 , 38 ]. With the vast majority of MEC offices serving rural areas with limited staffing and resources [ 7 ], accreditation and standards implementation are difficult goals to achieve. Although standards exist and are being developed, MECs may lack awareness, infrastructure, capacity, ability, or willingness to adopt resources, which prevents them from seeking training opportunities (e.g., conferences, regional workshops) or implementing MFI/disaster victim identification (DVI) standards or best practices. With a wide variety of incidents to respond to—natural disasters, mass shootings, transportation accidents, infectious disease pandemics, drug overdose crises, incidents with chemical, biological, radiological, nuclear, or explosive (CBRNE) agents—it is crucial to determine MECs' access to resources in preparation and response to such incidents. The extent of MEC access to mass fatality and disaster planning trainings/resources in the United States is not well-known. Gershon et al.'s and Merrill et al.'s studies are just two examples using national survey data to evaluate preparedness levels among MECs in the United States [ 4 , 19 ]. However, regional variation in preparedness levels for some types of mass casualty events that are more common in certain areas of the country (e.g., hurricanes in southern and Gulf Coast states; wildfires in western states) is not fully delineated. The Bureau of Justice Statistics' (BJS') 2018 Census of Medical Examiner and Coroner Offices (CMEC) was the first national census collection to include questions about agency access to mass fatality and disaster planning trainings and resources and agency participation in emergency response drills [ 5 ]. Although the 2021 BJS CMEC report summarized some of the mass fatality data collected in the 2018 CMEC, it did not include an in-depth analysis of results, which this paper provides [ 5 ]. In addition, we examine the relationship between different access levels and contributing factors to access levels, such as jurisdiction region, population, budget, and staffing. Although the literature uses many terms (e.g., critical incident, MFI), this paper will use the terminology "mass fatality and disaster planning trainings and resources" and "emergency response drills" to be consistent with 2018 CMEC language. Table 2 summarizes selected terms from the literature. 2 Methods The CMEC is part of BJS' portfolio of forensic and law enforcement data collections, which all focus on staffing, budget, caseload, capacity, and access to resources. 
Two data collections have been administered, including the 2005 collection that referenced 2004 information and the 2019 collection that referenced 2018 information (hereafter called "2018 CMEC"). Both CMEC collections were designed to focus on the U.S. MDI system, providing a national picture of MEC offices, including personnel, expenditures, workloads, capabilities and procedures, and access to resources and technology. The present analysis uses the 2018 CMEC public dataset [ 49 ] to provide a national picture of MEC disaster planning and emergency response resources, including access to such trainings or resources and participation in emergency response drills based on agency characteristics, such as agency size and geographic location. The present analysis draws from the 2018 CMEC data collection RTI International performed for BJS (contract number 2017-MU-CX-K052). Approvals from the Office of Management and Budget and RTI's Institutional Review Board were obtained before any data collection activities began. 2.1 2018 CMEC In 2019 and 2020, RTI administered the CMEC referencing 2018 MEC information for BJS. BJS and RTI coordinated with a forensic expert panel review to design the census questionnaire and tested the draft survey across a selected pool of MECs before finalizing the instrument. RTI utilized a mixed-mode data collection approach that included email, mail, web, and computer-assisted telephone interviewing response options to bolster response, ultimately achieving an 81.4% response rate [ 5 ]. More information regarding the data collection methodology can be found in the 2021 BJS report [ 5 ]. For the present analysis, the 2018 CMEC public dataset was acquired through the National Archive of Criminal Justice Data at the University of Michigan (NACJD) [ 49 ]. As the 2018 CMEC BJS report describes, both a long and a shortened version of the 2018 CMEC were fielded [ 5 ]. The shortened version was fielded in the last few months of the survey and was designed to bolster response by only including critical item capture. Within the current analysis, only data from the long version of the 2018 CMEC [ 50 ] are included, because only the long version of the survey included the mass fatality and disaster planning resources questions. The long version respondent pool comprised 1340 responding offices out of the enumerated 2112 offices [ 5 ]. Item response across the long and short surveys ranged from 0 to 18% overall. For the present analysis, item nonresponse percentages—that is, the percentage of missing data by survey item—across the measures defined below ranged from 1.4% (disaster planning resources) to 2.4% (mass fatality investigations) [ 5 ]. In the analysis, the rate of question nonresponse was determined to be less than 25% for the 2018 CMEC data, including across variables listed in Table 3 . Out-of-range or missing data were reviewed and recoded as well. The data collection team performed data quality follow-up with the survey respondents to rectify question nonresponse during the active data collection period. If the data were still outstanding, the team used a hot deck imputation technique. In this technique, individual values are secondary to inferences about the larger population's parameters: a missing value for one respondent is replaced with the value from a similar respondent within the same dataset. The 2021 BJS report contains further information regarding the imputation procedures used for the 2018 CMEC administrations [ 2 , 5 , 51 ]. 
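As an illustration of the hot-deck idea described above (and not the actual BJS/RTI imputation code), the following Python/pandas sketch fills a missing value with the value reported by a "similar" respondent, here defined simply as another office in the same jurisdiction-size class; the variable names, grouping, and data are hypothetical.

```python
# Minimal hot-deck imputation sketch: replace a missing value with a donor value
# drawn from a similar respondent (here, an office in the same jurisdiction-size class).
import pandas as pd

df = pd.DataFrame({
    "office_id":  [1, 2, 3, 4, 5],
    "size_class": ["small", "small", "medium", "medium", "large"],
    "mass_fatality_training": [1.0, None, 0.0, 1.0, None],  # 1 = access, 0 = no access
})

def hot_deck(group: pd.Series) -> pd.Series:
    donors = group.dropna()
    if donors.empty:
        return group  # no donor available; leave the value missing
    return group.apply(lambda v: donors.sample(1).iloc[0] if pd.isna(v) else v)

df["mass_fatality_training"] = (
    df.groupby("size_class", group_keys=False)["mass_fatality_training"].apply(hot_deck)
)
print(df)
```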
2.2 Measures To determine access to mass fatality and disaster planning training and resources available to and used by MECs, the present analysis chiefly drew from Section A (Administrative) and Section F (Resources and Operations) of the 2018 CMEC survey, which are summarized in Table 3 . This analysis focused primarily on mass fatality and disaster preparedness and response questions. Data on access to drills were drawn from Question F4, which included a yes/no response to the question, "Does your office participate in county/statewide emergency response drills?" To provide a nuanced and meaningful view of these variables of interest, we examined the variables of interest by jurisdiction size, census region, agency budget, and agency staffing size. Jurisdiction size was drawn from Question A5, which asked, "What jurisdictions does your office serve?" The data from this question were subsequently matched to 2018 Census population data. Using the same methodology as BJS' 2021 report, jurisdiction size was classified across three categories, including large jurisdictions serving populations of 250,000 or more, medium jurisdictions with populations between 25,000 and 249,999, and small jurisdictions with populations of less than 25,000. Census regions were derived using the state associated with the MEC address listed (see Table 4 ). Agency budget data were derived from question B1, which asked "For the most recently completed fiscal year, what was your total budget?" Respondents entered their response into a numeric field. The budget data ranged from $0 to greater than $20,000,000; offices reporting budgets greater than $20,000,000 were excluded as outliers in the present analysis. Offices that reported a $0 budget operated on a fee-for-service basis or did not have a dedicated budget. The measure of staff size was drawn from Question A8 of the 2018 CMEC instrument, which asked respondents to enumerate the number of full-time, part-time, consultants/contractors, and on-call employees across eight different types of staffing roles (e.g., autopsy pathologists, coroners/non-physicians, death investigators, etc.). The staff size measure is a composite measure of full-time employees (i.e., 1.0 equivalent) and part-time employees who are valued at 0.5 of a full-time equivalent, such that an office with four part-time employees would have an employee count of two. 2.3 Data analysis These analyses rely primarily on descriptions of frequency distributions, measures of central tendency (e.g., mean, median), and cross-tabulations to explore the bivariate relationships. The data were analyzed using IBM SPSS version 28.0.1.1 (Armonk, NY) to group results by general MEC characteristics, emergency preparedness characteristics (e.g., direct training vs. training through a partner), and policies/procedures around evidence retention and recordkeeping. For the 2018 analysis period, chi-square tests were used to determine significant differences in whether an agency reported on a specific emergency preparedness-related question. For continuous variables, Mann-Whitney U tests were conducted to determine whether there were any statistically significant differences for emergency preparedness in 2018 because of the zero-inflated non-normal distribution of emergency preparedness use and the potential influence of extreme outliers. For these inferential tests, the alpha level was set to 0.05, and any cases with missing values for that bivariate pair were omitted. 
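The two inferential tests described above can be mirrored with standard open-source tools; the following is a small illustrative Python sketch using scipy, with made-up counts and values rather than the CMEC data: a chi-square test of independence on a region-by-access contingency table and a Mann-Whitney U test on a skewed, zero-inflated continuous measure.

```python
# Illustrative versions of the two inferential tests described in Section 2.3 (toy data, not CMEC).
from scipy import stats

# Chi-square test of independence: rows = census regions, columns = access yes/no.
contingency = [
    [150, 20],   # Northeast: yes, no
    [300, 45],   # Midwest
    [400, 55],   # South
    [200, 25],   # West
]
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# Mann-Whitney U test comparing a zero-inflated, non-normal measure across two groups.
with_access    = [0, 0, 1, 2, 3, 5, 8, 12]
without_access = [0, 0, 0, 1, 1, 2]
u, p = stats.mannwhitneyu(with_access, without_access, alternative="two-sided")
print(f"U={u:.1f}, p={p:.3f}")
```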
3 Results Over nine in 10 agencies reported access to mass fatality trainings/resources (90.5%) and disaster planning trainings/resources (91.0%), with a higher percentage of respondents reporting access through a partner ( Fig. 1 ). Over three-quarters (78.7%) of MECs participated in county/statewide emergency response drills ( Fig. 2 ). With respect to geographic location, the Midwest and South had lower percentages of access to mass fatality and disaster planning trainings/resources and lower participation in emergency drills than the Northeast and West (see Fig. 3 ). For access to mass fatality trainings/resources, the Midwest had the lowest proportion at 87.9%, while the South had the lowest percentages for access to disaster planning trainings/resources (89.3%) and participation in emergency response drills (75.7%). Chi-squared tests for the relationship between region and resource access are significant for both mass fatality and disaster planning trainings (χ² = 17.90, p < .001; χ² = 13.53, p = .004, respectively) at the 0.05 level. Although variability is observed across regions, the difference in participation in emergency drills is not statistically significant (χ² = 6.63; p = .085). Note that the District of Columbia (DC) and Puerto Rico (PR) are included in the CMEC dataset, with PR and Massachusetts (MA) being the only territory and state that did not respond to the three outcome measure questions analyzed. 3.1 Population Respondent agencies were categorized by the jurisdictional population size they serve to determine the relationship between population size and access to trainings/resources and participation in emergency response drills ( Table 3 ). Overall, the MECs serving the smallest jurisdictions reported the lowest access and participation levels, ranging from 78.7% for participation in emergency drills to 87.1% for disaster planning resources. The MEC offices serving the larger jurisdictions ranged from 88.1% for emergency drills to 99.5% for access to mass fatality trainings/resources. The chi-squared test for each resource is statistically significant at the 0.05 level: mass fatality (χ² = 38.59; p < .001), disaster planning (χ² = 22.24; p < .001), and emergency drills (χ² = 20.14; p < .001). 3.2 Budget and employment This analysis examined the relationship between budget and staffing and mass fatality and disaster planning levels. The median budget among reporting offices was $68,000. A binomial logistic regression was used to assess the relationships between the resources/trainings and the agency budgets. For mass fatality training, larger budgets were significantly associated with an increased likelihood of access (p < .001). Each additional $10,000 increases the odds of having access to this resource (exp(B) [odds ratio] = 1.01). Similarly, the data show that larger budgets were associated with an increased likelihood of access to disaster planning (p < .005; exp(B) = 1.006) and emergency drill participation (p < .001; exp(B) = 1.003). The median number of full-time employees was 1, and the median number of part-time employees was 0. The measures of central tendency are low because of the zero-inflated distribution of employees, specifically part-time employees, meaning that the typical agency has none. However, a binomial logistic regression can still be useful for examining the effects of the number of employees and assessing the relationships between variables. 
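To show how exp(B) odds ratios of this kind would typically be obtained, the following is a hedged Python sketch using statsmodels on fabricated data; the $10,000 budget scaling mirrors the reporting convention above, and none of the numbers reproduce the actual CMEC estimates.

```python
# Illustrative binomial logistic regression: access (0/1) ~ budget, with exp(B) as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
budget_10k = rng.gamma(shape=2.0, scale=5.0, size=n)   # budget in $10,000 units (fabricated)
logit = -0.5 + 0.08 * budget_10k                       # assumed true relationship for the toy data
access = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(pd.DataFrame({"budget_10k": budget_10k}))
model = sm.Logit(access, X).fit(disp=False)

odds_ratios = np.exp(model.params)                     # exp(B): odds multiplier per $10,000
print(odds_ratios)
print(model.pvalues)
```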
For access to mass fatality trainings/resources, additional employees were significantly associated with an increased likelihood of access (p < .001). Each additional employee increases the odds of access (exp(B) = 1.2). Similarly, we found that additional employees were associated with access to disaster planning trainings/resources (p < .001; each additional employee increases the odds of having access, exp(B) = 1.13) and emergency drill participation (p < .001; each additional employee increases the odds of participation, exp(B) = 1.06). The noted observations about population, budget, employment, and caseload are in the same direction, likely because of the correlation between these variables, which could all be attributed to larger, busier, and better-funded agencies having more resources. Budget and population were correlated at 0.646. 4 Discussion To understand the extent of MEC agency access to mass fatality and disaster planning trainings/resources and participation in emergency response drills, data from the 2018 CMEC were analyzed. This analysis shows that the vast majority of MECs have access to mass fatality trainings or resources and disaster planning trainings or resources and participate in emergency response drills. These findings align with Gershon et al., who found that 95% of agencies had an MFI plan in place [ 4 ]. Although these data are encouraging, it should be noted that high access levels to resources may not necessarily indicate thorough preparedness levels or readiness. For example, MFI resources may not include policies and requirements for a multiagency communication plan indicating (1) which agency is the command/lead agency; (2) which agencies should be notified and when this should happen; (3) which agencies will be responsible for family notification and grief services; (4) which agencies will be responsible for media outreach; (5) what are options for immediate and increased staffing needs; (6) what services will be a part of the post-incident plan, including workforce wellness and resilience policies; and (7) other necessary planning needs and resources [ 52 , 53 ]. These are all issues that have been identified as crucial in after action reports following MFIs [ 9–11 , 15 , 16 , 54 ]. Even though the literature shows it is beneficial for MEC offices to use standard operating procedures and training, studies point out that existing plans may be incomplete, and it is important to prepare for the probability of an "all fatal" incident [ 4 , 55 ]. Additionally, although a majority of MEC offices reported access to mass fatality trainings or resources and participated in emergency response drills, this rate does not reflect the frequency or quality of material review or participation levels. It is also notable that MECs in the Midwest and South reported less access to emergency response resources than their counterparts in the Northeast and West. This is important because the Midwest and South tend to have more rural areas that depend on less-resourced coroner offices [ 7 ] and serve areas where natural disasters such as tornadoes (South and Midwest) and hurricanes and coastal storms (Southeast and Gulf Coast) occur most often [ 55 ]. 
In the absence of disaster planning and preparedness in part or in whole, and in light of known workforce shortages and training budgets among MECs [ 37 ], the federal government can dispatch Disaster Mortuary Operational Response Teams (DMORTs) to areas affected by an MFI and can also bring organizational structure through frameworks such as Emergency Support Functions via the Administration for Strategic Preparedness and Response [ 56 ]. Moreover, other federal participation may include the Federal Emergency Management Agency (FEMA), National Transportation Safety Board, and the Federal Bureau of Investigation when and where appropriate; these teams can bolster, and sometimes lead, less experienced local or regional efforts for disaster response. Finally, it is imperative that these organizational and communication frameworks are inclusive of MEC leadership and staff and involve them from the beginning with disaster planning and preparedness. In addition to jurisdictional, regional, and population demographics, resources were affected by agency budget and number of full-time staff. Unlike Gershon et al.‘s study, which found that emergency preparedness levels did not differ based on population size and number of staff, the present analysis showed access to relevant trainings and resources differed by these measures [ 4 ]. However, it should be noted that Gershon et al.‘s study relied on a convenience, cross-sectional study of 122 responding MEC offices; given the methods used to recruit MEC respondents (i.e., recruitment through newsletters, websites, and mass emails) and the short response period of 6 weeks, their findings may have skewed more toward agencies that were better resourced, better staffed, and more aware and supportive of research. By contrast, the 2018 CMEC was a census of all MECs and achieved a high response rate across all types of MECs, including small to large offices with varying types of agency resources, and was fielded over 41 weeks to achieve an 81.4% response across the long form of the survey. Population, budget, and employment are likely correlated with larger, busier, and better-funded agencies having more resources. For this analysis, it is difficult to isolate one contributing variable for why some MECs possess more mass fatality and disaster planning trainings or resources than others, which is one limitation of the data that could be explored in future studies. The 2018 CMEC survey had other limitations as well. First, an abbreviated CMEC survey was sent out after a full survey and did not include any emergency preparedness questions. Thus, this analysis did not include the 307 respondents who provided shortened survey responses, which may have introduced some bias. Second, the CMEC instrument was informed by an expert panel and vetted via cognitive interviews; however, given the large pool of respondents who range widely in nomenclature, educational attainment, and practices, the CMEC questions may not have captured all nuances involved. Third, the frequency of the main questions of interest regarding county/statewide emergency response drills and access to mass disaster fatality and disaster planning trainings or resources lacked any timeframe boundaries. It was not clear how often and how these resources were being used in the present analysis. More research is needed on what access to these resources really means to offices. 
Finally, it is important to note that the measures of employment in this analysis may not be a true representation of time worked by MEC staff. For instance, part-time employees who were deemed half of a full-time employee may work significantly fewer hours in the MEC office. Because the 2018 CMEC was the first census to cover questions specifically related to mass fatality and disaster planning resources, it will serve as a foundation for future comparison [ 5 ]. Notably, the 2018 survey data are gross national metrics for emergency preparedness; future studies should look at more detailed measures of emergency preparedness, which could include training timeframes, use of resources, implementation of standards, and participation in emergency response drills. The federal government may also consider adding questions about DVI software and recently released standards ( Table 1 ) in future CMEC surveys—e.g., adopting best practices and standards for DVI and MFI management [ 26 , 27 ], communications following MFIs [ 25 ], and victim identification [ 42 ]—that would provide for greater insights into the adoption of these resources and practices moving forward. With the variety of natural disasters, mass shootings, and CBRNE agents at incidents, future studies could also investigate which agencies have implemented standards, best practices, tools, and policies and procedures and have access to resources specifically pertaining to those types of incidents. MECs continue to participate in the process to develop standards and best practices through ANSI/ASB and OSAC to help improve MFI response. More standard operating procedures, toolkits, and training should continue to be developed, and existing resources should be disseminated more broadly. MFI plans could address workforce resilience and follow-up to ensure post-traumatic stress disorder and other potential mental health issues are identified and treated. Similarly, MFI plans that put processes in place to avoid inequities could also improve these services. MECs are faced with a wide variety of mass fatality and disaster incidents (e.g., natural disasters, mass shootings, infectious diseases). This analysis of 2018 CMEC data sought to review U.S. MECs' access to mass fatality and disaster response resources and participation in emergency response drills, identify potential contributing factors for gaps in this access, and offer recommendations for future surveys and possibly an independent FEMA census. Funding This work was supported, in part, by the Forensic Technology Center of Excellence (Award 15PNIJ-21-GK-02192-MUMU), awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. This work was also supported by the Centers for Disease Control and Prevention (Contract Number HHSM500201200008I, Task Order Number 200-2016-F-91567). Funding for this research was also provided, in part, by RTI International. The original data collection effort, the 2018 Census for Medical Examiner and Coroner Offices, was performed under Cooperative Agreement 2017-MU-CX-K052 made to RTI International by the Bureau of Justice Statistics, Office of Justice Programs, U.S. Department of Justice. CRediT authorship contribution statement Micaela A. Ascolese: Writing – review & editing, Writing – original draft, Investigation, Formal analysis. Kelly A. Keyes: Writing – review & editing, Writing – original draft. Jeri D. 
Ropero-Miller: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Conceptualization. Sean E. Wire: Writing – review & editing, Methodology, Formal analysis, Data curation. Hope M. Smiley-McDonald: Writing – review & editing, Methodology, Formal analysis, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments The opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect those of the U.S. Department of Justice. We would like to acknowledge and thank all individuals who helped contribute data from participating medical examiner and coroner offices across the U.S., whose inputs were critical for ensuring that their offices were represented.
REFERENCES:
1. (2011)
2. HICKMAN M (2007)
3. (2023)
4. GERSHON R (2014)
5. BROOKS C (2021)
6. (2003)
7. ROBINSON R (2017)
8. (2023)
9.
10. (2007)
11. (2024)
12. ISSA A (2018)
13. BLAKE E (2018)
14. RIVERA F (2020)
15. (2016)
16.
17. (2016)
18. (2013)
19. MERRILL J (2016)
20. (2021)
21. (2019)
22. (2018)
23. (2018)
24.
25. (2021)
26. (2021)
27. (2022)
28. (2023)
29. (2017)
30.
31.
32. (2018)
33.
34. (2022)
35. DRAKE S (2021)
36. (2022)
37. (2019)
38. (2022)
39. (2012)
40. LEE C (2010)
41. (2022)
42.
43. GEBBIE K (2002)
44. PURYEAR B (2023)
45. METZLER E (2015)
46.
47. JAVED B (2023)
48. DENOLF R (2022)
49. (2023)
50. (2018)
51. RODRIGUEZ A (2022)
52. (2019)
53.
54. (2019)
55. CARROLL E (2017)
56.
|
10.1016_j.asjsur.2022.08.129.txt
|
TITLE: Inferior inner quadrant schwannoma of the breast: A rare case report
AUTHORS:
- Xi, Xiaoxia
- Wang, Xiao
- Yue, Xiaolei
- Chen, Yonglin
ABSTRACT: No abstract available
BODY:
Dear Editor, Schwannomas are relatively uncommon tumors that originate from the Schwann cells that make up the nerve myelin sheath; they are slow-growing, oval or fusiform in shape, with complete capsules and clear demarcation, and arise from peripheral nerves or the spinal cord. The vast majority are benign, and malignant cases are rare. Breast schwannomas are extremely rare, accounting for only 2.6% of all schwannomas and 0.2% of all breast tumors. 1 Here, we report a case of an uncommon breast lump in the inner lower quadrant that was diagnosed as schwannoma. 1 A 37-year-old woman presented with a > 2-month history of a painless lump in the right breast. The patient did not present with any features of von Recklinghausen's disease, such as skin nodules, café-au-lait spots, or a family history of tumors, and had no clinical manifestations. Sonography revealed a 2.3 × 0.8 cm oval, well-demarcated, and hypoechoic nodule with a palpable mass at the 5 o'clock position in the right breast, 1.8 cm from the nipple ( Fig. 1 A). Ultrasonography classified the lesion as category 4B in the Breast Imaging-Reporting and Data System because malignancy was suspected. We, therefore, performed an excisional biopsy using local anesthesia and removed an intact encapsulated mass. The mass was sent for histopathological examination, and the tumor size was 2.3 × 1.5 × 0.8 cm. Microscopically, the tumor envelope was intact, with the tumor tissue composed of numerous spindle-shaped Schwann cells arranged in an onion-skin-like or spiral pattern ( Fig. 1 B). The tumor had two alternating components ( Fig. 1 C), marked by Antoni A and B tissue. Antoni A: cell-rich areas with densely packed tumor cells arranged in nuclear palisades (Verocay bodies); Antoni B: low-cellularity areas with loose interstitium and myxoid changes, also known as reticular areas. Immunohistochemical results: tumor cells were diffusely and strongly positive for S-100 protein ( Fig. 1 D); fibroblasts at the tumor margin and around vessels expressed CD34 ( Fig. 1 E); EMA (−), GFAP (−), CD10 (−), CD117 (−), Ki-67 (<5%) ( Fig. 1 F). The pathologic diagnosis was right breast schwannoma. The patient recovered well after the operation, and no abnormalities occurred in the 2-year follow-up. Breast schwannoma is rarely reported. Its specific etiology is unknown, and it is difficult to distinguish it from other breast diseases. In all reported cases, the tumor was located in the outer upper quadrant, but in our patient the tumor was located in the inner lower quadrant, which is the highlight of this paper. Ultrasound has a high reference value for breast schwannoma, and pathological examination and immunohistochemical staining are the most reliable diagnostic methods. The histopathological manifestations of typical schwannomas are cell-rich areas (Antoni A) and low-cellularity areas (Antoni B). Antoni A tissue potentially transforms into Antoni B tissue during tumor progression. 2 S-100 is the preferred immunohistochemical marker. Once discovered, a complete surgical resection is necessary because of the possibility of a malignant transformation of breast schwannomas. 3 Funding This study has no funding source. Author contributions Xiaoxia Xi collected case data, wrote the manuscript and revised the paper; Xiao Wang collected data; Xiaolei Yue prepared materials; Yonglin Chen reviewed and confirmed the content of the manuscript. Declaration of competing interest All authors declare no conflicts of interest. Acknowledgements Not applicable. 
Appendix A Supplementary data The following is the Supplementary data to this article: Multimedia component 1 Multimedia component 1 Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.asjsur.2022.08.129 .
REFERENCES:
1. LEE H (2021)
2. HUANG L (2020)
3. HELBING D (2020)
|
10.1016_j.jsps.2018.05.008.txt
|
TITLE: Systematic review of the safety of medication use in inpatient, outpatient and primary care settings in the Gulf Cooperation Council countries
AUTHORS:
- Alsaidan, Jamilah
- Portlock, Jane
- Aljadhey, Hisham Saad
- Shebl, Nada Atef
- Franklin, Bryony Dean
ABSTRACT:
Background
Errors in medication use are a patient safety concern globally, with different regions reporting differing error rates, causes of errors and proposed solutions. The objectives of this review were to identify, summarise, review and evaluate published studies on medication errors, drug related problems and adverse drug events in the Gulf Cooperation Council (GCC) countries.
Methods
A systematic review was carried out using six databases, searching for literature published between January 1990 and August 2016. Research articles focussing on medication errors, drug related problems or adverse drug events within different healthcare settings in the GCC were included.
Results
Of 2094 records screened, 54 studies met our inclusion criteria. Kuwait was the only GCC country with no studies included. Prescribing errors were reported to be as high as 91% of a sample of primary care prescriptions analysed in one study. Of drug-related admissions evaluated in the emergency department the most common reason was patient non-compliance. In the inpatient care setting, a study of review of patient charts and medication orders identified prescribing errors in 7% of medication orders, another reported prescribing errors present in 56% of medication orders. The majority of drug related problems identified in inpatient paediatric wards were judged to be preventable. Adverse drug events were reported to occur in 8.5–16.9 per 100 admissions with up to 30% judged preventable, with occurrence being highest in the intensive care unit. Dosing errors were common in inpatient, outpatient and primary care settings. Omission of the administered dose as well as omission of prescribed medication at medication reconciliation were common. Studies of pharmacists’ interventions in clinical practice reported a varying level of acceptance, ranging from 53% to 98% of pharmacists’ recommendations.
Conclusions
Studies of medication errors, drug related problems and adverse drug events are increasing in the GCC. However, variation in methods, definitions and denominators preclude calculation of an overall error rate. Research with more robust methodologies and longer follow up periods is now required.
BODY:
1 Background The use of medication is perhaps the most common intervention in medical practice. Medication use occurs in many different settings and involves different health care practitioners, as well as patients and their carers ( Franklin and Tully, 2016 ). There are different types of problems associated with medication use, some are preventable events, some are not, and some result in injury and some do not. Medication errors (ME), are a global health care concern with the majority of research published from developed countries such as the United States of America and Europe ( Morimoto et al., 2010 ) and ( Jha et al., 2010 ) and much less information on the incidence and types of errors within the Middle East and the GCC in particular. The GCC comprises six countries: Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and United Arab Emirates (UAE). These countries are listed by the United Nations World Bank as high income countries ( World Bank, 2017 ). Total expenditure on health as a percentage of the country’s gross domestic product ranged from 2.2% to 5% in 2014 ( World Health Organization, 2014 ). The GCC countries’ health systems are also comparable and are more similar to each other than to other countries within the Eastern Mediterranean Region or other Middle Eastern countries. A previous systematic review of MEs across the whole of the Middle East concluded that ‘the main factor contributing to MEs […] is poor knowledge of medicines by both doctors (prescribers) and nurses (administering drugs).’ ( Alsulami et al., 2013 ). Publications specifically from the GCC region report ME as an issue within both primary care and the inpatient setting ( Al Khaja et al., 2005; Al Khaja et al., 2007; Al-Dhawailie, 2011 ). However, ME terminology used among the studies is different, as are the different types of medication safety aspects studied. The concept of “medication safety” potentially encompasses a wide range of areas ( Ackroyd-Stolarz et al., 2006; McLeod, 2016 ). We therefore aimed to conduct an updated systematic review of medication safety research from the GCC countries in order to describe the breadth of problems associated with medication use. This will enable a more complete representation of what has been explored in this region regarding medication errors and helps identify gaps in the literature and focusses on preventable harm relating to medication use in clinical practice. Our objectives were to summarise and evaluate quantitative as well as qualitative evidence published on MEs, drug related problems (DRPs) and adverse drug events (ADEs) within the GCC region and to make recommendations for addressing any gaps in the literature identified. 2 Methods 2.1 Data sources and search strategy The following electronic databases were searched for the period 1 January 1990 to 31 August 2016: CINAHL Plus, EMBASE, International Pharmaceutical Abstracts, PubMed (MEDLINE), ScienceDirect, and Web of Science. Bibliographies of relevant publications were also hand searched. The search strategy was tailored to each database and medical search headings (MeSH terms) were also utilised for PubMed (MEDLINE). One author (JAS) screened the titles and abstracts of all 3115 records identified, for relevance and to determine if the complete text should be retrieved for comparing against inclusion/exclusion criteria and potential inclusion to the review. A random 10% sample was then screened by a second reviewer NAS to assess reliability. 
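Agreement between the two screeners on such an overlapping sample is conventionally summarized with Cohen's kappa, as reported in the next paragraph; a minimal, illustrative Python sketch of this calculation, using hypothetical include/exclude decisions rather than the actual screening data, is shown below.

```python
# Cohen's kappa for two reviewers' include/exclude screening decisions (hypothetical data).
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "exclude", "exclude", "exclude"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.3f}")  # values of 0.41-0.60 are 'moderate' per Landis and Koch (1977)
```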
Cohen’s kappa (κ) ( Cohen, 1960 ) was calculated to be 0.568; according to Landis and Koch (1977) this is interpreted as indicating moderate agreement. Any differences in opinion about the relevance of the papers were resolved by discussion. For final study selection, the full text was assessed by JAS; any uncertainty was referred to NAS. Any cases of disagreement were referred to BDF, with further clarification and consultation undertaken by JP and HA as needed. Details of the protocol were registered with the PROSPERO international prospective register of systematic reviews (reference CRD42016038733). 2.2 Study selection 2.2.1 Inclusion criteria Research articles focussing on medication errors in the GCC countries published in English or Arabic were included, covering both qualitative and quantitative studies of all designs. Relevant conference abstracts were also included, given the anticipated paucity of published full text research articles from the GCC. Studies of prescribing, dispensing, administration and monitoring errors in inpatient, hospital outpatient and primary care settings, including both hospital and community pharmacies, were eligible. Also included were studies examining administration errors by patients/caregivers in their own homes, as well as original descriptive research on medication reconciliation errors and medication history errors (errors in the process of documenting the medications the patient is taking or used to take). Following the definition utilised by the Saudi Arabian Ministry of Health, the term ME was defined as: “ Any preventable event that may cause or lead to inappropriate use or patient harm while the medication is in the control of the health care professional, patient or consumer ” (Saudi Arabian Ministry of Health, 2012). Studies assessing DRPs, ADEs (both preventable and non-preventable) and potentially inappropriate medications (PIMs) were also included, as were studies of pharmacists’ interventions to reduce MEs, ADEs or DRPs. 2.2.2 Exclusion criteria Letters, opinion pieces, editorials and case reports were excluded. Studies focussing on expected side effects occurring with the proper use of a medication were also excluded, where a side effect was defined as: “ An expected, well known reaction that results in little or no change in patient management ” ( Ninno and Ninno, 2000 ). Research concerned with blood/blood products and parenteral and enteral nutrition was excluded. Systematic reviews that included studies from the GCC countries were not themselves included, but were used as a potential source for identifying further relevant studies. Finally, articles concerned with attitudes, perceptions, or views on clinical services were excluded. 2.2.3 Process of data extraction Search results were exported to EndNote X7 (Thomson Reuters, New York, NY, USA) and duplicates were removed. Article titles and abstracts were initially screened for relevance against the inclusion and exclusion criteria, followed by full text retrieval and analysis. Any ambiguities were discussed with BDF and NAS; further clarification and consultation was undertaken by JP and HA if needed. Data from included studies were extracted onto a data collection sheet developed for this purpose.
The extracted data comprised country, setting and data collection duration, study design, definitions used for study outcomes, the medication safety aspect analysed, method of error identification, and reported results. The extraction form was completed by JAS and reviewed by NAS and BDF. Any discrepancies were resolved by discussion with the remaining authors. 2.2.4 Quality assessment Quality assessment of the included studies was carried out by JAS and NAS, with any remaining uncertainties directed to and resolved by BDF. The quality assessment was carried out as part of the analysis of the included studies; study quality was based on specific aspects relating to medication safety research (e.g. the methods used for identifying medication errors). The studies were therefore reviewed according to 15 criteria ( Appendix B ) adapted from previous studies ( Allan and Barker, 1990; Ghaleb et al., 2006; Alsulami et al., 2013; McLeod et al., 2013 ). The first thirteen criteria are relevant to all medication error study types, while the remaining two apply only to studies of administration error. Any criteria not applicable to the study design were classified as ‘not applicable’. For the purpose of this study it was decided to assess the quality of the conference abstracts by adapting the same criteria as for full text articles, as there are no universally accepted criteria for evaluating conference abstracts. 3 Results 3.1 Search results The search yielded a total of 3115 hits; an additional two records were identified by searching bibliographies, giving a total of 3117. After duplicate removal, 2094 records remained. All studies identified were in English; no studies in Arabic were identified. Following initial screening, 294 records were assessed as potentially relevant. Following full text screening, 54 studies were identified for inclusion: 42 full papers and 12 conference abstracts ( Fig. 1 ). The geographical distribution of the studies ( Fig. 2 ) reveals that the majority were carried out in Saudi Arabia. None of the included studies were conducted in Kuwait. 3.2 Quality assessment The quality of the studies varied ( Appendix B ). Many studies did not specify a definition for ME and/or the categories of ME included. Criteria 14 and 15 were applicable to only three studies: one abstract by Aljamal (2012) and two full text studies, by Almazrou et al. (2015) and Sadat-Ali et al. (2010). Most studies adequately described the setting and study objectives. Only six of the 54 studies adequately described sample size calculations. The majority of the conference abstracts did not meet quality assessment criteria such as the inclusion of sample size calculations, validity and reliability, study limitations or details of ethical approval, most likely due to their limited word count. 3.3 Description of included studies Studies were classified within five main categories, with studies of ME further categorised into six subcategories ( Fig. 3 ). The following is a summary of the characteristics of the included studies; more details are given in Appendix D . The drug classes most commonly studied were antibiotics ( Al Khaja et al., 2006; Alanazi et al., 2015 ), and the drug classes most often used inappropriately were benzodiazepines and tricyclic antidepressants ( Al-Omar et al., 2013 ). 3.3.1 Studies describing medication errors 3.3.1.1 Studies describing prescribing errors Seventeen studies described prescribing errors.
Seven were cross-sectional audits of prescriptions ( Al Khaja et al., 2005, 2006, 2007, 2008; Khoja et al., 2011; Al-Hussein, 2008; Albarrak et al., 2014 ); all studied handwritten prescriptions except Albarrak et al. (2014) , who included both handwritten and electronic prescriptions. Five were retrospective audits ( Al Khaja et al., 2012; Altebenaui et al., 2015; Al Shahaibi et al., 2012; Irshaid et al., 2005; Aseeri, 2013 ), three were retrospective analyses of patient charts ( Mahmoud et al., 2016; Aljeraisy et al., 2011; Youssef et al., 2015 ) and two were cross-sectional chart reviews ( Alanazi et al., 2015; Al-Dhawailie, 2011 ). Data collection periods ranged from two weeks to three years. Common prescribing error definitions were those of Dean et al. (2000) and Neville et al. (1989) ; Appendix C gives more information on the definitions. Ten studies were conducted within a primary care or outpatient setting and the remainder within the hospital setting; two of the studies assessed prescriptions from both inpatient and outpatient settings. Within outpatient and primary care settings, the prevalence and types of prescribing errors have been described for infants as well as adults. In a study analysing prescriptions issued for infants ( Al Khaja et al., 2007 ), approximately 91% of prescriptions contained an error. In another study by Al Khaja et al. (2006) , approximately one fifth of infants were prescribed antibiotics at subtherapeutic doses. For adults, studies report varying results, ranging from approximately 7% of prescriptions containing errors ( Al Khaja et al., 2005 ) to 18% ( Khoja et al., 2011 ). Al Khaja et al. (2012) reported that approximately one quarter of prescriptions, ordered by two-thirds of primary care physicians, contained errors. Other studies reported up to 50% of prescriptions missing at least one item of information ( Altebenaui et al., 2015 ). Furthermore, up to 88% of prescriptions written by junior doctors were identified as containing major errors of omission and commission or errors of integration ( Al Khaja et al., 2008 ). Errors of integration, or knowledge-based errors in prescribing, were defined in that study to include potential drug-drug interactions or drug allergies, which may reflect a failure of the prescriber to integrate information about the patient or drug history ( Al Khaja et al., 2008 ). An Omani study by Al Shahaibi et al. (2012) found that different kinds of omission error were evident in up to 72% of prescriptions, while Irshaid et al. (2005) reported, from their analysis of prescriptions, that physicians' handwriting was illegible in approximately 64% of prescriptions. Alanazi et al. (2015) reported that the prevalence of inappropriate antibiotic prescriptions with at least one or more types of error was significantly higher among paediatric patients than adult patients. Al-Hussein (2008) carried out a study in which prescriptions were checked for compliance with 14 components of local guidelines; 87% of prescriptions did not meet these requirements. Regarding prescribing errors within an inpatient setting, following analysis of inpatient handwritten medication charts and medication orders, Mahmoud et al. (2016) reported that the incidence of prescribing errors was 3.6 (95% CI, 3.3–3.9) per 100 prescriptions, 33.9 (95% CI, 31.5–36.6) per 100 admissions and 76.5 (95% CI, 70.9–82.3) per 1000 patient days. Al-Dhawailie (2011) reported that approximately 7% of medication orders had prescribing errors.
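As a worked illustration of how these denominators relate (a sketch using the counts reported for Mahmoud et al. (2016) in Appendix D , where 691 prescribing errors were identified across 2033 patient files):

\[ \text{errors per 100 admissions} = \frac{n_{\text{errors}}}{n_{\text{admissions}}} \times 100 = \frac{691}{2033} \times 100 \approx 34.0, \]

consistent with the reported 33.9 per 100 admissions; the per-100-prescription and per-1000-patient-day figures substitute the corresponding denominators. Because each study chooses its own denominator, such rates are not directly comparable across studies unless converted to a common base, which is one reason an overall GCC error rate could not be calculated.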
Aljeraisy et al. (2011) reported that, in a tertiary paediatric inpatient setting in Saudi Arabia, the overall error rate was 56 per 100 medication orders (95% CI: 54.2%, 57.8%). Dosing errors were the most prevalent error type in all three studies. In a Saudi Arabian study by Youssef et al. (2015) on types of contraindicated medications, approximately 14% of the contraindicated medications that triggered a computerised prescribing system alert were nonetheless administered by the ordering physician to patients with renal insufficiency. Two studies assessed prescribing errors from more than one hospital area. Albarrak et al. (2014) compared handwritten prescriptions (from primary care and outpatient surgery settings) with electronic prescriptions (from an inpatient setting) for legibility, completeness and medication errors; a statistically significant difference ( P < 0.001) was identified between handwritten and e-prescriptions for omitted dose and omitted route of administration. One study described the use of a tool to decrease problems associated with medication use, retrospectively reviewing prescriptions from outpatient, inpatient and emergency department settings ( Aseeri, 2013 ). This study reported the outcome of introducing an antibiotic dosing standardisation policy and its effect on prescribing errors. Physicians were vigilant in documenting patient weight on prescriptions after the implementation of the standardised dosing policy, and dosing errors identified on prescriptions fell from approximately 34% of prescriptions in the pre-implementation phase to approximately 5% of prescriptions analysed post-implementation, a statistically significant reduction. 3.3.1.2 Studies describing administration errors Two studies were identified, both observational in nature, in which observation of medication administration was the method of data collection. One was carried out in an outpatient pharmacy waiting area, and the other in the inpatient setting. In the outpatient setting, Almazrou et al. (2015) found that 58% of mothers (patient carers) measured an accurate dose of paracetamol using an oral syringe, versus 50% using a dropper and 51% using a dosing cup; dosing accuracy for each type of instrument was influenced by the mothers’ education status. Aljamal (2012) observed nurses during medication administration in the inpatient setting. A total of 169 medication administration errors were observed out of 2112 opportunities for error, representing an error rate of 8%. The most common errors were wrong time, dose omission, and wrong dose. 3.3.1.3 Studies assessing errors in medication history and medication reconciliation errors Several methods were used by the five studies of medication reconciliation or medication history errors. In four, pharmacists screened the patient chart and performed interviews to identify discrepancies at admission, discharge, triage or transfer between inpatient wards, and then compared the findings with the patient medical record or patient discharge medication list ( Abu Yassin et al., 2011; Aljadhey and Al-Rashoud, 2013; Al Anany et al., 2012; Rehmani, 2011 ). Another study developed a medication reconciliation (MR) form as a tool to detect medication discrepancies ( Sonallah et al., 2014 ). Discrepancies found in patients’ medication lists at triage, admission or discharge ranged from 18% to approximately 77% of patient cases interviewed ( Aljadhey and Al-Rashoud, 2013; Sonallah et al., 2014 ).
The most common types of unintended medication discrepancies were medication omissions, followed by errors in dosage. 3.3.1.4 Studies assessing potentially inappropriate medication (PIM) use Two studies, both from Saudi Arabia, assessed PIMs in the elderly. Al-Omar et al. (2013) reported that approximately 44% of patients had filled a prescription for at least one PIM in the outpatient setting, and Al Odhayani et al. (2016) reported that approximately 53% of elderly participants attending appointments in outpatient clinics or as part of a home health care programme were using one or more PIMs. Harm caused by these PIMs was not assessed. 3.3.1.5 Studies assessing more than one type of medication error Overall, eight studies reported more than one type of medication error. Two were retrospective in design. The first, by Dibbi et al. (2006) , reviewed medical records of adult hospitalised patients over two years and reported that the most common type of error was wrong strength (concentration). The second, by Alakhali et al. (2014) , reviewed prescriptions from the outpatient setting over two months and identified 1850 opportunities for error and 201 (10.9%) prescribing, dispensing and administration errors. Five studies assessed more than one type of medication error using incident reports; all took place in large tertiary care hospitals in Saudi Arabia. Arabi et al. (2012) reported incidents at a rate of 5.8 per 1000 patient days; medication errors made up approximately 7% of all incident reports from the hospital and approximately 13% of all incident reports from the ICU. In a study by Alshaikh et al. (2013) , the medication error rate over the 1-year study period was 0.4% (949 medication errors for 240,000 prescriptions), and approximately 2% of the medication errors were categorised as resulting in any harm to the patient; medication errors were reported to have originated predominantly at the prescribing stage of the medication process. In the third study, all medication error incident reports collected over a two year period were analysed ( Sadat-Ali et al., 2010 ): 38 medication errors were reported from 23,597 admissions, giving a medication error reporting rate of 0.15% per admission. The fourth study was specific to a neonatal intensive care unit ( Hemida et al., 2011 ), estimating an incidence of one report involving medication error per 250 admissions, with antibiotics most commonly involved. The fifth study ( Al-Khani et al., 2014 ) determined that 10% of the prescribing errors included in the hospital reporting system were identified by pharmacists as involving the wrong drug. The remaining study, by Elnour et al. (2007) , reported the impact of a structured educational programme for inpatient nursing staff on usage of the MedSafe tool, a medication error reporting program launched in all inpatient nursing stations. Results indicated an increase in the number of MEs reported to the MedSafe tool after the structured programme; the types of errors most often reported were monitoring errors and dosage errors. 3.3.2 Studies of DRPs Five studies assessed DRPs; all five differentiated preventable from non-preventable DRPs. The DRP definition and classification of the Pharmaceutical Care Network Europe (PCNE, 2006 and 2010) was used by two studies ( Rashed et al., 2012; Al Hamid et al., 2016 ). Three studies ( Al-Olah and Al Thiab, 2008; Al-Arifi, 2014; Alghamdy et al., 2015 ) defined DRPs according to the definition of Strand et al. (1990) . The DRP categories identified across all five studies included adverse reactions, drug choice problems, dosing problems, and interactions. Pharmacists’ clinical interventions on identification of DRPs were at the prescriber level, the patient/caregiver level, and the drug level. Al-Olah and Al Thiab (2008) reported that the most common definite DRP-related admission to hospital was failure to receive medications, accounting for approximately 47% of all DRPs, followed by adverse drug reactions (approximately 25% of all identified DRPs). Al-Arifi (2014) reported that approximately 19% of patients presented to the emergency department due to DRPs and that approximately 93% of these patients needed hospital admission; the most common DRPs were adverse drug reactions (ADRs) and patient non-adherence. Alghamdy et al. (2015) reported that approximately 5% of admissions were due to DRPs, 70% of which were preventable. Rashed et al. (2012) assessed attendance of paediatric patients at the emergency department and reported a DRP incidence of approximately 29%; the majority were judged preventable. Al Hamid et al. (2016) randomly selected 150 patient medical records from all admissions of patients over 18 years of age and identified 94 medication-related problems (MRPs), of which 67% were definite ( actual as defined by PCNE 2010, based on personal communication, Al Hamid, October 2017) while 33% were probable ( potential as defined by PCNE 2010). The major risk factors associated with MRPs were polypharmacy and patient non-adherence. 3.3.3 Studies assessing adverse drug events Three studies assessed ADEs. A study from the UAE by Al-Tajir and Kelly (2005) compared two methods of detecting ADEs: in the first arm, data collection was limited to spontaneous reporting, while in the second arm active monitoring for ADEs took place. The incidence of ADEs detected through surveillance (active monitoring) was significantly higher than that of ADEs reported spontaneously, for both inpatients and outpatients. About 56% of ADEs identified by both methods combined were judged definite or probable and, of these, approximately 14% were consistently judged preventable. In Saudi Arabia, Aljadhey et al. (2013) determined the incidence of in-hospital ADEs and assessed their severity and preventability in an academic tertiary hospital. Incidents were identified through a combination of medical record review by study pharmacists and voluntary reports from other health care professionals. The incidence of ADEs was 8.5 per 100 admissions; the incidences of preventable and non-preventable ADEs were 2.6 and 6 per 100 admissions, respectively. In a more recent study, Aljadhey et al. (2016) determined the incidence of in-hospital ADEs and assessed their severity and preventability in four hospitals in Saudi Arabia, with incidents identified as described above ( Aljadhey et al., 2013 ). The authors of the included studies used a variety of age cut-offs to differentiate adults from children. The incidence of ADEs was 6.1 per 100 admissions and the incidence of potential ADEs was 16.9 per 100 admissions, where a potential ADE was defined as an error that carried a risk of causing injury related to the use of a medication but where harm did not occur, either because of specific circumstances or because the error was intercepted ( Morimoto et al., 2004 ).
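As a consistency check on the Aljadhey et al. (2013) figures, the overall ADE incidence decomposes additively into its preventable and non-preventable components:

\[ \underbrace{2.6}_{\text{preventable}} + \underbrace{6}_{\text{non-preventable}} \approx 8.5 \text{ ADEs per 100 admissions}, \]

with the small difference attributable to rounding of the reported components.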
3.3.4 Studies of pharmacists’ interventions Nine studies focused on pharmacists’ interventions. These had a variety of designs, and the interventions were aimed at the assessment or reduction of medication errors or other problems associated with medication use. The interventions included pharmacists’ documentation of interventions on prescriptions on inpatient wards or in primary care clinics, pharmacists performing medication use reviews, pharmacist counselling of patients, and pharmacist education of physicians. The first was a conference abstract ( Rahman et al., 1994 ) in which the authors documented interventions on prescriptions, with an intervention rate (error rate) of 1.3%. Al-Rashdi et al. (2010) and Al Rahbi et al. (2014) , both from Oman, and Hooper et al. (2009) , from Qatar, documented pharmacists’ interventions on prescriptions and reported rates of intervention acceptance ranging from 53% up to approximately 98% of all suggested interventions. Al-Ghamdi et al. (2012) reported the benefits of comprehensive pharmacist counselling: there was a significant difference in the occurrence of ADEs between the control group (no pharmacist counselling) and the intervention group; in the control group, 61% of ADEs were judged preventable and 39% were judged serious. Al-Jazairi et al. (2008) studied the participation of a clinical pharmacist in an ICU setting, reporting that the medical team accepted approximately 95% of the interventions; the main DRPs were no drug prescribed for the medical condition, inappropriate dosing regimen, and no indication for drug use. Mitwally et al. (2015) , in Qatar, reported a 52% reduction in prescribing errors after the prescribing physicians attended educational sessions prepared by the clinical pharmacy team. Rashed et al. (2012) investigated DRPs in hospitalised children: 258 DRPs were identified for 186 children, with a median of one DRP per patient. Dosing problems were the most common, followed by drug choice problems and ADRs. Regarding the interventions, approximately 43% were at the drug level, approximately 40% at the prescriber level and approximately 10% at the patient/caregiver level. Kheir et al. (2014) reported the results of an exploratory study conducting interviews as part of medication use reviews within a primary health care facility in Qatar; the most commonly encountered DRPs were non-adherence and adverse drug reactions. 3.3.5 Studies on perceptions of health care practitioners towards medication errors Three studies considered this topic; all were conducted in Saudi Arabia. Al-Rowibah et al. (2013) reported findings on the impact of computerised physician order entry (CPOE): 72% of physicians agreed that CPOE helped them to decrease ADEs, but 55% reported that it created new types of errors. Al-Arifi (2014) reported the results of a validated questionnaire administered to community pharmacists about dispensing errors; the majority of respondents indicated that the risk of dispensing errors was increasing, and most were aware of dispensing errors. Al Anazi and Al-Jeraisy (2015) used a survey to collect information on healthcare professionals’ perceptions of factors contributing to medication error occurrence; some of the underlying factors reported were interruptions while writing the order, lack of clarity of the physician’s order, and no double-checking of doses.
4 Discussion 4.1 Key findings Fifty-four studies were identified that met our inclusion criteria; the majority were conducted in Saudi Arabia. No articles from Kuwait met the inclusion criteria, while only two were from the UAE. Notably, Kuwait is the only one of the six countries that is not a member of the WHO Programme for International Drug Monitoring, coordinated by the Uppsala Monitoring Centre ( WHO Collaborating Centre for International Drug Monitoring, 2017 ). Saudi Arabia is the largest and most populous of the GCC countries, factors that may account for the larger number of studies from this country. None of the included studies had a qualitative study design; it may therefore be important to supplement quantitative studies with qualitative research to aid understanding of the causes of medication errors and other problems, and to identify barriers to, and facilitators of, addressing them. 4.2 Interpretation Only one study related to the community pharmacy setting ( Al-Arifi, 2014 ). The studies concerning incident reporting were all in tertiary care hospitals in large cities of Saudi Arabia, with only one study, in Qatar, educating staff on the usage of a medication error reporting tool. It is important to encourage a culture of ‘no blame’ across the GCC, and attempts to report, and therefore analyse, incidents in smaller healthcare institutions would help identify the unique issues faced by practitioners in these establishments. There was a lack of medication safety studies in health care settings in rural areas. Few studies described a multi-site setting, which makes generalisation of results more difficult. In addition, studies identified errors and provided recommendations, but there were few follow up studies of strategies to address them. Multi-disciplinary research is a concept that could be integrated within the curricula of medical schools, pharmacy schools and allied health sciences to enhance collaboration in addressing medication safety. A lack of standardised terminology in ME studies has been reported internationally ( Lisby et al., 2010 ), and the studies included in this review likewise used a variety of definitions to determine study outcomes, as shown in Appendix C . Even within studies of younger patients or of elderly patients, and even within the same country, a variety of definitions was used for what comprises a medication error or a drug related problem. This is in agreement with Alsulami et al. (2013) . 4.3 Recommendations These findings highlight the need to standardise GCC terminology related to MEs and ADRs and to create a unified platform for establishing patient safety. It must also be kept in mind that, in studies of voluntary reporting, the hospital culture and environment must be considered. Furthermore, the reported numbers are likely an underrepresentation of reality, as data from rural areas have not been reported; incentive schemes for reporting in rural health areas should perhaps be explored. Patient interactions and the factors affecting adherence are also worth exploring: several of the reviewed studies identified patient non-compliance as a factor in emergency department use, yet in the interventional studies only approximately 10% of interventions were recorded at the patient level. Understanding whether specific reasons exist, and identifying the issues particular to patients in the GCC, would be a step closer to developing practical solutions.
In recent years, advances in technology have played a large part in the evolution of medication prescribing and administration; research evaluating the use of such technology should be encouraged and supported. The majority of studies were descriptive (n = 43), while only 11 were experimental in nature, i.e. evaluating some kind of intervention. 5 Strengths and limitations The strengths of this review are that it is the first systematic review focussing on ME research within the GCC and that, given the expected paucity of studies, conference abstracts were reviewed as well as full text journal articles, making the search very comprehensive. While it was difficult to assess the quality of the included abstracts, we used the same criteria as for the full text articles. A limitation is that no quality threshold was in place, so full text articles as well as abstracts were included if they met the inclusion criteria, regardless of their assessed quality. This was done to ensure that all potential studies related to MEs were included; while it allows us to present a comprehensive picture of research within the GCC, it does mean that some included studies were of higher quality than others. Due to the lack of standardisation, whether in the terminology used, the type of data collected, or the age groups defined as ‘adult’ or ‘child’, it was not possible to conduct any form of meta-analysis of the studies identified. A kappa value was not calculated for the quality assessment, but the authors ensured that every included article was assessed by two reviewers, with any differences discussed and resolved. Also, the degree or magnitude of harm caused by improper medication use was not assessed consistently across studies and could not be reported. 6 Conclusions This systematic literature review highlights several findings. The first is that the literature on MEs in the GCC is very diverse, with a wide range of definitions, denominators and measurement approaches; this also meant that it was not possible to calculate an overall incidence of medication error. Future studies could be improved to provide wider impact and a clearer rationale. Suggestions include enhancing coordination between healthcare colleges in the region, and strengthening research methodologies to increase the validity of results and allow them to be placed in context, for example by increasing follow up studies. More emphasis should also be placed on research into medication errors in the community pharmacy setting, and increased understanding of patient behaviour and medicines management is also justified. Medication error research is generally increasing in the region; several unique issues are worth exploring for a better understanding of the errors occurring and the solutions to overcome them. Funding statement Bryony Dean Franklin is supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre (NIHR Imperial PSTRC) and affiliated with the NIHR Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance. The views expressed in this article are those of the author(s) and not necessarily those of the NHS, the NIHR, Public Health England or the Department of Health. Conflict of interest: Bryony Dean Franklin is supervising a PhD student who is part funded by an electronic prescribing vendor.
Acknowledgement Acknowledgements are due to Mansour Adam Mahmoud for his assistance during the full text article review process, and to the Medication Safety Research Chair, Vice Deanship of Research Chairs.

Appendix A Search terms used and databases searched

In each search, the GCC country terms — ("G.C.C" OR "Gulf Cooperation Council" OR Bahrain OR Kuwait OR Oman OR Qatar OR "Saudi Arabia" OR "United Arab Emirates"), abbreviated below as [GCC] — were combined with AND with the topic terms shown. (As published, the "monitoring error" search omitted Qatar from the country terms.)

1. [GCC] AND ("Patient Safety" OR "Medication Safety" OR "Medication Reconciliation" OR Pharmacovigilance): CINAHL = 62; Embase = 740; IPA = 24; PubMed = 654; Science Direct = 356; Web of Science = 79
2. [GCC] AND ("Adverse drug reaction" OR "Adverse Drug Event"): CINAHL = 14; Embase = 343; IPA = 12; PubMed = 45; Science Direct = 160; Web of Science = 20
3. [GCC] AND ("medication error" OR "Prescribing error" OR "prescribing mistake"): CINAHL = 1; Embase = 139; IPA = 4; PubMed = 13; Science Direct = 92; Web of Science = 11
4. [GCC] AND ("dispensing error" OR "dispensing mistake"): CINAHL = 0; Embase = 1; IPA = 1; PubMed = 1; Science Direct = 9; Web of Science = 1
5. [GCC] AND ("Medication administration error" OR "medication administration mistake"): CINAHL = 0; Embase = 1; IPA = 0; PubMed = 1; Science Direct = 2; Web of Science = 1
6. [GCC] AND ("medication error" OR "drug error" OR "drug mistake"): CINAHL = 1; Embase = 139; IPA = 3; PubMed = 15; Science Direct = 76; Web of Science = 10
7. [GCC] AND ("transcription error" OR "transcription mistake"): CINAHL = 0; Embase = 1; IPA = 0; PubMed = 0; Science Direct = 17; Web of Science = 0
8. [GCC, without Qatar] AND ("monitoring error" OR "monitoring mistake"): CINAHL = 0; Embase = 0; IPA = 0; PubMed = 0; Science Direct = 2; Web of Science = 0
9. [GCC] AND ("wrong drug" OR "wrong drug error" OR "wrong drug mistake"): CINAHL = 0; Embase = 6; IPA = 1; PubMed = 4; Science Direct = 12; Web of Science = 2
10. [GCC] AND ("dosage error" OR "wrong dose" OR "dosage mistake"): CINAHL = 0; Embase = 5; IPA = 2; PubMed = 4; Science Direct = 14; Web of Science = 2
11. [GCC] AND (cause of medication error): CINAHL = 0; Embase = 1; IPA = 0; PubMed = 4; Science Direct = 5; Web of Science = 0
12. [GCC] AND (preventable drug related problem): CINAHL = 0; Embase = 0; IPA = 0; PubMed = 2; Science Direct = 1; Web of Science = 0
13. [GCC] AND (risk factors for medication error): CINAHL = 0; Embase = 0; IPA = 0; PubMed = 3; Science Direct = 0; Web of Science = 0

Total number of hits per database: CINAHL = 78; EMBASE = 1376; IPA = 47; PubMed = 742; Science Direct = 746; Web of Science = 126. Total = 3115.

Appendix B Quality assessment of included studies The numbers across the top of the table relate to the criteria used to assess the quality of the studies. The first twelve studies are published as conference abstracts; the remaining 42 are full text journal articles.

Appendix C Definitions used for outcomes by studies included in the review

Medication error
- "Any preventable event that may cause or lead to inappropriate medication use or patient harm while the medication is in the control of the healthcare professional, patient, or consumer" (NCCMERP, 1995, 2005, 2008, 2012). Used by: Al Khaja et al. (2005), Elnour et al. (2007), Hooper et al. (2009), Khoja et al. (2011), Arabi et al. (2012), Alshaikh et al. (2013), Alakhali et al. (2014), Mahmoud et al. (2016), Alghamdy et al. (2015), Aljadhey et al. (2012)a.
- "A failure in the treatment process that leads to, or has the potential to lead to, harm to the patient" (Ferner and Aronson, 2006). Used by: Al Khaja et al. (2012).
- "An error in the medication process: ordering, transcription, dispensing, and administration, and discharge summaries" (Bates et al., 1995)a. Used by: Almazrou et al. (2015), Aljadhey et al. (2016).
- "An error in the medication process: ordering, transcription, dispensing, and administration, and discharge summaries. Errors included wrong as well as missing actions" (Lisby et al., 2010). Used by: Alakhali et al. (2014).
- "Errors in drug ordering, transcribing, dispensing, administering or monitoring" (Kaushal et al., 2001). Used by: Aljeraisy et al. (2011).
- "Any preventable error in the medication administration process starting from prescribing and including preparing, dispensing, administering, monitoring the patient for effect, and transcribing (e.g. medication administration record, MAR)" (Miller et al., 2007). Used by: Aljeraisy et al. (2011).
- "A dose of medication that deviates from the physician's order as written in the patient's chart or from standard hospital policy and procedures" (American Society of Health-System Pharmacists, ASHP, 1982). Used by: Sadat-Ali et al. (2010).

Prescription error
- Prescription error categories: omissions (major, minor); dose or direction error; legal requirements not met; prescription written for a non-prescription product; unclear quantity prescribed; incomplete ("as directed" or "p.r.n.") (Shaughnessy and Nickel, 1989). Used by: Khoja et al. (2011), Al Khaja et al. (2005), Al Khaja et al. (2007).

Prescribing error
- "A clinically meaningful prescribing error occurs when, as a result of a prescribing decision or prescription writing process, there is an unintentional significant (i) reduction in the probability of treatment being timely and effective or (ii) increase in the risk of harm" (Dean et al., 2000). Used by: Al Khaja et al. (2012), Al-Dhawailie (2011).
- Prescribing errors identified were dealt with in one of two ways: (1) if medication orders were ambiguous but the pharmacist could determine the medication intended, he or she would endorse the drug chart accordingly; (2) if the pharmacist was not certain of the medication intended, or if the error concerned more fundamental errors in the choice of drug or dose, the prescriber would be contacted to resolve the issue (Dean et al., 2002a,b). Used by: Al-Dhawailie (2011).
- Classification of prescribing errors as Type A, Type B, Type C or Type D (Neville et al., 1989). Used by: Khoja et al. (2011), Al-Hussein (2008), Altebenaui et al. (2015).
- "Prescribing errors may be defined as an incorrect drug selection for a patient" (ASHP, 1993). Used by: Elnour et al. (2007).

Dispensing error
- "Dispensing errors are mistakes made by staff when distributing medications to nursing units or directly to patients in an ambulatory-care pharmacy" (Allan and Barker, 1990; Flynn et al., 2002). Used by: Elnour et al. (2007).
- "Dispensing errors are defined as any inconsistencies or deviations from the prescription order, such as dispensing the incorrect drug, dose, dosage form or quantity, or inappropriate, incorrect or inadequate labelling, or confusing or inappropriate preparation, packaging, or storage of medication prior to dispensing" (Szeinbach et al., 2007). Used by: Al-Arifi (2014).

Administration error
- "A drug administration error is an error of omission or commission that occurs in the administration stage, when the medication has to be given by a nurse, the patient himself or herself, or a caregiver" (Flynn et al., 2002). Used by: Elnour et al. (2007).
- "An opportunity for error is defined as any drug prescribed, any unordered or omitted drug, and any dose given and any dose omitted" (definition utilised but not referenced). Used by: Aljamal (2012).

Medication reconciliation errors
- Discrepancies were classified as omissions (not ordering a medication used by a patient prior to admission); commission (adding a medication not used prior to admission); or wrong dose, frequency, or route of administration (definitions utilised but not referenced). Used by: Abu Yassin et al. (2011).

Drug related problem
- "A drug related problem is an undesirable patient experience that involves drug therapy and that actually or potentially interferes with a desired patient outcome" (Strand et al., 1990). Used by: Al-Olah and Al Thiab (2008), Al-Arifi (2014), Alghamdy et al. (2015).
- "An event or circumstance involving drug therapy that actually or potentially interferes with the desired health outcome" (Pharmaceutical Care Network Europe, PCNE, 1999, 2008, 2010, 2012). Used by: Hooper et al. (2009), Rashed et al. (2012), Rashed et al. (2013), Kheir et al. (2014), Alghamdy et al. (2015), Al Hamid et al. (2016).

Adverse drug event (ADE)
- "ADEs are any noxious, unintended, and undesired reaction that occurs because of a drug in doses normally used in man for prophylaxis, diagnosis, or therapy of disease or for the modification of physiologic function, or that is caused by drug interactions, allergic drug reactions and medication errors" (WHO, 1999). Used by: Al-Tajir and Kelly (2005).
- "An adverse drug event is an injury caused by a medication, which includes both adverse drug reactions (an effect which is noxious and unintended, and which occurs at doses used in man for prophylaxis, diagnosis or therapy) as well as harmful effects arising from errors at any stage including ordering, transcribing, dispensing, administering, or monitoring of a drug" (Bates et al., 1995b; Jha et al., 1998; Gandhi et al., 2000). Used by: Aljadhey et al. (2013)a, Aljadhey et al. (2016).
- "An ADE was defined as an injury due to a medication, including both adverse drug reactions and injuries caused by medication errors" (Nebeker et al., 2004). Used by: Al-Ghamdi et al. (2012).

Potential adverse drug event
- "A potential adverse drug event is a medication error with the potential to cause an injury but which does not actually cause any injury, either because of specific circumstances, chance, or because the error is intercepted and corrected" (Morimoto et al., 2004). Used by: Aljadhey et al. (2013)a, Aljadhey et al. (2016).

Non-preventable adverse drug reaction
- "Non-preventable adverse drug reactions, also known as adverse drug reactions, are defined by the WHO as 'a response to a drug which is noxious and unintended, and which occurs at doses used in man for prophylaxis, diagnosis, or therapy of disease, or for the modifications of physiological function'" (WHO, 2014). Used by: Aljadhey et al. (2016).

Potentially inappropriate medications
- "Potentially inappropriate medications are medications in which the risks of use outweigh the benefits" (Rancourt et al., 2004). Used by: Al-Omar et al. (2013).
- Beers criteria potentially inappropriate medication list, as updated by Fick and colleagues in 2003 (Fick et al., 2003). Used by: Al-Omar et al. (2013).
- American Geriatrics Society Beers Criteria Update Expert Panel (2012). Used by: Al Odhayani et al. (2016).

Appendix D Data extracted from included studies (for each study: reference; country; design; definitions used for study outcomes; method of error identification; medication safety aspect analysed; reported results)

I. Studies describing medication errors — (a) studies describing prescribing errors (n = 17)

Al Khaja et al. (2005) — Bahrain. Design: retrospective clinical prescription review. Definitions: medication error as defined by NCCMERP (2012). Method: audit of prescriptions to screen for errors (over a two week period). Aspect analysed: identification of prescribing errors and their determinants in a primary care setting. Results: 77,511 prescriptions were audited, of which 5959 (approximately 8%) were identified as containing errors. These 5959 prescriptions contained 16,901 medications, of which 13,630 (approximately 85%) were associated with errors. Major errors of omission associated with topical preparations were significantly more frequent than those with systemic preparations; however, prescriptions for systemic preparations had a higher rate of commission errors.

Irshaid et al. (2005) — Saudi Arabia. Design: retrospective analysis of prescriptions. Definitions: none used; information within a prescription was judged "unclear" if one word was not written clearly and "unreadable" if none of the three investigators present during the screening session could read it. Method: prescriptions were analysed for the essential elements of a prescription order and the data recorded using a coding key. Aspect analysed: prescriptions written by physicians were screened for the stated essential elements. Results: 3796 prescriptions were analysed, approximately 7% of the total written during that period. The name and signature of the prescriber were included in approximately 83% and 82% of prescriptions respectively; the prescriber's handwriting was not clear in approximately 64% of prescriptions; the strength of the medication was included in approximately 26%.

Al Khaja et al. (2006) — Bahrain. Design: cross-sectional prescription review. Definitions: major errors of omission and major errors of commission were defined as per Al Khaja et al. (2005);
the British National Formulary (#37; 2002) was referred to for dose ranges, and paediatric doses were calculated as per Insley (1996). Method: prescriptions issued to infants were analysed for prescribing pattern and prescribing errors. Aspect analysed: nationwide study evaluating the prescribing profile and prescribing errors of antimicrobials in infants in primary care. Results: of 2282 dispensed prescriptions, 543 included an antimicrobial agent; of these, 119 were prescribed at subtherapeutic doses, 28 at supratherapeutic doses, and the remaining 394 at therapeutic doses. Major errors of omission involved duration of therapy and dosage form; errors of commission commonly included dosing frequency and dosage of antimicrobials.

Al Khaja et al. (2007) — Bahrain. Design: retrospective clinical prescription review. Definitions: as per Al Khaja et al. (2005). Method: prescriptions issued for infants were collected and reviewed over a 2-week period from 20 health centres. Aspect analysed: trends in drug utilisation, including off-label prescribing, and the prevalence and type of medication-related prescribing errors. Results: the 2282 dispensed prescriptions included a total of 5745 medications; 2066 prescriptions (approximately 91%) were identified as containing major errors of omission, commission, or errors of integration. Errors of omission accounted for approximately 54% and included length of therapy/quantity and dosage form; errors of commission accounted for approximately 44%, commonly incorrect dosing frequency and incorrect dose/strength. Off-label prescribing was observed in approximately 16% of cases.

Al Khaja et al. (2008) — Bahrain. Design: prospective collection of prescriptions for two consecutive cohorts. Definitions: prescribing errors defined according to Shaughnessy and Nickel (1989) and Al Khaja et al. (2005). Method: all prescriptions issued by 12 final year residents in May 2004 and 14 final year residents in 2005 were collected; prescriptions were screened by one author of the study and subsequently audited by another. Aspect analysed: the percentages of omission, commission and integration errors in prescriptions written by final year residents. Results: a total of 2692 prescriptions were collected; 88% were identified as including major errors of omission, with dosage form and length of treatment being frequent omissions. Among errors of commission, dosing frequency was the most commonly incorrectly stated component.

Al-Hussein (2008) — Saudi Arabia. Design: cross-sectional audit of prescriptions. Definitions: nonconformities were classified according to the component of the prescribing process involved (patient, provider, prescribing, drug/dispensing or other, as described by the study author); a second classification followed Neville et al. (1989). Method: prescriptions were collected during fortnightly audits by random selection of 30 prescriptions, for a total of 330; information about each prescription was entered into a database by the pharmacists and, based on yes/no answers on compliance with the indicators, an automated decision was made on conformity. Aspect analysed: the degree of prescription conformity to the prescribing guidelines in primary care, in order to develop a systematic process for addressing prescription errors in primary care and to provide feedback to health care providers. Results: approximately 13% of prescriptions fully conformed to the guidelines, while the remainder (87%) did not. Less than 1% of the inconsistencies were potentially harmful to the patient; approximately 77% had a possible negative effect on the pharmacist's work. Patient information was deficient in approximately 17% of cases.

Khoja et al. (2011) — Saudi Arabia. Design: cross-sectional audit of prescriptions prescribed or dispensed over one full working day. Definitions: prescription errors defined as per US Pharmacopeia (1995); the classification of Neville et al. (1989) was used to classify errors as type A, B, C or D. Method: samples of prescriptions were analysed to obtain evidence on the nature and extent of errors; prescriptions containing errors were allocated an error classification following discussion between one investigator and the pharmacists involved. Aspect analysed: prescription errors, comparing the private and public sectors. Results: public clinics, 1182 prescriptions (2463 prescribed drugs); private clinics, 1200 prescriptions (2836 prescribed drugs); 5299 drugs in total. Prescribing errors were found on 990 (approximately 19%). Both type B and type C errors were more common in public than private primary health care centres, while type D errors were more frequent on private than public clinic prescriptions.

Al-Dhawailie (2011) — Saudi Arabia. Design: not stated. Definitions: the cited medication errors were categorised according to the definitions of Dean et al. (2002a,b). Method: inpatient medication charts and handwritten orders were identified and rectified by ward and practising pharmacists within inpatient pharmacy services; data were collected and evaluated, and the causes of problems were identified and dealt with in one of two ways. Aspect analysed: the incidence of prescribing errors for hospitalised patients, and the clinical impact of pharmacist intervention on the detection of these errors. Results: of 1580 prescriptions, 113 (approximately 7%) were found to contain prescribing errors and were the subject of clinical pharmacist intervention. Wrong strength and wrong administration frequency of the prescribed drug were the most commonly reported errors (approximately 35% and 23%, respectively); other errors such as wrong patient/drug or wrong dose were also encountered. The prescribing errors were of varying severity. Multiple contributing factors were identified: lack of training on prescribing during undergraduate medical studies, workload, stress, and ineffective communication between healthcare professionals.

Aljeraisy et al. (2011) — Saudi Arabia. Design: retrospective cohort study. Definitions: prescription error defined per Kaushal et al. (2001) and Miller et al. (2007), with reference also to Lesar et al. (2006), Abushaiqa et al. (2007) and Kohn et al. (2000). Method: physical inspection of physician medication orders and review of patient files. Aspect analysed: the incidence and types of medication prescription errors, and potential risk factors, in a paediatric inpatient tertiary care setting. Results: of 2380 orders examined over the five week period, the error rate was 56 per 100 medication orders (CI: 54.2%, 57.8%). Dose errors accounted for approximately 40% of these errors, and incorrect dose errors for approximately 21%. Errors occurred more frequently with the intravenous route of administration, and one third occurred in the paediatric intensive care unit.

Al Khaja et al. (2012) — Bahrain. Design: retrospective, nationwide audit of prescriptions. Definitions: medication error per Ferner and Aronson (2006); prescribing error per Dean et al. (2000). Method: eligible prescriptions were audited by the first author and then independently reviewed by the second and third authors; discrepancies were resolved by discussion. Aspect analysed: the frequency and nature of medication prescribing errors pertaining to cardiovascular/antidiabetic medications in prescriptions written by primary care physicians. Results: 2773 prescriptions were analysed; approximately 26% had medication prescribing errors. No significant differences in overall errors were evident between prescriptions ordered by family physicians and by general practitioners. Prescribing errors commonly involved lipid lowering medications, β-blockers, high dose metformin and high dose glibenclamide.

Al Shahaibi et al. (2012) — Oman. Design: observational, retrospective. Definitions: none stated. Method: retrospective analysis of 900 prescriptions from four different hospitals; each prescription was checked five times — first for superscription errors, then inscription errors, then subscription errors, then legal errors, and finally an overall review. Aspect analysed: evaluation of handwritten outpatient prescriptions and associated errors of omission. Results: 900 handwritten outpatient prescriptions were analysed, covering 1471 prescribed drugs. The most common superscription omissions were age and gender; the most common inscription omissions were dosage form and strength/dose of drug; subscription omissions included omission of the prescriber's signature; and the date of dispensing was omitted in 100% of prescriptions.

Aseeri (2013) — Saudi Arabia. Design: retrospective cohort study. Definitions: dosing error per McPhillips et al. (2005), i.e. an antibiotic dose that was 110% or more of the maximum recommended daily dose or below 90% of the minimum recommended daily dose. Method: a retrospective cohort study of 300 randomly collected, physician-prescribed antibiotic order sheets over a 2-week period within different settings of the tertiary hospital (inpatient unit, ambulatory care clinic, emergency department); 300 prescriptions were collected in the pre-implementation phase and 300 in the post-implementation phase. Aspect analysed: comparison of the rate of dosing errors for antibiotic orders in paediatric patients before and after implementation of a standard dosing table for oral and parenteral antibiotics with pre-calculated dosages for different weight ranges. Results: physician compliance with the antibiotic dosing standardisation policy after implementation was 62%. The policy reduced the rate of dosing errors from approximately 34% to approximately 5% (P = 0.0001), and weight documentation on antibiotic prescriptions improved from approximately 65% to approximately 85% (P = 0.0001).

Albarrak et al. (2014) — Saudi Arabia. Design: prospective study. Definitions: none stated. Method: handwritten prescriptions were received from three outpatient departments and electronic prescriptions were collected from the paediatric ward. The handwritten prescriptions were evaluated for completeness and legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was based on a validated checklist adapted from previous studies (Delgado et al., 2007; Bobb et al., 2004; Aljeraisy et al., 2011); legibility was evaluated by two pharmacists on a three point Likert-type legibility scale similar to that of Mendonca et al. (2010). Aspect analysed: the legibility and completeness of handwritten prescriptions compared with an electronic prescription system, with respect to medication errors. Results: 398 prescriptions (199 handwritten, 199 electronic) were assessed; 71 (approximately 36%) of handwritten and 5 (approximately 3%) of electronic prescriptions were identified as containing errors. A statistically significant difference (P < 0.001) was observed for omitted dose and omitted route of administration. The rate of prescription completeness in handwritten prescriptions ranged from approximately 88% to 91% across the three clinical units. Drug interactions were evident in the handwritten prescriptions but were not reported in the electronic prescriptions.

Altebenaui et al. (2015) — Saudi Arabia. Design: not stated. Definitions: per Neville's classification (1989); prescription errors were classified as major (potentially life threatening), minor (non-life threatening) or trivial. Method: retrospective cross-sectional analysis of physician prescriptions issued over a one month period; 1000 prescriptions were randomly selected for review. Aspect analysed: the types and frequency of prescription errors from different departments, outpatient and the emergency room (ER). Results: patient file numbers and medication dosages were missing in more than 20% and 40% of reviewed prescriptions, respectively. At least 30% of the reviewed prescriptions were deemed to have illegible handwriting. Non-life threatening items, including age, physician signature and stamp, date, sex and diagnosis, were missing in more than 50%; weight was missing from all prescriptions. Prescriptions written by ER physicians had more missing items than those written by outpatient clinics (P = 0.01).

Youssef et al. (2015) — Saudi Arabia. Design: retrospective study. Definitions: renal function was calculated with the Modification of Diet in Renal Disease (MDRD) formula and the Cockcroft-Gault equation, as per the National Institute of Diabetes and Digestive and Kidney Diseases (2014). Method: detailed prescriptions were abstracted from the electronic medical record; the data were examined for medications that are renally cleared and/or potentially nephrotoxic, and these medications were categorised according to the CDSS internal database as contraindicated or not contraindicated. Aspect analysed: determination of the types of contraindicated medications administered to patients with renal insufficiency by physicians who override alerts provided by the computerised decision support system (CDSS). Results: of the 314 prescriptions for medications that were renally cleared and/or potentially nephrotoxic, 44 (14%) were for contraindicated medications. The contraindicated medications ordered were limited to aspirin, gliclazide, nitrofurantoin and spironolactone.

Alanazi et al. (2015) — Saudi Arabia. Design: cross-sectional. Definitions: not stated. Method: review of charts and prescriptions of patients presenting with infections; the prevalence of inappropriate antibiotic prescriptions was calculated as the number of physician orders with at least one type of error divided by the total number of prescriptions, multiplied by 100. Aspect analysed: the prevalence and predictors of antibiotic-related prescription errors among patients admitted to an emergency centre at a tertiary health care facility. Results: adults (>15 years) made up approximately 61% of patients and paediatric patients (<15 years) approximately 39%. The majority of patients were not screened for antibiotic allergies (approximately 92%). Three main antibiotic categories were prescribed in both age groups: penicillins, cephalosporins and macrolides. The prevalence of inappropriate antibiotic prescriptions with at least one or more types of error was approximately 46%, significantly higher among paediatric patients than adults (P = 0.001). Physicians tended to prescribe antibiotics at higher than recommended dosages and/or frequencies.

Mahmoud et al. (2016) — Saudi Arabia. Design: retrospective chart review. Definitions: National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index (2005). Method: four month retrospective chart review; the severity of prescribing errors was determined by two independent reviewers. Aspect analysed: the incidence of prescribing errors using a validated definition; the main outcomes were the percentage of medication orders and hospital admissions with prescribing errors, and the types of prescribing errors. Results: 691 prescribing errors were identified in 2033 patient files. The incidence of prescribing errors was 3.6 (95% CI, 3.3–3.9) per 100 prescriptions, 33.9 (95% CI, 31.5–36.6) per 100 admissions and 76.5 (95% CI, 70.9–82.3) per 1000 patient days. The most common prescribing error type was dosing errors, and antibiotics were the drug class most commonly involved.

I. Studies describing medication errors — (b) studies describing administration errors (n = 2)

Aljamal (2012) — Saudi Arabia. Design: cross-sectional prospective observational study. Definitions: an opportunity for error was defined as any drug prescribed, any unordered or omitted drug, and any dose given and any dose omitted (definitions utilised but not referenced). Method: the medication administration error rate was calculated by dividing the number of actual errors by the total number of opportunities for error. A disguised observation method was used: the nurses were accompanied during medication administration, and the medications administered were registered and compared with the eligible prescriptions in the medication chart. Aspect analysed: the frequency, type and potential clinical consequences of medication administration errors in a tertiary hospital. Results: a total of 169 medication administration errors were observed out of 2112 opportunities for error, an error rate of eight percent. Five types of error were detected: wrong time (57%), dose omission (35%), wrong dose (5%), wrong drug (2%) and wrong technique (1%). The majority of errors did not cause harm, and six errors were intercepted before reaching patients.

Almazrou et al. (2015) — Saudi Arabia. Design: cross-sectional observational study. Definitions: medication error as per Bates et al. (1995). Method: interviews with the mothers were performed by pharmacy students; the mothers were asked to demonstrate how to measure 5 mL of paracetamol syrup using a cup and a syringe, and 1 mL of paracetamol syrup using a dropper, while being observed; dosing errors were evaluated visually. Aspect analysed: a cross-sectional study in which mothers were observed as they used a set of commonly available dosing devices: a dosing cup, syringe and dropper. Results: of 575 participants, 334 (58%) measured an accurate dose of paracetamol using an oral dosing syringe, 286 (50%) using a dropper, and approximately 51% (count not stated) using a dosing cup. Participants measured more than the intended dose with the dosing cup and less than the intended dose with the dropper.

I. Studies describing medication errors — (c) studies assessing errors in medication history and medication reconciliation errors (n = 5)

Rehmani (2011) — Saudi Arabia. Design: prospective cross-sectional survey. Definitions: none specified. Method: nurses recorded a medication list during triage in the electronic medical record (EMR); this home medication list was not placed in the emergency department chart. Records were then reviewed by a physician, and the research-generated home medication list was compared with the standard medication list to determine the number of omissions, duplications and dosing errors. Aspect analysed: the accuracy of medication history taking at emergency department triage. Results: 2170 adults completed the survey (88% of patients approached). Discrepancies in the medication lists obtained during triage were documented for 52% of patients; dosing or frequency errors accounted for 62.0%. Discontinued medications were included, additional medications were omitted, and patients reported taking non-prescription medications not listed in the electronic medical record.

Abu Yassin et al. (2011) — Saudi Arabia. Design: prospective observational study. Definitions: discrepancies were classified as omissions (not ordering a medication used by the patient prior to admission), commission (adding a medication not used prior to admission), or wrong dose, frequency or route of administration (definitions utilised but not referenced). Method: a pharmacist screened the patient's chart, reviewed recent laboratory results and interviewed patients to acquire a comprehensive medication history; all information obtained from patients was compared with the medications recorded by the physician on the patient's admission to hospital. Aspect analysed: the role of pharmacists in identifying discrepancies in medication history at hospital admission. Results: sixty patients were interviewed, taking a total of 564 medications. At least one discrepancy was found in 37% of patients; the most common discrepancies observed were omissions of medications and dosage errors.

Al Anany et al. (2012) — Qatar. Design: not stated, but from the description a cross-sectional interventional study. Definitions: none stated. Method: clinical pharmacists interviewed the patient or caregiver within the first 24 h of admission or transfer to review their medications; the collected data were then compared with the current medication list prescribed after admission or transfer, and interventions were made through a medication reconciliation process on a specially prepared form. Aspect analysed: the impact of medication reconciliation conducted by clinical pharmacists on reducing adverse drug events at admission and transfer, by identifying different types of interventions. Results: for the 52 patients interviewed, the total number of medications reconciled was 263.
Of these, 93 medications (35%) required the intervention of clinical pharmacists. Omission was the most common type of error, followed by wrong doses and medications with no indication Aljadhey et al. (2013) b Saudi Arabia Observational Cross-sectional None stated Discrepancies (number and type) were recorded in a data collection sheet. Then the discharge counselling pharmacist conducted medication reconciliation by comparing the discharge medication list with the best possible medication history provided by hospital pharmacy records To identify the discrepancies number and type upon conducting discharge reconciliation One-hundred and seventy-three patients were screened and 568 discrepancies were identified in 121 patients, with a mean of 4.7 ± 2.8 per patient. Eighteen percent of patients presented with at least one unintentional discrepancy, which were omission, commission, changed frequency, duplication and wrong duration Sonallah et al. (2014) Qatar Retrospective, descriptive and post-interventional study None stated A standardized medication reconciliation form was developed and used by clinical pharmacists as a tool to detect the number and types of medication discrepancies and document clinical pharmacist interventions This study was conducted to evaluate the medication reconciliation (MR) process as a newly initiated service by clinical pharmacists. Two hundred thirty-two forms were collected and 1640 medications were reconciled. One hundred and seventy-eight cases (approximately 77%) had medication discrepancies upon hospital admission, Most of the discrepancies were due to medication omissions, incorrect dosages, and different medications. Clinical pharmacists’ interventions were carried out in 150 cases (approx. 65%) I- Studies describing medication errors d. Studies assessing potentially inappropriate medication use (n = 2) Al-Omar (2013) Saudi Arabia A retrospective review of prescription records Potentially inappropriate medication (PIM) defined as the definition of: Rancourt et al. (2004) Beers criteria (as updated by Fick et al. (2003) were used to identify the PIMs The source of the data was outpatient pharmacy prescription records at Riyadh Military Hospital (RMH) for 2002, 2003 and 2004. Data were re-coded, new variables were created and the total cost of medications was calculated To explore the prevalence of (PIM) use in the elderly, to identify the trends and patterns of prescribing such medication, and to calculate the associated direct medication cost of such practice in Saudi hospital A total of 20 521 PIM were identified. The prevalence of PIM for 2002, 2003 and 2004 was approx. 3%, 2% and 2%, respectively. A total of approx. 43% of the patients had filled a prescription of one PIM, the remainder had filled a prescription with 2 or more PIM. Digoxin accounted for approx. 24% of these PIM. Other medications involved were cardiovascular drugs, iron supplements, and laxatives. The total direct cost that was associated with inappropriate prescribing was 518 314 Saudi Riyals (United States $138 217, where one US dollar = 3.75 Saudi Riyal) AlOdhayani et al. (2016) Saudi Arabia Retrospective The study participants were elderly, as defined by the World Health Organisation (WHO, 2011) Beers crtieria ( Fick et al., 2003 ) was used to determine the number of PIMs Data were collected from patients’ medical electronic and non-electronic records, and from the main hospital laboratory framework. 
The number of PIMs was determined by using Beers criteria 2012 and a review of the literature This study aimed to establish the extent of inappropriate drug prescription for and use by elderly patients Of the 798 included patients; 419 were using one or more PIMs. The most common PIM was a high dose of ferrous sulphate, in about 33% of the participants compared to the rest of the group ( P < 0.001). Analgesics, opioids, antispasmodics and muscle relaxants, were frequently prescribed I- Studies describing medication errors e. Studies assessing more than one type of medication error (n = 8) Dibbi et al. (2006) Saudi Arabia A retrospective review of patient medical records None stated Retrospective review of medical records for adult hospitalised patients 18 month period The study focused on types, causes, contributing factors, frequency of medication errors and patients outcome Two thousand six hundred twenty-seven medical records were reviewed and 3963 errors were identified. One thousand five hundred fifty-nine files contain one error, 800 files with 2 errors and 268 with 3 or more errors. The most common was wrong strength (confusion between microgram and milligram). Other errors included wrong route of administration, wrong dosage form and wrong dose which included over, under and extra doses. Medication errors were possible to be one of the related factors among 26 deaths Causes of error cited- Human factors as miscommunication (including misinterpretation of order and written miscommunication) Elnour et al. (2007) United Arab Emirates Prospective interventional study A medication error is defined by the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP 2005) Prescribing errors defined as per American Society for Hospital Pharmacists ASHP (1993) Dispensing errors defined as per Allan and Barker (1990 ) and as per Flynn et al. (2002) Drug administration error defined as per Flynn et al. (2002) A systematic random sample of the inpatient nursing staff completed a structured program consisting of pre/post self-reported questionnaire on a new medication safety program (Med Safe Tool) for medication error reporting The generated medication errors reported were edited by the clinical pharmacists, and root cause analysis was performed. The research clinical pharmacists reviewed and assessed the accuracy of each true medication error incident report Demonstrates the benefits of implementing a computerized medication safety program (Med Safe Tool) with regard to reporting all types of medication errors The number of medication errors reported to the Med Safe Tool before the program (n = 41) versus (n = 57) after the structured program. There were 9 types of medication errors (Most errors occurred during the medication administration stage]. Most of the medication errors pertain to the outcome category and severity code B, as per NCC MERP 2005 Sadat-Ali et al. (2010) Saudi Arabia Retrospective The definition of medication error was of Health System Pharmacists ( ASHP, 1982 ) The incident reports during two year period were collected and analysed for pertinent data. The medical charts were evaluated The prevalence and characteristics of medication errors reported Twenty-three thousand and nine hundred fifty-seven patients admitted and 38 medication errors reported. Most common errors: missed medication, expired medication, wrong time of medications There were three adverse events where the patients had extended hospital stay. 
No patients died or experienced permanent harm Hemida, et al. (2011) Saudi Arabia Observational study None specified All incident reports that were voluntarily reported from the neonatal intensive care unit were reviewed for medication errors From these reports, the incidence and nature of medication errors was estimated To study the nature of medication errors of neonates admitted to level III neonatal intensive care unit (NICU) There were 66 incident reports involving medication errors with estimated incidence of one per 250 admissions. Most prevalent type was dispensing error (91%). Nurses were involved more commonly than pharmacists and physicians. The most common type of MEs for nurses: delay/not giving; pharmacists: delay /not dispensing and physicians; incomplete prescriptions. The most common medications involved were antibiotics and total parenteral nutrition Arabi et al. (2012) Saudi Arabia Descriptive study of paper based incident reports Medication error as per National Coordination Council for Medication Errors Reporting and Prevention NCC MERP) index (2008) Evaluated submitted incident reports from all hospital areas including the intensive care unit for one year period To examine the rates and categories of incident reports, both hospital-wide and in the intensive care unit (ICU) There were 38,171 hospital admissions. Total of 3041 incident reports from all hospital areas, yielding a rate of 5.8 per 1000 patient days. Medication errors accounted for approximately 7% and 13% of all incident reports from the hospital and the ICU respectively Alshaikh et al. (2013) Saudi Arabia Cross sectional review of occurrence/variant reports related to medication errors (National Coordination Council for Medication Errors Reporting and Prevention NCC MERP) index (2008) All occurrence/variant reports related to medication errors were documented on a hospital web-based medication error form that was designed to capture information on all aspects. Medication error reports were reviewed and reported at quarterly intervals over a 1-year period The objective of the current study was to explore the rate of reporting medication errors and factors associated with the root causes of these errors in a large tertiary teaching hospital The medication error rate over the 1-year study period was 0.4% (949 medication errors for 240,000 prescriptions). During this period, approx. 1.5% of the errors were categorized as resulting in any harm to the patient (all category E). Medication errors were reported predominantly at the prescribing stage (approx. 89%). Illegible or unclear handwriting (17%) was a reported cause of error Al-Khani et al. (2014) Saudi Arabia Retrospective study including incorrect drug error reports Medication prescribing error was defined as per Dean et al. (2000) The study was a review of incorrect drug error reports for 21 month period. Reports were reviewed by two pharmacists to ensure accuracy of data classification The objective was to explore factors that help pharmacists identify and thus prevent harm from incorrect drug prescribing errors in an ambulatory care setting. During the specified period 2073 prescribing errors were reported in the hospital safety reporting system. Incorrect drug prescribing errors occurred at a rate of 10% (203 reports). 
Factors that allowed the pharmacist to identify incorrect drug prescribing errors before dispensing the medication include- reviewing the mandatory electronic prescription indication field, reviewing the patient medication history Alakhali et al. (2014) Saudi Arabia Retrospective prescription review A medication error was defined as per National Coordinating Council for Medication Error and Prevention (NCCMERP) 2005, and as per Lisby et al. (2010 ) Retrospective study reviewing all the prescriptions for two months To detect the medication errors in the different stages of medication use process such as prescribing, transcription, dispensing and administration Total 1850 opportunities for errors registered. Prescribing errors were 10% of these errors, dispensing errors and administration errors both approx. 0.5% No transcription errors were observed II- Studies describing drug related problems (n = 5) Al-Olah and Al Thiab (2008) Saudi Arabia Prospective observational study A drug-related problem (DRP) was defined as per Hepler and Strand (1990) . On a daily basis, the investigators collected data on a data collection sheet for all emergency department (ED) admissions during the previous 24 h To identify and evaluate admissions due to DRPs through the ED Of 557 patients admitted through the ED, 82 were admissions due to DRP (approx. 15%). Fifty-three were definite, 29 were probable. The most common definite DRP admission was due to failure to receive medications followed by adverse drug reactions and drug overdose Rashed et al. (2012) Saudi Arabia A prospective cohort study DRP were defined as per the (PCNE) Pharmaceutical Care Network Europe, 2008 ) DRPs were identified by a researcher reviewing the medical records of children attending the ED during a three month period DRPs incidence in children attending an ED was calculated, preventability was determined and severity assessed The results from KSA arm of the study: Total Patients (n = 143) Fifty-two DRPs identified; most common types were dosing problems followed by drug choice problems. Fifty-one of the 52 DRPs identified were preventable; and approx. 77% of minor severity Al-Arifi et al. (2014) Saudi Arabia Prospective cohort observational study Drug related problems (DRP) were defined according to the ( Strand et al., 1990 Information was taken by one of the authors from the patient file and/or patient interviewing using the specially designed data collection sheet Aims were: – To prospectively determine the incidence and types of emergency department (ED) visits and admissions due to drug related problems (DRPs) at a tertiary hospital – To assess the severity and preventability of the drug related admissions or visits – To identify the drugs and patient groups that are most commonly involved Random selection of 300 patients presenting to emergency department of which, 56 (approx. 19%) were presented to ED due to DRPs. The most common DRPs was due to adverse drug reactions (approx. 30%) and patients’ non-compliance (approx. 30%), followed by untreated indication then drug interactions; supratherapeutic and subtherapeutic dose. It was noted that adverse drug reaction incidence was almost double in female patients than male (11:6) Alghamdy et al. 
(2015) Saudi Arabia Retrospective review of medical records of selected emergency department admissions Definitions of DRP: ( Strand et al., 1990 ) While medication error is defined as ( Lisby et al., 2010 ) and ( NCCMERP 2012 ) Files of suspected cases of DRPs reporting to ED in the 12 month period were scrutinized. Suspicion arose from the hospital record system based on Diagnosis Code Numbers (ICD-9-CM, Professional 2010) and from triggers, such as some drugs, laboratory tests, and signs and symptoms pointing to DRPs To estimate prevalence of admissions as a result of DRPs at the emergency department (ED) Of 5574 admissions, 253 were DRPs. They were categorised as: non-compliance to treatment (approx. 44%), overdose toxicity and side effects of drugs (approx. 20%), drug-interactions (approx. 12%), accidental and suicidal drug ingestions (approx. 10%), drug allergy (4%), Over 60% of DRPs were preventable and approx. 4% of patients died Al Hamid et al. (2016) Saudi Arabia Retrospective medical record review Medicine-related problem (MRP) is defined as per Pharmaceutical Care Network Europe (PCNE 2010) A data collection tool was developed based on the Pharmaceutical Care Network Europe (PCNE) classification tool (PCNE 2010). The tool was used to extract data from each medical record The aims of this study were to: – Investigate hospitalisations due to medicine related problems (MRP). In adult patients with cardiovascular disease and/or diabetes mellitus – Determine the major causes and risk factors contributing to medicine related problems – Identify the main medicines associated with medicine related problems Out of 150 medical records reviewed, 94 medicine related problems were identified of which approx. 67% resulted in hospitalisations. Commonly encountered medicine related problems were treatment effectiveness and adverse drug reactions, accounting for approx. 98%. Polypharmacy was a major risk factor associated with medicine related problems. Insulin was implicated in approx. 47% of MRPs while oral antidiabetic agents. Approximately 34% of the MRPs were related to cardiovascular medicines, including antihypertensive (i.e., ACEIs, CCBs), anticoagulants (aspirin), antiarrhythmic (beta blockers and digoxin), and antihyperlipidemics (statins) III- Studies describing adverse drug events (n = 3) Al-Tajir and Kelly (2005) United Arab Emirates Prospective cohort study Definition for ADE according to WHO ( WHO, 1999 ) The incidence of ADE was detected through spontaneous reporting the first and last quarter of the year 2003. During the second and third quarters, active monitoring for ADEs took place. ADEs were identified by looking for the documented events and by using an ADE trigger list (Used by Gandhi et al., 2001) ADEs were assessed for causality using the Naranjo algorithm ( Naranjo et al., 1981 ) and for severity and preventability The incidence of ADEs was calculated and the two different detection methods were compared The incidence of ADEs detected through surveillance was significantly higher ( P < 0.001) than for ADEs reported spontaneously for both inpatients (and outpatients. Most ADEs were judged to be of mild to moderate severity. Approx. 56% of ADEs were judged definite or probable and, of these, approx. 14% were consistently judged preventable. The most prevalent drugs implicated were central nervous system, antiinfective, and cardiovascular agents Aljadhey et al. (2013) a Saudi Arabia Prospective cohort study ADE as per ( Jha et al., 1998 ) as well as Gandhi et al. 
(2000) A potential ADE as per ( Morimoto et al., 2004 ) Medication error and category classification as per Medication Error Reporting and Prevention (NCC MERP) (2005) were defined as harm from medications Incidents were identified through a combination of medical record review by study pharmacists and voluntary reports from other healthcare professionals. Trigger tool was used to guide chart review further Primary outcomes of this study were the frequency of, ADEs, potential ADEs and medication errors The secondary outcomes were the severity of these events, their preventability and the associated risk factors During the study period, there were 977 admissions with 9585 patient-days in the 5 study units. Pharmacists identified 361 incidents in 261 patients during the study period, of which the reviewers accepted 281. Approximately 30% of the accepted incidents were ADEs, judged definitely or probably preventable. Two hundred and twenty-three incidents were classified as medication errors, of which (approximately 59%) had the potential to cause harm. The incidence of ADEs in was 8.5 per 100 admissions. Preventable ADEs most commonly occurred in the ordering stage Aljadhey et al. (2016) Saudi Arabia Prospective cohort study Each incident was defined as an ADE (preventable and non-preventable), potential ADE (PADE) (which was classified as either intercepted or non-intercepted), or a medication error with low risk of causing harm. Used definitions adapted from Bates et al. (1995) a,b and Morimoto et al. (2004) and World Health Organization (2014) . Data collected from four hospitals (a teaching hospital, one large and one small government hospital and one private hospital), Incidents were identified through a combination of medical record review by study pharmacists and voluntary reports from other healthcare professionals. Two independent clinicians were provided with a study manual guide to independently review the incidents and decide on inclusion of incidents and further classify them as ADEs, PADEs or medication errors with low risk of causing harm. They were then able to assess severity and preventability. This was a methodology developed by the Brigham and Women’s Hospital’s Centre for Patient Safety Research and Practice Bates et al. (1995 ) a Objective was to estimate the incidence and risk factors associated with ADEs and determine their severity and preventability. Primary outcomes were incidence of ADEs, PADEs and medication errors with low risk of causing harm. Secondary outcomes were severity of events, their preventability, and associated risk factors Complete data for 3985 patients were analysed. One thousand six hundred seventy-six cases of ADEs, PADEs, and medication errors were identified. Physicians reviewed and accepted 1531 (approx. 91%). They were classified as: Approx. 40% medication errors with low risk of harm, approx. 44% PADEs, and approx. 16% ADEs. Of the ADEs, approx. 35% were deemed preventable. “Errors resulting from preventable ADEs were most common at the prescribing stage followed by the dispensing and administering stages. Most of the preventable ADEs were judged to be serious” IV- Studies assessing interventions of pharmacists (n = 9) Rahman et al. (1994) Saudi Arabia Interventional study Not stated (it’s an abstract) The pharmacist, after confirmation with the physician, documented the intervention. A computer program was developed using FOXPRO. 
Interventions made during the one year study period were analysed Pharmacist intervention, documentation and problem resolution of erroneous physician prescription The intervention (error rate) was approx. 1%. Approx. 34% were dose related while 67% as minor in terms of severity. Anti-infectives were involved in approx. 21% Al-Jazairi et al. (2008) Saudi Arabia Prospective, non-randomised observational study None Stated The clinical pharmacist performed daily multi-disciplinary rounds, with documentation of all interventions. At the end of the round the clinical pharmacist completed a data collection form to record each intervention given. A physician verified all interventions for validity and clinical significance To evaluate the rate, (and clinical significance, acceptance by medical team) of clinical pharmacist’s interventions in a cardiac surgery intensive care setting The clinical pharmacist made 394 interventions on 600 patients. The medical team accepted 328 interventions (approx. 83%). Main drug related problems and interventions were: no drug prescribed for the medical condition, inappropriate dosing regimen (including dose, rate, frequency, and route), no indication for drug use and inappropriate drug selection. The anticipated outcome of the interventions were targeted enhancing therapeutic outcomes, resolution/prevention of an adverse drug reaction or toxicity and cost saving Hooper et al. (2009) Qatar Prospective, Interventional study 1. Definition for a pharmacy intervention was: Working definition for intervention: ‘any contact made by a pharmacist during the dispensing process with a prescriber or a patient and that was aimed at rationalizing drug prescribing or use’ Medication errors defined as per Flynn and Barker (1999) Drug related problem defined as per PCNE (1999) 3. Considered a prescribing error as any prescribing decision which results, or had the potential to result in, an unintentional significant reduction in the probability of treatment being timely and effective, or an increase in the risk of patient harm Pharmacists used online integrated health care software (TrakCare®; InterSystems, Cambridge, MA, USA) to document all interventions made. Each intervention made was communicated to the respective prescriber. All interventions and their outcomes were reviewed later by two members from the research team Prescribing error interventions documented by pharmacists in four pharmacies in a primary health care service in Qatar Of 82,800 patients’ prescriptions, 594 patients’ prescriptions were intercepted for suspected errors (approx. 1%) The total number of DRP-related interventions made was 890 interventions Over half of all errors were related to drug choice problems, followed by drug safety problems. Fifty-three percent of all interventions were accepted. Interventions as a result of transcription errors, legality and formulary issues were eliminated from this study through the use of computerised physician order entry (CPOE) Al-Rashdi et al. (2010) Oman Prospective interventional study None stated (it’s an abstract) Interventions on electronic prescriptions over one-year were evaluated. A standard data collection form was used to capture the relevant data. Clinical relevance was defined as to whether efficacy or toxicity was either improved or reduced. Clinical relevance was based on the judgments of at least two pharmacists To evaluate the number and types of pharmacists’ interventions of electronic prescriptions at a University Hospital. 
Out of 186,353 prescriptions, 454,654 items were dispensed and 1123 interventions were recorded. Only 3% of the interventions were administrative (absence of doctor’s signature/ wrong patient’s card) while 97% clinical. The clinical interventions were categorized into drug regimen and drug choice. Approx. 62% of problems associated with drug regimen were related to wrong doses. Interventions improved efficacy and avoided toxicity Al-Ghamdi et al. (2012) Saudi Arabia. Prospective, nonrandomised observational study An ADE was defined as per ( Nebeker et al., 2004 ) ADEs due to medication errors were considered to be preventable, while those caused by adverse drug reactions (without an error) were considered to be non-preventable. The incidences of ADEs after discharge from the hospital were identified using a questionnaire The intervention pharmacist comprehensively counselled patients about their discharge medications. The control group included similar patients who received routine discharge counselling by nurses. Two weeks after discharge, the same pharmacist called the patients and assessed the frequency of ADEs. Two independent clinicians reviewed each ADEs and judged its severity and preventability To assess a program involving comprehensive medication counselling provided by pharmacists at the time of discharge. The study outcome was the incidence of patient-reported ADEs after discharge Two hundred patients were included, 100 in the control group and 100 in the intervention group. Approx. 88% (175/200) patients were successfully contacted two weeks after. ADEs occurred in 2 patients in the intervention group and in 21 patients (23 incidents in 21 patients) in the control group ( P < .001). 14 ADEs were judged as preventable, and 9 were judged as serious. Warfarin, insulin, anti-laxatives and iron supplements were some of the agents involved Rashed et al. (2012) Saudi Arabia A prospective cohort study Drug-related problems (DRP) defined as per (PCNE) Pharmaceutical Care Network Europe, (2008) Adopted the data collection method of intensive chart review, used by Ghaleb et al. (2010) and by Dean et al. (2002). For measurement of the severity of the DRPs used validated scale for medication errors published by Dean and Barber (1999) . Data were collected using a modified version of the DRP Registration Form version 5.01 designed by (PCNE) Of interest was the epidemiology of and potential associated risk factors of drug-related problems in hospitalised children. Once a potential DRP was identified, causes, intervention, and outcome of the intervention were identified and recorded. Total paediatric patients were 364, from medical ward, neonatal intensive care unit and paediatric intensive care unit Total No. of DRPs 258; most common types identified: Dosing problems made up approx. 71% of the problems, drug choice problems approx. 11%, and adverse drug reactions approx. 6%, with other problem types making up approx. 12% The majority of DRPs were preventable and interventions mostly made at prescriber level Kheir et al. (2014) Qatar Cross-sectional, descriptive and exploratory study The authors adopted the PCNE’s definition of a DRP (PCNE) Pharmaceutical Care Network Europe (2008 ) Data generated via semi-private interviews was documented A medication use review form was adapted from the form developed by one of the United Kingdom’s National Health Services (NHS) primary care trusts Cumbria. 
(NHS, 2011) The primary outcome measure for this preliminary study was characterising the drug related problems (DRPs) (types and number) captured by the pharmacists during the medication use reviews Fifty-two eligible patients were reviewed by six pharmacists A total of 175 DRPs were identified with an average of approx. 3 DRPs per patient The most common DRPs reported were: non-adherence to drug therapy (approx. 31%), need for education and counselling (approx. 23%), and adverse drug reactions (approx. 21%). There was a strong association between the incidence of DRPs and the patients’ age; as the age increases, the number of DRPs consistently increases. As medications increased, the number of identified DRPs increased ( P ≤ 0.05) Al Rahbi et al., (2014) Oman Systematic Retrospective Study Not specified and referenced The interventions filed by pharmacists and assistant pharmacists in outpatient pharmacy department were collected, categorized and analysed after a detailed review The primary objective was to determine the number and types of medication errors intervened by the dispensing pharmacists at outpatient pharmacy department. The study period was one year Thirty thousand five hundred sixty-three prescriptions dispensed. The number of interventions collected in this period was 692 interventions, approx. 2% of the prescriptions. Approx. 99% of all interventions were prescribing errors, Ninety-eight percent were accepted by prescribers. Approx. 15% of the interventions were administrative Mitwally et al., (2015) Qatar Pre and post-interventional analysis of prescriptions None stated (abstract) Random prescriptions were collected for 1 week both prior and after the educational phase. The use of unapproved abbreviations, trade names, and the absence/incorrect patient label was also considered as a prescribing error Investigate whether physician education had an impact on reducing prescribing errors within inpatient setting. The intervention consisted of the clinical pharmacy team preparing educational sessions discussing prescribing errors. The educational material included real case scenarios and the institution’s prescribing policies The overall physician attendance for the educational session was 92 from a total of 102 (approx. 90%) A total of 1822 prescriptions were involved in the study, with 948 in the pre sample and 874 in the post sample. The total number of errors within the pre sample was (approx. 20%) in comparison to (approx. 10%) errors for the post sample, an overall reduction of 52% in prescribing errors ( P < 0.001) V- Perceptions of HCP on ME and ME reporting (n = 3) Al-Rowibah and Younis (2013) Saudi Arabia Cross-sectional questionnaire None stated Not applicable To determine whether CPOE improves the quality of care by increasing patient safety and decreasing medication errors at study setting The response rate was 31%, with 93 physicians participating. Up to 88% of the physicians agreed that the use of CPOE improved their performance and 76% reported that the use of CPOE increased their productivity. In addition, 64% reported that it was easy to use. Fifty-five percent reported that it created new types of errors. However, 72% of the physicians agreed that CPOE helped them to decrease adverse drug events and 91% of the physicians agreed that CPOE reduced errors related to hand-written prescriptions Al-Arifi (2014) Saudi Arabia A cross sectional survey Dispensing errors defined as per Szeinbach et al. 
(2007) Not applicable To survey pharmacists’ attitudes toward dispensing errors and factors contributing to these errors in community pharmacy settings Response rate approx. 82%. Seventeen factors were identified as contributing to errors, including pharmacist assistants, high workload, lack of time, and similar or confusing drug names. Among the major factors believed to reduce dispensing errors were improving doctors’ handwriting, reducing pharmacist workload, having drug names that are distinctive, privacy when counselling patients, having a mechanism for checking the dispensing procedure, and keeping drug knowledge up to date Al Anazi and Al-Jeraisy (2015) Saudi Arabia Cross-sectional study; a self-administered paper-based survey was used Not stated Not applicable To explore the perceptions of healthcare professionals with respect to the underlying factors of medication errors Response rate was 82%. The study cohort was made up of approx. 42% pharmacists, approx. 31% physicians, and approx. 27% nurses. The perceptions of the professionals on the causes of errors differed on the following: interruptions while writing the order, clarity of the physician’s order, and knowledge of allergies
REFERENCES:
1. ABUSHAIQA M (2007)
2. ABUYASSIN B (2011)
3. ACKROYDSTOLARZ S (2006)
4. ALAKHALI A (2014)
5. ALANANY R (2012)
6. ALANAZI M (2015)
7. ALANAZI M (2015)
8. ALARIFI M (2014)
9. ALARIFI A (2014)
10. ALBARRAK A (2014)
11. ALDHAWAILIE A (2011)
12. ALGHAMDI S (2012)
13. ALGHAMDY M (2015)
14. ALHAMID A (2016)
15. ALHUSSEIN F (2008)
16. ALJADHEY H (2013)
17. ALJADHEY H (2013)
18. ALJADHEY H (2016)
19. ALJAMAL M (2012)
20. ALJAZAIRI A (2008)
21. ALJERAISY M (2011)
22. ALKHAJA K (2006)
23. ALKHAJA K (2007)
24. ALKHAJA K (2005)
25. ALKHAJA K (2008)
26. ALKHAJA K (2012)
27. ALKHANI S (2014)
28. ALLAN E (1990)
29. ALMAZROU S (2015)
30. ALODHAYANI A (2016)
31. ALOLAH Y (2008)
32. ALOMAR H (2013)
33. ALRASHDI I (2010)
34. ALRAHBI H (2014)
35. ALROWIBAH F (2013)
36. ALSHAHAIBI N (2012)
37. ALSHAIKH M (2013)
38. ALSULAIMI Z (2013)
39. ALTAJIR G (2005)
40. ALTEBENAUI A (2015)
41. AMERICAN GERIATRICS SOCIETY BEERS CRITERIA UPDATE EXPERT PANEL (2012)
42. ARABI Y (2012)
43. ASEERI M (2013)
45. ASHP (1993)
46. BATES D (1995)
47. COHEN J (1960)
48. DEAN B (1999)
49. DEAN B (2002)
50. DEAN B (2002)
51. DEAN B (2000)
52. DIBBI H (2006)
53. ELNOUR A (2007)
54. FERNER R (2006)
55. FICK D (2003)
57. FLYNN E (1999)
58. FLYNN E (2002)
59. GHALEB M (2006)
60. GHALEB M (2010)
61. GANDHI T (2000)
62. HEMIDA A (2011)
63. HEPLER C (1990)
64. HOOPER R (2009)
65. INSLEY J (1996)
66. IRSHAID Y (2005)
67. JHA A (1998)
68. JHA A (2010)
69. KAUSHAL R (2001)
70. KHEIR N (2014)
71. KHOJA T (2011)
72. (2000)
73. LANDIS J (1977)
74. LISBY M (2010)
75. MAHMOUD M (2016)
77. MCLEOD M (2016)
78. MCPHILLIPS H (2005)
79. MENDONCA J (2010)
80. MILLER M (2007)
81. MITWALLY H (2015)
82. MORIMOTO T (2004)
83. MORIMOTO T (2010)
84. NARANJO C (1981)
87. NEBEKER J (2004)
88. NEVILLE R (1989)
89. (2000)
92. RANCOURT C (2004)
93. RASHED A (2012)
94. RASHED A (2013)
95. REHMANI R (2011)
96. SADATALI M (2010)
97. SHAUGHNESSY A (1989)
98. SONALLAH H (2014)
101. WHO (1999)
104. WORLD BANK (2017)
|
10.1016_S0960-9776(23)00172-8.txt
|
TITLE: P053 Dysregulation of microRNA signature in breast cancer cells can facilitate overexpression of genes responsible for multiple drug resistance
AUTHORS:
- Halytskiy, V.
ABSTRACT: No abstract available
BODY: No body content available
REFERENCES:
No references available
|
10.1016_j.envc.2025.101287.txt
|
TITLE: When safe Isn’t safe: Ischemic heart disease mortality below PM2.5 standards in urban and suburban Bangkok under tropical temperature extremes
AUTHORS:
- Chom-in, Kanyanat
- Jongcharoenkumchok, Apinya
- Choto, Pitawat
- Puangthongthub, Sitthichok
ABSTRACT:
Background
While PM2.5 is a well-established cardiovascular hazard, research from tropical, rapidly urbanizing regions remains scarce, especially in suburban zones characterized by unregulated and diverse emission sources. Thailand's 24-hour PM2.5 standard may underestimate the short-term ischemic heart disease (IHD) mortality risks, particularly under thermal extremes.
Objectives
This study assessed the acute association between PM2.5 and IHD mortality in urban and suburban Bangkok and evaluated how cold and heat extremes modify this association across demographic and clinical subgroups.
Methods
We performed a time-series analysis on daily IHD mortality from 2014 to 2022, using a generalized linear model (GLM) across lags 0–7. Models were adjusted for meteorological factors (temperature, humidity), co-pollutants (NO2, SO2), day of the week, public holidays, and COVID-19 lockdown effects. Analyses were stratified by geographic area, sex, age, IHD subtype, temperature extremes, and PM2.5 thresholds per Thai (≤37.5 µg/m³) and WHO (≤15 µg/m³) standards.
Results
PM2.5 was significantly associated with IHD mortality in urban (lag 0–2) and suburban (lag 0–1) settings. Suburban areas showed steeper exposure-response slopes. The strongest associations occurred for acute myocardial infarction, particularly in men and the elderly. Elevated risks were evident even at PM2.5 levels below the Thai (RR=1.05; 95 %CI:1.01–1.09) and WHO (RR=1.08; 95 %CI:1.03–1.14) thresholds in suburban areas. Cold exposure significantly amplified IHD risk in the urban elderly (RR=1.27; 95 %CI:1.05–1.53), while extreme heat increased risk in suburban men (RR=1.15; 95 %CI:1.05–1.26). Findings support progressively aligning the Thai standard with WHO guidelines through phased reduction targets.
Conclusions
Findings challenge the current regulatory PM2.5 standard, emphasizing urgent policy revisions and climate-sensitive health protections in tropical developing cities.
BODY:
1 Introduction Ischemic heart disease (IHD) mortality is the leading cause of death globally, responsible for an estimated 9 million deaths annually according to Global Burden of Disease in 2023. Its burden is particularly severe in developing countries, where rapid urbanization, environmental transitions, and insufficient health infrastructure amplify the impact of preventable risk factors. Air pollution and climate extremes are escalating threats, driven by weak regulatory enforcement and increasingly diverse emission sources. PM 2.5 is a well-established cardiovascular risk, linked to IHD through inflammation, oxidative stress, and vascular dysfunction. A recent meta-analysis reported an odds ratio (OR) of 1.22 per IQR increase in PM 2.5 for IHD mortality. In China, acute myocardial infarction (AMI) deaths rose by 22 %, even below the World Health Organization’s (WHO) PM 2.5 standard, underscoring the dangers of short-term exposure ( Cheng et al., 2023 ). According to the burden of disease in Thailand, where IHD ranks among the top three causes of death, the 24-hour PM 2.5 standard (≤37.5 µg/m³) may be insufficient. Suburban areas increasingly face diverse sources, such as biomass burning, industrial emissions, and pollution transport, distinct from Bangkok's vehicular-dominated emissions ( Kuson et al., 2023 ). These spatial differences in emission sources result in distinct pollution compositions and exposure profiles; however, the health implications of these contrasts remain inadequately studied. Prior studies examining the impact of air pollution on IHD mortality under temperature extremes in tropical areas are scarce, and none have systematically compared the dynamics between urban and suburban settings . Clarifying how these source-specific exposures translate into IHD risk across different population segments is essential, particularly as suburban expansion continues and climate-driven environmental instability increases. Co-pollutants like sulfur dioxide (SO 2 ), nitrogen dioxide (NO 2 ), ozone (O 3 ), and carbon monoxide (CO) also contribute to cardiovascular risk, a subject for confounding adjustment. SO 2 and NO 2 have been associated with sustained effects over extended lag periods, often without mortality displacement ( Chen et al., 2021 ; Halldorsdottir et al., 2022 ). O 3 exposure has been linked to increased IHD mortality, particularly during warm seasons due to its photochemical activity, while short-term CO exposure has been shown to impair oxygen delivery and exacerbate myocardial ischemia, increasing AMI risk. Vulnerable populations like the elderly, working-age men, and outdoor laborers are more exposed or physiologically susceptible to PM 2.5 and co-pollutant confounders. Temperature extremes further complicate risk. As effect modifiers, cold waves and heat events elevate cardiovascular mortality and may interact with pollution ( Bao et al., 2025 ; Ning et al., 2024 ). Urban heat islands (UHI) exacerbate this effect in city centers, while suburban zones face variable thermal exposures and seasonally driven biomass emissions ( Cichowicz and Bochenek, 2024 ; Dang et al., 2018 ). The variation in environmental stressors between urban and suburban areas underscores the need for a nuanced understanding of air pollution’s health impacts in these contrasting settings . 
Understanding how environmental stressors exacerbate the impacts of air pollution is increasingly urgent as climate change intensifies temperature variability and extreme weather events ( Khunthong et al., 2025 ). While numerous studies have examined the cardiovascular effects of air pollution using advanced exposure-response models and stratified subgroup analyses, this area of research remains underexplored in tropical regions, especially across distinct urban and suburban environments. This study addresses that gap by systematically analyzing short-term PM 2.5 -related IHD mortality by sex, age group, IHD subtype, season, and extreme temperature conditions. We further identified cold- and heat-sensitive subpopulations, providing one of the most comprehensive, policy-relevant assessments of exposure risk heterogeneity in a tropical megacity. 2 Methods 2.1 Study area This study focused on urban and suburban areas of Bangkok to reflect variations in population density and potential differences in exposure to air pollution. The urban area selected was Bangkok, a densely populated city with an estimated population density of approximately 3562 people per km². In contrast, the suburban areas included Samut Prakan (1346 people/km²), Samut Sakhon (672 people/km²), Pathum Thani (771 people/km²), and Nakhon Pathom (425 people/km²). These suburban provinces demonstrated a broader range of population distributions. Samut Prakan exhibited higher density levels due to rapid urbanization and residential development, while provinces such as Samut Sakhon and Nakhon Pathom retained more peri‑urban or semi-rural characteristics ( Fig. 1 ). 2.2 Air pollutants and meteorological data Air pollution and meteorological data were obtained from 23 air quality monitoring stations (AQMS) operated by the Thai Pollution Control Department (PCD), comprising 19 stations in urban Bangkok and 4 in suburban areas. The dataset included hourly concentrations of PM 2.5 , PM 10 , NO 2 , SO 2 , CO, and O 3 . Meteorological variables included hourly temperature and relative humidity. To estimate population-level exposure, spatial buffers were applied around each monitoring station: a 5 km radius was used for urban areas due to the high density of pollution sources and population clusters, whereas a 20 km radius was adopted for suburban areas to reflect broader land dispersion and lower station density. This method assigned exposure levels based on the proximity between monitoring stations and population areas, taking into account Bangkok’s stable urban atmospheric conditions, in which limited vertical mixing and slow pollutant dispersion, especially in dense urban areas, justify the smaller 5 km buffer. In contrast, suburban areas have more open terrain and stronger wind flow, making a larger 20 km buffer more suitable to capture air quality across a wider area. This spatial approach is further supported by evidence showing that pollutant concentrations beyond these buffer distances diminish rapidly and contribute minimally to health-relevant exposure estimates ( Di et al., 2017 ). For each day, pollutant concentrations were then averaged across all stations within the urban and suburban areas, respectively, to obtain representative mean exposure levels for each location. The geographical distribution of the AQMS across the areas is illustrated in Fig. 1 .
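To make this buffer-and-average scheme concrete, the sketch below shows, in R, one way stations could be assigned to a population centre using great-circle distance and the area-specific buffer radius (5 km urban, 20 km suburban). It is an illustrative sketch only, not the authors' code; the objects and column names (stations, centre, lat, lon in decimal degrees) are hypothetical.
# Illustrative sketch (hypothetical objects, not the authors' code): keep the
# monitoring stations that fall inside an area-specific buffer around a
# population centre, using great-circle (haversine) distance in km.
haversine_km <- function(lat1, lon1, lat2, lon2) {
  rad  <- pi / 180
  dlat <- (lat2 - lat1) * rad
  dlon <- (lon2 - lon1) * rad
  a <- sin(dlat / 2)^2 + cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2)^2
  6371 * 2 * asin(sqrt(pmin(1, a)))
}
buffer_km <- c(urban = 5, suburban = 20)   # radii described in Section 2.2
# `centre` is a list(lat = ..., lon = ..., area = "urban" or "suburban");
# `stations` is a data frame with columns lat and lon (one row per AQMS).
stations_in_buffer <- function(centre, stations) {
  d <- haversine_km(centre$lat, centre$lon, stations$lat, stations$lon)
  stations[d <= buffer_km[[centre$area]], , drop = FALSE]
}
Daily area-level exposure would then be the mean of the 24-hour concentrations across the retained stations, as described above.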
Although this may obscure finer-scale variations in pollution levels, future work could refine exposure assessment by incorporating land-use regression models or satellite-based approaches to improve spatial resolution. 2.3 Mortality data This study obtained daily IHD mortality data from the Ministry of Public Health, covering January 2014 to December 2022. IHD deaths were classified according to the 10th revision of the International Classification of Diseases (ICD-10) under codes I20-I25. These codes include angina pectoris (I20), AMI (I21), subsequent myocardial infarction (MI; I22), complications following acute myocardial infarction (post-MI; I23), other acute IHD (I24), and chronic ischemic heart disease (I25). The dataset contained information on the date of death, date of birth, sex, and area of residence. The Research Committee of the Department of Environmental Science, Faculty of Science, Chulalongkorn University, Thailand approved this study. It was considered exempt from institutional review board approval, as it involved the use of secondary data without personal identifiers and did not include any confidential personal information. For this type of research, informed consent was not required under local regulations and international ethical standards for research using non-identifiable population data. Data from the PCD and the Ministry of Public Health are available upon written request accompanied by a detailed research proposal, subject to the policies and permissions of each authority. 2.4 Data curation Hourly percentages of missing values before and after each curation step are shown in Tables S1 and S2, respectively. The datasets underwent systematic pre-processing to ensure analytical validity. Initially, missing entries and placeholders (e.g., ‘-’) were flagged as not available (NA) values. Pollutant concentrations reported below the minimum detection limits were imputed by replacing them with the corresponding lowest detectable value. To address gaps in hourly data, a two-step smoothing approach was applied: first, a 168-hour rolling mean was used to fill short-term missing values, followed by refinement with a 360-hour rolling mean for longer gaps. Daily average concentrations for each pollutant were computed only when at least 75 % of the hourly values within that day were available. This procedure was applied to 24-hour averages of PM 2.5 , PM 10 , NO 2 , SO 2 , and CO, as well as to the daily maximum 8-hour average of O 3 . As shown in Table S2, a portion of data remained missing after curation; the procedure aimed to maximize completeness while minimizing exposure misclassification. Over-imputation was avoided to reduce bias from artificial smoothing, while ensuring sufficient data coverage for robust time-series modeling. This approach follows established best practices in air pollution epidemiology for handling missing exposure data ( Peng et al., 2006 ; Samoli et al., 2008 ; Wei et al., 2019 ), helping to preserve the integrity of the dataset and the validity of effect estimates.
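The two-step gap filling and the 75 % completeness rule can be sketched in R as follows; this is an illustration under assumed object names (an hourly concentration vector x and its calendar dates), not the authors' processing script.
# Illustrative sketch (assumed names, not the authors' code): fill short gaps in an
# hourly series with a 168-h rolling mean, then remaining gaps with a 360-h rolling
# mean, and compute daily means only when at least 75 % of hours are available.
library(zoo)
fill_with_rollmean <- function(x, window) {
  smoothed <- rollapply(x, width = window,
                        FUN = function(v) mean(v, na.rm = TRUE),
                        partial = TRUE, align = "center")
  ifelse(is.na(x), smoothed, x)   # replace only the missing hours
}
curate_hourly <- function(x) {
  x <- fill_with_rollmean(x, 168)   # step 1: one-week window for short gaps
  fill_with_rollmean(x, 360)        # step 2: 15-day window for longer gaps
}
daily_mean_75pct <- function(x, dates) {
  # dates: one calendar date per hourly value
  tapply(x, dates, function(v) {
    if (sum(!is.na(v)) >= 0.75 * length(v)) mean(v, na.rm = TRUE) else NA_real_
  })
}
Daily series built in this way would then feed directly into the regression models described in the next subsection.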
2.5 Statistical analysis To quantify the short-term associations between ambient air pollution and IHD mortality, we applied a Poisson time-series regression within the generalized linear model (GLM) framework. RRs and 95 % confidence intervals (CIs) were estimated for each pollutant using single-pollutant models, structured as Eq. (1): (1) log(E(Yₜ)) = α + βZₜ + ns(time) + ns(temperature) + ns(RH) + DOW + holiday + COVID where E(Yₜ) represents the expected IHD mortality count on day t; α is the model intercept; β denotes the log-RR per interquartile range (IQR) increase in the pollutant concentration Zₜ; and ns() indicates natural cubic spline functions used to control for nonlinear confounding. Temporal trends were adjusted using 6 degrees of freedom (df) per year for calendar time. Meteorological confounding was addressed using 3 df each for daily mean temperature and relative humidity (RH). We also examined df ranging from 5 to 15 per year for calendar time and from 3 to 10 for temperature and humidity ( Phosri et al., 2019 ). Day of the week and public holidays were included as indicator variables. To account for pandemic-related behavioral and healthcare changes as well as altered air pollutant sources, we introduced a binary variable coding the COVID-19 period as 1 (March 6, 2020 – September 30, 2022) and the pre-/post-pandemic periods as 0. We explored lag structures from lag 0 to lag 7, and cumulative averages (e.g., lag 0–1 to lag 0–7). The pollutant with the highest significance in the single-pollutant models was carried forward to the multi-pollutant models. The multi-pollutant models assessed the independent effects of pollutants, simultaneously including both the major and the adjustment pollutants, using Eq. (2): (2) log(E(Yₜ)) = α + β₁Z₁ₜ + ... + βₙZₙₜ + ns(time) + ns(temp) + ns(RH) + DOW + holiday + COVID To address concerns of multicollinearity among air pollutants, this study evaluated variance inflation factors (VIFs) to quantify the degree of correlation among predictors. In the single-pollutant model including only PM 2.5 , generalized VIF values were low (GVIF^(1/(2·Df)) < 1.22), indicating minimal collinearity. When multiple pollutants (PM 2.5 , NO 2 , and SO 2 ) were included simultaneously, VIF values increased but remained below 2, reflecting moderate yet acceptable correlation. According to Fox and Monette (1992) , VIF values of 5 or less typically indicate that multicollinearity is not severe enough to adversely affect regression estimates. These tests supported the robustness of the multi-pollutant model despite moderate correlations among pollutants. To assess potential effect modification, we conducted subgroup analyses stratifying RRs by area (urban vs. suburban), sex, age group (≤60 vs. >60 years), IHD subtype, season, temperature sensitivity (cold- and heat-sensitive days), and compliance with PM 2.5 air quality standards. These analyses were based on estimates from the multi-pollutant models at the lag with the strongest association identified in the primary analysis. Seasonal definitions followed the Thai Meteorological Department classifications: summer (mid-February to mid-May), rainy season (mid-May to mid-October), and winter (mid-October to mid-February). Extreme temperature days were defined as cold days (≤10th percentile of daily temperature) and hot days (≥90th percentile). PM 2.5 exposure was additionally categorized based on alignment with both the WHO and Thai standards. Statistical differences in RRs across subgroups were formally tested using two-sample z-tests, calculated using Eq. (3): (3) z = (logRR₁ − logRR₂) / √(SE₁² + SE₂²) where logRR₁ and logRR₂ represent the log-RR estimates for each subgroup, and SE₁ and SE₂ are their corresponding standard errors.
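As a compact illustration of Eq. (1) and the subgroup comparison in Eq. (3), a possible R implementation is sketched below. The data frame dat and its columns (date, deaths, pm25, temp, rh, dow, holiday, covid, and a daily time index) are hypothetical, and the code is a sketch of the modelling approach rather than the authors' script.
# Illustrative sketch only (hypothetical data frame `dat`, one row per day).
library(splines)
n_years <- length(unique(format(dat$date, "%Y")))
fit <- glm(deaths ~ pm25 +
             ns(time, df = 6 * n_years) +   # long-term trend: 6 df per year
             ns(temp, df = 3) +             # daily mean temperature
             ns(rh,   df = 3) +             # relative humidity
             dow + holiday + covid,         # indicator variables
           family = poisson(link = "log"), data = dat)
# Relative risk per IQR increase in PM2.5, with 95% CI
iqr  <- IQR(dat$pm25, na.rm = TRUE)
beta <- coef(fit)["pm25"]
se   <- sqrt(vcov(fit)["pm25", "pm25"])
rr   <- exp(beta * iqr)
ci   <- exp((beta + c(-1.96, 1.96) * se) * iqr)
# Eq. (3): two-sample z-test comparing two subgroup estimates on the log scale
z_compare <- function(logrr1, se1, logrr2, se2) {
  z <- (logrr1 - logrr2) / sqrt(se1^2 + se2^2)
  2 * pnorm(-abs(z))   # two-sided p-value
}
Lagged and cumulative exposures (lag 0 to lag 7, lag 0–1 to lag 0–7) would replace pm25 with the corresponding lagged or moving-average series, and the multi-pollutant models of Eq. (2) would add the co-pollutant terms to the same formula; subgroup p-values from z_compare could then be adjusted with p.adjust(p, method = "BH"), in line with the FDR correction described below.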
According to best-practice guidelines for subgroup analysis ( Schochet, 2008 ), multiple comparisons are most critical for exploratory analyses. However, our subgroup comparisons were confirmatory, hypothesis-driven analyses, so the primary comparisons were reported with unadjusted p-values to preserve statistical power. Nevertheless, to address concerns regarding inflated Type I error, we performed a Benjamini-Hochberg false discovery rate (FDR) correction across all Z-test p-values for between-subgroup comparisons in the tri-pollutant models. Exposure-response curves were modeled using PM 2.5 concentrations, with the minimum observed concentration as the reference point. The spatial distribution of AQMS across urban and suburban areas was mapped using Google Earth Pro to visualize the coverage. All analyses were performed using R (version 4.4.1). 3 Results 3.1 Descriptive epidemiology of IHD mortality This study investigated IHD mortality and air pollution exposure in urban and suburban areas from 2014 to 2022. In urban areas, a total of 19,569 IHD deaths were recorded, averaging 6 deaths per day ( Table 1 ). The majority were elderly individuals (70.95 %) and males (63.17 %). Chronic IHD (I25) was the leading cause (61.27 %), followed by AMI (I21) at 35.18 %. Similarly, in suburban areas, chronic IHD (59.24 %) and AMI (37.72 %) remained the predominant causes of mortality, confirming consistent demographic vulnerability across areas. 3.2 Air pollution and meteorological characteristics Regarding air pollutant distributions, Table S3 revealed distinct differences between urban and suburban areas; mean PM 10 and PM 2.5 concentrations were slightly higher in suburban areas. The 90th percentile concentrations of PM 2.5 exceeded Thailand’s 24-hour standard (≤ 37.5 µg/m³) in both urban (42.23 µg/m³) and suburban (50.17 µg/m³) areas. This indicated critical levels of PM 2.5 exposure, whereas concentrations of other pollutants remained below national standards. For gaseous pollutants, suburban areas exhibited slightly higher mean concentrations of SO₂ and O₃, but lower mean concentrations of NO₂ and CO compared to urban areas, reflecting variations in emission sources. Regarding meteorological parameters, urban areas recorded higher mean temperatures, whereas suburban areas showed higher relative humidity. These findings highlighted distinct environmental and atmospheric conditions between the two populated areas. These findings were corroborated by Fig. 2 , which shows that peaks in PM 10 and PM 2.5 concentrations occurred predominantly in the winter season. Urban NO₂ and CO levels remained consistently higher than those in suburban areas, and strong temporal alignment between PM 2.5 and NO₂ in urban zones suggested combustion-related sources. In contrast, suburban PM 2.5 and NO₂ levels displayed sporadic spikes, indicating more varied and localized pollution sources. O₃ was persistently elevated in suburban regions, and the contrast in temperature and humidity between urban and suburban areas remained consistent over the years. This interpretation was supported by Fig. 3 , which presents the Spearman correlation coefficients among air pollutants and meteorological variables.
Strong positive correlations were observed between PM 2.5 and NO₂ in both urban ( r = 0.71) and suburban ( r = 0.68) areas, the highest correlations between PM 2.5 and any gaseous pollutant. These findings suggest a close linkage between PM 2.5 and NO₂ emissions, likely from combustion-related sources. Furthermore, NO₂ demonstrated moderate to strong correlations with SO₂ in suburban areas, implying shared emission sources, and with CO in urban areas, pointing to traffic-related origins. In contrast, temperature and relative humidity were negatively correlated with most pollutants, especially in suburban zones, highlighting their roles in influencing pollutant dispersion and atmospheric chemical transformations. 3.3 PM 2.5 effects on IHD mortality To rigorously assess the association between PM 2.5 and IHD mortality, we adopted a sequential model-building strategy, progressing from single-pollutant to multi-pollutant models. The initial single-pollutant models (Table S4) revealed consistently elevated and statistically significant risks at early lags. While higher-order models (Tables S5–S8) confirmed the robustness of PM 2.5 effects, the five-pollutant model introduced substantial missingness owing to gaps in the CO data in suburban areas, reinforcing the tri-pollutant model (adjusted for NO₂ and SO₂) as the optimal balance between confounding control and data completeness for subgroup analyses. The single-pollutant model analysis (Table S4) highlighted significant associations between short-term exposure to PM 2.5 and IHD mortality in both urban and suburban areas. Among the six air pollutants, PM 2.5 showed the most pronounced effects across several single lags, with the highest relative risk (RR) observed at lag 1 in urban areas and at lag 0 in suburban areas, suggesting an acute response in both environments. To account for potential confounders, we incorporated bi-pollutant models (Table S5), adjusting for SO 2 , NO 2 , or CO. These models confirmed that PM 2.5 remained a robust and significant predictor of IHD mortality in urban areas, with significant associations observed across lags 0 to 6, regardless of the pollutant adjusted for. In suburban areas, PM 2.5 also demonstrated elevated risks, particularly at lag 0 to lag 2. Though the strength of the association diminished with the inclusion of gaseous pollutants, PM 2.5 still emerged as an independent and dominant predictor of IHD mortality. Further analysis using tri-pollutant models (Table S6) confirmed these findings. The tri-pollutant model revealed the strongest and statistically significant associations between PM 2.5 and IHD mortality after adjusting for NO₂ and SO₂, with the highest risk observed at lag 2 in urban areas (1.07 (1.04 to 1.10)) and lag 1 in suburban areas (1.08 (1.04 to 1.11)). Cumulative lag models provided additional support, showing significant effects for PM 2.5 exposure at lag 0–2 in urban areas and lag 0–1 in suburban areas. We also conducted analyses using four- and five-pollutant models (see Tables S7-S8), which adjusted for additional pollutants. The inclusion of the multi-pollutant models (Tables S5-S8) strengthened the robustness of the PM 2.5 and IHD mortality association, with significant results persisting even after adjustment for other pollutants.
These models also facilitated clearer comparisons of RR across different model specifications, allowing for the assessment of potential confounding by co-pollutants. The findings consistently confirmed the significant association between PM 2.5 and IHD mortality in both urban and suburban areas, demonstrating that the effect of PM 2.5 was stable across alternative model structures. Given substantial data gaps in CO measurements for suburban areas, the five-pollutant model (Table S8) had reduced analytical power. Therefore, the tri-pollutant model, balancing confounder adjustment with maximal data retention, was selected for further subgroup analyses. Furthermore, sensitivity analyses (Table S9) across varying df for time, temperature, and relative humidity confirmed that the selected 6 df for time and 3 df for temperature and RH were appropriate. The relative risk (RR) estimates remained stable, with urban RRs ranging from 1.06 (1.03 to 1.10) to 1.08 (1.04 to 1.11) and suburban RRs from 1.05 (1.02 to 1.09) to 1.08 (1.04 to 1.12). These results demonstrated that, although the model was responsive to changes in df, the conclusions on PM 2.5 exposure and IHD mortality were robust. In the context of conservative risk assessment, we selected lag 0–2 (1.07 (1.04 to 1.10)) for urban areas and lag 0–1 (1.08 (1.05 to 1.12)) (see Table S6) for suburban areas for policy-relevant interpretation. Those lags showed the lowest lower bound of the 95 % CI and a narrower CI width, indicating greater statistical precision. Relying on the lower confidence limit is a standard approach in risk assessments to avoid underestimating risk, especially when informing public health decisions ( USEPA, 1995 ). This approach provides a more robust foundation for decision-making. 3.4 Effect modification Subgroup analyses revealed effect modification: males experienced slightly higher RRs for PM 2.5 -related IHD mortality than females, particularly in suburban areas ( Table 2 ). Age-specific patterns varied by location: in suburban areas, elderly individuals showed stronger associations with PM 2.5 exposure, whereas in urban areas, adults under 60 had marginally higher RRs (1.09 (1.03 to 1.15)) than the elderly (1.06 (1.03 to 1.10)). However, this age-group difference was not statistically significant (z-test, p = 0.54). Analysis by IHD subtype indicated greater sensitivity of AMI (I21) to PM 2.5 exposure compared to chronic IHD (I25), with statistically significant differences in both urban and suburban settings ( p = 0.04 and 0.03, respectively). Seasonal effects showed that winter consistently exhibited the highest RR across both areas ( Table 2 ). Under extreme temperature conditions, urban areas showed significantly elevated risk on cold days (1.20 (1.02 to 1.41)), while suburban areas showed stronger effects on hot days (1.09 (1.01 to 1.18)). Further stratification under extreme temperatures revealed significantly increased risk among urban males (1.22 (1.00 to 1.50)) and elderly individuals (1.27 (1.05 to 1.53)) on cold-sensitive days. In suburban areas, males (1.15 (1.05 to 1.26)) and the elderly (1.11 (1.01 to 1.21)) were more affected on hot-sensitive days. The highest subtype-specific risk was observed for suburban AMI mortality under heat-sensitive exposure (1.14 (1.01 to 1.28)). In the Z-test subgroup comparisons, several associations remained statistically significant after applying the Benjamini–Hochberg FDR correction (Table S10).
All previously significant comparisons retained significance (FDR-adjusted p < 0.05), except the AMI versus chronic IHD comparison in the suburban area, which was no longer significant after adjustment (p-value increased from 0.03 to 0.38). 3.5 Risk assessment by PM 2.5 quality standards Analysis stratified by air quality levels showed that PM 2.5 concentrations at or below the Thai national standard (≤ 37.5 µg/m³) remained significantly associated with increased IHD mortality in urban and suburban areas ( Table 2 ). In contrast, exposures above the Thai standard did not yield statistically significant associations, possibly due to low statistical power. When applying the WHO guideline threshold (≤ 15 µg/m³), no substantial risk was observed in urban areas. However, a considerable association persisted in suburban areas. The elevated risk in suburban zones was statistically supported, with a z-test confirming inter-area differences ( p = 0.00). Significant associations were identified in both areas for exposures above the WHO guideline. Additionally, a substantial difference between risks associated with PM 2.5 levels above and below the standard was observed in urban areas ( p = 0.00; comparison (c vs. d)), suggesting that the WHO guideline may better capture health risks in urban settings. The exposure-response relationships adjusted for NO₂ and SO₂ are shown in Fig. 4 , with β = 0.0044 (95 % CI: 0.0025–0.0063) in urban areas (lag 0–2) and β = 0.0038 (95 % CI: 0.0018–0.0057) in suburban areas (lag 0–1). In both settings, RR increased in a monotonic, approximately linear fashion with rising PM 2.5 , with no evidence of a threshold. Elevated risks were apparent even at relatively low concentrations, including levels below the current Thai (37.5 µg/m³) and WHO (15 µg/m³) standards. At higher concentrations (≥60 µg/m³), confidence intervals widened, yet the central estimates continued to indicate excess mortality risk (RR > 1.0). The β slopes of the urban and suburban curves were statistically similar ( Z = −0.44, p = 0.658), suggesting that the effect of PM 2.5 on IHD mortality is robust across population settings. These findings highlight the absence of a safe exposure threshold and support the policy principle that any reduction in PM 2.5 yields public health benefits. 4 Discussion This study provided strong evidence that short-term exposure to ambient PM 2.5 , after adjustment for NO₂ and SO₂, significantly elevated the risk of IHD mortality in both urban (lag 0–2) and suburban (lag 0–1) areas of Bangkok during the period 2014–2022. Moreover, the comparative analysis between urban and suburban areas revealed clear contrasts in air quality and meteorology: urban zones showed higher NO₂ and CO concentrations, while suburban areas had higher SO₂ and O₃ concentrations and more extreme PM 2.5 episodes. Meteorological differences were also clear, with higher temperatures in urban zones and higher humidity in suburban zones. These distinctions highlight differing emission sources and atmospheric conditions that may explain variation in health risks. The selection of lag 0–1 for suburban areas and lag 0–2 for urban areas was based on identifying the most significant and adverse effects in each setting. The choice of lag periods reflects the distinct characteristics of pollutant sources and environmental conditions in these areas. Urban regions are characterized by concentrated traffic emissions, leading to more immediate and intense exposure, which is why lag 0–2 is used.
In contrast, suburban areas experience a mix of traffic, biomass burning, and industrial emissions, which results in more diffuse exposure dynamics and a corresponding acute effect at lag 0–1. These differences in lag selection align with studies that highlight the role of environmental factors, such as traffic density, industrial activity, and meteorological conditions, in shaping the temporal dynamics of air pollution impacts. Consequently, the chosen lag periods ensure a scientifically robust and contextually appropriate approach to risk characterization across both urban and suburban areas. Specifically, suburban areas exhibited slightly higher mean concentrations of PM 2.5 and PM 10 . The 90th-percentile PM 2.5 levels in both areas exceeded Thailand’s 24-hour standard, reflecting contributions from heterogeneous suburban sources such as biomass burning, agro-industrial activity, and regional transport ( Punnasiri et al., 2025 ). Elevated NO₂ and SO₂ concentrations were likely attributable to the proximity of major industrial estates. These pollutants are recognized by the WHO as key drivers of cardiopulmonary morbidity and mortality, particularly among vulnerable groups. Suburban provinces bordering Bangkok (Samut Prakan, Samut Sakhon, Pathum Thani, and Nakhon Pathom) have undergone rapid land-use transitions linked to industrial expansion and urban encroachment ( Chalermpong et al., 2021 ). These changes have produced evolving emission profiles distinct from the vehicular emissions dominating the urban core. However, spatially detailed and temporally resolved emission inventories remain scarce. Future studies should therefore incorporate land-use regression models or chemical transport modeling to capture the interaction between emerging land uses and pollutant concentrations across suburban gradients. In contrast, urban Bangkok consistently demonstrated elevated NO₂ and CO levels, reflecting traffic congestion and diesel-heavy vehicle fleets ( Kuson et al., 2023 ; Saohasakul, 2022 ). Source apportionment studies estimated that vehicular exhaust contributes up to 44 % of ambient PM 2.5 in central Bangkok ( ChooChuay et al., 2020 ), with pre-Euro diesel engines alone responsible for nearly half of PM 2.5 emissions from on-road transport ( Saohasakul, 2022 ). These emissions were compounded by the characteristics of the urban built environment (narrow road corridors, high-rise density, and impervious surfaces), which intensify atmospheric stagnation and amplify photochemical activity through the UHI effect ( Dang et al., 2018 ; Ulpiani, 2021 ). To assess intra-urban variability more comprehensively, future exposure models should integrate parameters such as surface albedo, vegetation cover, and green infrastructure distribution. These refinements would improve estimates of pollutant dispersion and support equitable environmental policy across urban and suburban areas. Mechanistically, PM 2.5 , along with co-pollutants such as NO₂ and SO₂, contributes to cardiovascular risk through multiple biological pathways, including systemic oxidative stress, endothelial dysfunction, inflammation, and autonomic imbalance; NO₂ exacerbates myocardial demand and arrhythmias, while SO₂ may lead to persistent vascular injury through impaired antioxidant defense ( Chen et al., 2021 ; Halldorsdottir et al., 2022 ; Montone et al., 2023 ). Although these mechanisms are well established, our subgroup analysis revealed differences in susceptibility across sex, age, and IHD subtypes.
Males, particularly in suburban areas, showed slightly higher RRs, which was consistent with biological and occupational exposure differences ( Romeo et al., 2024 ). Furthermore, age-specific patterns differed by location: in suburban areas, both adults and elderly individuals were significantly affected, whereas in urban zones, only adults under 60 years of age demonstrated statistically significant associations between exposure and adverse health outcomes. This age-specific pattern likely reflected elevated vulnerability among working-age populations due to occupational exposure, active commuting, and psychosocial stress factors that collectively amplified both exposure and susceptibility. Urban working-age adults were disproportionately impacted by air pollution and thermal stress due to intense occupational and transit-related exposures in dense city environments, despite generally having greater physiological resilience than the elderly ( Zhang et al., 2025 ). The absence of significant associations among elderly urban residents may reflect a combination of lower time-activity exposure, indoor lifestyles, and better access to healthcare infrastructure. Urban workers experience significantly higher exposure to PM 2.5 than the elderly due to prolonged outdoor activity in heavily polluted environments, peak-hour commuting, and increased respiratory rates from physical labor, all of which amplify pollutant inhalation. Compounding this risk, many urban workers face barriers to healthcare access, which can delay the diagnosis and treatment of pollution-related illnesses. Additionally, insufficient use of personal protective equipment and extended work hours during periods of elevated pollution exacerbate adverse health effects among workers ( Liang et al., 1999 ). In contrast, the elderly often benefit from more consistent healthcare services and typically remain indoors in relatively cleaner environments. These findings were consistent with WHO (2021) , which emphasized that exposure intensity and duration, rather than chronological age alone, can drive differential health risks in urban populations. Moreover, the UHI effect and dense built environments exacerbated local pollutant accumulation during peak hours, further intensifying exposure for mobile and economically active populations, while offering relative protection to those who remained indoors ( Deng et al., 2023 ; Ulpiani, 2021 ). These insights call for age- and occupation-sensitive public health strategies. Specifically, targeted mitigation measures such as protective policies for outdoor workers, low-emission transport zones, and thermally adaptive urban design should be prioritized to reduce cumulative exposure and health burdens among working-age adults in highly urbanized environments. Regarding IHD subtypes, the analysis revealed that AMI (I21) was significantly more sensitive to short-term PM 2.5 exposure than chronic IHD (I25) in both urban and suburban areas ( p < 0.05). This distinction was further supported by a Z-test for differences in effect estimates, reinforcing that AMI is more acutely affected by PM 2.5 exposure than chronic IHD. However, when adjusting for multiple comparisons using the Benjamini–Hochberg FDR correction, the association in suburban areas was no longer statistically significant. These findings were consistent with biological evidence indicating that PM 2.5 rapidly induces endothelial dysfunction and systemic inflammation, mechanisms known to acutely precipitate AMI events ( Montone et al., 2023 ).
The inclusion of NO₂ and SO₂ in multi-pollutant models further amplified these associations, likely due to their synergistic cardiovascular toxicity mediated through oxidative stress pathways ( Cheng et al., 2023 ). Collectively, these results suggested that AMI may serve as a sensitive sentinel outcome for detecting the IHD impacts of ambient air pollution, particularly during high-emission or combustion-related pollution episodes. While chronic IHD reflected the cumulative effects of long-term exposure, AMI appeared more responsive to transient spikes in pollution in our study. These findings underscore the need for heightened preventive measures and exposure reduction strategies among individuals at risk for AMI, especially during periods of elevated air pollution. Seasonal patterns further reinforced these findings, with winter exhibiting the highest PM 2.5 -related IHD mortality. This outcome aligned with the seasonal convergence of atmospheric stagnation and elevated emissions from biomass burning during the maize and sugarcane harvests ( Ning et al., 2024 ). Winter cold temperature additionally exacerbated cardiovascular strain through vasoconstriction and increased blood viscosity, heightening the risk of ischemic events. Notably, these environmental stressors interacted with demographic and geographic factors: under extreme temperature conditions, urban residents were more vulnerable during cold-sensitive spells, particularly males (1.22 (1.00 to 1.50)) and the elderly (1.27 (1.05 to 1.53)), likely due to reduced physiological acclimatization from the UHI effect and abrupt temperature drops ( Alahmad et al., 2023 ; Cichowicz and Bochenek, 2024 ). Conversely, suburban residents exhibited higher susceptibility to heat-sensitive extremes, especially among males (1.15 (1.05 to 1.26)) and the elderly (1.11 (1.01 to 1.21)), consistent with elevated thermal exposure and potentially limited access to cooling infrastructure ( Bao et al., 2025 ). These differential vulnerabilities may reflect complex interactions between microclimate exposure, socio-economic adaptation capacity, and ambient pollutant profiles. Notably, AMI appeared more responsive to thermal extremes than chronic IHD in both hot-sensitive scenarios (suburban) and cold-sensitive scenarios (urban), although these differences were not statistically significant because of the low statistical power. This trend supports the interpretation that acute cardiovascular events were particularly sensitive to sudden environmental stressors. Our findings underscore a critical gap in Thailand's current PM 2.5 regulatory framework. Notably, IHD mortality remained significantly associated with PM 2.5 exposures even when concentrations were at or below the national standard of ≤ 37.5 µg/m³. In contrast, exposures above this level did not yield statistically significant associations. This could be due to fewer days exceeding the standard, resulting in a low statistical detection power. Otherwise, this pattern suggests potential saturation of risk, behavioral adaptation, or survivor bias, whereby those more vulnerable may already be affected at lower exposure levels. This complexity underscores the need for policy reforms that consider not just the average exposure levels but also the potential risks to those who are more vulnerable. Applying the more stringent WHO guideline threshold (≤ 15 µg/m³) revealed that significant IHD risks persisted only in suburban areas. 
This inter-area difference was statistically significant (z-test, p < 0.05), indicating spatial health inequities potentially driven by differential exposure patterns, access to care, or cumulative burden ( WHO, 2021 ). Suburban vulnerability may also be shaped by structural inequalities, such as delayed health-seeking behavior or longer cumulative exposure due to residential proximity to biomass burning and poor urban planning. Furthermore, when compared to the WHO PM 2.5 air quality standard, which stands as a critical benchmark for public health, the analysis demonstrated that exceeding the 15 µg/m³ threshold remained consistently associated with elevated IHD mortality risk. This association held strong even after FDR correction, confirming the robustness of the relationship. The persistent risk linked to PM 2.5 levels surpassing the WHO guideline underscores the importance of adhering to global air quality standards. The exposure–response curves (Figure 4), adjusted for NO₂ and SO₂, show a significant positive association between PM 2.5 levels and health risks in both urban and suburban areas. Notably, even at lower PM 2.5 concentrations, health risks were apparent, emphasizing the importance of reducing exposure across all levels. These findings reinforce the urgent need for proactive air quality measures, particularly since risks persist even at low concentrations. The shaded areas represent 95 % confidence intervals derived from bootstrapping, capturing the statistical uncertainty around the estimated exposure-response relationships. While exact threshold identification remains inherently uncertain due to variability in the data distribution, particularly at the lower exposure tail with potential non-linearity, our results remain consistent with global assessments ( HEI, 2020 ; WHO, 2021 ), which emphasize that even low levels of PM 2.5 pose serious health risks. This reinforces the urgent need for immediate and robust action to reduce exposure and prevent long-term health consequences, as the risks persist. From a regulatory perspective, this evidence calls for progressive lowering of the national PM 2.5 standard toward the WHO guideline, coupled with incremental reduction targets to drive sustained improvement. Beyond standard-setting, effective policy should incorporate equity-focused measures to protect vulnerable populations, short-term interventions during high-pollution episodes, and integrated strategies that link air quality management with climate mitigation. Strengthening surveillance and public risk communication is also critical to emphasize that no level of PM 2.5 exposure can be considered risk-free. Importantly, our findings align with global trends, indicating that even modest levels of PM 2.5 can significantly increase the risk of IHD, reinforcing the need for more stringent air quality standards. Future research should focus on incorporating source-specific particulate composition and land-use stratification, which would enhance model accuracy and provide stronger attribution of health risks. These findings have significant policy implications, especially in developing countries such as Thailand, where traffic emissions and biomass combustion are common sources of pollution. Currently, Thailand’s standards fall short of international benchmarks. For comparison, the U.S. and Canada enforce annual PM 2.5 standards of 12 and 8.8 µg/m³, respectively ( CCME, 2013 ; Giannadaki et al., 2016 ). A study by Giannadaki et al. (2016) estimated that adopting the U.S. 
standard globally could prevent up to 46 % of premature deaths attributable to PM 2.5 , with Asia realizing the largest absolute gains. Integrating environmental and public health policies is essential to mitigating the effects of air pollution. Finally, although estimating health cost savings and reductions in healthcare burden is outside the scope of our study, these calculations could provide strong economic support for policy change. The potential cost savings in healthcare from lowering PM 2.5 exposure could be significant, especially in countries like Thailand, where air pollution-related health issues are a growing concern. Using the data from this study, further modeling of the healthcare savings could support arguments for stricter air quality regulations, particularly for sensitive populations. 5 Conclusion This study investigated IHD mortality in relation to PM 2.5 exposure, adjusted for NO₂ and SO₂ using a multi-pollutant model, across urban (lag 0–2) and suburban (lag 0–1) areas in Bangkok from 2014 to 2022. Elevated risks were observed among sensitive groups under extreme conditions, particularly among men, the elderly, and AMI deaths, during cold-sensitive days in urban areas and hot-sensitive days in suburban areas. PM 2.5 exposure, even below Thailand’s national standard, was significantly associated with increased IHD mortality, highlighting a regulatory gap in air quality standards and the need for policies addressing healthcare and structural disparities to reduce pollution-related IHD risks. Our findings support, from a regulatory standpoint, a gradual tightening of the national PM 2.5 standard in alignment with WHO guidelines, accompanied by phased reduction targets and integration with climate mitigation to ensure continuous air quality improvement. CRediT authorship contribution statement Kanyanat Chom-in: Writing – review & editing, Writing – original draft, Visualization, Software, Methodology, Formal analysis, Conceptualization. Apinya Jongcharoenkumchok: Writing – review & editing, Validation, Data curation. Pitawat Choto: Writing – review & editing, Validation, Data curation. Sitthichok Puangthongthub: Writing – review & editing, Supervision, Project administration, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements The authors would like to express their sincere thanks to the PCD and the Ministry of Public Health, Thailand, for providing essential data used in this study. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.envc.2025.101287 .
REFERENCES:
1. ALAHMAD B (2023)
2. BAO Q (2025)
3. CCME (2013)
4. CHALERMPONG S (2021)
5. CHEN Q (2021)
6. CHENG J (2023)
7. CHOOCHUAY C (2020)
8. CICHOWICZ R (2024)
9. DANG T (2018)
10. DENG X (2023)
11. DI Q (2017)
12. FOX J (1992)
13. GIANNADAKI D (2016)
14. HALLDORSDOTTIR S (2022)
15. HEI (2020)
16. KHUNTHONG P (2025)
17. KUSON M (2023)
18. LIANG W (1999)
19. MONTONE R (2023)
20. NING Z (2024)
21. PENG R (2006)
22. PHOSRI A (2019)
23. PUNNASIRI K (2025)
24. ROMEO B (2024)
25. SAMOLI E (2008)
26. SAOHASAKUL L (2022)
27. SCHOCHET P (2008)
28. ULPIANI G (2021)
29. USEPA (1995)
30. WEI Y (2019)
31. WHO (2021)
32. ZHANG P (2025)
|
10.1016_j.risk.2025.100030.txt
|
TITLE: Interpretability in deep learning for finance: A case study for the Heston model
AUTHORS:
- Brigo, Damiano
- Huang, Xiaoshan
- Pallavicini, Andrea
- de Ocáriz Borde, Haitz Sáez
ABSTRACT:
Deep learning is a powerful tool whose applications in quantitative finance are growing every day. Yet, artificial neural networks behave as black boxes, and this introduces risks, hindering validation and accountability processes. Being able to interpret the inner functioning and the input–output relationship of these networks has become key for the acceptance of such tools and for reducing the risks inherent in their use. In this study, we focused on the calibration of a stochastic volatility model, a subject recently tackled by deep-learning algorithms. We analyzed the Heston model in particular, as this model’s properties are well known, resulting in an ideal benchmark case. We investigated the capability of local and global strategies derived from cooperative game theory to explain the trained neural networks, and we found that global strategies, such as Shapley values, can be effectively used in practice. Our analysis also highlighted that Shapley values may help choose the network architecture, as we found that fully connected neural networks perform better than convolutional neural networks in predicting and interpreting the Heston model prices to parameters’ relationship.
BODY:
1 Introduction In recent years, machine learning has experienced a surge in popularity, and its applicability has been extended to almost every industry. In particular, the financial sector is no stranger to machine learning. Tools such as deep learning are already used in several applications, including portfolio management, algorithmic trading, cryptocurrency and blockchain studies, fraud detection, and model calibration, as described by Ozbayoglu et al. (2020) . In this work, we focus on deep learning to investigate the relationship between a volatility model and the related volatility smile. In other words, given the volatility smile produced by a model, we aim to find the model parameters that generated that smile. When applied to the volatility smiles obtained from market data, this may be helpful in model calibration. With the slight abuse of language, we refer to our inversion problem as “calibration” even if we do not work with real market data calibrations. We focused on the Heston (1993) model, which is one of the reference models in derivative option pricing. This model is well understood both in theory and in practice. Several contributions regarding using artificial neural networks (ANNs or NNs) for model calibration have already been made to the literature. A few examples include the works of Bayer and Stemper (2018), Bloch (2019), Hernandez (2016), Horvath et al. (2019), Roeder and Dimitroff (2020) . We also refer the reader to Ruf and Wang (2020) for a review on machine learning for option pricing and hedging. In their work, a section is devoted to the calibration problem. Yet, none of these contributions explores the interpretability of the results, and we need to address the “black box” risk inherent in these methods. Indeed, interpretability methods are conceived as a response to the black-box nature of NNs, where the functioning of the networks and the input–output relationship are difficult to understand, due to nonlocality and nonlinearity. (See Brigo 2019 for an overview.) In the context of our study, although the NNs aimed to learn model calibration, the NN approximation did not give the user any insight into the form of the original function. There was no straightforward link between the weights and the function being approximated. This can be a relevant concern since the model may have learnt the wrong model features but yielded satisfactory results with the available data set. This is easier to illustrate with an image recognition example. If we build a network that aims to identify cats, but all the cats in our data set wear a collar, the network may learn that having a collar is what characterizes cats. This is certainly not true. Therefore, by using interpretability, we can understand which input features affect the output the most and identify the problem. This is paramount in quantitative finance since decisions involving large amounts of funds are made based on complex machine-learning algorithms, which can be extremely hard to understand and in which errors can go unnoticed. As claimed by Molnar (2019) , there is no specific mathematical definition of interpretability. One (nonmathematical) definition was proposed by Miller (2019) : “Interpretability is the degree to which a human can understand the cause of a decision.” Another one was by Kim et al. 
(2016) : “Interpretability is the degree to which a human can consistently predict the model’s result.” These initial definitions may be helpful; however, they do not completely address what we intuitively understand to be interpretability. Reaching a precise and encompassing definition is certainly in progress. In fact, interpretability methods are a pressing issue in the literature on machine learning. However, to the best of our knowledge, we have found only a few examples on the subject in the financial literature. In particular, we refer to Wang et al. (2019) , in whose work, a sensitivity analysis was applied to a trained NN to model a trading strategy. Further, Demajo et al. (2020) presented applications on credit scoring, and Moehle et al. (2021) outlined applications in performance attribution for portfolio analysis. Moreover, Bellotti et al. (2021) briefly considered interpretability for recovery-rate predictions in nonperforming loans but without using deep NNs. In this paper, we analyzed the applicability of global and local interpretability methods to our calibration problem and calibrated the Heston model with two different NN architectures. We compared the global and local interpretability results; tested their applicability, reliability, and consistency; and investigated the results of both networks. Since we understood the Heston model well, theoretical and empirical facts about the model served as a solid benchmark for our results so that we could discriminate the best NN architecture to be used in the analysis. On the other hand, this showed us which methods worked best and could be potentially applied to more complex and less understood models. Our main finding from the Heston case was that interpretability strategies based on global tools from cooperative game theory, such as Shapley values, may outperform interpretability emerging from local tools, in which one fits simple models locally in the network input–output relationship. Given that this relationship is nonlinear and nonlocal, applying local methods, often based on linearization, is clearly not ideal in interpreting the NN behavior. This was confirmed by our findings. We also found that local interpretability did not align with our financial intuition in the Heston model, while global interpretability and the Shapley value, in particular, did. Moreover, we could use Shapley values as practical tools to discriminate among NN architectures and select the ones that better match the model behavior by establishing clear correspondences between calibrated options and model parameters. Indeed, we found that a fully connected NN (FCNN) performed better than a convolutional NN (CNN), which is contrary to what happens with image recognition. In addition, we could use these tools to investigate the model’s behavior when we were confident about the NN architecture; however, we lacked a clear model interpretation. Further, we did not use market data while learning about the smile-to-parameters map for the Heston model; we only used synthetic data. To elaborate, the volatility smile was calculated for a set of possible parameter values of the Heston model, using traditional quantitative finance methods. Following this, the smile was fed as an input to the network, with the parameter values that generated it as the output, for training. This is different from using real market data and asking the network to find the Heston parameters corresponding to the real market smile. 
The reason we used synthetic data was that we tried to distill the real correspondence between the smile and the parameters in the model. The Heston model could not properly fit the real market volatility smile in all cases, and that was why there would be an error between the closest possible Heston smile and the market smile. This error would be due to a number of variables affecting the smile, e.g., the liquidity discrepancy of different quotes, market impact, and other aspects that would distort the analysis of the smile-to-parameters map; this would add further variables that considerably complicate the mapping. As the purpose of this paper is to learn the pure smile-to-parameters map of the model and understand which parameters contribute more to the model smile (and in which way), we refrained from using market data, as that would have introduced further sources of error and variables; this would make the analysis less stringent. Ideally, market data could be considered in a second work after thoroughly understanding how the smile-to-parameters map functions within the model. A final comment we would like to offer in this introduction is how the NN’s accuracy in approximating the inverse map aligns with the findings of other scholars, such as Bayer et al. (2024) and Horvath et al. (2021) , who investigated NN approximations of the direct price map for both classical and rough Bergomi models. For example, for our FCNN, out-of-sample relative errors typically remained below 1 %, with maximum errors remaining below 3 %. Notably, Horvath et al. (2021) reported an average relative error below 0.6 % for the Bergomi price map, with maximum errors remaining below 16 %. While Bayer et al. (2024) explored the inverse mapping for the rough Bergomi model, their reported accuracy (average 2 %; maximum 40 %) was significantly lower, leading them to deem the approach less viable. In contrast, our results demonstrate the feasibility of accurately approximating the inverse map. The work presented here is an improved version of the preprint Brigo et al. (2021) , and a related preprint with a similar analysis of interpretability for the rough Heston model is by Yuan et al. (2024) . The structure of the paper is as follows: In Section 2 , we present the problem of using interpretability methods in calibrating pricing models via an NN, along with a description of the main tools at our disposal. In Section 3 , we illustrate how to calibrate the Heston model employing an NN. In particular, we describe two NN architectures: an FCNN and a CNN. Then, in Section 4 , we apply the interpretability tools to our case, and we present and discuss our results. In Section 5 , we review our contributions and make suggestions for further developments. 2 Interpretability of neural network (NN) calibration In this section, we present the relevance of NNs in the calibration of pricing models and outline the specific approach that we used for the interpretability analysis; the relevant tools and methods are reviewed in the subsequent sections. 2.1 Pricing-model calibrations via NN When a pricing model is conceived, the performance of the calibration process is paramount. One is given the liquid market prices of some benchmark products (typically options) and has to find the parameters of the model that generate model prices as close as possible to the given market prices according to a chosen criterion. This involves minimizing a calibration error measuring the discrepancy between market prices and model prices.
Calibrating model parameters to liquid market quotes can be very time consuming, especially if there is no analytic solution for pricing the benchmark products. If the calibration is a global one (i.e., if we seek to find a global optimum), the process becomes even slower. A possible way to deal with this issue and to ensure a fair level of accuracy, speed, and robustness is to use an NN as part of the calibration to make the process faster. We can better understand the mechanism if we consider the procedure proposed by Horvath et al. (2019) : a two-step algorithm for the calibration of the so-called rough Bergomi (rBergomi) model, which is a rough volatility model. Interestingly, the proposed approach is generic and not limited to this specific model. In the first step, the authors built an NN to learn the pricing map from the model parameters to market quotes. Then, they performed the calibration by using the trained NN to efficiently compute the market quotes from the model parameters. This learned map is very fast to evaluate, making it easier to invert in a second step with standard optimization techniques. The authors studied both gradient-based and gradient-free methods. We may refer to this method as a two-step procedure. We stress that in this approach, the NN was used only to obtain a faster version of the pricing map, without learning the whole calibration procedure. On the other hand, we adopted a different approach and used the NN to learn the whole calibration procedure. Roeder & Dimitroff (2020) focused on both the Heston and rBergomi models and built an NN to learn the map from market quotes to model parameters, so that the whole calibration process was learnt by the NN. The same approach is also described in the study of Hernandez (2016) using the Hull–White model. In our analysis, we chose to follow this procedure since the resulting network had to learn the whole calibration process, leading to a more challenging scenario for our interpretability methods. 2.2 Interpretability models and theoretical background Interpretability aims to provide the user with some insight into the machine-learning algorithm: how it makes decisions and what it learns about the input–output relationships. In our pricing-model calibration problem, interpretability allowed us to understand which inputs affected our model parameters the most. These tools can be useful in two different situations: First, if we have complete knowledge of our model, as in the Heston case, we can test whether the correspondence between input data and model parameters matches our intuitive understanding of the pricing model. If this is not the case, we can reject the NN architecture in favor of more suitable choices. Second, if we lack knowledge of the model, we can use these tools precisely to improve our understanding of the model’s behavior. Chakraborty et al. (2017) conducted a survey of prior work on interpretability in deep learning models and presented the results alongside an introduction of relevant concepts, mainly on two topics: model transparency and model functionality. For further details, Molnar (2019) published a book integrating the theoretical background and implementation examples of current interpretability methods. The book helps readers build an overall framework for making deep learning models interpretable. However, some NN models may be able to produce accurate predictions with poor interpretability. Sarkar et al.
(2016) discuss the trade-offs between accuracy and interpretability in machine learning. They proposed the TREPAN algorithm to extract better-performing decision trees from an NN, balancing prediction accuracy and tree interpretability. We would like to highlight the difference between algorithm transparency and interpretability methods: The former analyzes the algorithm itself, i.e., how it works and what kinds of relationships it applies to, and not the specific model or prediction-making process, whereas the latter, on which this paper focuses, requires knowledge of the algorithm and the data and analyzes the trained model. Interpretability is less understood than algorithm transparency, and it is a current area of research. In fact, many proposals for interpretability methods are published in the literature. We cite a few of them as an overview of the recent developments in the field. [Footnote 1: Local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) can be implemented within the DeepExplain framework, whose Python package can be found here: https://github.com/marcoancona/DeepExplain . The individual Python packages for LIME and SHAP are available at https://github.com/marcotcr/lime and https://github.com/slundberg/shap , respectively.] Bach et al. (2015) developed Layer-wise Relevance Propagation (LRP) to interpret the classifier predictions of automated image classification and provided a visualization tool that can draw the contributions of pixels to the NN predictions. Ribeiro et al. (2016) proposed local interpretable model-agnostic explanations (LIME), which can explain the predictions of any classifier or regressor. Specifically, it is a local surrogate model since it trains a local explanation model around individual predictions. Shrikumar et al. (2016) and (2017) introduced Deep Learning Important FeaTures (DeepLIFT) and Gradient*Input to attribute feature importance scores by evaluating the difference between the activation of every neuron and the corresponding reference level. They concluded that DeepLIFT significantly outperforms gradient-based methods. Lundberg & Lee (2017) modified classical Shapley values into a unified approach that can interpret model predictions globally. They called it SHapley Additive exPlanations (SHAP) values. Sundararajan et al. (2017) presented Integrated Gradients, a gradient-based attribution method driven by sensitivity and implementation invariance. Ancona et al. (2018) experimented with multiple interpretability methods, including gradient-based attribution methods (such as saliency maps, Gradient*Input, Integrated Gradients, DeepLIFT, and LRP) and perturbation-based attribution methods (such as Occlusion and Shapley value sampling); among these methods, Occlusion is based on the work by Zeiler & Fergus (2014) . We can classify interpretability models into two main categories according to Molnar (2019) : global and local interpretability methods. The former aims to recognize how the model makes decisions based on a holistic view of its features and each of the learned components, such as weights, hyperparameters, and the overall model structure. It helps one understand the distribution of the target outcome based on the features. In contrast, the latter focuses on a single model prediction at a time. If an individual prediction is analyzed, the behavior of the otherwise complex model may appear simpler, although this does not necessarily have to be the case.
Locally, the prediction may depend only linearly or monotonically on some features, rather than having a complex dependence on them. In addition, since deep NNs are nonlinear and nonlocal, it seems unlikely that local methods can offer any relevant insights into the network behavior at large. In the previous list, most methods were local, except for the methods based on Shapley values. 2.3 Local interpretability When a deep learning model is used to make predictions, it is often asked what the logic behind the model is and how the features contribute to the predictions. Local surrogate models, to some extent, answer this question. They interpret the individual predictions of an NN based on some features. Mathematically, let f be the original model and f̂ the prediction model estimated by a deep NN. We aim to explain a single prediction y = f̂(x) based on the input x. In interpretability models, instead of using the original input x, a simplified input x′ is often used, related to x by a mapping function x = h_x(x′). With the original input x fixed, each component of the simplified input x′ is binary, where 1 means that the input component is present and 0 means that it is absent; absence means that the feature value is set equal to its mean over the data set. Given the input x, the simplified input x′ has some components j that are absent, such that x′_j = 0, and other components k that are present, such that x′_k = 1. For example, we may have x = [m_1, v_1, m_2, v_2, v_3] ⇒ x′ = [0, 1, 0, 1, 1], where m_i represents the data-set average of the i-th input feature and v_i represents other values. Then h_x maps x′ back to the original input x: h_{[m_1, v_1, m_2, v_2, v_3]}([0, 1, 0, 1, 1]) = [m_1, v_1, m_2, v_2, v_3]. An explanation model g is called local if f̂ can be ideally approximated by g near x′, i.e., g(z′) ≈ f̂(h_x(z′)) for z′ ≈ x′. For such an explanation model g to have an additive feature attribution, g can be written in the additive form (1) g(z′) = ϕ_0 + Σ_{i=1}^{M} ϕ_i z′_i, where M is the number of simplified input features, z′ ∈ {0, 1}^M is the coalition vector, and ϕ_i ∈ R is the i-th feature attribution. Notably, ϕ_0 corresponds to the case z′_i = 0 for i = 1, …, M; in other words, each feature is set equal to its respective data-set mean. The explanation g(z′) accounts for the contributions ϕ_i of the different features and aims to approximate the model output f̂(x). Molnar (2019) summarizes the process in the following steps: • Select an instance x whose model prediction is to be explained. • Perturb your data set and generate model predictions for these new data points. • Set weights for the new samples based on their proximity to the instance x. • Train the explanation model g with these weights on the perturbed data set. • Interpret the obtained prediction by explaining the local model g. LIME, DeepLIFT, and LRP are local surrogate models that use equation (1) locally in x to fit the ϕ_i and obtain explanations. We provide a more detailed explanation of these methods in Appendix A.
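To make the additive structure in Eq. (1) and the steps above concrete, the numpy fragment below fits a weighted linear surrogate around a single instance, in the spirit of LIME. The coalition sampling, the exponential kernel, and the convention of replacing absent features by zero (i.e., assuming mean-centered inputs) are simplifications chosen for this sketch, not the exact procedure of the LIME package.

```python
import numpy as np

def local_surrogate(f_hat, x, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit g(z') = phi0 + sum_i phi_i z'_i around the instance x, as in Eq. (1).

    f_hat : callable returning a scalar prediction for an input vector.
    Absent features (z'_i = 0) are set to 0, i.e. to the data-set mean of
    mean-centered inputs (an assumption of this sketch).
    """
    rng = np.random.default_rng(seed)
    M = len(x)
    Z = rng.integers(0, 2, size=(n_samples, M))                 # random coalitions z'
    preds = np.array([f_hat(z * np.asarray(x)) for z in Z])     # map coalitions to inputs
    # proximity weights: coalitions close to "all features present" weigh more
    dist = np.sqrt(((Z - 1) ** 2).sum(axis=1))
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    A = np.hstack([np.ones((n_samples, 1)), Z])                 # design matrix [1, z']
    Aw, yw = A * np.sqrt(w)[:, None], preds * np.sqrt(w)        # weighted least squares
    phi, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    return phi                                                  # phi[0]=phi0, phi[1:]=attributions

# toy usage: explain a simple nonlinear model at x = (1, 2, 3)
print(local_surrogate(lambda v: v[0] * v[1] + v[2] ** 2, x=(1.0, 2.0, 3.0)))
```

The returned vector contains ϕ_0 followed by the M feature attributions; the kernel width is an arbitrary choice in this sketch.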
2.4 Global interpretability According to Lipton (2017) , we can describe a model as interpretable if we can comprehend the entire model at once. This appears to be a daunting task for a complex NN that is both nonlinear and nonlocal. For this purpose, we can use global interpretability methods. We focus on interpretability methods based on Shapley values. We start by introducing the foundation behind classical Shapley values. Shapley values originated from cooperative game theory, and the term was coined by Shapley (1953) . The idea is to compute the average marginal contribution of each player i to the total game gain (payout) across all possible coalitions of collaborating players. Shapley values can be used for interpretability in deep learning to explain the predictive model. In this context, the prediction task is viewed as a game and each feature value as a player in the game. Subsequently, the gain of this task is the difference between the actual prediction for a particular instance and the average prediction for all instances. Note that Shapley values are not equivalent to the difference in the prediction when the feature is removed from the model. They focus on how to fairly distribute the total gain among the features. As an explanation model, they have the additive feature attribution structure of (1): g(z′) = ϕ_0 + Σ_{i=1}^{M} ϕ_i z′_i. Specifically, in the case of Shapley values, ϕ_i(f̂, x) = Σ_{z′ ⊆ x′\{i}} [ |z′|! (M − |z′| − 1)! / M! ] [ f̂(h_x(z′ ∪ {i})) − f̂(h_x(z′)) ]. Intuitively, the feature values (x_k)_k enter in a random order, and all of them contribute to the prediction. The Shapley value of x_i is the average (over the orders) of the change in the prediction when x_i joins. In deep NNs, the four Shapley value properties are the following: (i) Efficiency: The total gain is recovered, which means that the sum of feature contributions must be equal to the prediction for an instance x minus the average prediction over all possible instances. Mathematically, Σ_i ϕ_i(f̂, x) = f̂(x) − E[f̂(x)]. (ii) Dummy: If a feature j never adds marginal value, its Shapley value is ϕ_j = 0. That is, if the prediction remains the same when x_j joins any coalition of features S, i.e., f(S ∪ {x_j}) = f(S) for all S ⊆ {x_1, …, x_M}, then ϕ_j = 0. (iii) Symmetry: Any two features contributing equally to the total gain have the same ϕ. In other words, if f(S ∪ {x_k}) = f(S ∪ {x_j}) for all S ⊆ {x_1, …, x_M}\{x_j, x_k}, then ϕ_k = ϕ_j. (iv) Additivity: If a prediction task is composed of f_1 and f_2, the Shapley values are ϕ(f_1) + ϕ(f_2). SHAP was proposed by Lundberg & Lee (2017) , and it is based on optimal Shapley values. It aims to calculate each feature’s importance in the prediction of an instance x. The authors built SHAP as a unified measure of feature contribution, along with some methods to generate the values efficiently, such as Kernel SHAP (linear LIME and Shapley values), Tree SHAP (tree explainer and Shapley values), and Deep SHAP (DeepLIFT and Shapley values). Additionally, SHAP combines many global interpretation methods that are based on aggregations of Shapley values. SHAP admits the additive feature attribution structure of Eq. (1) , a linear model of binary variables: g(z′) = ϕ_0 + Σ_{i=1}^{M} ϕ_i z′_i. Moreover, SHAP considers the case when all the features are present: for the simplified input x′ of an instance x, all components of the vector x′ are present, i.e., equal to 1. In this case, Eq. (1) becomes (2) g(x′) = ϕ_0 + Σ_{i=1}^{M} ϕ_i. Apart from efficiency, dummy, symmetry, and additivity, Lundberg & Lee (2017) highlighted three other desired properties that SHAP should satisfy.
SHAP was proposed by Lundberg & Lee (2017), and it is based on optimal Shapley values. It aims to calculate each feature's importance in the prediction of an instance x. The authors built SHAP as a unified measure of feature contribution, along with methods to generate the values efficiently, such as Kernel SHAP (linear LIME and Shapley values), Tree SHAP (tree explainer and Shapley values), and Deep SHAP (DeepLIFT and Shapley values). Additionally, SHAP combines many global interpretation methods that are based on aggregations of Shapley values. SHAP admits the additive feature attribution structure of Eq. (1), a linear model of binary variables, g(z′) = φ_0 + Σ_{i=1}^{M} φ_i z′_i. Moreover, SHAP considers the case when all the features are present: for the simplified input x′ of an instance x, all components of x′ are equal to 1, and Eq. (1) becomes

(2) g(x′) = φ_0 + Σ_{i=1}^{M} φ_i.

Apart from efficiency, dummy, symmetry, and additivity, Lundberg & Lee (2017) highlighted three other desired properties that SHAP should satisfy. They are as follows:

(1) Local accuracy: The original model f at an instance x should be matched by the explanation model g at the corresponding simplified input x′, where x = h_x(x′): f(x) = g(x′) = φ_0 + Σ_{i=1}^{M} φ_i x′_i.

(2) Missingness: Since the binary components of the simplified input x′ are either present or absent, features missing in the original input x should have no impact: if x′_i = 0, then φ_i = 0.

(3) Consistency: When a model is changed, the change in an input's attribution should be consistent with the change in its contribution. Denote f_x(z′) = f(h_x(z′)), and let z′∖{i} be the vector z′ with z′_i set to 0. For any two models f_1 and f_2 satisfying f_{2,x}(z′) − f_{2,x}(z′∖{i}) ≥ f_{1,x}(z′) − f_{1,x}(z′∖{i}) for all inputs z′ ∈ {0, 1}^M, the attributions reflect this change: φ_i(f_2, x) ≥ φ_i(f_1, x).

The authors also showed that there exists only one possible explanation model g satisfying the additive feature attribution structure and the SHAP properties (1) to (3) above:

(3) φ_i(f, x) = Σ_{z′ ⊆ x′} [ |z′|! (M − |z′| − 1)! / M! ] [ f_x(z′) − f_x(z′∖{i}) ],

where |z′| denotes the number of nonzero components in the vector z′. In the case of SHAP values, f_x(z′) = f(h_x(z′)) = E[f(z) | z_A], where h_x(z′) = z_A, A is the set of nonzero indices in z′, and z has missing values for the features not in A. We use Ā to refer to the set of features not present in A. For example, starting from the base value E[f(z)] (corresponding to φ_0), the attribution φ_1 moves the expected model prediction to the expectation conditional on feature 1, E[f(z) | z_1 = x_1]; φ_2 then moves it to E[f(z) | z_{1,2} = x_{1,2}]; φ_3 to E[f(z) | z_{1,2,3} = x_{1,2,3}]; and so on. Finally, the original model f(z) is approximated by the conditional expectation E[f(z) | z_A]. Therefore, SHAP values are the Shapley values of an expectation of the original model f conditional on z_A.

According to Lundberg & Lee (2017), although it is challenging to compute the exact SHAP values, they can still be approximated by combining current additive feature attribution methods. Two optional assumptions–model linearity and feature independence–are used to simplify the expectation calculation:

(4) f(h_x(z′)) = E[f(z) | z_A] = E_{z_Ā | z_A}[f(z)] ≈ E_{z_Ā}[f(z)] ≈ f([z_A, E[z_Ā]]).

The first equality in (4) is obtained from the input mapping; the conditional expectation over z_Ā given z_A is then replaced by the marginal expectation, assuming feature independence; finally, assuming the model is linear, the expectation is pushed inside f. (For more information, see Lundberg & Lee 2017 and Molnar 2019.)

In the specific version implemented in this paper, Deep SHAP, it is worth observing that while Shapley values provide a theoretically rigorous method for attributing contributions to model features, their exact computation is combinatorially complex and computationally infeasible for large feature sets or real-time applications. This makes them particularly challenging to apply in deep-learning contexts. Practical implementations, such as those available in the widely used SHAP package discussed above, typically address this challenge by resorting to approximations. For instance, Deep SHAP, as employed in this work, leverages internal gradients and reference activations through the DeepLIFT algorithm. This significantly reduces computational complexity by approximating Shapley values via modified backpropagation rules tailored to NN architectures built with frameworks such as TensorFlow or Keras. This approach markedly enhances scalability and speed, rendering it suitable for large data sets and real-time calibration scenarios.
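A minimal usage sketch of Deep SHAP with the SHAP package is given below. The trained Keras calibration network `model`, the scaled feature matrices `X_train` and `X_test`, and the size of the background sample are placeholders standing in for the objects introduced in Section 3, not a verbatim reproduction of our implementation.

```python
import numpy as np
import shap  # SHAP package (Lundberg & Lee, 2017)

# Placeholders: `model` maps (n, 88) implied-volatility inputs to the 5 Heston
# parameters; `X_train`, `X_test` are the scaled feature matrices of Section 3.3.
background = X_train[np.random.choice(X_train.shape[0], 100, replace=False)]

explainer = shap.DeepExplainer(model, background)   # Deep SHAP (DeepLIFT-based)
shap_values = explainer.shap_values(X_test)         # one attribution array per output

# Global importance for one parameter (e.g. v0 = output 0):
# mean absolute Shapley value per feature across the test set.
global_importance_v0 = np.abs(shap_values[0]).mean(axis=0)
```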
3 A detailed example with the Heston model

We ran the subsequent analysis on the Heston model presented in Heston (1993) because it is often used in practice and is well understood. This provided us with a benchmark to evaluate which interpretability methods offer satisfactory results and could be used on less understood models in the future.

3.1 A brief overview of the Heston model

The original Heston model was presented under the physical measure P, also known as the objective measure. However, we work under the risk-neutral measure Q in our analysis, as this is more convenient for option pricing and calibration. We define S_t as the asset price and v_t as its instantaneous variance; v_t can be viewed as a mean-reverting square-root process. The system of stochastic differential equations of the Heston model under the risk-neutral measure Q is written as

dS_t = √(v_t) S_t dW_{1,t},
dv_t = κ(θ − v_t) dt + σ √(v_t) dW_{2,t},
dW_{1,t} dW_{2,t} = ρ dt,

with initial values S_0 and v_0, where W_1 and W_2 are Brownian motions with correlation ρ. We set the risk-free rate and the dividend yield to zero for simplicity, although they could be added in a straightforward way. All the free parameters are collected in the set ψ ≔ {v_0, ρ, σ, θ, κ}. The ranges and meanings of the parameters are as follows:
• v_0 > 0 is the initial value of the variance.
• ρ ∈ [−1, 1] is the correlation between the two Brownian motions, i.e., the instantaneous correlation between the asset and its instantaneous variance.
• σ ≥ 0 is the volatility of volatility.
• θ > 0 is the long-term mean of the instantaneous variance of the asset price.
• κ ≥ 0 is the speed at which v_t reverts to θ.

This model precludes negative values for v_t: when v_t tends to zero, the drift pushes it back toward positive values, and this mean-reversion property of v_t is important. Moreover, if the parameters satisfy the Feller condition

(5) 2κθ > σ²,

then the instantaneous variance v_t is strictly positive. As shown in the following section, we imposed the Feller condition when generating our synthetic data. Notably, when the calibration is conducted in the industry, the Feller condition is sometimes relaxed to obtain a better fit of the market data. The Heston model is well studied in the financial literature (see Gatheral 2006 for more details).
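To make the dynamics above concrete, the following sketch simulates Heston paths with a full-truncation Euler scheme and prices a plain-vanilla call by Monte Carlo. The discretization scheme and the parameter values are illustrative choices only (the parameters are picked to satisfy the Feller condition); this is not the pricing method used to build the data set in the paper.

```python
import numpy as np

def heston_paths(S0, v0, rho, sigma, theta, kappa, T, n_steps=200, n_paths=10000, seed=0):
    """Full-truncation Euler simulation of the Heston dynamics (illustrative only)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                       # full truncation keeps the variance usable
        S *= np.exp(-0.5 * v_pos * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
    return S, v

# Undiscounted call price for strike K = 1 (zero rates and dividends, as in the paper).
S_T, _ = heston_paths(S0=1.0, v0=0.04, rho=-0.7, sigma=0.3, theta=0.04, kappa=2.0, T=1.0)
call_price = np.maximum(S_T - 1.0, 0.0).mean()
```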
3.2 Calibration via NNs

When calibrating the Heston model, we aim to find the optimal model parameters ψ⋆ such that the model prices best match the market data. This is effectively a nonlinear constrained optimization problem, in which we minimize the error between the market option quotes C^mkt(T, K) and their prices calculated by the Heston model,

(6) C(ψ; K, T) = E^Q(ψ)[(S_T − K)^+],

for a selection of maturities T and strikes K quoted by the market in the set {(T_i, K_i): i = 1, …, n}. The calibration of the parameters to the observed market data involves minimizing the loss

(7) L(ψ; C^mkt) ≔ Σ_{i=1}^{n} [ C(ψ; K_i, T_i) − C_i^mkt ]²

with respect to the model parameter set ψ. The solution can be regarded as a function mapping the market data to the domain in which the model parameters reside:

(8) C^mkt ⟶ ψ⋆ ≔ arg min_ψ L(ψ; C^mkt).

As discussed in Section 2.1, we proceeded, as in the study of Roeder & Dimitroff (2020), to use an NN to approximate the calibrated-parameter function given by Eq. (8).

3.3 Description of the synthetic data set

We built our data set in a synthetic manner. We did not start from real market quotes, since we were only interested in interpreting the NN used to learn the calibration procedure; for future research, we suggest training the NN with real market data. For the following steps, we referred to the work by Bayer & Stemper (2018) for the calibration setup. The features of the NN that we constructed to approximate the calibration map given by Eq. (8) were the implied volatilities of market plain-vanilla options C^mkt, while the labels were the model parameters ψ.

First, we randomly generated Heston model parameters ψ for the labels in sufficiently large numbers and calculated the corresponding implied-volatility surfaces for the input features. We generated each parameter using a uniform distribution within the bounds defined in Table 1, following previous work by Horvath et al. (2019) and Roeder & Dimitroff (2020). Once the model parameters were generated, we kept only the sets satisfying the Feller condition given by Eq. (5). We disregarded possible correlations between the model parameters when generating them, since we preferred to assume no prior knowledge of their joint distribution, as suggested by Bayer & Stemper (2018). For the main discussion, we used a data set of size 10^4, in line with the order of magnitude of the data sets used by Horvath et al. (2019) and Roeder & Dimitroff (2020). Additional results using a larger data set of size 10^5 can be found in Appendix C, although only a small improvement is found, at the expense of a high computational cost.

Following this, by means of the Heston model, we evaluated the plain-vanilla option prices on a grid of maturities and strikes given by

K = {0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5},
T = {0.1, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.0}.

Here, we assumed that the spot price S_0 was equal to 1. Thus, the dimensions of the resulting volatility matrices were 8 maturities by 11 strikes, leading to a feature collection of 88 = 8 × 11 volatilities, while the corresponding labels are the 5 model parameters. The data set of size 10,000 was split into a training set of 8,500 and a test set of 1,500 data points.

Taking a closer look at the data, if we, for example, select one volatility matrix of the input data in the test set and fix the maturity at T = 0.1, the volatility smile shows an asymmetric U-shape, as in the left panel of Fig. 1. However, for the volatility smile of another entry of the training set with the same fixed maturity T = 0.1, shown in the right panel of the same figure, the smile is monotonic. Therefore, our synthetic data did not guarantee a specific shape of the volatility smile.
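The label-sampling step described above can be sketched as follows. Only the κ range (1.0 to 10.0, mentioned in Section 4.1.1) is taken from the text; the other bounds are illustrative placeholders for those of Table 1, and the Heston pricer that maps each retained parameter set to an 8×11 volatility matrix is not shown.

```python
import numpy as np

# Illustrative parameter bounds; the kappa range follows the 1.0-10.0 interval
# mentioned later in the text, the remaining bounds are placeholders for Table 1.
bounds = {
    "v0":    (0.01, 0.20),
    "rho":   (-0.95, 0.00),
    "sigma": (0.05, 1.00),
    "theta": (0.01, 0.20),
    "kappa": (1.00, 10.00),
}

def sample_heston_parameters(n, seed=0):
    """Uniformly sample parameter sets and keep those satisfying Eq. (5)."""
    rng = np.random.default_rng(seed)
    draws = {k: rng.uniform(lo, hi, size=n) for k, (lo, hi) in bounds.items()}
    feller = 2.0 * draws["kappa"] * draws["theta"] > draws["sigma"] ** 2
    return {k: v[feller] for k, v in draws.items()}

params = sample_heston_parameters(20000)
# Each retained parameter set would then be mapped to an 8x11 implied-volatility
# matrix with a Heston pricer (not shown) to form one (feature, label) pair.
```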
It is well known that substantial differences in the scale of the input data can lead to difficulties during the training stage and can cause instabilities. Therefore, it is common practice to standardize the input data to numerically aid the training of the NN. An FCNN and a CNN were implemented in the studies by Horvath et al. (2019) and Roeder & Dimitroff (2020), respectively. For the FCNN, the data were scaled to the range 0–1 before being fed as input to the network, while for the CNN, the data were scaled to have a mean of 0 and a variance of 1. In addition, after scaling the input data of the FCNN, a whitening process was applied. We applied whitening only in the FCNN case, since empirically it did not seem to aid training for the CNN. In particular, we relied on ZCA–Mahalanobis whitening, as described by Kessy et al. (2018), whose aim is to decorrelate the input matrices by multiplying the centered input data by a decorrelation matrix. In this way, the data were linearly transformed so that the sample correlation matrix of the training data became the identity. This whitening process has the advantage that the decorrelation is achieved by applying only minor changes to the original input.

3.4 NN architectures

The NNs presented here are inspired by those used by Horvath et al. (2019) and Roeder & Dimitroff (2020). Two NN architectures were considered: the FCNN and the CNN. Both architectures are feedforward NNs, because there are no feedback connections in which the outputs of the model are fed back into themselves; information flows from the input to intermediate computations (that define the function) and then to the output. FCNNs are called fully connected because all the nodes in each layer are connected to all the nodes in the previous and subsequent layers. While they are considered a rather simple NN architecture, they offer good and fast calibration results. CNNs derive their name from their characteristic convolutional layers. The primary purpose of the convolution is to highlight features of the input. CNNs were inspired by biological processes, in that the connectivity pattern between neurons resembles that of the human visual cortex: individual neurons respond to stimuli only in a specific region of the visual field, known as the receptive field. CNNs are commonly applied to analyzing visual imagery.

We started by calibrating the model with the FCNN and showed that this simple model is enough to obtain good results. Following this step, we tested the CNN and compared the results with the original, simpler architecture. The CNN was implemented to test whether it improved the calibration, since it has previously outperformed the FCNN in a number of applications and has also been used by researchers for volatility smile calibration. Further, since FCNNs and CNNs learn the calibration in different ways, it is interesting to apply the interpretability methods to both networks.

The FCNN presented here consisted of four fully connected dense layers, as shown in Fig. 2. The hidden layers used ELU as their activation function, and the output layer used a hard sigmoid. The output layer generated a vector of five outputs corresponding to the Heston model's parameters. Four layers with a decreasing number of neurons were found to be enough to obtain excellent results, requiring 10,396 trainable parameters. For the FCNN, the mean-squared logarithmic error was used as the loss function, since it seemed best suited to this architecture after contrasting its performance with other typical losses, and the Adam optimizer was used for training.
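A minimal Keras sketch of an FCNN of this kind is shown below. The paper reports four dense layers with a decreasing number of neurons, ELU hidden activations, a hard-sigmoid output, and 10,396 trainable parameters, but not the individual layer widths; the widths used here are hypothetical placeholders and will not reproduce that parameter count exactly.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_fcnn(n_features=88, n_params=5):
    """Sketch of the fully connected calibration network (layer widths are assumptions)."""
    model = keras.Sequential([
        layers.Dense(64, activation="elu", input_shape=(n_features,)),
        layers.Dense(32, activation="elu"),
        layers.Dense(16, activation="elu"),
        layers.Dense(n_params, activation="hard_sigmoid"),  # outputs squashed to (0, 1)
    ])
    model.compile(optimizer="adam", loss="mean_squared_logarithmic_error")
    return model

fcnn = build_fcnn()
# fcnn.fit(X_train, y_train, validation_split=0.1, epochs=200, batch_size=32)
```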
In the case of the CNN, the ELU activation function was likewise used for the hidden layers. The CNN starts with an input layer that has no trainable parameters; it only defines the input shape of the training data and is referred to as “InputLayer” in Fig. 3. Following this, a two-dimensional convolutional layer with 3 × 3 filters was used. The convolutional layer extracts features from the input and aims to highlight the most important attributes of the data. After testing different filter sizes, the 3 × 3 filters proved to perform best, and the convolutional layer used 32 filters. Maxpooling was then applied with a 2 × 2 kernel, which again aims to highlight the most important features and halves the dimensions of the convolutional layer output. Following this, the output was flattened into a vector of 3 × 4 × 32 = 384 entries, and the fully connected part of the NN took over. The flattened data were passed to a dense layer that output a vector of only 50 entries. Finally, we concatenated five layers with a single output each, one for each of the five Heston model parameters to be calibrated, and made use of custom activation functions that improved the performance of the NN. The custom activation functions were hyperbolic tangent functions, normalized for each of the model parameters; they were chosen based on the research by Horvath et al. (2019). This architecture amounted to 19,825 trainable parameters. When training the model, the root mean square error was used as the loss, and the Adam optimization algorithm was implemented.
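A minimal Keras sketch of a CNN of this kind is given below. With valid-padding convolution on an 8 × 11 × 1 input, this layout is consistent with the 3 × 4 × 32 = 384 flattened size and the 19,825 trainable parameters quoted above; the parameter ranges used to rescale the tanh output heads are illustrative placeholders for the bounds of Table 1, and the loss is a plain MSE stand-in for the RMSE used in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative parameter ranges for the scaled-tanh output heads (placeholders for Table 1).
PARAM_BOUNDS = {"v0": (0.01, 0.20), "rho": (-0.95, 0.0), "sigma": (0.05, 1.0),
                "theta": (0.01, 0.20), "kappa": (1.0, 10.0)}

def build_cnn():
    inputs = keras.Input(shape=(8, 11, 1))                   # 8 maturities x 11 strikes
    x = layers.Conv2D(32, (3, 3), activation="elu")(inputs)  # -> (6, 9, 32), 320 parameters
    x = layers.MaxPooling2D((2, 2))(x)                       # -> (3, 4, 32)
    x = layers.Flatten()(x)                                   # -> 384 entries
    x = layers.Dense(50, activation="elu")(x)                 # 19,250 parameters
    heads = []
    for lo, hi in PARAM_BOUNDS.values():
        h = layers.Dense(1, activation="tanh")(x)             # 51 parameters per head
        # Rescale tanh output from (-1, 1) to the parameter's assumed range (lo, hi).
        h = layers.Lambda(lambda t, lo=lo, hi=hi: lo + 0.5 * (t + 1.0) * (hi - lo))(h)
        heads.append(h)
    outputs = layers.Concatenate()(heads)
    model = keras.Model(inputs, outputs)
    # The paper trains with root mean squared error; plain MSE is used here as a stand-in.
    model.compile(optimizer="adam", loss="mse")
    return model

cnn = build_cnn()  # 320 + 19,250 + 5 * 51 = 19,825 trainable parameters
```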
3.5 Calibration results

Figs. 4 and 5 display the errors between the predicted Heston model parameters and the labels for the FCNN and the CNN. The fact that the performance of the NNs was good for both the training and test data implies that the NN models can be expected to perform well on unseen data. Only the predictions for ρ were slightly worse than those for the other four model parameters. As can be seen in the figures, the relative errors obtained by the FCNN were significantly smaller. Although it has been found that, in general, CNNs outperform FCNNs, especially in tasks such as image recognition, our results indicate that this is not the case when calibrating the Heston model. The CNN applies 3 × 3 kernel filters and maxpooling to the training data, which, to some extent, causes information loss, whereas the FCNN avoids this problem. For image recognition this is not an issue, since the main features of the image are what matter most to the NN, whereas the Heston model calibration is a regression problem that turns out to be overly sensitive to the information loss caused by this type of layer operation. In image recognition, the goal is often simply to label an image, whereas to calibrate the Heston model, the exact values of the parameter outputs are required.

When trying to learn the Heston model calibration using an NN, another main limitation is model parameter identifiability, due to the nature of the volatility matrices. As Roeder & Dimitroff (2020) mention, there exist Heston parameterizations that differ significantly in their parameter values and yet correspond to similar implied volatility surfaces. This explains why, for both NN structures, although most of the predictions are quite accurate and relatively close to the 0.0 % error mark, several outliers can be found that lie substantially far from the rest of the data points. This can also be appreciated when comparing the median and average errors.

4 Discussion of the interpretability results

In this section, we discuss the interpretability results obtained with the different methods. We wish to understand whether local or global methods are better suited to interpreting the NNs and whether they can be used for other, more complex and less understood models beyond the Heston model.

4.1 Local interpretability results: local surrogate methods

Local surrogate methods (or models) are interpretable methods used to explain single predictions of black-box machine-learning models. From the following results, we found that, in general, local interpretability does not align with our intuition about the Heston model.

4.1.1 Local interpretable model-agnostic explanations (LIME)

LIME is primarily designed to explain classifiers and NNs performing image recognition. In our case, we treated the NN as a regressor and first performed a local approximation around an individual prediction using an explanation model (in this study, a linear model). In particular, the Huber regressor was chosen for its robustness against outliers. We considered the first κ in the test set as an example. As shown in Fig. 7, among the 1,500 test predictions, which range from 1.17 to 9.77, the first predicted value for the CNN was 6.28. For this particular prediction, the most important feature was (T = 0.3, K = 0.7), close to the at-the-money (ATM) level with a short maturity; this was followed by (T = 1.8, K = 0.5), with a long maturity and far from the ATM level. Both influenced the predicted value positively, since they are colored orange; blue-colored components, on the other hand, contributed negatively to the model output. The feature-value table specifies the input value of each feature, which is also colored either orange or blue. As for the FCNN in Fig. 6, the predicted value was 6.47, quite close to the value predicted by the CNN. We note that κ was generated from 1.0 to 10.0, according to Table 1, so the FCNN predictions cover the entire range of the model parameter. In the case of the FCNN, the topmost features concentrate around long maturities, such as T = 2.0, T = 1.2, and T = 1.5, and the top two strikes are close to the ATM level.

We continued by analyzing the overall impact of the input entries on the model output by taking the absolute value of each feature importance and averaging over the 1,500 predictions, obtaining Figs. 8 and 9. The light colors in the heat maps indicate high attribution values for the input, and dark colors indicate low attribution values. The most influential volatilities appear to be randomly located in these two heat maps and do not show a particular pattern. We would like to highlight that the attribution results shown here are just one of the possible results that LIME may find. It may predict different feature attributions for the same instance, since it uses an optimization algorithm with a random seed that can lead to different local minima and yield different results every time it is run.
This suggests that the mapping function for the Heston model is highly nonlinear and may contain multiple local minima, which makes LIME not ideal for this application.

4.1.2 Deep Learning Important FeaTures (DeepLIFT)

DeepExplain in Python provides the implementation of rescaled DeepLIFT, in which a modified chain rule is applied, as discussed in Ancona et al. (2018). Attributions were computed over the test data set, and the baseline x̄ was chosen, by default, to be a zero array with the same size as the input. These attributions had the shape (1500, 8, 11) for the CNN and (1500, 88) for the FCNN; to draw the feature importance, the attributions of the CNN were reshaped to (1500, 88). Again, we took the absolute values of each feature importance and averaged them over the 1,500 predictions. Figs. 10 and 11 present the overall impact on the model output ψ. For the CNN, the lightest colors appear on both sides of the strike range, at the wings, and at positions within the T = 1.2 borderline, whereas for the FCNN, the lightest colors are located around short maturities, T = 0.1 and T = 0.3, although its topmost features are located at extreme strikes, K = 1.5 and K = 0.6.

4.1.3 Layer-wise Relevance Propagation (LRP)

The LRP algorithm aims to attribute relevance to individual input nodes by tracing the contributions from the output ψ back through the network layer by layer. Although several versions of LRP exist, we focused on the ϵ-LRP algorithm, which makes use of the ϵ rule described by Bach et al. (2015); ϵ must be nonzero, and in our case it was set to the default value of 0.0001. LRP can also be run with DeepExplain. Using LRP, we found that the most important features for the FCNN were the ones with strikes at the left and right wings: K = 0.7, K = 1.5, and K = 1.4 (Fig. 12). In the case of the CNN (Fig. 13), on the other hand, the top features were at large strikes: K = 1.4, K = 1.2, and K = 1.3. Overall, the most relevant features for both the CNN and the FCNN were located at the volatilities of short maturities and extreme strikes.

4.2 Global interpretability results: Shapley values

The size of the test subset is 1,500; therefore, the Shapley values have the dimensions (1500, 88) for the FCNN, since the input is flattened in this case, and (1500, 8, 11) for the CNN. We measured the feature importance of the model input by taking the mean of the absolute Shapley values, which is representative of the average impact on the model output. We present a selection of summary plots here and refer the reader to Appendix B for a more detailed analysis performed by visualizing the Shapley values as “forces”, as described by Molnar (2019).

The SHAP summary plots shown on the left side of Figs. 14 and 15 combine the feature importance and the feature effects. Each point drawn on the summary plot corresponds to a Shapley value for a feature and an instance. The position on the x-axis is determined by the Shapley value, and the position on the y-axis by the feature; for the Heston model calibration, each feature is an entry of the input volatility matrix for a given maturity and strike. The features are ordered from the most important to the least important. The plots shown on the right of Figs. 14 and 15, on the other hand, are the SHAP feature importance plots; we display one of these plots for each of the Heston model parameters. The aim of these plots is to highlight the importance of features with large absolute Shapley values.
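The global importance values behind these plots are simple aggregates of the Shapley values. A minimal sketch of how such plots can be produced with the SHAP package is given below; `shap_values`, `X_test`, and `feature_names` are placeholders, with `shap_values` assumed to be the per-output list returned by the Deep SHAP sketch in Section 2.4.

```python
import numpy as np
import shap

param_names = ["v0", "rho", "sigma", "theta", "kappa"]

# Beeswarm-style summary plot (left panels of Figs. 14 and 15) for v0 (output 0).
shap.summary_plot(shap_values[0], X_test, feature_names=feature_names)

# Bar-style feature importance plot (right panels): mean |SHAP| per feature.
shap.summary_plot(shap_values[0], X_test, feature_names=feature_names, plot_type="bar")

# Overall importance for the whole output psi: aggregate across the five parameters.
overall_importance = sum(np.abs(sv).mean(axis=0) for sv in shap_values)
```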
We averaged the absolute Shapley values per feature across the data to obtain the global importance. Once again, the features were sorted from the most important to the least important (see Molnar 2019). Each row of the aforementioned plots represents one model parameter of the calibrated Heston model: v_0, ρ, σ, θ, and κ, from top to bottom. Although all Shapley values were calculated, we only show the top 20 values here, for practical reasons.

In these figures, the features that affect v_0 the most can be seen to be the plain-vanilla options close to the ATM level. Yet, a different behavior can be seen between the FCNN and CNN architectures: in the case of the FCNN, only short maturities are found among the most relevant features, while for the CNN, long maturities are present. Since v_0 represents the starting value of the (squared) volatility process, it should be linked to near-ATM plain-vanilla options at short maturities, a correspondence supported only by the FCNN architecture. Moreover, the CNN obtains its worst calibration results for v_0, with an average error of 6.47 % on the test data, while the FCNN only produces an error of 1.07 % for this parameter, as shown in Figs. 4 and 5.

We continued by analyzing the correlation ρ and the volatility of volatility σ. These parameters are linked to the asymmetry and convexity of the smile, respectively. They should therefore be linked to the plain-vanilla options with strikes far from the ATM level, mainly at shorter maturities, since at longer maturities the smile flattens. Indeed, we found this correspondence to be well described by the FCNN architecture; we were less satisfied with the CNN architecture, which behaved well for ρ but, for σ, seemed to give importance mainly to ATM options. Then, we considered the speed of mean reversion κ, which is related to the flattening of the smile at longer maturities. We found that, for both architectures, the most important plain-vanilla options were close to the wings, at both shorter and longer maturities. Finally, we considered the mean-reversion level θ, which is expected to be mostly influenced by the long-term behavior of the data. In this case, both NN architectures showed similar results and agreed with our intuition.

In Figs. 16 and 17, we summarize the feature importance for the overall model output ψ. This is equivalent to aggregating the previous Shapley values over the set of parameters v_0, ρ, σ, θ, and κ. Notably, the top 10 most important features of the FCNN are consistently at shorter maturities and not necessarily at ATM levels, whereas in the case of the CNN, they are all close to ATM levels, but with maturities that fluctuate substantially.

We obtained two important results from this analysis. First, Shapley values can be viewed as a practical tool to discriminate between NN architectures and select the ones that better match the model behavior, by establishing clear correspondences between calibrated options and model parameters. Second, this tool can be used to investigate a model's behavior when we are confident in the NN architecture but lack a clear model interpretation.

5 Conclusion and further developments

In this paper, we first explored the volatility smile to model parameter relationship (“calibration”) in the Heston model using deep learning and compared different networks used in the literature.
We conclude that FCNNs outperform CNNs in pricing-model calibration, since they require fewer trainable parameters and obtain better results. Further, the bulk of our analysis focused on comparing local and global interpretability methods and evaluating their ability to offer meaningful insights into the pricing-model calibration mechanism using NNs. We used the Heston model as our benchmark because it is a well-understood model, both theoretically and in practice. From our analysis, we found that global interpretability methods were the most reliable and substantially align with the common intuition behind the Heston model. They also helped us in choosing the most convenient type of NN, favoring FCNNs over CNNs. Hence, global interpretability methods are encouraged.

Furthermore, even with Shapley values or other global interpretability tools, one risk remains: the model could be overfitting to peculiarities in the data that do not reflect the true dynamics of the market. Here, the concept of generalization is helpful; it refers to a model's ability to capture the underlying patterns in the data rather than simply memorizing the training examples. Empirically, generalization is commonly evaluated by comparing model performance across training, validation, and test data sets. Consistent accuracy and similar error distributions among these data sets strongly suggest robust generalization and minimal overfitting. Indeed, as shown in Figs. 4 and 5 (FCNN and CNN prediction errors), the error distributions and scatter plots for the training and test data are notably consistent across all parameters. Errors remain low and concentrated and exhibit only slight deviations, which strongly indicates that our models generalize robustly.

However, a valid concern could be raised regarding the transferability of results obtained from synthetic data to real-world market data, as hinted at in the introduction, since the latter may have different underlying dynamics. Importantly, this concern differs fundamentally from overfitting: the challenge here is not model memorization but rather a potential distribution shift between synthetic and actual market conditions.

Next, we propose several promising directions for future research that could address the aforementioned synthetic-to-real distribution shift. One potential approach involves large-scale pretraining on synthetic data followed by targeted fine-tuning on real-world market data, which is typically scarce. This strategy aligns closely with methodologies used in foundation models for natural language processing, where extensive pretraining is paired with fine-tuning for concrete tasks. Techniques such as parameter-efficient fine-tuning (PEFT), particularly low-rank adaptation (LoRA), have recently gained prominence. Additionally, using LoRA weights with particularly low ranks (e.g., rank four or eight) or with asymmetric randomized factors can promote regularization and generalization during fine-tuning.

An alternative direction is to employ recent alignment techniques based on reinforcement learning (RL), such as Group Relative Policy Optimization (GRPO). In this scenario, the pretrained synthetic model acts as the initial policy, and real-world market data would provide the reward signals. This method can be employed to fine-tune the model behavior explicitly for market performance.
Unlike traditional fine-tuning methods based on supervised learning, RL approaches typically have to handle sparse reward signals, which can pose significant challenges when starting from randomly initialized weights. Hence, pretraining the model on synthetic data provides a beneficial starting point near a performance optimum, making RL-based alignment more feasible. Such approaches have demonstrated success in fields including language and generative image modeling.

One could also argue that this approach still does not fully resolve the black-box nature inherent in NNs. To address this, a finance-informed NN approach, analogous to those used in physics but adapted to finance, could be employed. In this method, the NN is tasked with modeling the coefficients of a parameterized equation. Specifically, if the underlying dynamics are known in form but their coefficients and constants remain uncertain, the network can infer these parameters directly from data. This hybrid approach constrains the network's behavior, reducing its hypothesis space to functions that are consistent with established financial principles, and thus significantly enhances predictability and interpretability.

Finally, in future work, the Feller condition could be imposed not only on the training data but also on the model output. Local interpretability could be extended by trying to explain, for each prediction, the attributions for the individual model parameters v_0, ρ, σ, θ, and κ instead of the output ψ as a whole, and the calibration could be performed on real market data, as mentioned earlier.

CRediT authorship contribution statement

Damiano Brigo: Writing – review & editing, Writing – original draft, Validation, Supervision, Software, Resources, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Xiaoshan Huang: Writing – review & editing, Writing – original draft, Validation, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Andrea Pallavicini: Writing – review & editing, Writing – original draft, Validation, Software, Resources, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Haitz Saez de Ocariz Borde: Writing – review & editing, Writing – original draft, Validation, Software, Resources, Methodology, Investigation, Formal analysis, Data curation, Conceptualization.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. None of the authors are editors or have financial interests or personal relationships that may be considered potential competing interests.

Appendix A Details on local interpretability methods

In this section, we describe the local interpretability methods in greater detail and highlight the particular choices made in their implementation.

A.1 LIME

LIME is a technique that explains the predictions of any classifier or regressor by learning an interpretable model locally around the prediction. It was first presented by Ribeiro et al. (2016). It is an additive feature attribution method, since it uses a local linear explanation model consistent with Eq. (1). Here, the simplified inputs x′ are considered interpretable inputs. The mapping function h_x maps a binary vector of interpretable inputs x′ into the original input space, i.e., x = h_x(x′). Different input spaces use different mapping functions.
Mathematically, LIME aims to minimize the objective function

(A1) ξ = arg min_{g ∈ G} L(f, g, π_{x′}) + Ω(g),

where g is the explanation model for the instance x′ that minimizes the loss function L, and G is the family of possible explanations. The loss L measures how close the explanation is to the prediction of the original model f. Ω penalizes the complexity of g, so Ω(g) represents the model complexity in the minimization. The local kernel π_{x′} provides the weights used in the loss L over a set of samples in the simplified input space and determines how large the neighborhood around the instance x′ is. The explanation model g here satisfies Eq. (1), and the loss L is taken to be a squared loss. Thus, the objective function (A1) can be solved by penalized weighted linear regression.

A.2 DeepLIFT

DeepLIFT is a method that explains the predictions of deep-learning NNs recursively. It was first presented in Shrikumar et al. (2016, 2017). For each input x_i, the attribution C_{Δx_i Δy} is computed by comparing the effect of that input at a reference value with its effect at its original value. In this case, the mapping function h_x converts the binary vector x′ into the original input space, x = h_x(x′), with 1 meaning that the original value is taken and 0 meaning that the reference value is taken instead. The authors specify the reference value as a typical, uninformative background value for the feature. In addition, a summation-to-delta property should be satisfied in DeepLIFT:

Σ_{i=1}^{n} C_{Δx_i Δt} = Δt,

where t = f(x) is the model output and Δt = t − t_0 = f(x) − f(a) is the difference from the output at the reference input a. If we set φ_i = C_{Δx_i Δt}, then the explanation model has the form of equation (1).

Furthermore, in the context of our study, DeepLIFT was implemented with the modified chain rule proposed by Ancona et al. (2018). Mathematically, we define z_{ji} = w_{ji}^{(l+1, l)} x_i^{(l)} to be the weighted activation of neuron i of layer l onto neuron j in the next layer, and b_j is the additive bias of unit j. Each attribution of unit i indicates the relative effect of the unit activated at the original input x compared with the activation at some reference input (baseline) x̄. Reference values z̄_{ji} for all hidden units are obtained by running a forward pass through the NN with input x̄ and recording the activation of each unit. The baseline x̄ is to be determined by the user and is often chosen to be zero. The attribution of input i for a target neuron c is then φ_i^c(x) = r_i^{(1)}, where the algorithm starts at the output layer L with

r_i^{(L)} = S_i(x) − S_i(x̄) if unit i is the target unit of interest, and 0 otherwise,

and the Rescale rule propagating attributions from layer l + 1 to layer l is defined as

r_i^{(l)} = Σ_j [ (z_{ji} − z̄_{ji}) / ( Σ_{i′} z_{ji′} − Σ_{i′} z̄_{ji′} ) ] r_j^{(l+1)}.

The modified chain rule is adopted, and the original RevealCancel rule from the study of Shrikumar et al. (2017) is not considered, for implementation reasons.

A.3 LRP

LRP is closely related to DeepLIFT. It was first presented by Bach et al. (2015). If we fix the reference activations of all neurons in DeepLIFT to the value 0, DeepLIFT reduces to LRP. Hence, for the mapping function x = h_x(x′), a 1 in the binary vector x′ means that the original value of an input is taken, and a 0 means that a value of 0 is taken. LRP also satisfies the additive feature attribution structure given in Eq. (1).
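To make the backward propagation used in A.2 and A.3 concrete, the following toy sketch applies the Rescale rule to a single dense layer with a ReLU nonlinearity and a zero baseline (the choice under which DeepLIFT coincides with LRP, up to the stabilizer ϵ). The weights and inputs are random placeholders, and the upper-layer attribution is taken to be the output change itself; this is an illustration of the propagation formula, not the DeepExplain implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)   # layer l -> l+1 (3 units, 4 inputs)
x = rng.normal(size=4)                               # original input
x_bar = np.zeros(4)                                  # zero reference input

relu = lambda u: np.maximum(u, 0.0)
out, out_bar = relu(W @ x + b), relu(W @ x_bar + b)

# Attribution assigned to the upper units: their output change w.r.t. the baseline.
r_upper = out - out_bar

# Rescale rule: split each upper attribution among the lower units in proportion
# to the change in the weighted activations z_ji = W_ji * x_i.
z, z_bar = W * x, W * x_bar                          # shape (3, 4)
denom = (z - z_bar).sum(axis=1, keepdims=True)
r_lower = (((z - z_bar) / denom) * r_upper[:, None]).sum(axis=0)

# Summation-to-delta holds for this layer: the lower attributions sum to the output change.
assert np.isclose(r_lower.sum(), (out - out_bar).sum())
```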
B SHAP force plots and clustering values

Shapley values can be visualized as “forces”. Each feature value can be thought of as a force that either increases or decreases the prediction. The prediction starts from the average of all predictions, which is taken as the baseline and may also be called the base value. Each Shapley value is then represented as an arrow in the plot, which pushes the model output either up or down. Positive values, which push the prediction up, are shown in red, and negative values, which push the prediction down, are shown in blue. Note that the terms “positive” and “negative” here do not mean that a Shapley value is “good” or “bad” for the prediction; they simply move the prediction to the right or to the left of the base value. The red and blue features are stacked on the edges of the force plot, showing their values on hovering, in proportion to the extent to which the features influence the final output value. The wider the arrow representing a Shapley value in the force plot, the larger its influence on the final prediction. All the forces balance each other at the actual prediction, as described by Molnar (2019). This type of representation may help to build better intuition for the results obtained in Figs. 14 and 15.

Figs. B1 and B2 display two sample force plots for the FCNN and the CNN, respectively. These force plots were generated using the first observation of the 1,500 samples in the test set and the model output v_0. Let us explain how to interpret Fig. B1. On the one hand, the base value lies between 0.5 and 0.55 and is indicated by a vertical grey line. On the other hand, the model output value is 0.61, which is highlighted in bold, and its position on the plot is also indicated by a vertical grey line. As expected, the base value and the model output do not coincide, although they are relatively close. The most relevant Shapley values are highlighted at the bottom of the force plot. We can clearly see that the red, or “positive”, Shapley values outweigh the blue, or “negative”, ones; therefore, the model output value lies to the right of the base value. A similar behavior can be observed in Fig. B2: the red Shapley values outweigh the blue ones, and the model output is moved to the right of the base value.

Fig. B1 FCNN force plot for v_0 (first observation).
Fig. B2 CNN force plot for v_0 (first observation).

We must mind the scale in Fig. B2, since there is a larger offset between the model output value and the base value in Fig. B2 than in Fig. B1. The difference in scale between Figs. B1 and B2 can be attributed to the fact that the FCNN mainly focuses on a few inputs. Hence, the Shapley values of the least relevant inputs have a small influence on the final model output value; they are represented by short arrows, which results in a more compact force plot. In contrast, in Fig. B2, the importance of each Shapley value is comparable across all inputs. This can be attributed to the fact that the CNN applies convolution and maxpooling to the original input. These operations have an “averaging” effect on the input information and importance, which makes the NN struggle to focus on the most relevant features (and also makes the CNN perform worse). Therefore, the Shapley values in the force plot are represented by longer arrows, and the overall force plot is wider for the CNN.
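A minimal sketch of how such force plots can be generated with the SHAP package is shown below; `explainer` and `shap_values` are assumed to be those of the Deep SHAP sketch in Section 2.4, and `X_test` is a placeholder for the 1,500 test instances. Stacking the per-observation force plots gives the clustering view discussed next.

```python
import shap

shap.initjs()  # enables the interactive force-plot rendering in notebooks

# Force plot for a single prediction (first test observation), output 0 = v0,
# analogous to Figs. B1 and B2.
shap.force_plot(explainer.expected_value[0], shap_values[0][0, :], X_test[0, :])

# Stacking the force plots of all observations gives the clustering view
# analogous to Figs. B3 and B4.
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
```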
Figs. B3 and B4 are SHAP clustering plots. They correspond to creating force plots for all 1,500 observations and stacking them vertically in a single plot. The x-axis in Fig. B1 is equivalent to the y-axis in Fig. B3, and the x-axis in Fig. B3 is the observation number. We can clearly see that, as previously mentioned, the force plots for the CNN are notably wider. The force plots of the other model parameters ρ, σ, θ, and κ can be visualized using the same method.

Fig. B3 FCNN SHAP clustering for v_0 (all data).
Fig. B4 CNN SHAP clustering for v_0 (all data).

C Additional results using a larger data set

We also report the results obtained using a data set of 10^5 data points in Figs. C1–C6. The calibration results improve, but at a high computational cost.

Fig. C1 FCNN prediction errors for the large data set.
Fig. C2 CNN prediction errors for the large data set.
Fig. C3 FCNN SHAP values and the feature importance of each model parameter for the large data set.
Fig. C4 CNN SHAP values and the feature importance of each model parameter for the large data set.
Fig. C5 FCNN SHAP values, overall feature importance, and the heat map for the large data set.
Fig. C6 CNN SHAP values, overall feature importance, and the heat map for the large data set.
REFERENCES:
1.
2. BACH S (2015)
3. BAYER C (2024)
4.
5. BELLOTTI A (2021)
6. BLOCH D (2019)
7.
8.
9. CHAKRABORTY S (2017)
10.
11. GATHERAL J (2006)
12.
13. HESTON S (1993)
14. HORVATH B (2019)
15. HORVATH B (2021)
16. KESSY A (2018)
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28. SHAPLEY L (1953)
29.
30.
31.
32.
33.
34.
|
10.1016_j.iswcr.2017.05.004.txt
|
TITLE: Land use and land cover changes and Soil erosion in Yezat Watershed, North Western Ethiopia
AUTHORS:
- Tadesse, Lemlem
- Suryabhagavan, K.V.
- Sridhar, G.
- Legesse, Gizachew
ABSTRACT:
Soil erosion affects land qualities and water resources. This problem is severe in Ethiopia due to its topographic features. The present research aimed to estimate spatiotemporal changes in land-use/land-cover pattern and soil erosion in the Yezat watershed in Ethiopia. The study was carried out using Landsat imageries of 2001, 2010 and 2015. Images were classified into categories using supervised classification with the maximum likelihood algorithm. They were also classified into different biomass levels using Normalized Difference Vegetation Index (NDVI) analysis. Revised Universal Soil Loss Equation modeling was applied in a GIS environment to quantify the potential soil erosion risk. The areas under grassland, woodland and homesteads increased by 610.69 ha (4%), 101.69 ha (0.67%) and 126.6 ha (0.83%) during 2001–2015. The extent of cultivated land and shrub/bushland was reduced by 323.43 ha (0.02%) and 515.44 ha (3.41%), respectively, during the same period. The vegetation cover in the watershed decreased by 91% during 2001–2010 and increased by 88% during 2010–2015. The increase in NDVI values indicates better ground cover due to the implementation of an integrated watershed development program in the region. The estimated annual soil losses were 7.2 t ha−1 yr−1 in 2001, 7.7 t ha−1 yr−1 in 2010 and 4.8 t ha−1 yr−1 in 2015. Management interventions are necessary to improve the status and utilization of watershed resources in response to sustainable land management practices for the sustainable livelihood of the local people.
BODY:
1 Introduction

Environmental problems are alarming humanity all over the world. Their effects on ecosystem services challenge conservation, management and rehabilitation activities ( Ayele, Suryabhagavan, & Sathishkumar, 2014; Haregeweyn et al., 2015; Zewdu, Suryabhagavan, & Balakrishnan, 2016 ). Land degradation and the associated decline in the productive potential of agricultural lands are threatening the economic and social well-being of present and future generations ( Berhanu & Suryabhagavan, 2014; Haregeweyn, Berhe, & Tsunekawa, 2012; Kouli, Soupios, & Vallianatos, 2009 ). Land degradation is one of the major and most widespread environmental threats that the planet has been facing for a long time ( Ganasri & Ramesh, 2016; Krishna Bahadur, 2009; Rawat, Mishra, & Bhattacharyya, 2016; Xu, Xu, & Meng, 2012 ). Soil erosion negatively affects soil quality, decreasing agricultural efficiency and water retention, and contributes to flooding, debris flows and habitat destruction as a whole ( Kidane & Alemu, 2015; Park, Oh, Jeon, Jung, & Choi, 2011 ). In order to secure their livelihoods, to address economic stress and to accelerate development, people in developing countries utilize land and soil resources in an unsustainable way, as evidenced by overgrazing, destruction of forests for urban expansion, highly intensive and unscientific agricultural activities, and the resulting improper land-use/land-cover changes ( de Meyer, Poesen, Isabirye, Deckers, & Raes, 2011 ). According to Hurni (1985b), degradation and loss of soil resulting from soil erosion was estimated at about 20 t per hectare in Ethiopia, i.e., about 1 mm of soil depth per year. Ethiopia loses about 1.9 billion metric tons of fertile soil from the highlands every year, and the degradation of land through soil erosion is increasing at a high rate ( Fitsum, Pender, & Nega, 1999; Hurni, 1989 ). Similarly, as reported by the Ethiopian highlands reclamation study, soil erosion was forecast to cost the country 1.9 billion USD between 1985 and 2010 ( FAO, 1986 ). According to Phillips (1989, as cited in Maria, Pantelis, & Filippos, 2009 ), the off-site effects of erosion, such as reservoir sedimentation and pollution of water resources, are more costly and severe than the on-site effects on land resources. There are two main approaches to studying soil erosion, depending on the spatial and temporal scales involved ( Xu et al., 2012 ). One is direct, on-site field measurement; the other is off-site assessment through modeling, which can be applied to reveal potential patterns of soil erosion and to evaluate the soil erosion process over time at larger scales. In order to build a dynamic model, as many criteria as possible that influence soil erosion should be taken into consideration. The Universal Soil Loss Equation (USLE) was developed by Wischmeier and Smith (1978). The Revised Universal Soil Loss Equation (RUSLE) is a widely used soil-erosion intensity evaluation model, modified and improved from the USLE developed by Wischmeier (1976). The RUSLE was developed to estimate the annual soil loss per unit area based on erosion factors. It provides an estimate of the severity of erosion, as well as numerical results that can validate the benefits of planned erosion control measures in areas at risk of soil erosion.
For more than twenty years, multi-temporal, high-resolution, remotely sensed data and GIS have been used extensively to monitor environmental changes, specifically to assess soil erosion rates and to map land-cover changes at local, regional and global scales ( Ai, Fang, Zhang, & Shi, 2013 ; Checkol, 2014 ; Eweg, Van Lammeren, & Woldu, 1998 ; Gebreselassie, 1996 ; Girma, 2005 ; Ringo, 1999 ). Geographical information system technology is appropriate for this purpose due to its powerful multi-criteria processing and calculation capability ( Chretien, King, Jamagne, & Hardy, 1994 ). Moreover, highly significant spatio-temporal phenomena or changing patterns are revealed by applying GIS and remote sensing based soil erosion and land degradation modeling ( Fistikoglu & Harmancioglu, 2002; Gelagay & Minale, 2016; Hoyos, 2005 ). Thus, the evaluation and prediction of hazards caused by soil erosion become easier and faster. The present study aimed to detect the spatiotemporal changes in the status and utilization of watershed resources in response to sustainable land management interventions and to assess the extent and rate of soil erosion, which is a major driving force of land degradation.

2 Materials and methods

2.1 Study area

This study was conducted in the Yezat watershed, West Gojam Zone of the Amhara Regional State of Ethiopia. It falls in two districts, viz., Gonji Kolla and Yilmana Densa. The area is situated between 37°31'32"–37°31'32"E longitude and 11°08'22"–11°09'45"N latitude, covering a total area of about 15,085 ha ( Fig. 1 ), around 430 km from Addis Ababa and 70 km south of Lake Tana and Bahir Dar Town, the capital of the Amhara Regional State. The altitude of the study area ranges between 1485 and 3207 m, and the slope gradient of the watershed ranges from 4 to 66.5°. Higher elevations are located in the southwestern and eastern parts of the watershed ( MoARD, 2006 ). According to the 2007 National Population and Housing Census, the two districts have a total population of 321,508, of which 160,709 are men and 160,829 are women. About 91.9% of the area is predominantly used for crop production, and the livelihood of the people depends on mixed farming ( Checkol, 2014 ). Based on the agro-climatic classification of Ethiopia ( Hurni, 1986 ), the majority of the study watershed falls in the Woina Dega agro-climatic zone (traditional climate classification), which corresponds to dry sub-humid conditions. Heavy rainfall occurs in the area during June–October. Based on long-term climatic data from the Adet meteorological station near the study area, the average annual rainfall was 1508 mm, and the mean maximum and minimum temperatures were 29.6 °C and 12.9 °C, respectively. The highest mean monthly temperature was recorded in March and the lowest during December–January.

2.2 Soil and vegetation

According to the FAO-WRB (2006) soil map unit classification system, vertisols are the predominant soil type, covering 7166.2 ha on moderately gentle slopes and in the very deep soils of the study area. This soil class is characterized by heavy black clay, mostly waterlogged during the rainy season, and has high cation exchange capacity and base saturation in both surface and subsurface horizons. The rest of the physiographic units are dominated by cambisols, regosols, luvisols, and leptosols. Moderately deep to very deep major soil types dominate the study area. The main land covers in the study area are settlements surrounded by eucalyptus trees, cultivated land, grassland, woodland, and shrub/bushland.
The vegetation consists of evergreen and semi-evergreen small trees and, occasionally, larger trees. Depending on the landscape and topography of the watershed, different types of indigenous vegetation exist in the area. Major crops grown in the area are wheat, barley and sorghum. A few scattered trees, such as Acacia sp., Cordia africana and Croton sp., are found in the farmland, whereas Eucalyptus camaldulensis is grown around the homesteads. According to information from farmers, the cultivated land has received urea and diammonium phosphate fertilizers for most crops. Farmers usually use crop residues for livestock feed; in addition, animals are allowed to graze on the cultivated land after harvest.

2.3 Method

2.3.1 Data acquisition and software

Time series Landsat (TM, ETM+ and OLI) satellite data of 2001, 2010 and 2015, with path 169 and row 052, were used for developing land-use/land-cover maps of the study area and to determine the C and P factors used in the RUSLE model. Field observations were also conducted to fix training sites, to check ground truth, and to verify the final output of the maps. Shuttle Radar Topographic Mission (SRTM) digital elevation data of 30 m resolution were used in this study. Digital image data files were downloaded as zipped files from the United States Geological Survey (USGS). All satellite imageries were geometrically rectified with the help of a topographic map (1:50,000) of the study area obtained from the Ethiopian Mapping Authority. This map was used to digitize the contours and develop the Digital Elevation Model (DEM), in order to determine the S and L factors in the RUSLE. The soil map of the study area was obtained from the soil database compiled by the Food and Agricultural Organization, collected from the Ministry of Agriculture, Federal Government of Ethiopia, and the soil types were classified to obtain K-factor values. These were adapted to Ethiopia based on the FAO Soil Classification System ( Hurni, 1985a ). For determining the amount of soil loss in the study area, the relatively simple RUSLE soil erosion model was used, combining remotely sensed data with further spatial information in a GIS environment to assess the extent and rate of annual soil loss. Rainfall data of the study area for the years 1980–2013, obtained from the National Meteorological Agency (NMA) of Ethiopia, were used to determine the R factor in the RUSLE. All factors in the RUSLE were derived independently. The RUSLE was modified to suit Ethiopian Highlands conditions ( Hurni, 1985a ) and adapted for the present work to determine values for rainfall erosivity, soil erodibility, slope gradient, slope length, land cover and conservation practice. Values of the rainfall erosivity factor, slope length factor, slope gradient factor, land cover factor and conservation practice factor were taken from the empirical work of Hurni (1985a), who used trial plots in various parts of the Ethiopian highlands, whereas the quantitative soil erodibility factor was based on FAO Soil Degradation Assessment Methodology adjustments to the RUSLE model. The remote sensing and GIS software used in this study were ERDAS Imagine®2014 for image classification, ENVI 5.1 for image processing tasks and NDVI analysis, and ArcGIS®10.2 for GIS analysis. Stream extraction, sink filling and flow accumulation generation were performed using the Arc Hydro 10.2 plug-in for ArcGIS. The DEM for the study area was analyzed and processed using ArcGIS®10.2 software to carry out the RUSLE calculations.
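The RUSLE soil loss estimate is ultimately obtained by a cell-by-cell overlay of the factor grids (the formal equation is given in Section 2.3.4). The following minimal numpy sketch illustrates that overlay with small hypothetical factor rasters; the actual analysis in this study was carried out on full watershed grids in ArcGIS.

```python
import numpy as np

# Hypothetical 3x3 factor rasters standing in for the watershed-wide grids
# derived from rainfall, soil, DEM and land-cover data.
R  = np.full((3, 3), 580.0)                 # rainfall erosivity
K  = np.array([[0.15, 0.15, 0.20],
               [0.20, 0.25, 0.25],
               [0.15, 0.20, 0.25]])         # soil erodibility
LS = np.array([[0.5, 1.2, 2.0],
               [0.8, 1.5, 3.5],
               [0.4, 0.9, 1.8]])            # slope length and steepness
C  = np.full((3, 3), 0.15)                  # cover management
P  = np.full((3, 3), 0.9)                   # conservation practice

# RUSLE overlay: A = R * K * LS * C * P, evaluated cell by cell.
A = R * K * LS * C * P
mean_annual_loss = A.mean()
```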
2.3.2 Data processing and analysis

Imageries of bands 4, 3, and 2 of Landsat TM and Landsat ETM+ and bands 5, 4, and 3 of Landsat-8 were used in image enhancement to identify changes in land-use/land-cover features in the study area. All satellite images were in TIFF format; they were exported to img format in ERDAS Imagine®2014 software using the layer stack function. These images were georeferenced to the same map projection, World Geodetic System 1984 Zone 37 N, and all satellite images were subset to cover only the study area. In order to interpret and discriminate the surface features clearly, all satellite images were composed using Red Green Blue (RGB) color composition. False Color Composites (FCC) of the satellite imageries were prepared for the years 2001 and 2010 using band 4 (NIR), band 3 (red) and band 2 (green), and for the year 2015 (Landsat 8) using band 5 (NIR), band 4 (red) and band 3 (green). Descriptions of the land-cover categories of the study watershed are shown in Table 1.

The Normalized Difference Vegetation Index (NDVI) was used in this study to gain information about the seasonal growth and condition of vegetation and vegetation dynamics, and as one of the input parameters for estimating the erosion potential using the RUSLE model. This index was also used to differentiate vegetation from other land-cover classes. It was estimated by dividing the difference between the near-infrared and red (visible wavelength) reflectances by their sum:

(1) NDVI = (NIR − R) / (NIR + R),

where NIR is the reflectance value in the near-infrared band and R is the reflectance value in the visible red band. For Landsat TM and ETM+, the formula becomes

(2) NDVI = (B4 − B3) / (B4 + B3),

where B4 is band 4 (0.76–0.90 µm), the infrared band, and B3 is band 3 (0.63–0.69 µm), the red band. For Landsat 8 (OLI-TIRS), the formula is

(3) NDVI = (B5 − B4) / (B5 + B4),

where B5 is band 5 (0.85–0.88 µm), the infrared band, and B4 is band 4 (0.64–0.67 µm), the red band.

2.3.3 Methods of woody biomass estimation

For woody type mapping, the land-use/land-cover map of the study area was used; it was prepared from the Landsat images. During the land-use/land-cover classification, ground truth data and Google Earth satellite data were used as reference. Preliminary interpretation of the satellite data was done visually on false color composites in order to stratify woody types ( IPCC, 2003; Rosenqvist, Milne, Lucas, Inhofe, & Dobson, 2003 ). The possible separability of various land-use/land-cover types, with special reference to vegetation cover, was studied using ground-collected land-use/land-cover data for the study area. The woody biomass available from trees was estimated using the following formula:

(4) Growing stock (per ha) = Area under plantation or canopy (ha) × productivity (m³/ha per year).

Productivity estimates made by FAO for indicative forest plantation yields, by species and country, for hardwood species grown in the tropical and subtropical zone were used ( FAO, 1997 ). The productivity of Eucalyptus sp. in Ethiopia is 8.0–12.5 m³/ha/year. According to this source compiled by FAO, the most dominant species grown in the study area is a Eucalyptus species; therefore, an average of 10.25 m³/ha/year was used to compute the sustainable yield. Landsat satellite data were used to estimate the size of the woody stands.
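A minimal numpy sketch of the NDVI computation of Eqs. (1)–(3) and the growing-stock calculation of Eq. (4) is given below. The band arrays and the plantation area are hypothetical placeholders; only the average Eucalyptus productivity of 10.25 m³/ha/year is taken from the text.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R), computed cell by cell (Eqs. (1)-(3))."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # small constant avoids division by zero

# Hypothetical reflectance rasters standing in for Landsat 8 band 5 (NIR) and band 4 (red).
b5 = np.array([[0.35, 0.40], [0.28, 0.33]])
b4 = np.array([[0.10, 0.08], [0.12, 0.09]])
ndvi_2015 = ndvi(b5, b4)

# Growing stock following Eq. (4), using the 10.25 m3/ha/year average productivity;
# the plantation area is a placeholder value.
plantation_area_ha = 250.0
growing_stock = plantation_area_ha * 10.25   # m3 per year
```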
Landsat satellite data were used to estimate the size of the woody stands. 2.3.4 Determining the RUSLE model and GIS parameters The following five parameters were used in the RUSLE model to estimate soil loss ( Renard, Foster, Weesies, McCool, & Yoder, 1997 ): rainfall erosivity (R), soil erodibility (K), slope length and steepness factor (LS), cover management factor (C) and conservation practice factor (P). Referring to the RUSLE model, the relationship is expressed as Eq. (5) : (5) A = R × K × LS × C × P, where A is the computed spatial annual soil loss (t ha−1 y−1); R is the rainfall erosivity factor (MJ mm h−1 ha−1 y−1); K is the soil erodibility factor (t ha−1 MJ−1 mm−1); LS is the slope length and steepness factor (dimensionless); C is the land surface cover management factor (dimensionless); and P is the erosion control or conservation practice factor (dimensionless). To identify the spatial pattern of potential soil erosion in the study area, all the above erosion factors were surveyed and calculated following the recommendations of Hurni (1985b) . The framework of the study is schematically shown in Fig. 2 . 3 Results 3.1 Land-use/land-cover change detection The land-use/land-cover maps were classified into five classes, namely cultivated land, woodland, grassland, shrub/bushland and homesteads, with high classification accuracy (overall classification accuracy >85% and overall kappa coefficient >80%) for each period (2001, 2010 and 2015). Land-use/land-cover classification maps for the years 2001, 2010 and 2015 are given in Fig. 3 . The spatial distribution of land-use/land-cover categories of the study area during the period 2001, 2010 and 2015 shows that cultivated land, woodland and homestead areas have increased, while the extent of shrub/bushland declined continuously from 2001 to 2015. A comparison of the different land-use/land-covers during these years is shown in Table 2 . As per the land-use/land-cover classification map of 2001, the watershed was covered with shrub/bushland (37.3%), while cultivated land, woodland, grassland and homesteads covered only 32.4%, 15.4%, 10% and 4.9%, respectively. By the year 2010, the extents of cultivated land and homesteads had increased to 37% and 7.12%, respectively, while those of shrub/bushland, woodland and grassland had decreased to 36.7%, 14.8% and 4.3%, respectively. By the year 2015, the extent of woodland had increased to 15.5%, followed by grassland (8.3%) and homesteads (7.96%). The extent of shrub/bushland had decreased to 33.35% by the year 2015. 3.2 Trends of land-use/land-cover changes From the results of classification for 2001–2010 ( Table 1 ), grassland and woodland areas decreased. In particular, the extent of grassland decreased by 865.19 ha (5.73%) during this nine-year period. Areas under woodland and shrub/bushland decreased by 93.58 ha (0.62%) and 81.26 ha (0.51%), respectively. Conversely, the extents of cultivated land and homesteads increased by 702.97 ha (4.66%) and 337 ha (2.23%), respectively. Land-use/land-cover of the study area for the period 2010–2015 showed that the extents of grassland, woodland and homesteads increased by 610.69 ha (4%), 101.69 ha (0.67%) and 126.6 ha (0.83%), while the extents of cultivated land and shrub/bushland decreased by 323.43 ha (0.02%) and 515.44 ha (3.41%), respectively.
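The cell-by-cell overlay in Eq. (5) can be sketched as follows, assuming the five factor grids have already been prepared and co-registered in the GIS workflow described above; the factor values shown are placeholders for illustration, not the calibrated values of the study.

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """A = R * K * LS * C * P, evaluated per grid cell (t ha^-1 yr^-1)."""
    return R * K * LS * C * P

# Placeholder 2x2 factor grids (illustrative values only)
R  = np.full((2, 2), 550.0)                      # rainfall erosivity
K  = np.array([[0.15, 0.20], [0.25, 0.30]])      # soil erodibility
LS = np.array([[0.5, 1.2], [3.4, 7.8]])          # slope length-steepness
C  = np.array([[0.10, 0.15], [0.05, 0.20]])      # cover management
P  = np.array([[1.0, 0.8], [0.9, 1.0]])          # conservation practice
A = rusle_soil_loss(R, K, LS, C, P)
print(A.round(1))
```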
The change detection during 2001–2015 showed that the area coverage of cultivated land, woodland and homesteads increased by 379.54 ha, 8.11 ha (2.51%) and 463.62 ha (0.05%), while grassland and shrub/bushland slightly decreased by 255 ha (1.69%) and 596.65 ha (3.4%), respectively. The changes of land-use/land-cover areas during 2001, 2010 and 2015 are shown in Table 3 and Fig. 4 . The overall accuracy of the land-use/land-cover image classification was 85% for 2001, 91% for 2010 and 93.2% for 2015. 3.3 Estimation of spatial distribution of woody biomass production The estimated woody biomass production for 2001, 2010 and 2015 was 5844, 5706 and 5972 t/ha/yr, respectively ( Fig. 5 ). The estimated woody biomass was lower during the period 2001–2010. In the year 2015, a significant increase in the woody biomass area was observed. This increase may be due to the interventions (transformation of wastelands to plantations, adoption of soil and water conservation practices, and better utilization of surface water and groundwater). 3.4 Vegetation cover The ranges of NDVI values for 2001, 2010 and 2015 were 0.025–0.75, −0.028–0.67 and −0.03–0.76, respectively. The NDVI maps ( Fig. 6 ) indicate that vegetation cover was lower during the period 2001–2010, whereas a significant increase was observed during the period 2010–2015. This increase may be due to the adoption of soil and water conservation practices and better utilization of surface water and groundwater. It was also noticed that the NDVI values were higher in the central part of the watershed than in the south and east during the study periods. Such an indication could be of interest in understanding the hydrology of the area. The NDVI value indicates the absence or presence of groundwater, assuming that vegetation responds to the presence of water in the soil. Areas with denser vegetation, i.e. higher NDVI, may indicate areas with higher rainfall and the presence of groundwater. In the northwest and west parts of the watershed, low NDVI values indicated limited-groundwater or low-rainfall zones. 3.5 Soil loss rates The predicted annual soil loss maps of the study area for 2001, 2010 and 2015 are given in Fig. 7 . For the year 2001, annual soil loss ranged from 0 in the plain area to 201.4 metric tons ha−1 yr−1 on much of the steeper slopes of the banks of the tributaries in the watershed. The mean annual soil loss for the entire watershed was estimated at 7.2 metric tons ha−1 yr−1. For the year 2010, annual soil loss ranged from 0 in the plain area of the study watershed to 152.2 metric tons ha−1 yr−1 on much of the steeper slopes of the banks of tributaries. The mean annual soil loss for the entire watershed was estimated at 7.7 metric tons ha−1 yr−1 for 2010. For the year 2015, annual soil loss ranged from 0 in the plain area to 184.9 metric tons ha−1 yr−1 on much of the steeper slopes of the banks of the tributaries in the study area. The mean annual soil loss for the entire watershed was estimated at 4.8 metric tons ha−1 yr−1. The results for the year 2001 presented in Fig. 8 show that about 60.2% of the study area was of low potential erosion risk, while the rest of the area was under moderate to high erosion risk. In terms of actual soil erosion risk, 15.7% of the area was of moderate risk, 20.5% was of high risk and 3.7% was of very high risk.
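A small sketch of how a soil loss grid can be reclassified into the risk classes reported above is shown below; the class breaks used here are hypothetical placeholders, since the actual thresholds are those listed in Table 4.

```python
import numpy as np

# Hypothetical class breaks (t ha^-1 yr^-1); the actual thresholds are given in Table 4.
BREAKS = [5.0, 15.0, 50.0]          # low | moderate | high | very high
LABELS = ["low", "moderate", "high", "very high"]

def erosion_risk_shares(soil_loss):
    """Return the share of cells falling into each erosion-risk class."""
    classes = np.digitize(soil_loss, BREAKS)
    return {LABELS[i]: float(np.mean(classes == i)) for i in range(len(LABELS))}

soil_loss = np.array([[0.0, 3.2], [18.5, 120.0]])   # toy grid of annual soil loss
print(erosion_risk_shares(soil_loss))
```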
In the year 2010, 57.4% of the area was of low potential erosion risk, 22.7% was of moderate potential erosion risk, 15.8% was of high potential erosion risk and 4.1% of the area was of very high potential erosion risk. There was an increase in very high and moderate soil erosion risk compared with the year 2001. The result for 2015 showed that 70% of the area was under low potential erosion risk, which was much higher than that of 2010, with 17.2% of the area at moderate erosion risk, 11% at high risk and only 1.8% at very high risk of soil erosion. The threshold for each of the risk levels is presented in Table 4 . 3.6 Soil erosion trends related to land-cover changes Soil erosion trends in the study area were assessed in terms of the NDVI. As illustrated in Fig. 9 , mean NDVI values decreased from 0.25 to 0.15 during the years 2001–2010 and increased from 0.15 to 0.23 during the years 2010–2015. The histogram shows a comparison of the NDVI increases and decreases among the three target years. Increasing NDVI indicates better ground-cover vegetation condition. Vegetation cover decreased over 91.1% of the land area in the study area during the years 2001–2010. Throughout the watershed, an increase was observed over only 8.9% of the area, mainly in the central part of the watershed. From 2010 to 2015, 88% of the land area changed to an increasing trend. An increase in NDVI was observed across the watershed. However, 12% of the land area showed a decreasing trend. Comparing the years 2001 and 2015, 36% of the land area had an increasing trend in vegetation cover. This indicated that most of the central part of the study area gained more vegetation cover during the years 2010–2015. The soil erosion changes and trends explored are given in Fig. 10 . The estimated soil erosion increased during 2001–2010 and decreased during 2010–2015. The NDVI value in the year 2010 was much lower than in the years 2001 and 2015. This indicates that soil erosion is highly sensitive to changes in vegetation cover. There were increasing and decreasing trends in the mean annual soil loss among the years 2001, 2010 and 2015 ( Fig. 10 ). From 2001 to 2010, the mean annual soil loss increased by 0.5 metric tons ha−1 yr−1, especially in the southwest and eastern parts of the watershed. From 2010 to 2015, there was a general decrease in soil erosion risk of 2.9 metric tons ha−1 yr−1. Areas with higher soil erosion risk were located in the southwestern and eastern parts of the study area. When comparing the years 2001 and 2015, soil loss through erosion had significantly decreased, by 2.4 metric tons ha−1 yr−1. 4 Discussion Many studies ( Baigorria & Romero, 2007 ; Hamelmal, 2005 ; Paul, 1997 ) have demonstrated soil erosion estimation using empirical soil erosion models such as RUSLE integrated with GIS; the same approach was applied here to estimate soil erosion potential and to map the potential erosion zones in the Yezat Watershed. Also, an attempt has been made to study the impact of changes in land use and land cover on the erosion rate. The Ethiopian government has recognized the serious implications of soil erosion, and national programs to mitigate environmental degradation were implemented in the 1970s and 1980s ( MoARD, 2005 ). There was an expansion of the area of cultivated land during the period 2001–2010 in the study area. During this period, sparsely wooded land, grassland and shrubland declined. This was due to human population pressure, which resulted in the expansion of agricultural activities and settlements.
Detection of land-use/land-cover changes for the period 2010–2015 revealed that the extents of grassland, woodland and homesteads increased. Due to the implementation of the watershed management program, a considerable amount of shrub/bushland was transformed into cultivated land and plantation area. These changes led to productive use of the area by adopting suitable treatment measures such as changes in the cropping pattern and in soil and water conservation practices. It was also observed that the area under homesteads increased. This coincides with the increase in human population and the construction of new houses ( Bajocco, De Angelis, Perini, Ferrara, & Salvati, 2012 ; di Gregorio, 2005 ; MoARD, 2005 ; Zhou, Luukkanen, Tokola, & Nieminen, 2008 ). A similar land-use/land-cover study made by Abate (1994) in southern Ethiopia indicated that the influence of land-use/land-cover changes depends on the nature of the land and the management techniques used. The rapid change in the land-use/land-cover of the study area has been driven by factors such as population pressure, expansion of rural towns, overgrazing and recurrent droughts. Marked land-use/land-cover dynamics were also observed in dense forest, wetland, shrubland, and intensively cultivated (irrigated) land ( Fitsum et al., 1999 ). From the estimated maps of the spatial distribution of woody biomass for 2001, 2010 and 2015, the woody biomass decreased considerably during the period 2001–2010. On the other hand, in the year 2015, a significant increase in the woody biomass was observed. This increase may be due to the interventions (transformation of degraded land to plantations, adoption of soil and water conservation practices, and better utilization of surface water and groundwater). According to Kumar, Gupta, Singh, Patil, and Dhadhwal (2011) , a combination of satellite and forest inventory data reduces uncertainties in aboveground biomass estimation. Sheikh, Kumar, Bussman, and Todaria (2011) estimated the carbon storage in India's forest biomass for the years 2003, 2005 and 2007 using secondary growing-stock data and satellite data, and revealed that there had been a continuous decrease in the carbon stock in India's forest biomass since 2003, despite a slight increase in forest cover ( ISFR, 2003, 2005, 2009 ). Lu (2005) conducted a study to estimate the above-ground biomass in the Brazilian Amazon using Landsat TM data. This study showed that the use of Landsat TM imagery for estimating forest above-ground biomass was more successful for successional forests than for mature forests. There was an improvement in the vegetation cover owing to the implementation of various soil and water conservation measures, as reflected in the NDVI images of the present investigation. The rehabilitation of vegetation in many places of the watershed has improved the vegetation cover. Farmers also confirmed during focus group discussions that the vegetation cover had increased and that the changes observed were results of the intervention, i.e. the establishment of enclosures. The major changes in the watershed due to the implementation of the watershed development program are reflected in the development of vegetation cover and the control of soil erosion. Prasannakumar, Vijith, Abinod, and Geetha (2012) showed NDVI to be a useful indicator of land-cover conditions and a reliable input into models of soil dynamics. The estimated rates of soil loss and the spatial patterns are generally realistic and in agreement with results from previous studies.
The average annual soil loss estimated by the USLE for the entire Medego watershed in northern Ethiopia was 9.63 metric tons ha−1 yr−1 ( Tripathi & Raghuwanshi, 2003 ). In the present work, the RUSLE model was applied using an integrated remote sensing and GIS approach in a raster environment to obtain maps for each RUSLE factor. The positive impact of the watershed management in the study area could be explained in terms of reduced soil erosion rates and increased soil moisture availability, which resulted in increased crop production, reduced sedimentation and flooding problems in the lower parts of the watershed, stabilized gullies and river banks, rehabilitation of degraded lands and an improved ecological balance in the area. Similar studies elsewhere in northern Ethiopia also reported the effectiveness of sustained conservation efforts in catchments in controlling soil erosion and in improving the hydrology and land productivity of the area ( Bewket, 2003; Liu et al., 2008; Munro et al., 2008; Nyssen, Getachew, & Nurhussen, 2009; Zhang, Drake, & Wainwright, 2002 ). Improvement of vegetation cover in the watershed decreased the depth to the groundwater, which could be managed and used for irrigation purposes. 5 Conclusion Assessment of the impacts of watershed development programs using satellite data is of paramount importance in order to evaluate the pre- and post-intervention conditions of a watershed and to generate baseline information that helps to monitor and evaluate the real-time situation in the future for different options, given the relatively large geographical coverage and repetitive time-scale coverage of satellite observations. Major changes in the watershed due to the implementation of integrated watershed development programs are reflected in the development of vegetation cover, agricultural land use, reduced soil erosion and the rehabilitation of degraded lands. The improvement in vegetation cover could be attributed to the better soil and water conservation practices implemented through SLM interventions. It is hoped that the findings of this research will contribute to developing future watershed resources management strategies in support of sustainable land management. Acknowledgements The authors wish to thank the School of Earth Sciences, College of Natural and Computational Sciences, Addis Ababa University, Addis Ababa, for providing funds and facilities. Financial support from the International Crops Research Institute for the Semi-Arid Tropics is gratefully acknowledged. We also thank the anonymous reviewers for their valuable comments to improve the manuscript.
REFERENCES:
1. ABATE S (1994)
2. AI L (2013)
3. AYELE K (2014)
4. BAIGORRIA G (2007)
5. BAJOCCO S (2012)
6. BERHANU K (2014)
7. BEWKET W (2003)
8. CHECKOL T (2014)
9. CHRETIEN J (1994)
10. EWEG H (1998)
14. FISTIKOGLU O (2002)
15. FITSUM H (1999)
16. GANASRI B (2016)
17. GEBRESELASSIE E (1996)
18. GELAGAY H (2016)
19. GIRMA U (2005)
20. DIGREGORIO A (2005)
21. HAMELMAL H (2005)
22. HAREGEWEYN N (2012)
23. HAREGEWEYN N (2015)
24. HOYOS N (2005)
25. HURNI H (1985)
27. HURNI H (1986)
29. IPCC (2003)
30. ISFR (2003)
31. ISFR (2005)
32. ISFR (2009)
33. KIDANE D (2015)
34. KOULI M (2009)
35. KRISHNABAHADUR K (2009)
36. KUMAR R (2011)
37. LIU B (2008)
38. LU D (2005)
39. MARIA K (2009)
40. DEMEYER A (2011)
42. MOARD (2006)
43. MUNRO R (2008)
44. NYSSEN J (2009)
45. PARK S (2011)
46. PAUL D (1997)
47. PRASANNAKUMAR V (2012)
48. RAWAT K (2016)
49. RENARD K (1997)
51. ROSENQVIST A (2003)
52. SHEIKH M (2011)
53. TRIPATHI M (2003)
54. WISCHMEIER W (1976)
56. XU L (2012)
57. ZEWDU S (2016)
58. ZHANG X (2002)
59. ZHOU P (2008)
|
10.1016_j.aej.2024.07.101.txt
|
TITLE: Task offloading scheme in Mobile Augmented Reality using hybrid Monte Carlo tree search (HMCTS)
AUTHORS:
- Soundararaj, Anitha Jebamani
- Sathianesan, Godfrey Winster
ABSTRACT:
Mobile Augmented Reality (MAR) applications enhance user experiences by providing realistic information about the current location through mobile devices. However, MAR applications are computationally intensive, leading to high energy consumption and latency issues. To address these challenges, this research presents a Hybrid Monte Carlo Tree Search (HMCTS) based task offloading scheme, combining a genetic algorithm with Monte Carlo tree search for efficient task management. The proposed method uses YoloV7 for object recognition and aims to reduce energy consumption, response time, and migration time. Experimental results demonstrate that the HMCTS approach significantly reduces energy consumption to 1290 kJ, response time to 24 ms, and migration time to 0.52 ms, outperforming existing techniques. These improvements highlight the potential of the HMCTS method for enhancing the performance of MAR applications. Proposed hybrid approach aims to improve the efficiency and effectiveness of task offloading in MAR applications. The HMCTS model dynamically offloads tasks to edge servers, optimizing scheduling time, response time, and energy consumption.
BODY:
1 Introduction The rapid growth of augmented reality and its applications has been visible over the past decade. Specifically, it has gained more attention in the past few years due to improvements in techniques and hardware devices [1] . This attention turned into a crucial breakthrough when intelligent smartphones were used for AR applications. The technical advancements in smartphones support a wide range of AR applications. The computing platform of augmented reality that utilizes smartphones is technically defined as mobile augmented reality (MAR). Physical environments and virtual objects are integrated in MAR systems so that users can interact with the physical world digitally [2] . MAR supports universal access to digital content, providing a seamless transition for users from the real world to the digital environment [3] . Though MAR provides better user experiences, it has a few real-time constraints. As MAR needs to process physical-world information, it requires an intensive multimedia processing procedure. A large amount of data, including video streams, has to be processed by MAR systems to display the virtual layer over the real environment. Smartphones and smart devices have high power drain and thermal dissipation due to multiple applications and limited battery size. To overcome these issues and enhance performance, MAR applications utilize remote servers to process large volumes of data [4] . The computationally intensive tasks are moved to remote servers for further processing. Due to this, the power and computation limitations of MAR systems are rectified. However, this process introduces latency, since transferring all the computationally intensive tasks requires bandwidth and substantial computing resources. Practically, the characteristics of wireless links differ in mobile environments, causing service degradation in MAR applications. Thus, instead of transferring all the computationally intensive tasks to remote servers, edge computing is incorporated in MAR applications, which reduces the latency constraints, bandwidth demands, and resource limitations. The user tasks are offloaded to edge computing for further processing, and the results are retrieved to provide a better experience to users [5] . A major challenge in task offloading to the edge server is the diverse distribution of mobile users. This diverse distribution creates imbalanced workloads, and the imbalance impacts the offloading decisions. Moreover, resource management in edge computing is complex, as edge servers are less powerful than cloud servers. Due to diversity in data size and computation requirements, instead of optimal task offloading, a simple fair offloading scheme may be followed by MAR systems, affecting the user experience while using MAR applications. Numerous task offloading strategies have been presented for edge computing; however, the existing procedures offload user tasks individually to edge servers [6] . A few works considered offloading of multiple dependent tasks but did not consider communication delay, response time, and processing time. The major objective of incorporating edge computing in MAR is to reduce latency and processing time, but this is not considered in the dependent task offloading process, which makes such designs inappropriate for MAR. The training procedure of existing methodologies utilizes a large amount of computing resources and takes more time, which again degrades the performance of MAR applications. 
To address the above challenges, in this research work, a task offloading procedure is presented using a hybrid Monte Carlo tree search algorithm. A genetic algorithm-based local search procedure is combined with Monte Carlo tree search to attain improved offloading performances in MAR applications. The proposed offloading procedure dynamically offloads tasks to the edge network to reduce scheduling time, response time, and further decrease the energy consumption of user devices. The contributions made in this research work are summarized as follows. • Develop a precise MAR application to describe task dependency and formulate a scheduling procedure based on HMCTS. • Implement an object recognition model using YoloV7 to enhance the performance and accuracy of object detection in MAR systems. • Validate the proposed offloading model through experimental results, comparing it with state-of-the-art techniques in terms of migration time, energy consumption, and response time. • By addressing these objectives, the HMCTS approach aims to overcome the limitations of existing task offloading schemes, providing a robust solution for enhancing the performance and user experience of MAR applications. The remaining discussions in the article are arranged in the following order. Section 2 provides a detailed literature analysis of existing offloading procedures. Section 3 provides the details of the proposed hybrid model for task offloading. The experimental results are presented in Section 4 , and the conclusion is presented in Section 5 . 2 Related works The increased use of smartphones and smart devices is ready to adopt mobile augmented reality applications. In addition to gaming, MAR can be used in various domains, and by incorporating computer vision algorithms, an extensive experience can be provided to the users [7] . The task offloading procedure was followed in MAR to offload the task to edge computing. This offloading procedure is mainly considered to reduce energy consumption and latency minimization. The literature analysis in this section considered task offloading schemes that have recently been developed. The analysis summary details the methodologies and experimental findings to gain insight into the offloading procedures. The task offloading procedure presented in [8] analyzes the computational complexity of tasks before offloading them into edge computing. The analysis considered the task offloading application and its requirements in the edge cloud and summarized it before offloading. Due to this analysis, the energy consumption of mobile devices is reduced, and the application's overall performance is improved. The investigation of task offloading is analyzed in [9] for software-defined networks to minimize battery consumption and latency in offloading. The offloading problem is formulated as an integer nonlinear program and solved through task placement and resource allocation procedures. The presented task placement algorithm makes offloading decisions based on the resource requirement of each task, reducing energy consumption and latency compared to uniform task placement and offloading procedures. By offloading tasks to mobile edge computing, the quality of services and computation capacity of mobile devices can be improved. The offloading scheme presented for multimedia applications [10] allocates computing and communication resources in edge servers to minimize execution delay. 
Providing both resources will improve performance and reduce the computation complexity of end-users; thus, improved quality of services can be experienced by users in real-time. The task offloading and resource allocation procedure presented in [11] provided a better quality of experience to end-users using the Lyapunov optimization algorithm. The given model formulated an energy deficit queue to describe energy consumption and performed centralized and distributed offloading to minimize energy consumption and maximize the quality of experience for mobile users. A similar task offloading procedure presented in [12] included an efficient Lyapunov online algorithm to offload tasks in edge computing. The given model utilizes data-catching strategies to analyze data and task contents and then offloads the task with minimum latency. A statistical computation model is presented in [13] to provide guaranteed quality of service in task offloading, reducing energy consumption without sacrificing QoS. The proposed model statistically analyzes the correlation between the task and QoS, using solutions based on Gibbs sampling theory and convex optimization procedures. The better performance of the presented model is evident from the experimental analysis compared to existing methods. The multi-objective optimization model shown in [14] provides a task offloading scheme to reduce latency in a large-scale intensive computing application. The given model considers multiple dependent tasks and analyzes their computational resource requirements before offloading. Due to this, the delay in offloading is reduced, and user offloading benefits are improved compared to existing methods. The server selection and parameter optimization model presented in [15] formulates the offloading problem and provides an optimal solution that minimizes energy consumption and maximizes offloading performance. The presented optimization algorithm initially generates a task priority queue to offload the tasks to mobile edge servers. Further, the task parameters are calculated, and based on completion time, the tasks are redistributed to improve energy efficiency over existing methodologies. The hybrid optimization algorithm-based task offloading procedure presented in [16] includes gray wolf optimization and particle swarm optimization algorithms to attain improved energy efficiency in the offloading procedure. The given model performs offloading based on the optimal solution of the hybrid optimization model, which considers power, bandwidth, and subcarriers as optimization parameters. The presented model improved energy gain and reduced latency and response time in the offloading process compared to existing methods. The multi-objective offloading procedure presented in [17] for mobile edge computing includes a gorilla algorithm to offload interdependent tasks. The proposed algorithm initially considers edge and cloud servers at multiple cost levels to improve the overall system flexibility. Then, using a mutation operator, the functionalities of the gorilla algorithm are improved in providing offloading solutions. Due to this, improved performance is attained by the presented model compared to the traditional gorilla algorithm. Deep learning methods are used in various applications [18,19] . Specifically in cloud computing, deep learning models are used for resource allocation, task scheduling, and migration decisions. 
The deep learning-based task offloading procedure presented in [20] includes a deep neural network and a partition point retain algorithm to reduce energy consumption and latency in the offloading procedure. The given model has an adaptive DNN for initial task offloading and partitioning. Then, optimal points to minimize the solution space are obtained through the partition point retain algorithm. Based on these optimal points, offloading is performed with minimum cost compared to conventional methods. An auction-based edge server allocation procedure presented in [21] reduces task processing delay and energy consumption using deep neural networks. Existing methods neglect the delay during task processing and the energy consumed while utilizing the application. Thus, the presented model considers these delay and energy consumption factors, allocates optimal edge servers for tasks using deep neural networks, and attains better performance than traditional models. The task offloading model presented in [22] for mobile edge computing reduces the complexity of mobile edge computing devices. The presented approach combines binary and partial task offloading schemes to attain a low-cost, energy-efficient offloading model. The task offloading scheme presented in [23] for extended reality utilizes a deep reinforcement learning algorithm to handle computationally intensive tasks. The presented model selects the tasks with high energy consumption and divides them into several subtasks to reduce complexity and energy consumption. Due to this, the presented model attains a better quality of experience and satisfies the user requirements compared to traditional methods. The task offloading scheme presented in [24] for an edge-assisted environment reduces the demands of mobile applications that require memory, storage and computation. The presented model divides the augmented reality tasks into multiple subtasks and collects the task requirements from edge servers. Based on the collected features, a subtask execution procedure is predicted using a Bayesian network. The network predicts the execution delay of subtasks on each edge platform. Finally, using a hybrid optimization algorithm that combines particle swarm optimization with a genetic algorithm, task offloading is performed with better efficiency and reduced latency. Table 1 provides a summary of the literature analysis, from which the importance of optimization and learning algorithms in task-offloading procedures can be observed. Techniques like Lyapunov optimization, auction-based allocation, and multi-objective optimization are commonly used to minimize latency in task offloading. Moreover, the existing task offloading applications are generic to edge computing, and limited research has addressed mobile augmented reality. Developing optimal task offloading for MAR applications is essential, considering energy consumption and response time. Thus, this research work presents a hybrid model for task offloading in edge computing for multi-user MAR applications. 3 Proposed task offloading scheme 3.1 Edge network In the mathematical model of the proposed work, an edge cloud network with N servers is considered, denoted as N = {s_1, s_2, s_3, …, s_N}. The heterogeneous edge servers considered in the network are installed in cloudlet devices, edge computing hosts, and peer mobile devices. The physical locations of the edge servers are diverse, and the computational abilities are different for each server. All the servers are connected through the Internet. 
The delay between server s_i and server s_j is given as d_i,j. A simple illustration of the edge cloud network used in the proposed work is given in Fig. 1 . 3.2 MAR model The Mobile Augmented Reality (MAR) application used in the proposed work has a set of M users, denoted as M = {u_1, u_2, u_3, …, u_M}. Each user can execute the AR application, and the application includes artificial intelligence algorithms for object recognition. In the proposed work, object recognition is performed using the YoloV7 algorithm. YoloV7 was chosen for object recognition in this research work due to its exceptional speed, accuracy, and efficient architecture, making it ideal for real-time applications like Mobile Augmented Reality (MAR). Its advanced features, such as efficient layer aggregation networks, model scaling skills, and a dynamic label assignment strategy, enable YoloV7 to handle diverse and complex object recognition tasks with high accuracy. Additionally, YoloV7's scalability across different computational environments ensures consistent performance on both mobile devices and edge servers. These attributes collectively enhance the robustness, reliability, and overall performance of the MAR system, providing a superior user experience. A detailed description of the object recognition algorithm is presented in the following sections. The MAR application has multiple tasks that need to be offloaded separately to optimal nearby edge computing nodes. Consider the number of tasks of user u_k's application to be M_k. Based on the input data size and algorithm, the computational complexity of each task is determined. The computation delay of the k-th task for the i-th user is denoted as c_k^i. The major components of the MAR application are presented in Fig. 2 . The video captured using the mobile camera is processed through a feature extractor for extracting the feature points. The extracted feature points are fed into the mapper, tracker, and object recognizer. The mapper builds a digital model of the environment in three dimensions using the feature points. Meanwhile, the generated structure is provided to the object recognizer to locate the specific object in the frame. Further, the tracker tracks the objects in consecutive frames based on the results of the object recognizer. Finally, the results of all these three modules are transmitted to the renderer. The renderer combines the results and overlays the virtual content on top of the video stream. In this process, the video capture and rendering components directly interact with the user, so they must run locally on the mobile device. Meanwhile, the object recognizer, mapper, and feature extractor are offloaded to edge cloud servers for further execution. The proposed model utilizes the Structure from Motion (SFM) technique [25] , and for object recognition, YoloV7 is used. 3.3 Object recognition model The architecture of YoloV7 used in the proposed work includes an efficient layer aggregation network and model scaling skills. The training process of YoloV7 includes a dynamic label assignment strategy to assign labels to different output layers. The architecture consists of an input, backbone, and prediction module. The backbone module has a CBS layer, which includes convolution, batch normalization, and SiLU activation functions. Multiple CBS structures in the ELAN module generate feature maps that are divided into three parts. The channel in the architecture divides the features and feeds them into convolution layers. 
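A minimal sketch of the task bookkeeping implied by the pipeline in Fig. 2 is given below; the dataclass, field names, and relative compute costs are illustrative assumptions rather than the authors' implementation, and they only mark which components stay on the device and which are candidates for offloading to the edge.

```python
from dataclasses import dataclass, field

@dataclass
class MarTask:
    name: str
    run_locally: bool            # video capture and rendering stay on the device
    depends_on: list = field(default_factory=list)
    compute_cost: float = 1.0    # relative computation demand (c_k^i in the text)

# Component graph following the MAR pipeline description
pipeline = [
    MarTask("video_capture", run_locally=True),
    MarTask("feature_extractor", run_locally=False, depends_on=["video_capture"], compute_cost=3.0),
    MarTask("mapper", run_locally=False, depends_on=["feature_extractor"], compute_cost=4.0),
    MarTask("object_recognizer", run_locally=False, depends_on=["feature_extractor"], compute_cost=5.0),
    MarTask("tracker", run_locally=True, depends_on=["object_recognizer"], compute_cost=2.0),
    MarTask("renderer", run_locally=True, depends_on=["mapper", "object_recognizer", "tracker"]),
]

offloadable = [t.name for t in pipeline if not t.run_locally]
print("Candidates for edge offloading:", offloadable)
```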
The neck part of the architecture has an SPPCSPC layer and consists of two parts. The divided feature maps undergo three consecutive pooling operations. The CUC layer in the architecture is used for feature map combination, including convolution, up-sampling, and feature-combining modules. The REP layer in the architecture adjusts the inference structure to improve the model performance. In the object recognition process, YoloV7 provides three feature maps, with small feature maps suitable for detecting large-sized objects and large feature maps used to detect small-sized objects. 3.4 Hybrid Monte Carlo search tree model In the proposed work, task offloading is performed using a hybrid Monte Carlo Tree Search algorithm. The traditional Monte Carlo search requires multistep execution, and evaluating its long-term effects is time-consuming. Thus, a hybrid model is presented in this research work using a genetic algorithm local search procedure. In the task offloading problem, the model is formulated as a search tree, in which the nodes represent states defining the offloading tasks allocated at a specific time. The edges in the graph indicate the individual operations of the offloading tasks. The search process of the Monte Carlo tree includes selection, expansion, simulation, backpropagation, and termination. In the selection process, the leaf node with the largest confidence value is selected, and the rule for selection is framed as Eq. (1) : (1) L_n = arg max_i ( v_i + c × √(ln n / n_i) ), where the number of visits to the root node is given as n, n_i is the number of visits to child node i, v_i is its value, and the exploration parameter c is set to the maximum for faster convergence. The range of the exploration parameter is given as [0,1]. Once the leaf nodes are selected, the children are generated based on the decision obtained in the search tree decision space, and the child nodes' fitness is calculated. Furthermore, the leaf node is considered as the root node, and the newly selected child nodes are considered as new leaf nodes. In this stage, the proposed model includes a local search procedure of a genetic algorithm to improve convergence performance. In this step, children are generated using the crossover operator. The best nodes are selected from the parent set to improve the efficiency of optimization. The crossover is performed between the best nodes; however, unlike the traditional procedure, which exchanges random columns, the presented model follows a complex crossover to attain improved exchange performance. Thus, the traditional crossover operator is replaced with order crossover in the proposed model. The exchange process is mathematically formulated as Eq. (2) : (2) d = ((j_1 N + i_1) − (j_2 N + i_2)) / N ≤ c, where (i_1, j_1) and (i_2, j_2) indicate the positions of the parents. Furthermore, the probabilistic mutation operator included in the proposed work performs mutation in which all the children except the best are required to take the operator. This mutation generates a random number for each child, and if the random number is less than the threshold, then two random elements from each column are selected for swapping; otherwise, the values of the random elements are exchanged. After mutation, a set of children is generated. A process flow that describes the local search procedure in the genetic algorithm is presented in Fig. 3 . Finally, using the local search, the exploitation quality in finding the optimal solution is improved. The search procedure holds the best and neglects the others in the children's set. 
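The selection rule of Eq. (1) can be sketched as follows. Whether the original formulation places the confidence term under a square root is not fully recoverable from the garbled source, so the standard UCB1 form is shown here; the node representation and function names are assumptions for illustration.

```python
import math

def ucb1(value, visits, parent_visits, c=1.41):
    """Standard UCB1 score: value + c * sqrt(ln(parent_visits) / visits)."""
    if visits == 0:
        return float("inf")          # always try unvisited children first
    return value + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of dicts with 'value' (mean reward) and 'visits'."""
    n = sum(ch["visits"] for ch in children) or 1
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"], n))

children = [{"value": 0.62, "visits": 20}, {"value": 0.55, "visits": 5}, {"value": 0.0, "visits": 0}]
print(select_child(children))
```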
This indicates that the local search adopts the best and keeps the remaining children in a reserved state. This procedure is repeated for continuous updates until the stopping criteria are attained. 3.5 Combination of genetic algorithm and Monte Carlo tree search in HMCTS Monte Carlo Tree Search (MCTS) framework: The task offloading problem is initially modeled as a search tree where each node represents a state of task allocation at a specific time, and edges represent individual operations of the offloading tasks. MCTS involves four main processes: selection, expansion, simulation, and backpropagation. • Selection: The process starts by selecting the most promising node (leaf node) with the highest confidence value. The selection rule is based on the Upper Confidence Bound (UCB) formula, ensuring a balance between exploration and exploitation. It is computed as Eq. (3) : (3) UCB1 = v_i + c × √(ln(n) / n_i). In Eq. (3) , v_i denotes the value of node i, c the exploration parameter, and n the number of visits. • Expansion: New child nodes are generated from the selected leaf node, representing possible future states. • Simulation: A random simulation is run from each new node to estimate potential outcomes. • Backpropagation: Results from the simulations are propagated back up the tree to update the values of the nodes. The backpropagation update is given as Eq. (4) : (4) v_i = v_i + (R − v_i) / n_i, where R is the total reward obtained from the simulation. Integration with genetic algorithm (GA) local search: To enhance the efficiency and convergence speed of the MCTS, a genetic algorithm-based local search procedure is incorporated at the expansion stage. • Crossover: Instead of randomly generating new child nodes, the genetic algorithm uses the crossover operator to combine the best nodes (parents) to produce new child nodes. This step ensures that the new nodes inherit the best characteristics from their parents, improving the search space exploration. Crossover can be defined as Eq. (5) : (5) Child = Parent_1[1:k] ∪ {gene ∈ Parent_2 | gene ∉ Parent_1[1:k]}. • Order Crossover: The proposed model uses an order crossover operator, which exchanges ordered segments of the parent nodes to create new child nodes. This method maintains the sequence of tasks and improves the quality of offspring. The crossover point k, for a chromosome of length L, is determined as Eq. (6) : (6) k = ⌊rand() × (L − 1)⌋, and the order crossover operator is defined as Eq. (7) : (7) OX(Parent_1, Parent_2) = Child. • Mutation: A probabilistic mutation operator is applied to introduce variability and avoid local optima. All children, except the best one, undergo mutation, where random elements are swapped if a generated random number is below a certain threshold. The mutation probability is given as Eq. (8) : (8) P_mut = 1 if rand < p_mutation, 0 otherwise, and the mutation operation is given as Eq. (9) : (9) Swap(c_i, c_j) if P_mut = 1. Following this, the fitness evaluation is given as Eq. (10) : (10) Fitness(x) = 1 / (1 + f(x)), where f(x) is the objective function. • The local search procedure improves the exploitation quality by refining the child nodes generated during the expansion phase. The best solutions are retained, and less promising ones are kept in reserve for potential future exploration. This hybrid approach allows the MCTS to leverage the global search capabilities of the genetic algorithm, ensuring that the task offloading decisions are optimized more effectively and efficiently. 
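A minimal sketch of the order crossover and swap mutation used at the expansion stage (Eqs. (5)–(9)) is shown below, assuming a child is encoded as an ordering of offloadable tasks; the encoding, seed, and parameter values are illustrative, not the exact chromosome layout of the proposed model.

```python
import random

def order_crossover(parent1, parent2):
    """OX: copy a prefix from parent1, fill the rest in parent2's order (Eqs. (5)-(7))."""
    n = len(parent1)
    k = random.randint(1, n - 1)                       # crossover point, Eq. (6)
    head = parent1[:k]
    tail = [g for g in parent2 if g not in head]       # preserve parent2's relative ordering
    return head + tail

def swap_mutation(child, p_mutation=0.1):
    """Swap two random genes with probability p_mutation (Eqs. (8)-(9))."""
    if random.random() < p_mutation:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

random.seed(0)
p1 = [0, 1, 2, 3, 4, 5]      # one ordering of six offloadable tasks
p2 = [5, 3, 1, 0, 2, 4]
child = swap_mutation(order_crossover(p1, p2))
print(child)
```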
By combining the genetic algorithm's strength in exploring the search space with MCTS's structured exploration and backpropagation, the HMCTS achieves superior task offloading performance. This integrated approach results in lower response times, reduced migration times, and decreased energy consumption, ultimately enhancing the overall performance and user experience in Mobile Augmented Reality (MAR) applications. 4 Results and discussion The proposed HMCTS model experimentation is conducted through the simulation of a multi-user edge computing system. The simulation model involves various edge and cloud hosts, where Microsoft Azure Virtual Machine and Azure B2s are used for edge hosts, and B8ms machines are used for cloud hosts. The experiments simulate the Mobile Augmented Reality (MAR) scenario and tasks. Details about the simulation environment are presented in Table 2 . The proposed model's performance is evaluated using metrics such as migration time, average number of migrations, response time, scheduling time, energy consumption, and SLA violations ratio. The genetic algorithm in the HMCTS model uses specific parameter settings to ensure effective task offloading. The population size is set at 100, providing a diverse choice of solutions that helps in better exploration of the search space and prevents premature convergence to suboptimal solutions. The crossover rate is 0.8, facilitating the combination of good solutions from the parent population to generate potentially better offspring, enhancing the search efficiency. A moderate mutation rate of 0.1 introduces variability in the population, helping to escape local optima and explore new areas of the search space. The selection method employed is tournament selection, ensuring that the best individuals are chosen for crossover, maintaining high-quality solutions throughout generations. The crossover operator used is Order Crossover (OX), which preserves the sequence of tasks, crucial for maintaining dependencies and constraints in the task offloading problem. The mutation operator is swap mutation, which ensures that the solution space is adequately explored by swapping the positions of two randomly chosen tasks, introducing necessary diversity. The Monte Carlo Tree Search component of the HMCTS model is configured with an exploration parameter (c) of 1.41, balancing exploration and exploitation. This value is commonly used and derived from the UCB1 algorithm, ensuring that the algorithm explores new possibilities while exploiting known good solutions. The number of simulations is set to 1000 per move, ensuring a thorough evaluation of potential moves, leading to better decision-making in the tree search. The maximum tree depth is limited to 50 to prevent excessive computational overhead and ensure timely decision-making, which is crucial for real-time applications like MAR. The expansion strategy is balanced tree expansion, ensuring that the tree grows uniformly, allowing all parts of the search space to be explored adequately. The chosen parameters for the genetic algorithm and MCTS are designed to balance exploration and exploitation, ensuring that the search space is thoroughly explored while maintaining high-quality solutions. The larger population size and high crossover rate in the GA promote diversity and the generation of good offspring. The moderate mutation rate and selection method help maintain solution quality and introduce necessary variability. 
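For reference, the tuned parameter values listed above can be collected into a single configuration object. This is a simple sketch with the values taken directly from the text; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HmctsConfig:
    # Genetic-algorithm settings reported in the text
    population_size: int = 100
    crossover_rate: float = 0.8        # order crossover (OX)
    mutation_rate: float = 0.1         # swap mutation
    selection: str = "tournament"
    # Monte Carlo tree search settings reported in the text
    exploration_c: float = 1.41        # UCB1 exploration parameter
    simulations_per_move: int = 1000
    max_tree_depth: int = 50
    expansion: str = "balanced"

print(HmctsConfig())
```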
In MCTS, the exploration parameter ensures a balanced search, while the number of simulations and tree depth control the thoroughness and efficiency of the search process. These parameters collectively enhance the performance of the HMCTS by ensuring efficient task offloading decisions, reducing response time, migration time, and energy consumption. By carefully selecting and tuning these parameters, the HMCTS approach is optimized for handling the complexities and real-time constraints of MAR applications, leading to superior performance compared to state-of-the-art techniques. The experiments to evaluate the effectiveness of the proposed Hybrid Monte Carlo Tree Search (HMCTS) method were conducted in a simulated environment using Microsoft Azure Virtual Machines and Azure B2s edge servers, mimicking a multi-user edge computing system. Tasks representative of typical Mobile Augmented Reality (MAR) applications were generated and allocated based on their computational requirements. The training methodology involved fine-tuning key parameters for the genetic algorithm and Monte Carlo Tree Search through preliminary experiments to optimize performance. The HMCTS method was compared with state-of-the-art task offloading schemes (MMCT, MCDS, GA, DRL, and Closure) under identical conditions. Performance metrics included energy consumption, response time, migration time, scheduling time, and SLA violations. The proposed model's performance is evaluated by increasing the number of tasks, and the average values obtained by the model for all metrics are presented in Table 3 . The results for 6000 tasks are considered indicative of the better performance of the proposed model. From the results given in Table 3 , it can be observed that the average number of migrations for 6000 tasks is 10, and the migration time is 0.52 ms. The average response time and scheduling time of the proposed offloading scheme are 24 ms and 0.18 ms, respectively. Due to efficient task offloading, the energy consumption is significantly reduced to 1290 kJ. The scalability of the proposed HMCTS model is validated through an analysis of performance metrics such as response time, energy consumption, migration time, scheduling time, and the number of migrations. Fig. 4 presents the analysis as the number of tasks increases from 1000 to 10,000, and the corresponding values for the different metrics are provided in Table 4 . From the analysis of response time, the quick response of the proposed model in completing tasks can be observed. For tasks increasing from 1000 to 10,000, the response time increases from 20 ms to 30 ms, and this gradual increase indicates that the proposed model maintains stable performance even under different load conditions. The proposed model exhibited only a 50 % increase in response time even for a tenfold increase in tasks. The response time results demonstrate the scalability and task handling efficiency of the proposed model. The energy consumption of the proposed model shows a linear increase for 1000–10,000 tasks. The linear increase shows that the proposed model effectively manages energy utilization even as the task count increases. Similarly, the migration time reflects the task offloading efficiency of the proposed model. The minor increase in migration time from 0.22 ms to 0.30 ms for 1000–10,000 tasks indicates that the proposed model optimizes the migration paths. 
By avoiding unnecessary migrations and maintaining a low migration time, the proposed model exhibits better scalability for varying task conditions. In the case of scheduling time, the proposed model exhibits minimal variation, from 0.16 ms to 0.20 ms, for varying task conditions. The proposed model quickly determines the task requirements and handles the tasks effectively to maintain its consistent performance in scheduling. In terms of migrations, the system's ability to redistribute the tasks is analyzed. The migrations increase from 4 to 20 for 1000–10,000 tasks. This increase in the number of migrations indicates that the proposed model effectively balances the need to offload tasks while minimizing the migration overhead. Overall, the proposed model performs the essential migrations and enhances stability and performance. Further, to measure the robustness of the proposed model, latency and bandwidth parameters are considered. By varying the latency and bandwidth, three different network conditions are obtained: stable, moderate and poor. To obtain a stable network, the latency is considered as below 20 ms and the bandwidth as above 100 Mbps. For the moderate network condition, the latency is considered as ranging from 20 ms to 100 ms and the bandwidth as between 50 Mbps and 100 Mbps. To obtain the poor network scenario, the latency is set above 100 ms and the bandwidth is limited to below 50 Mbps. Table 5 presents the network conditions and the performance of the proposed model for metrics such as SLA violations, stability and error rate. Service Level Agreement (SLA) violations refer to instances where the performance of the system fails to meet the agreed-upon standards set by the SLA. These standards typically include metrics such as response time, availability, and reliability. In this study, SLA violations measure how often the proposed model fails to perform within the expected parameters. A lower number of SLA violations indicates higher adherence to the performance standards, showcasing the model's reliability and robustness. System stability reflects the percentage of tasks that are successfully completed without failure, highlighting the system's ability to maintain consistent performance despite changes in network conditions. High system stability means the model can effectively manage resources and maintain operations even when the network is unstable. In this research, system stability is crucial as it demonstrates the HMCTS model's capability to handle tasks reliably across different network scenarios. Error rate measures the proportion of tasks that fail due to network issues or other operational challenges. A lower error rate signifies better performance and reliability, indicating that the system can manage and complete tasks with minimal failures. In the context of this research, analyzing the error rate helps to understand how well the HMCTS model can cope with adverse conditions and maintain a high level of performance. From the results given in Table 5 , the SLA violations indicate the proposed model's ability to maintain the operational standards. The results indicate that the SLA violations increase from 5 to 25 when the network condition is poor. Though the violations increase for the poor network, the proposed model's ability to control the violations demonstrates its robustness under variable network conditions. 
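The three network scenarios described above can be expressed as a small classification helper. The handling of boundary values (exactly 20 ms, 100 ms, 50 Mbps, or 100 Mbps) and the fallback label are assumptions, since the text only gives open ranges.

```python
def network_condition(latency_ms, bandwidth_mbps):
    """Label a network scenario from latency (ms) and bandwidth (Mbps)."""
    if latency_ms < 20 and bandwidth_mbps > 100:
        return "stable"
    if 20 <= latency_ms <= 100 and 50 <= bandwidth_mbps <= 100:
        return "moderate"
    if latency_ms > 100 and bandwidth_mbps < 50:
        return "poor"
    return "mixed"   # combinations not covered by the three scenarios in the text

for lat, bw in [(10, 200), (60, 80), (150, 20), (10, 60)]:
    print(lat, bw, "->", network_condition(lat, bw))
```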
While analyzing the system stability, which indicates the percentage of successfully completed tasks, the proposed model's performance decreases from 99.5 % to 90 % from stable to poor network conditions. However, the proposed model manages the resources effectively and ensures system operation by maintaining stability under varying network conditions. The error rate indicates the proportion of task failures due to network issues. For the poor network condition the error rate naturally increases, as shown in Fig. 5 . The increase in error rate from 0.5 % to 10 % indicates the challenges posed by poor network conditions. However, the proposed model keeps the error rate at 10 % even for the poor network condition, indicating efficient task management and the ability to cope with network instability. To demonstrate the better performance of the proposed model, heuristic methods like greedy offloading and random offloading procedures are considered for comparative analysis. In greedy offloading, the tasks are offloaded to edge servers based on the current load condition, whereas in random offloading the tasks are offloaded to randomly selected edge servers. The network conditions considered in the robustness analysis are also used for this comparative analysis, and the results are comparatively presented in Table 6 . It can be observed that the proposed model exhibits low SLA violations compared to the greedy and random offloading schemes. In the case of system stability, the proposed model exhibits better performance compared to greedy and random offloading even in the poor network condition. The error rate of greedy offloading is higher than that of the proposed HMCTS and lower than that of random offloading. The proposed model exhibits the minimum error rate, indicating its better error handling ability under different network conditions. For metrics like energy consumption and migration time, the proposed model exhibits better performance compared to the existing greedy and random offloading schemes, as clearly presented in Fig. 6 . To validate the better performance of the proposed model, existing task offloading schemes are considered for comparative analysis. Existing methods like MMCT, MCDS, DRL, Closure, and GA are included for analysis [26] . Fig. 4 depicts the comparative analysis of all the models for the average number of migrations. It can be observed from the results that the number of migrations performed by the proposed model is comparatively lower than that of the existing offloading procedures. The average number of migrations performed by the proposed model for 6000 tasks is 11, whereas the existing GA performed 26 migrations, MCDS performed 19 migrations, and MMCT and Closure performed 18 migrations. The DRL-based offloading performed 12 migrations, which is close to the proposed model's average, but the time taken to perform these 12 migrations is comparatively high, whereas the proposed model takes less time to migrate all 11 tasks, as seen in the migration time analysis given in Fig. 7 (a-g). Fig. 7 clearly demonstrates that as the number of tasks increases, the proposed HMCTS scheme consistently requires fewer migrations compared to other methods like MMCT, MCDS, GA, DRL, and Closure. At the highest load of 6000 tasks, HMCTS performs approximately 10 migrations, significantly lower than MCDS and GA, which perform around 25 and 26 migrations, respectively. 
This superior performance can be attributed to the hybrid approach of combining genetic algorithms with Monte Carlo Tree Search, enabling more efficient exploration and exploitation of optimal task offloading strategies. Consequently, the HMCTS method effectively reduces the computational burden, leading to enhanced scalability and performance in high-load scenarios. Table 7 provides the numerical values of migration analysis in detail. Fig. 8 illustrates the average migration time in milliseconds (ms) for different task offloading schemes (MMCT, MCDS, GA, DRL, Closure, and the proposed HMCTS) as the number of tasks increases. The proposed HMCTS consistently exhibits the lowest migration time across all task volumes, demonstrating its superior efficiency. At 6000 tasks, the HMCTS shows an average migration time of approximately 0.52 ms, significantly lower than DRL, which peaks at around 1.64 ms, and other methods such as GA and MCDS, which have higher migration times as well. The superior performance of the HMCTS can be attributed to its hybrid approach, combining genetic algorithms with Monte Carlo Tree Search. This combination allows for more efficient decision-making and task management, reducing the time required for task migrations and optimizing resource utilization. Table 8 presents the numerical values of average migrations. As a result, the HMCTS method enhances system responsiveness and performance, particularly under high-load conditions. Fig. 9 depicts the average response time in milliseconds (ms) for various task offloading schemes (MMCT, MCDS, GA, DRL, Closure, and the proposed HMCTS) as the number of tasks increases. The proposed HMCTS scheme consistently shows the lowest average response time, highlighting its superior efficiency. At 6000 tasks, the HMCTS maintains an average response time of about 24 ms, which is significantly lower than DRL, which peaks at around 45 ms, and other methods such as GA and MCDS, which hover around 30 ms. The superior performance of the HMCTS is due to its hybrid approach, which combines genetic algorithms with Monte Carlo Tree Search, enabling more effective task management and rapid decision-making. This hybrid model optimizes resource utilization and reduces latency, resulting in a more responsive system and this can be observed from the numerical values given in Table 9 . Consequently, the HMCTS method significantly enhances the performance of MAR applications, especially under high-load conditions, by providing faster response times compared to existing offloading schemes. Fig. 10 exhibits the average scheduling time in milliseconds (ms) for different task offloading schemes (MMCT, MCDS, GA, DRL, Closure, and the proposed HMCTS) as the number of tasks increases. The proposed HMCTS consistently demonstrates the lowest average scheduling time across all task volumes, highlighting its superior performance. Specifically, at 6000 tasks, the HMCTS maintains an average scheduling time of approximately 0.18 ms, while other methods like MCDS and GA exhibit significantly higher scheduling times, peaking at around 1.2 ms and 1.1 ms respectively. The remarkable performance of the HMCTS can be attributed to its hybrid approach, which integrates genetic algorithms with Monte Carlo Tree Search, enabling more efficient task scheduling and resource allocation. This hybrid method effectively minimizes the scheduling overhead by optimizing the exploration and exploitation processes within the task offloading strategy. 
Consequently, the HMCTS method provides faster scheduling times, reducing latency and enhancing the overall efficiency of Mobile Augmented Reality (MAR) applications, particularly in high-load scenarios. Table 10 presents the numerical values of scheduling time for all the algorithms. Fig. 11 illustrates the Service Level Agreement (SLA) violations ratio for various task offloading schemes (MMCT, MCDS, GA, DRL, Closure, and the proposed HMCTS) as the number of tasks increases. The proposed HMCTS scheme consistently exhibits the lowest SLA violations ratio, highlighting its superior reliability and performance. At the highest task load of 6000 tasks, the HMCTS maintains a violations ratio of around 21, while other methods like DRL show a much higher ratio, peaking at approximately 33. The superior outcome of the HMCTS can be attributed to its hybrid approach, which combines genetic algorithms with Monte Carlo Tree Search. This hybrid model effectively manages task offloading decisions, ensuring optimal resource allocation and reducing the likelihood of SLA violations. By optimizing both the exploration and exploitation processes within the task scheduling and offloading strategy, the HMCTS method maintains higher adherence to SLAs, ensuring better service quality and user satisfaction in Mobile Augmented Reality (MAR) applications, especially under high-load conditions. The comparative analysis of SLA violations for all the algorithms are presented in Table 11 to demonstrate the proposed model better performance. Fig. 12 and Table 12 presents the average energy consumption in kilojoules (kJ) for different task offloading schemes (MMCT, MCDS, GA, DRL, Closure, and the proposed HMCTS) as the number of tasks increases. The proposed HMCTS consistently demonstrates the lowest average energy consumption across all task volumes. At the highest load of 6000 tasks, the HMCTS achieves an average energy consumption of approximately 1290 kJ, which is significantly lower than other methods, such as GA and MCDS, which exceed 1400 kJ. The superior performance of the HMCTS in terms of energy efficiency can be attributed to its hybrid approach that combines genetic algorithms with Monte Carlo Tree Search. This hybrid model optimizes task offloading decisions and resource allocation, minimizing the computational overhead and energy requirements. By effectively balancing exploration and exploitation in the offloading process, the HMCTS method reduces unnecessary migrations and scheduling delays, leading to lower energy consumption. Consequently, the HMCTS approach enhances the energy efficiency and overall performance of Mobile Augmented Reality (MAR) applications, particularly under high-load conditions. 5 Inferences from the research • The paper introduces a novel task offloading scheme that combines a genetic algorithm-based local search procedure with a Monte Carlo Tree Search (MCTS) algorithm. This hybrid approach enhances the efficiency and effectiveness of task offloading in Mobile Augmented Reality (MAR) applications, which is a significant advancement over traditional methods. • The HMCTS model demonstrates superior performance across key metrics such as energy consumption, response time, migration time, scheduling time, and SLA violations. Experimental results validate that the proposed method outperforms existing techniques like MMCT, MCDS, GA, DRL, and Closure, exhibiting its robustness and scalability under high-load conditions. 
• Incorporating YoloV7 for object recognition within the MAR application adds to the novelty by ensuring fast and accurate object detection. This integration supports the overall goal of reducing computational load and improving the user experience. • Specifically, the proposed HMCTS method incorporating YoloV7 demonstrated a reduction in energy consumption by approximately 8 % compared to the GA-based method, a 25 % improvement in response time, and a 50 % reduction in migration time. Additionally, YoloV7's scalability across different computational environments ensures consistent performance on both mobile devices and edge servers. These attributes collectively enhance the robustness, reliability, and overall performance of the MAR system, providing a superior user experience. • The paper includes a detailed experimental setup using Microsoft Azure Virtual Machines and Azure B2s edge servers, providing a realistic evaluation environment. The comprehensive analysis and comparison with state-of-the-art methods further highlight the effectiveness of the proposed HMCTS approach. • The HMCTS approach demonstrates excellent scalability, handling up to 6000 tasks with minimal performance degradation. Its flexibility in adapting to various task loads and environments makes it highly suitable for dynamic and resource-constrained MAR applications. • The paper not only addresses current challenges in task offloading for MAR but also opens avenues for future research by suggesting the integration of deep learning techniques to further optimize the offloading process. • While the proposed HMCTS method significantly improves the performance and efficiency of task offloading in Mobile Augmented Reality (MAR) applications, it is essential to consider the privacy and security implications of offloading sensitive MAR data to edge and cloud servers. In real-world applications, MAR systems often handle sensitive and personal data, such as location information, visual inputs, and user interactions, which need to be protected against unauthorized access and potential breaches. Ensuring data privacy and security during the offloading process involves implementing robust encryption protocols, secure communication channels, and access control mechanisms. Additionally, edge and cloud servers should comply with data protection regulations and standards to safeguard user data. Future work should focus on integrating advanced security features, such as homomorphic encryption and secure multi-party computation, to enhance the privacy and security of MAR data during offloading, ensuring that the benefits of the HMCTS method are realized without compromising user trust and data integrity. In summary, the proposed HMCTS model offloads tasks to suitable edge servers more effectively and ensures optimal utilization of resources. The reduced response time and lower energy consumption compared to existing methods indicate that the proposed model attains its primary objective of improving MAR application performance. Similarly, offloading the computationally intensive object recognition tasks performed by YoloV7 reduces the burden on local mobile devices. Faster object recognition and a smoother user experience are attained through the proposed model, indicating that it satisfies the objective of enhanced performance and accuracy in object detection. The extensive experimental results validate the superior performance of the proposed model across various metrics.
The results emphasize that the proposed model achieves its objective of reduced energy consumption, response time and migration time in task offloading. 6 Limitations While the proposed HMCTS model shows significant improvements in task offloading for MAR applications, it has certain limitations. The simulations were conducted under controlled conditions assuming ideal network stability and consistent resource availability, which may not reflect real-world scenarios with network variability and dynamic user behavior. The model might not perform as well in highly complex task dependency structures or in environments requiring extremely fast decision-making due to the added computational overhead of the genetic algorithm integration. Additionally, scalability beyond 6000 tasks remains to be tested. 7 Conclusion and future scope This research presents a Hybrid Monte Carlo Tree Search (HMCTS)-based task offloading scheme for multi-user edge computing in Mobile Augmented Reality (MAR) systems. By combining a genetic algorithm with Monte Carlo Tree Search, the proposed method achieves significant improvements in reducing response time, scheduling time, energy consumption, and migration time. Comparative experiments validate that the HMCTS model outperforms existing offloading schemes, enhancing the overall performance of MAR applications. Future research should focus on incorporating realistic conditions, reducing computational overhead, and exploring deep learning techniques for further optimization and scalability. Funding The author did not receive support from any organization for the submitted work. CRediT authorship contribution statement Godfrey Winster Sathianesan: Supervision. Anitha Jebamani S: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Declaration of Competing Interest The author has no relevant financial or non-financial interests to disclose. APPENDIX-I The implementation of the Hybrid Monte Carlo Tree Search (HMCTS) algorithm involves combining a genetic algorithm (GA) with Monte Carlo Tree Search (MCTS) to optimize task offloading in Mobile Augmented Reality (MAR) applications. The algorithm begins with the initialization of the MCTS framework, where the task offloading problem is represented as a search tree. Nodes in the tree represent states of task allocation, and edges represent offloading operations. The MCTS proceeds through four main phases: selection, expansion, simulation, and backpropagation. During the selection phase, the most promising node is selected using the Upper Confidence Bound (UCB) formula. In the expansion phase, child nodes are generated from the selected node. The GA is integrated here to enhance the quality of child nodes. Crossover and mutation operations are applied to create new offspring, using order crossover to preserve task sequences and swap mutation to introduce variability. During the simulation phase, the generated nodes are evaluated through random simulations to estimate their outcomes. Finally, in the backpropagation phase, the results of the simulations are propagated back to update the values of the nodes. This iterative process continues until the stopping criteria, such as a maximum number of iterations or a convergence threshold, are met. 
The combination of GA with MCTS ensures efficient exploration and exploitation, leading to optimized task offloading decisions.

import math
import random


class Node:
    # Minimal search-tree node added for completeness: 'state' holds a task
    # ordering (offloading plan); visits/value support UCB and backpropagation.
    def __init__(self, state, parent=None):
        self.state = list(state)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0


class HMCTS:
    def __init__(self, tasks, population_size, crossover_rate, mutation_rate, max_iterations):
        self.tasks = list(tasks)
        self.population_size = population_size
        self.crossover_rate = crossover_rate
        self.mutation_rate = mutation_rate
        self.max_iterations = max_iterations

    def selection(self, node):
        # Select the child with the highest UCB value; unvisited children are
        # explored first, and a node without children is returned as-is.
        if not node.children:
            return node

        def ucb(n):
            if n.visits == 0:
                return float("inf")
            return n.value + 1.41 * math.sqrt(math.log(node.visits + 1) / n.visits)

        return max(node.children, key=ucb)

    def expansion(self, node):
        # Generate child nodes using GA crossover and mutation
        for _ in range(self.population_size):
            child_state = self.crossover(node)
            if random.random() < self.mutation_rate:
                child_state = self.mutate(child_state)
            node.children.append(Node(child_state, parent=node))

    def simulation(self, node):
        # Placeholder roll-out: replace with the actual task-offloading cost model
        return random.uniform(0, 1)

    def backpropagation(self, node, reward):
        # Propagate the simulation result back towards the root
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent

    def crossover(self, node):
        # Order crossover (OX): keep a prefix of one parent and fill the rest
        # with the remaining tasks in the order of the other parent
        if len(node.children) >= 2:
            p1, p2 = random.sample(node.children, 2)
            parent1, parent2 = p1.state, p2.state
        else:
            parent1 = list(node.state)
            parent2 = random.sample(parent1, len(parent1))
        cross_point = random.randint(0, len(self.tasks) - 1)
        prefix = parent1[:cross_point]
        return prefix + [task for task in parent2 if task not in prefix]

    def mutate(self, child):
        # Swap mutation: exchange two randomly chosen task positions
        if len(child) >= 2:
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        return child

    def run(self):
        root = Node(self.tasks)
        for _ in range(self.max_iterations):
            node = self.selection(root)
            self.expansion(node)
            reward = self.simulation(node)
            self.backpropagation(node, reward)
        return max(root.children, key=lambda n: n.value)


# Example usage ([...] is a placeholder for the application's task list)
hmcts = HMCTS(tasks=[...], population_size=100, crossover_rate=0.8,
              mutation_rate=0.1, max_iterations=1000)
best_solution = hmcts.run()
print("Best solution:", best_solution.state)

Ethics Approval The paper is an original contribution of research and is not published elsewhere in any form or language. Consent Statement All authors mentioned have contributed towards the research work and the drafting of the paper, and have given consent for the publication of this article. Consent to publication All authors listed above have consented to the publication of their data and images. Code Availability Since future work is based on the custom code developed in this study, the code may not be made available by the authors. The authors have no relevant financial or non-financial interests to disclose. No humans or animals were involved in the experimentation.
REFERENCES:
1.
2. FRAGALAMAS P (2018)
3. CAO J (2023)
4. JAMRUS M (2019)
5. NAOURI A (2021)
6. ISLAM A (2021)
7. CHAKRABARTI K (2021)
8. YIXUEHAO (2018)
9.
10. CHEN G (2021)
11. JIANG H (2023)
12. ZHANG N (2020)
13. LI Q (2022)
14. GONG Y (2023)
15. LI Y (2023)
16. MAHENGE M (2022)
17. HOSNY K (2023)
18. REKHA V (2022)
19. MANOHARAN J (2021)
20. LIAO Z (2023)
21. MASHHADI F (2020)
22. MANASHKUMARMONDAL S (2024)
23. YU X (2024)
24. HAO J (2024)
25. ELTNER A (2020)
26. CHEN R (2023)
|
10.1016_j.ecoinf.2025.103322.txt
|
TITLE: Anuran call synthesis with diffusion models for enhanced bioacoustic classification under data scarcity
AUTHORS:
- Manrique, José Sebastián Ñungo
- Gómez, Francisco
- Hernández-Romero, Freddy
ABSTRACT:
Addressing data scarcity in anuran bioacoustic monitoring, this study introduces diffusion probabilistic models (DPMs) for generating realistic calls and demonstrates their efficacy in data augmentation. A DPM framework combined with FID-based selection ensured synthetic call quality, with initial realism confirmed for Boana faber by human A/B tests. We then generated calls for nine species and evaluated DPM-based augmentation on severely imbalanced multi-species datasets. Our approach significantly improved F1 classification scores, especially for underrepresented species, outperforming a WGAN baseline. This highlights DPMs as a potent method to produce valuable synthetic bioacoustic data, enhancing AI-driven biodiversity monitoring.
BODY:
1 Introduction Global anthropogenic biodiversity loss is a significant threat to our planet’s health, with amphibians, particularly anuran populations (frogs and toads), being notably at risk ( Luedtke et al., 2023 ). Over 40% of amphibian species are threatened with extinction, making them among the most endangered vertebrates globally ( Luedtke et al., 2023 ). Consequently, biodiversity conservation efforts to protect their habitats and mitigate threats are critical for ensuring their survival ( Mathwin et al., 2024; Vidal et al., 2024 ). Anuran vocalizations are essential for communication, mate attraction, territory defense, and species identification ( De Araújo et al., 2024; Pijanowski, 2024 ), making the characterization of their sounds vital for developing effective conservation strategies ( Browning et al., 2017 ). However, this characterization remains challenging, often requiring highly specialized experts ( Dena et al., 2020; Emmrich et al., 2020 ). Automated monitoring systems, leveraging acoustic data and machine learning (ML), have emerged as innovative solutions for anuran sound characterization ( Huang et al., 2014; Colonna et al., 2016; Xie et al., 2016a; Strout et al., 2017; Gan et al., 2021; Cañas et al., 2023 ). These systems analyze extensive acoustic datasets, typically acquired in natural environments ( Cañas et al., 2023 ), to discern species distributions, monitor anthropogenic impacts, and detect subtle biodiversity shifts, thereby informing targeted conservation efforts ( Browning et al., 2017 ). However, a critical bottleneck for these automated systems is the availability of large, high-quality, and well-balanced datasets. Such datasets are particularly challenging to acquire for anurans due to their often unbalanced and heterogeneous distributions and the inherent difficulties of data collection in remote environments ( Villon et al., 2022; Abu-Mostafa et al., 2012 ). These conditions frequently result in long-tail datasets, where a few species are well-represented while many others are scarce ( Villon et al., 2022 ). The long-tail dataset problem significantly impedes ML-based systems ( He and Ma, 2013 ), as models trained on such data often exhibit biased performance, excelling on majority classes but struggling with underrepresented ones. The scarcity of data for many rare species further hinders ML model efficacy, as these models perform best with sufficient per-class samples ( Villon et al., 2022 ). Data augmentation, which provides more training data by modifying the existing one ( Cui et al., 2015; Haba, 2023 ), can help but risks introducing artifacts ( Kaur et al., 2021 ). Data generation, creating entirely new data via generative models ( Prince, 2023 ), offers an alternative to expand datasets while aiming to preserve underlying data structures, though ensuring the realism and complexity of synthetic data and under small training sample regimes is crucial. This work introduces a novel strategy for generating realistic anuran calls using Diffusion Probabilistic Models (DPMs) ( Ho et al., 2020 ) and demonstrates its practical utility for data augmentation in multi-species classification tasks. Unlike prior works primarily focused on direct species classification from existing data ( Huang et al., 2014; Colonna et al., 2016; Xie et al., 2016a; Strout et al., 2017; Gan et al., 2021; Cañas et al., 2023 ), our primary aim is to enrich imbalanced datasets with diverse and realistic synthetic samples. 
Our approach involves two main steps: first, generating candidate anuran sounds for multiple species using DPMs; second, implementing a post-processing selection strategy based on the Fréchet Inception Distance (FID) metric ( Heusel et al., 2017 ) and supervised clustering to retain only the most realistic generated audio. We initially developed and validated our generation and selection methodology using Boana faber as a focal species, confirming the realism of generated calls through human A/B testing where listeners struggled to distinguish them from real recordings. Building upon this, we significantly extended our work to synthesize calls for nine distinct anuran species. Crucially, we then conducted extensive experiments to evaluate the impact of augmenting severely imbalanced datasets with these DPM-generated calls on the performance of an automated classifier. The proposed approach is particularly relevant as computational bioacoustics continues to see advancements in sophisticated classification techniques that require higher-quality training data, such as the transfer learning approaches for bird species identification reported in Swaminathan et al. (2024) . We compared our DPM-based augmentation against a baseline (no augmentation) and augmentation using an established Wasserstein Generative Adversarial Network (WGAN) technique ( Park et al., 2020 ), previously used to generate anuran acoustic samples. The main contributions of this work are: (1) the development of a DPM-based generative model capable of producing realistic calls for multiple anuran species; (2) a robust selection strategy for ensuring the quality of synthetic data; and (3) empirical evidence demonstrating that augmenting datasets with our DPM-generated calls significantly improves classifier performance, particularly for underrepresented species, and outperforms a comparable GAN-based augmentation method. This approach offers a promising pathway to address data scarcity and imbalance, thereby enhancing the capabilities of automated biodiversity monitoring systems for anuran conservation. 2 Materials and methods This study aims to generate novel audio within the domain of bioacoustics. Fig. 1 outlines the methodological workflow employed. The process starts with a dataset of anuran sounds (A). This data was preprocessed to extract the sound portions representing frog croaks (B). Subsequently, the processed data was used to train diffusion probabilistic models (C) capable of producing new audio samples for individual anuran species. The trained models were then employed to generate sets of novel audio samples. For an initial detailed evaluation of generation quality and realism, samples from one species ( Boana faber ) were systematically selected using the Fréchet Inception Distance (FID) metric and underwent human perceptual evaluation (D). Building on this, the generative approach was extended to nine species to create synthetic data for evaluating its impact on a multi-species classification task through data augmentation, as detailed in Section 2.6 . 2.1 Data The anuran sound data utilized in this study originates from acoustic recordings deposited in the Fonoteca Neotropical Jacques Vielliard (FNJV). 
Specific recordings and their manual annotations were selected and curated for this study based on the annotated sound dataset provided by Arcila Pérez et al. (2024). Comprehensive metadata for these original recordings is publicly accessible, and for complete traceability, our accompanying code repository https://fagomez.github.io/anura_dpm/ includes manifests linking our processed audio samples directly to their corresponding FNJV identification numbers. The sound database contains audio recordings of nine anuran species, each identified by a unique code (see Table 1 ). Each audio file (.wav format) is accompanied by a text file containing manual annotations. These annotations provide information about the initial and final times (in seconds) of each frog croak, and the corresponding species label. When an annotation is registered, a quality assessment is also added (C: Clear, M: Medium, F: Far); this study only considered samples labeled as Clear. The nine frog species, found across South America’s forests, savannas and wetlands, each bring their own unique voice to the soundscape: Adenomera marmorata ( AmphibiaWeb, 2025a ), a tiny leaf-litter dweller, peeps rapidly from under the forest floor; Boana albomarginata ( AmphibiaWeb, 2025b ), B. faber ( AmphibiaWeb, 2025c ), B. leptolineata ( AmphibiaWeb, 2025d ) and B. raniceps ( AmphibiaWeb, 2025e ), ranging from tree-clinging species to burrow-calling giants, offer trills, deep wraahs, nasal notes or pulsed whistles above ponds and streams; Dendropsophus cruzi ( AmphibiaWeb, 2025f ) and D. elegans ( AmphibiaWeb, 2025g ), small canopy frogs, buzz and trill from floating vegetation and bromeliads; Physalaemus cuvieri ( AmphibiaWeb, 2025h ) builds a floating foam nest and emits a gravelly “qua-kek”; and Scinax fuscomarginatus ( AmphibiaWeb, 2025i ), a slender marshland species, delivers a steady pulse of notes from the reeds. Together, their varied calls represent a soundscape that reflects each species’ habitat preferences and breeding behaviors ( Wells, 2007 ). As shown in Fig. 2 , Boana faber ( Integrated Taxonomic Information System (ITIS), 2025 ) (BOAFAB) is the most represented species, with approximately 2.5 h of annotated recordings. For the initial development and detailed demonstration of our audio synthesis methodology, we focus on generating new audios specifically for BOAFAB. After filtering the BOAFAB recordings by quality and selecting only those with a “Clear” (C) assessment, we obtained approximately 24 min of usable annotated data. 2.2 Data preprocessing To construct the final training dataset we use equal-sized audios (one second) at a sampling rate of 16 kHz. Since the original recordings can have different lengths, we apply the following preprocessing: • If the audio length is below (or equal to) one second : The audio clip was extended to one second using the original recording where the annotation exists. Any remaining space to reach the one-second length is filled with adjacent, possibly asymmetric tails of the original audio. This process also serves as data augmentation, allowing us to obtain multiple training examples from a single annotated audio; for example, 100 new audios can be obtained from a single annotated instance. In total, 11,700 audio clips of BOAFAB were produced this way. • If the audio length is above one second : apply a rolling window to extract one-second clips from the audio. The window is positioned to ensure that it contains some part of the annotated clip. The window starts 0.5 s before the annotation’s initial time and moves forward until its end position exceeds the annotation’s final time. From this, 2960 audio clips of BOAFAB were obtained. This preprocessing resulted in a training dataset with 14,657 equal-sized audio clips of BOAFAB at a sampling rate of 16 kHz. A sketch of this clip-construction procedure is given below.
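The following Python sketch illustrates the two clip-construction cases described above (padding short annotations and sliding a one-second window over long ones). It is an illustrative reconstruction rather than the authors' released code; the function name extract_clips, the use of numpy arrays, and the 0.1 s hop of the rolling window are our own assumptions.

import numpy as np

SR = 16_000          # sampling rate (Hz)
CLIP_LEN = SR        # one-second clips at 16 kHz

def extract_clips(audio, start_s, end_s, hop_s=0.1):
    # audio: full recording as a 1-D numpy array at 16 kHz
    # start_s, end_s: annotated croak boundaries in seconds
    start, end = int(start_s * SR), int(end_s * SR)
    clips = []
    if end - start <= CLIP_LEN:
        # Case 1: short annotation -> pad with surrounding audio; calling this
        # repeatedly yields multiple randomly padded variants of the same croak
        pad = CLIP_LEN - (end - start)
        left = max(0, start - np.random.randint(0, pad + 1))
        clip = audio[left:left + CLIP_LEN]
        if len(clip) == CLIP_LEN:
            clips.append(clip)
    else:
        # Case 2: long annotation -> rolling one-second window that starts
        # 0.5 s before the onset and stops once its end passes the annotation end
        pos = max(0, start - SR // 2)
        while pos + CLIP_LEN <= end and pos + CLIP_LEN <= len(audio):
            clips.append(audio[pos:pos + CLIP_LEN])
            pos += int(hop_s * SR)
    return clips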
2.3 Training and sampling with diffusion models A generative model is a statistical construct that generates new data points by learning the underlying distribution of a dataset. Unlike discriminative models, which classify data, generative models capture how data is generated and can produce new samples that resemble the training data. Common applications include image and text generation, anomaly detection, and data imputation, making generative models powerful tools in machine learning and artificial intelligence. Notable generative models include GANs ( Goodfellow et al., 2014 ), Normalizing Flows ( Rezende and Mohamed, 2015 ), Variational Autoencoders (VAEs) ( Kingma and Welling, 2013 ), and, more recently, Diffusion Models. Of these, diffusion models hold particular promise for generating complex audio signals ( Chen et al., 2024 ). Therefore, we adopted them as the primary generative framework for producing novel frog croaks. In the following two subsections, we furnish a detailed examination of diffusion probabilistic models and provide specific information about the strategy for implementing these models within our context. 2.3.1 Diffusion probabilistic models: General framework Diffusion models are a class of generative models that have proven to be highly successful in generating images. Their potential was initially explored in Sohl-Dickstein et al. (2015) and they gained wider popularity in Ho et al. (2020) due to their ability to create impressive images from the CIFAR10 dataset. DPMs operate by gradually introducing noise into the training data over a series of incremental stages ( Forward Process ). The model then learns how to invert this noise-injection process ( Reverse Process ), reconstructing the data step-by-step ( Learning the Reverse Process ). This iterative denoising allows diffusion models to capture complex data distributions, making them highly effective for generating new realistic samples from random noise ( Sampling New Data ). Fig. 3 illustrates the general DPM framework. In the following, the main technical features of the DPM mechanism are described (for further details, refer to Luo (2022) , for instance). Let $q(x_0)$ represent the true distribution of data examples, where $x_0 \in \mathbb{R}^L$ and $L$ is the data dimensionality. A diffusion model operates on a sequence of latent variables $\{x_t\}_{t=0}^{T} \subset \mathbb{R}^L$, indexed by diffusion timestep $t$, and involves two key processes: a fixed forward process that gradually adds noise to data, and a learned reverse process that generates data by removing noise. Forward Process (Adding Noise): This is a Markov chain that gradually transforms a data sample $x_0$ into a sequence of intermediate latent variables $x_1, x_2, \ldots, x_T$. The transition governing the dynamics at each step is defined by $q(x_t \mid x_{t-1}) = \mathrm{Normal}\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)$, where $\{\beta_t\}_{t=1}^{T} \subset [0,1]$ is a predefined variance schedule with small positive values, which controls the amount of noise added at step $t$. The joint probability of the latent variables given the start $x_0$ is $q(x_1, \ldots, x_T \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1})$.
The latent variable $x_t$ at any arbitrary timestep $t$ can be directly sampled from the initial data $x_0$, without needing to compute intermediate steps, as follows: $q(x_t \mid x_0) = \mathrm{Normal}\left(x_t;\ \mu_t := \sqrt{\alpha_t}\, x_0,\ \Sigma_t := (1-\alpha_t) I\right)$, where $\alpha_t := \prod_{s=1}^{t} (1-\beta_s)$. This allows direct sampling of $x_t$ using (1) $x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1-\alpha_t}\, \epsilon_t$, where $\epsilon_t \sim \mathrm{Normal}(0, I)$. Note that as $\alpha_t$ approaches zero, $x_t$ approaches a standard Gaussian distribution, effectively destroying the original data structure. Reverse Process (Removing Noise): The core idea of the diffusion model is to learn the reverse of the forward process. This involves starting from pure noise $x_T \sim \mathrm{Normal}(0, I)$ and iteratively removing noise to generate a sample $x_0$ that resembles the original data distribution $q(x_0)$. This learned reverse process is also modeled as a Markov chain, parameterized by learnable parameters $\theta$: $p_\theta(x_0, \ldots, x_{T-1} \mid x_T) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)$. Each reverse transition $p_\theta(x_{t-1} \mid x_t)$ is defined as (2) $p_\theta(x_{t-1} \mid x_t) := \mathrm{Normal}\left(x_{t-1};\ \mu_\theta(x_t),\ \Sigma_\theta(x_t)\right)$. Approximating the distribution in (2) with $q(x_{t-1} \mid x_t)$ is intractable, since we cannot compute the marginal distribution $q(x_{t-1})$ (note that $q(x_t)$ depends on the unknown data distribution $q(x_0)$). However, including information about $x_0$ (the true data point), we can approximate (2) by the tractable auxiliary distribution $q(x_{t-1} \mid x_t, x_0)$ (see Luo, 2022 ): (3) $q(x_{t-1} \mid x_t, x_0) = \mathrm{Normal}\left(x_{t-1};\ \hat{\mu}(x_t, x_0),\ \hat{\Sigma}(x_t, x_0)\right)$, with $\hat{\mu}(x_t, x_0) = \frac{\sqrt{1-\beta_t}\,(1-\alpha_{t-1})}{1-\alpha_t}\, x_t + \frac{\sqrt{\alpha_{t-1}}\,\beta_t}{1-\alpha_t}\, x_0$ and $\hat{\Sigma}(x_t, x_0) = \sigma_t I = \frac{1-\alpha_{t-1}}{1-\alpha_t}\,\beta_t\, I$. Learning the Reverse Process (Training): The goal of training is to determine the parameters $\theta$ such that $p_\theta(x_{t-1} \mid x_t)$ accurately approximates $q(x_{t-1} \mid x_t, x_0)$. To achieve this, a neural network is trained to minimize the KL-divergence between these two distributions: $L_t := D_{KL}\left(q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t)\right) = \mathbb{E}_{x_0}\left[\frac{1}{2\,\|\Sigma_\theta(x_t)\|_2^2}\,\|\hat{\mu}(x_t, x_0) - \mu_\theta(x_t)\|_2^2\right]$. From Eq. (1) we have that $x_0 = \frac{x_t - \sqrt{1-\alpha_t}\,\epsilon_t}{\sqrt{\alpha_t}}$. Then, assuming $\epsilon_t$ is given, we obtain the following equivalent optimization problem: $L_t = \mathbb{E}\left[\frac{\beta_t^2}{2\,(1-\beta_t)\,(1-\alpha_t)\,\|\Sigma_\theta(x_t)\|_2^2}\,\|\epsilon_t - \epsilon_\theta(x_t)\|_2^2\right]$, where $\epsilon_\theta(x_t)$ is a neural network learning the noise $\epsilon_t$, which defines $x_t$ from $x_0$ in (1). Finally, following the simplifications made in Ho et al. (2020) , we seek a single neural network $\epsilon_\theta$ that minimizes the cost function (4) $L := \mathbb{E}_{t \sim \mathrm{Unif}(1,T)}\, \mathbb{E}\left[\|\epsilon_t - \epsilon_\theta(\sqrt{\alpha_t}\, x_0 + \sqrt{1-\alpha_t}\,\epsilon_t,\ t)\|_2^2\right]$. The whole training procedure is summarized in Algorithm 1. Sampling New Data ( Generation, see Fig. 3 ): Once the noise prediction network $\epsilon_\theta$ is trained, new data samples can be generated by simulating the learned reverse process $p_\theta(x_{t-1} \mid x_t)$. Generation starts by sampling pure noise $x_T$ from the prior distribution $\mathrm{Normal}(0, I)$. Then, for each timestep $t$ from $T$ down to $1$, we iteratively sample $x_{t-1}$ from $p_\theta(x_{t-1} \mid x_t)$. Using the trained network $\epsilon_\theta$, the mean $\mu_\theta(x_t, t)$ of the reverse transition $p_\theta(x_{t-1} \mid x_t)$ can be calculated (derived from matching $p_\theta$ to $q(\cdot \mid x_t, x_0)$ and using (1)): $\mu_\theta(x_t, t) = \frac{1}{\sqrt{1-\beta_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\,\epsilon_\theta(x_t, t)\right)$. The sampling step then becomes $x_{t-1} = \mu_\theta(x_t, t) + \sigma_t z = \frac{1}{\sqrt{1-\beta_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\,\epsilon_\theta(x_t, t)\right) + \sigma_t z$, where $z \sim \mathrm{Normal}(0, I)$ for $t > 1$ and $z = 0$ for $t = 1$. The variance $\sigma_t^2$ is typically set to $\hat{\beta}_t = \frac{1-\alpha_{t-1}}{1-\alpha_t}\,\beta_t$ or simply $\beta_t$. After iterating this process down to $t = 1$, the resulting sample $x_0$ is a generated data point approximating a sample from the original distribution $q(x_0)$. The generation process is outlined in Algorithm 2.
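As a concrete illustration of Eq. (4) and of the sampling recursion above, the following PyTorch-style sketch shows a minimal training step (Algorithm 1) and sampling loop (Algorithm 2) for an unconditional waveform DPM. It is a simplified reconstruction for readability, not the authors' implementation; eps_model is assumed to be any noise-prediction network (for example, the bidirectional dilated convolution model of Kong et al. (2020)) taking a noisy waveform and a timestep index, and the linear schedule parameters follow Section 2.3.2.

import torch

def make_schedule(T=200, beta_min=1e-4, beta_max=0.02):
    # Linear variance schedule; alpha_bars[t] = prod_{s<=t} (1 - beta_s)
    betas = torch.linspace(beta_min, beta_max, T)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    return betas, alpha_bars

def training_step(eps_model, x0, alpha_bars, optimizer):
    # One stochastic gradient step of the simplified loss in Eq. (4)
    T = alpha_bars.shape[0]
    t = torch.randint(0, T, (x0.shape[0],))          # uniform timestep per sample
    a = alpha_bars[t].view(-1, 1)                    # broadcast over waveform length
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps       # forward sampling, Eq. (1)
    loss = ((eps - eps_model(x_t, t)) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(eps_model, betas, alpha_bars, n_samples=1, length=16000):
    # Algorithm 2 (sketch): start from pure noise and iteratively denoise
    x = torch.randn(n_samples, length)
    T = betas.shape[0]
    for t in reversed(range(T)):
        beta, a_bar = betas[t], alpha_bars[t]
        t_idx = torch.full((n_samples,), t, dtype=torch.long)
        mean = (x - beta / (1 - a_bar).sqrt() * eps_model(x, t_idx)) / (1 - beta).sqrt()
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + beta.sqrt() * z                   # sigma_t^2 = beta_t variant
    return x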
2.3.2 Diffusion model training configuration We employed an unconditional diffusion model based on Kong et al. (2020) , training a neural network $\epsilon_\theta$ (bidirectional dilated convolutions) by minimizing the loss in Eq. (4). Distinct training configurations were used for the initial Boana faber (BOAFAB) study and for the nine species models in the subsequent data augmentation experiments (Section 2.6 ). For the detailed BOAFAB generation, we set $T = 200$ diffusion steps with a linear noise schedule ($\beta_t \in [1 \times 10^{-4}, 0.02]$). Training utilized an NVIDIA A100 GPU (40 GB vRAM, ColabPro+), the Adam optimizer, a batch size of 8, and a learning rate of $1 \times 10^{-3}$, for 75 epochs (approx. 16 min/epoch, 20 h total). For the nine species models used in augmentation, parameters were adapted to accommodate the limited sample size per species. These models used $T = 200$ diffusion steps with a five-degree polynomial noise schedule for $\beta_t \in [1 \times 10^{-4}, 0.02]$. This type of schedule introduces noise very gradually in the initial steps of the forward process, which aids in capturing finer features from the original audio signals, a beneficial characteristic when working with limited training data. Training utilized an NVIDIA RTX-4090 GPU (24 GB) and was performed using the Adam optimizer with a batch size of 4 and a learning rate of $1 \times 10^{-3}$ for 50 epochs for each species-specific model (approx. 5 min/epoch, 4.2 h total per species). These parameters were chosen after preliminary experimentation to balance generation quality with feasible training times for multiple models. 2.4 Systematic selection for generated audios Following the generation of a set of audio samples $X_{gen}$, a systematic selection process was employed to identify those samples that most closely resemble real bioacoustic BOAFAB croaks. This selected subset of samples then underwent human evaluation. Our goal is to select a subset $S \subseteq X_{gen}$ of generated audio samples in order to minimize the Fréchet Inception Distance (FID) between $S$ and the training audios $X_{train}$. The FID quantifies the similarity between two distributions, with lower FID values signifying greater similarity. We achieve this selection by employing a combination of K-means clustering based on audio embeddings and FID evaluation. 2.4.1 Audio embedding with ResNeXt To generate embeddings for our audio samples, we utilized a ResNeXt classifier architecture ( Xie et al., 2017 ). ResNeXt enhances standard residual blocks by employing a “split-transform-merge” strategy, where each block is divided into several parallel, lower-dimensional convolutional paths (cardinality) whose outputs are then aggregated. This design allows the model to capture a wider variety of features more efficiently without a significant increase in computational cost, while still benefiting from residual connections to ease training of deep networks ( Xie et al., 2017 ). While ResNeXt models are renowned for image classification, they can be effectively adapted for audio tasks.
For our application, each one-second audio clip was first converted into a 32 × 32 pixel mel-spectrogram, representing time versus frequency. This mel-spectrogram was then treated as a single-channel (grayscale) image input to the ResNeXt model. The primary adaptation from the original vision architecture involved modifying the first convolutional layer to accept single-channel inputs instead of the standard three channels for RGB images. All other components of the ResNeXt architecture, including the grouped convolutions, bottleneck blocks, residual connections, network depth, and width, remained consistent with the original design proposed by Xie et al. (2017) . This allowed the 2D convolutional architecture to learn from the time–frequency patterns present in the audio directly. Using this adapted ResNeXt architecture, we trained a binary classifier. The training dataset comprised BOAFAB audio recordings (positive class) and recordings from other anuran species, collectively labeled as ‘OTHER’ (negative class). The audio clips for both classes were constructed as described in Section 2.1 . The model was trained following the configuration recommended in an open repository that has successfully applied ResNeXt to audio classification tasks ( Xu and Tuguldur, 2017 ). This classifier achieved an accuracy of 98.54% on the training set and 99.93% on the test set. Leveraging the strong discriminative power of this trained model, we extracted a 1024-dimensional embedding vector, denoted as $F(x)$ for an input audio $x$, from an intermediate layer to represent each audio sample. 2.4.2 Fréchet inception distance Then, we use the embedding to compute the Fréchet Inception Distance (FID) between a sample of training data ($X_{train}$) and a subset of generated audio samples ($S$). FID computes the Wasserstein-2 distance between Gaussians fitted to $F(X_{train})$ and $F(S)$ as $\mathrm{FID}[S, X_{train}] = \|\mu_g - \mu_t\|^2 + \mathrm{Tr}\left(\Sigma_t + \Sigma_g - 2\,(\Sigma_t \Sigma_g)^{1/2}\right)$, where $\mu_t, \Sigma_t$ and $\mu_g, \Sigma_g$ are the means and covariances of $F(X_{train})$ and $F(S)$, respectively. We chose FID (audio adaptation Kilgour et al., 2019 ) because it evaluates the distributional similarity (mean and covariance of embeddings) between generated and real sample sets, providing a holistic measure of generative quality, realism, and diversity, often correlating with human perception. This contrasts with metrics like cosine similarity, which assess pairwise vector relationships, making FID more established and appropriate for evaluating overall generative quality in our selection task. 2.4.3 Systematic selection We use K-means clustering to partition the generated samples into distinct groups based on their embedding representation. The K-means algorithm, denoted as $KM(X; k)$ in this paper, partitions a set of elements $X$ into $k$ subsets: $KM(X; k) = \{C_i\}_{i=1}^{k}$, where $X = \bigcup_{i=1}^{k} C_i$. We then identify a subset $S = \bigcup_{i \in I} C_i$, where $C_i \in KM(X_{gen}; k)$ for some index set $I \subset \{1, \ldots, k\}$, such that $\mathrm{FID}[S, X_{train}]$ is minimized. The formal process is described in Algorithm 3.
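The selection procedure of Sections 2.4.2–2.4.3 (embed, cluster, and keep the union of clusters with minimum FID) can be sketched in Python as follows. This is an illustrative reimplementation under our own assumptions: the embeddings gen_feats and train_feats (arrays of shape (n, 1024)) are assumed to come from the adapted ResNeXt extractor, and the exhaustive search over cluster subsets is only practical for the small k (here k = 5) used in the paper.

from itertools import combinations
import numpy as np
from scipy import linalg
from sklearn.cluster import KMeans

def fid(feats_a, feats_b):
    # Frechet distance between Gaussians fitted to two embedding sets
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))

def select_clusters(gen_feats, train_feats, k=5):
    # Algorithm 3 (sketch): cluster generated embeddings, then keep the union
    # of clusters that minimizes FID against the training embeddings
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(gen_feats)
    best_subset, best_fid = None, np.inf
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            mask = np.isin(labels, subset)
            if mask.sum() < 2:
                continue
            score = fid(gen_feats[mask], train_feats)
            if score < best_fid:
                best_subset, best_fid = subset, score
    return best_subset, best_fid

# Example usage:
# subset, score = select_clusters(gen_feats, train_feats, k=5)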
2.5 Human perceptual evaluation (A/B test) Evaluating generative models in bioacoustics benefits from a multifaceted approach that blends objective metrics with human perception. While objective measures like FID quantify signal properties, they may not fully capture the intricate details and perceived naturalness of real sounds. As noted in Cooper et al. (2024) , relying solely on objective metrics can offer an incomplete assessment. Furthermore, limitations of other approaches for gauging naturalness, such as Mean Opinion Score tests, have been highlighted, with paired comparison or A/B tests suggested as potentially more sensitive and reliable methods ( Shirali-Shahreza and Penn, 2018 ). Therefore, to incorporate subjective assessment, we conducted an A/B human evaluation of selected generated audio samples (derived as described in Section 2 ) against real Boana faber (BOAFAB) recordings. This approach aligns with trends in generative model evaluation, emphasizing a holistic assessment. For the evaluation, participants were presented with sets of audio clips and asked to distinguish between real and generated samples. The experiment involved 60 participants. These participants were primarily undergraduate and postgraduate students from technical disciplines (e.g., mathematics, computer science, engineering) and were considered non-experts regarding specific anuran bioacoustics or BOAFAB calls. To mitigate this lack of specific expertise and allow for familiarization, the evaluation application provided 20 real BOAFAB reference croaks that participants could listen to before and during the task. Each participant was exposed to ten one-second audio clips in a given trial. The clips within each set of ten were either all real BOAFAB recordings (randomly selected from $X_{train}$) or all generated BOAFAB samples (randomly selected from $X_{gen}$). After listening to each clip, participants were asked to indicate whether they believed it was real or generated. The evaluation application was developed using Streamlit and was initially made available at https://apptesis.streamlit.app/ . We compared the proportion of correct identifications between the real and generated audio groups for statistical analysis. 2.6 Classification performance with data augmentation To quantitatively evaluate the practical utility of our proposed diffusion-based generative model for data augmentation, particularly in scenarios characterized by significant class imbalance resulting from data scarcity, we assessed the impact of augmenting limited datasets with synthetically generated anuran calls on the performance of a multi-species classifier. We compare our diffusion approach against a baseline (no augmentation) and augmentation using a WGAN technique proposed in Park et al. (2020) , previously used in a similar augmentation task on anuran sounds. 2.6.1 Classifier description For the multi-species classifier, 1-s audio clips resampled to 16 kHz were represented by the mean of 13 Mel Frequency Cepstral Coefficients and used as input to a 100-tree Random Forest classifier ( Barchiesi et al., 2015 ). We deliberately chose a Random Forest classifier for its computational efficiency and its ability to provide a clear, direct evaluation of the augmentation techniques, ensuring that performance improvements could be attributed primarily to the quality of the synthetic data rather than the complexity of the classifier’s architecture. These computationally lightweight models were trained on augmented/generated training samples and evaluated on a held-out validation set using macro-, micro-, and weighted F1-scores as well as confusion matrices. A minimal sketch of this classification pipeline is given below.
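The following sketch illustrates the lightweight classification pipeline described above (mean of 13 MFCCs per one-second clip, fed to a 100-tree Random Forest). It is a schematic reconstruction using librosa and scikit-learn rather than the authors' exact code; hyperparameters other than those stated in the text (13 MFCCs, 100 trees, 16 kHz) are assumptions.

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def mfcc_features(waveform, sr=16000, n_mfcc=13):
    # Mean of 13 MFCCs over the one-second clip -> a 13-dimensional feature vector
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_and_evaluate(train_clips, train_labels, val_clips, val_labels):
    # train_clips / val_clips: lists of 1-D numpy waveforms at 16 kHz
    X_train = np.stack([mfcc_features(c) for c in train_clips])
    X_val = np.stack([mfcc_features(c) for c in val_clips])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, train_labels)
    preds = clf.predict(X_val)
    return {
        "f1_macro": f1_score(val_labels, preds, average="macro"),
        "f1_micro": f1_score(val_labels, preds, average="micro"),
        "f1_weighted": f1_score(val_labels, preds, average="weighted"),
    }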
2.6.2 Dataset and experimental setup Following the same preprocessing pipeline employed for Boana faber (detailed in Section 2.2 ), we constructed a multi-species dataset. This dataset comprises one-second audio clips, sampled at 16 kHz, for nine distinct anuran species (labels $s_1$ to $s_9$ in Table 1 ). The audio data for each species was partitioned into training, validation, and test sets, as shown in Table 2 . We systematically introduced severe class imbalances to investigate the efficacy of data augmentation on multiple unbalanced classification problems. We considered all possible subsets of one, two, and three species from the nine. Then, for each selected subset of species, we artificially reduced their respective training samples to just 20 instances. This process yielded 129 distinct, strongly imbalanced training datasets. The classifier (detailed in Section 2.6.1 ) was trained on each of the 129 imbalanced datasets under the following three conditions: • Baseline (Imbalanced): Using the strongly imbalanced training set directly. • Diffusion Augmentation: Augmenting the 20 samples of each reduced species with synthetic calls generated by our diffusion model (trained solely on those 20 samples for the respective species), until the sample count matched that of the majority class in the original (pre-reduction) nine-species training set. • WGAN Augmentation: Performing the same augmentation as before, but using synthetic calls generated by the WGAN approach proposed in Park et al. (2020) (again, trained only on the 20 samples for the respective species). The performance of the classifiers under these three conditions was compared using the F1-weighted score. 3 Results This section presents the outcomes of our experiments, organized into three main parts. First, we evaluate the quality of anuran calls generated by our diffusion model, focusing on the Boana faber species as a representative example. Second, we detail the results of the human perceptual evaluation designed to assess the realism of these generated calls. Finally, and most critically, we present the findings from our multi-species classification experiments, demonstrating the impact of data augmentation using our synthetically generated calls on classifier performance, particularly under conditions of data imbalance. 3.1 Diffusion model generation results Following 75 training epochs, the diffusion model generated high-quality samples comparable to real BOAFAB recordings. Fig. 4 shows BOAFAB-like waveforms and spectrograms for authentic and generated sounds. The reverse process, Fig. 5 , effectively shows noise reduction and the emergence of the BOAFAB croak frequency resulting from the learned inverse diffusion process. Moreover, Fig. 6 shows that lower training loss for the network $\epsilon_\theta$ correlates with clearer generated BOAFAB audio. For the systematic selection of generated audios, we generated 1200 audios from the model ($X_{gen}$) and randomly selected the same number of audios from the training dataset ($X_{train}$). After applying Algorithm 3, we found that the optimal subset $S$ consists of three clusters from a k-means partition with five clusters ($k = 5$), resulting in a minimum FID of 71.9. Our analysis identifies clusters $C_0$, $C_1$, and $C_4$ as exhibiting the most realistic generated audios, forming the selected set $S$. Conversely, the excluded clusters $C_2$ and $C_3$ represent outliers, containing either white noise or unconvincing BOAFAB croaks. Specifically, cluster $C_3$ contained primarily pure white noise, while cluster $C_2$ consisted of highly realistic BOAFAB croaks that lacked the background noise characteristic of natural recordings.
To visualize the relationships within this data, we perform dimensionality reduction using Principal Component Analysis (PCA ( Tipping and Bishop, 1999 )). We reduce the 1024-dimensional embedding of the clustered data to a two-dimensional plane for plotting, which facilitates understanding the relationships within the data (see Fig. 7 , where blue points are samples of $X_{train}$ and the $S_i$ are the partition generated by $KM(X_{gen}; 5)$). While we perform dimensionality reduction using PCA for visualization purposes, this step is independent of the selection method, which relies on K-means clustering and FID scores. 3.1.1 A remark on clustering with K-means Given the known concerns about applying K-means to high-dimensional data (curse of dimensionality), we performed some comparative experiments. Specifically, we applied K-means clustering directly to the original 1024-dimensional embeddings and after reducing dimensionality using t-SNE ( van der Maaten and Hinton, 2008 ) and Isomap ( Tenenbaum et al., 2000 ). These experiments showed that applying K-means to the original high-dimensional embeddings yielded superior sample selections (i.e., lower FID scores) compared to using the reduced-dimension embeddings (see Table 3 ). This counterintuitive finding led to further investigation into the data structure. For this, we estimated the intrinsic dimensionality of the embeddings (precisely defined to be $n$ if the data lie closely to an $n$-dimensional manifold embedded in $\mathbb{R}^d$ with little information loss) using two methods, LPCA and TwoNN ( Bac et al., 2021 ), obtaining values of approximately 6.40 and 11.77, respectively. This relatively low intrinsic dimensionality confirms that the data, while embedded in a 1024-dimensional space, likely concentrates near a lower-dimensional manifold. This provides deeper insight into why direct clustering on the original embeddings was more effective for our specific selection task, as dimensionality reduction techniques, despite their general utility, risked discarding subtle information crucial for distinguishing sample quality in this context. 3.2 Human perceptual evaluation results To assess whether participants could reliably distinguish between real and generated BOAFAB audio samples, we analyzed the proportion of correct identifications for each group. A statistical t-test was employed for this comparison. The distributions of the counts of “real” identifications given real audios versus “real” identifications given generated audios are presented in Fig. 8 . The t-test yielded a statistic of $t = 0.166$ with a $p$-value of $p = 0.87$. This high $p$-value indicates no statistically significant difference in the proportion of correct identifications between the real and generated audio groups. This result suggests that, under the conditions of our A/B test, the non-expert listeners were unable to reliably distinguish between the real BOAFAB calls and those generated by our diffusion model.
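The comparison above reduces to a two-sample t-test on per-trial counts of "real" judgements, which can be reproduced with scipy as sketched below. The sketch reflects our reading of the protocol rather than the authors' released analysis code: each entry of real_counts and gen_counts is assumed to be the number of clips (out of ten) that a participant labelled as real in an all-real or all-generated trial, respectively.

import numpy as np
from scipy import stats

def ab_test(real_counts, gen_counts):
    # Two-sample t-test on the number of "real" judgements per trial
    real_counts = np.asarray(real_counts, dtype=float)
    gen_counts = np.asarray(gen_counts, dtype=float)
    t_stat, p_value = stats.ttest_ind(real_counts, gen_counts)
    return t_stat, p_value

# Example with made-up counts (10 clips per trial); a large p-value suggests
# listeners judged real and generated clips similarly:
# t, p = ab_test([7, 8, 6, 9, 7], [7, 8, 7, 6, 8])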
3.3 Classification performance with data augmentation results To provide a concrete example of the impact of data augmentation on the classification task, Fig. 9 displays the confusion matrices for a specific imbalanced scenario in which the training samples for species $s_3$, $s_6$, and $s_9$ were reduced to 20 instances each. In this case, augmenting the data for these species with samples generated by the WGAN method had a minimal effect on subsequent classifier performance. In contrast, augmentation with samples generated by our diffusion model shows a considerable positive impact, particularly improving the recognition of species $s_3$ and $s_9$, with their respective F1-weighted scores increasing by over 4.7% in this instance. The F1-weighted scores, aggregated from the 129 experimental configurations involving imbalanced datasets, are summarized in the violin plot in Fig. 10 . This figure illustrates that, on average, data augmentation using our diffusion model consistently leads to higher F1-weighted scores ($0.912 \pm 0.03$, mean $\pm$ 1 std) compared to both the baseline (imbalanced, no augmentation) condition ($0.90 \pm 0.03$) and augmentation using the WGAN approach ($0.90 \pm 0.03$). The distribution of scores for the diffusion augmentation is visibly shifted towards superior performance. Furthermore, the quality of audio samples generated by our diffusion model, even when trained on extremely few initial samples (20 instances, as described in Section 2.6.2 ), is illustrated in Fig. 11 . This figure shows representative waveforms and mel-spectrograms, demonstrating the fidelity of the generated audio. 4 Discussion This work introduces a novel strategy for synthetic anuran call generation using generative diffusion models, coupled with a systematic FID-based selection process to ensure perceptual quality. Our primary advancement lies not only in the generation of realistic audio but, more critically, in demonstrating the profound practical utility of these synthetic calls for data augmentation in challenging bioacoustic classification scenarios. The scarcity of large, high-quality datasets, particularly for rare species, is a well-recognized bottleneck in bioacoustic monitoring ( Cañas et al., 2023; Luccioni and Rolnick, 2023 ), a field vital for AI-driven conservation efforts ( Tuia et al., 2022 ). Traditional generative approaches for bioacoustics have often faced limitations due to fixed frequency structures or restricted audio variety ( Park et al., 2020; Yella et al., 2022; Kim et al., 2023; Herbst et al., 2024 ). Our diffusion-based method, by learning a flexible audio representation directly from data, overcomes these limitations, enabling the generation of diverse calls that capture time-varying frequencies and environmental context, essential characteristics of anuran vocalizations ( Xie et al., 2016b; Akbal et al., 2023; Goutte et al., 2013 ). The results presented in Section 2.6 compellingly demonstrate the efficacy of our diffusion model for data augmentation. Across 129 diverse, artificially imbalanced multi-species scenarios, augmenting training data with our DPM-generated calls consistently outperformed both a baseline (no augmentation) and augmentation using a WGAN approach ( Park et al., 2020 ) ( Fig. 10 ). This significant average improvement in F1-weighted scores underscores the ability of DPMs to generate synthetic data that is not merely perceptually plausible but also highly effective in enhancing the discriminative capabilities of machine learning classifiers. The confusion matrices for specific imbalanced cases ( Fig. 9 ) further highlight this superiority; while WGAN augmentation showed minimal impact, our diffusion-generated samples led to substantial improvements in recognizing underrepresented species. This suggests that our DPM captures more salient and useful acoustic features crucial for classifier training than the WGAN baseline.
A key methodological aspect contributing to this success, especially when dealing with species with extremely limited initial data (e.g., 20 samples), was our data duplication strategy prior to DPM training. By extensively duplicating these few samples (e.g., 650 times to yield 13,000 instances), we created a larger initial dataset. This approach is particularly well-suited to the DPM training paradigm. Unlike GANs or VAEs, which can suffer from mode collapse on such low-diversity duplicated data, DPMs learn to reverse a noise corruption process applied independently to each sample instance across many timesteps ($T = 200$). Each duplicate thus traverses a unique path through the noising and denoising process, effectively providing distinct training trajectories. This inherent property allows the DPM to robustly learn underlying characteristics even from very few unique examples, a critical advantage for ecological datasets. The quality of calls generated under these low-data conditions ( Fig. 11 ) further validates this approach. The systematic selection mechanism, employing sound embeddings and the Fréchet Inception Distance (FID) ( Gui et al., 2024; Heusel et al., 2017 ), a metric effectively transferred from music generation ( Gui et al., 2024 ), remains crucial for ensuring the quality of generated samples used for augmentation. While human perceptual evaluations (Section 3.2 , Fig. 8 ) provided supplementary validation of the generated sounds’ realism (showing near-indistinguishability from real calls), the primary evidence for the quality and utility of our generated data now firmly rests on the objective improvement demonstrated in the classification tasks. This alignment between perceptual realism and utility strengthens the overall validation of our approach. Our work also highlights the advantages of directly generating audio waveforms using architectures like the bidirectional dilated network ( Kong et al., 2020 ), over synthesizing spectrograms ( Van Den Oord et al., 2016 ), for achieving greater realism and flexibility. The computational efficiency of our approach is an important consideration for real-world applicability. While the initial training of the diffusion model is computationally intensive, requiring approximately 4 h per species-specific model (Section 2.3.2 ), this is a one-time, offline investment. As with many deep generative models ( Van Den Oord et al., 2016 ), once trained, our diffusion model can generate new audio samples efficiently. Specifically, generating each one-second audio sample takes only 1.2 s on an NVIDIA RTX-4090 GPU (24 GB), a speed that could be considered viable for many near real-time or batch generation tasks. Furthermore, the primary application demonstrated herein, data augmentation for classification (Section 2.6 ), can leverage this offline generation. The augmented dataset can then be used to train downstream classifiers that are themselves computationally inexpensive. For instance, a lightweight classifier based on Mel-Frequency Cepstral Coefficients (MFCCs) and a Random Forest, as employed in our study, is optimized for quick inference. This combination of efficient sample generation (post-training) and the potential use of a compact, fast classifier (enhanced by our augmented data) makes the overall system practical for deployment, even in settings with limited computational resources once the initial generative model training is complete. The scalability of our approach presents two aspects.
In terms of species expansion, the training cost scales linearly as we train a dedicated model per species, a tractable strategy for targeted conservation applications. Regarding dataset size, our method is fundamentally designed for data scarcity. Our data duplication strategy, which leverages the unique training properties of DPMs, proved highly effective for these low-data regimes. Conversely, for well-represented species with abundant data, a simple subsampling of the initial training set would be applied, ensuring the framework’s applicability across varied data availability contexts, although its primary contribution remains in addressing the challenge of scarce data. Despite these advancements, limitations persist. The specific FID settings and the bidirectional dilated network architecture’s effectiveness might show some variability across a wider range of anuran species and acoustic environments than those tested here ( Hamer et al., 2023 ). While our data duplication strategy proved effective for low-data regimes in DPMs, the computational cost of training diffusion models remains a consideration for very large-scale applications involving hundreds of species ( Chen et al., 2024 ). Future research will focus on expanding the dataset diversity to further test generalizability ( Cañas et al., 2023 ). We will also explore alternative selection mechanisms and perceptual quality metrics to complement FID ( Jayasumana et al., 2024 ), and investigate integrating our method with other data augmentation techniques ( Chen et al., 2024 ). Applying these augmented datasets to a broader range of AI tasks (e.g., call-type identification, population density estimation ( Kay et al., 2022 )) will be crucial for fully assessing the practical benefits for bioacoustic monitoring. The ability to generate high-quality, useful synthetic data as demonstrated here holds significant potential for enhancing AI models used in species identification ( Van Horn et al., 2022 ) and habitat monitoring ( Beery et al., 2018, 2022 ), ultimately enabling more proactive and evidence-based conservation decisions, such as protecting critical habitats ( Ayoola et al., 2024 ), forecasting ecological trends ( Levy and Shahar, 2024 ), and ensuring compliance with environmental regulations ( Sabia et al., 2020 ). While this study validates DPMs against a strong GAN-based baseline, we acknowledge that other generative frameworks, such as VAEs, also offer potential solutions. Recent work ( Rajasekar et al., 2025 ) demonstrates how generative autoencoders can be effectively used in a preprocessing pipeline to enhance the quality of existing noisy data. While their approach focuses on improving the quality of the initial dataset, our method centers on augmenting its quantity. A promising future research direction lies in exploring the synergy between these two strategies: using an autoencoder-based model to denoise and enhance the initial few-shot samples before employing our DPM framework for high-fidelity data generation. Furthermore, enhancing the interpretability of the generative process is a critical next step. While discriminative models, such as the attention-based network proposed in Kumarappan et al. (2024) , can highlight which input features drive a prediction, interpreting a generative model involves understanding what constitutes a realistic output. 
Future work could focus on adapting visualization techniques, such as analyzing the model’s latent space or exploring attention-like mechanisms within the denoising process, to provide clearer insights into the acoustic features learned and prioritized by our DPM. This would not only build trust in the synthetic data but also potentially reveal key bioacoustic markers. While our current study successfully validates the utility of DPM-based augmentation, an important next step will be to benchmark our generated data with more advanced, state-of-the-art classifiers, such as deep convolutional neural networks (CNNs) or audio-specific models like Wav2Vec ( Baevski et al., 2020 ), to assess its impact in a high-performance setting. An innovative avenue for future research involves integrating our framework into a dynamic, adaptive system, inspired by the reinforcement learning (RL) approach used in Sivamayilvelan et al. (2024) for process optimization. In such a framework, an RL agent could be trained to learn an optimal data generation policy, where the reward is derived from the performance gains of the downstream classifier, creating a self-improving generation-selection-feedback loop. While this represents a significant extension, it establishes a clear path toward fully automated and optimized bioacoustic data augmentation. To bridge the gap between our findings and practical field deployment, future work will focus on creating a deployable pipeline. A potential roadmap includes: (1) integrating our DPM-based data augmentation into an end-to-end classification system that can run on edge computing devices used in remote monitoring; (2) testing the system’s robustness against a wider range of real-world environmental noise and acoustic conditions not present in the curated training data; and (3) collaborating with conservation organizations to validate the performance uplift in active bioacoustic monitoring programs. 5 Conclusions This study successfully demonstrated the significant potential of diffusion probabilistic models (DPMs) for generating high-quality, realistic anuran calls and their practical utility in data augmentation for improving automated species classification, particularly under conditions of severe class imbalance. We show the ability of DPMs to synthesize diverse audio samples for nine distinct anuran species, capturing essential acoustic characteristics that proved valuable for downstream tasks. Our key findings indicate that: • Diffusion models can effectively generate synthetic anuran calls that are perceptually realistic, as supported by human evaluation where listeners often could not reliably distinguish them from real recordings. • Augmenting imbalanced training datasets with DPM-generated calls leads to a significant improvement in multi-species classification performance (F1-weighted score) compared to both a baseline (no augmentation) and augmentation using an established WGAN technique. • A simple data duplication strategy, when combined with the inherent properties of the DPM training process, enables effective model training and high-quality sample generation even from extremely limited datasets, a crucial advantage for rare species. These results highlight DPMs as a powerful and advanced tool for addressing data scarcity challenges in computational bioacoustics. The ability to generate useful synthetic data directly addresses a critical bottleneck in developing robust AI-driven biodiversity monitoring systems. 
Future work should focus on extending this approach to a wider array of species and acoustic environments, further refining selection metrics, and exploring the impact of these augmented datasets on other ecological applications. Ultimately, this research contributes towards more effective and data-rich tools for anuran conservation efforts. CRediT authorship contribution statement José Sebastián Ñungo Manrique: Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Francisco Gómez: Writing – review & editing, Writing – original draft, Supervision, Methodology, Investigation, Conceptualization. Freddy Hernández-Romero: Writing – review & editing, Writing – original draft, Validation, Supervision, Methodology, Investigation, Formal analysis, Conceptualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments We thank Juan Sebastian Ulloa and Juan Cañas from the Humboldt Institute for providing the annotated data and for the insightful discussions that contributed to this paper. Francisco Gómez thanks the project Ampliación del uso de la mecánica cuántica desde el punto de vista experimental y su relación con la teoría generando desarrollos en tecnologías cuánticas útiles para metrología y computación cuántica a nivel Nacional with HERMES code 56573.
REFERENCES:
1. ABUMOSTAFA Y (2012)
2. AKBAL E (2023)
3. AMPHIBIAWEB (2025)
4. AMPHIBIAWEB (2025)
5. AMPHIBIAWEB (2025)
6. AMPHIBIAWEB (2025)
7. AMPHIBIAWEB (2025)
8. AMPHIBIAWEB (2025)
9. AMPHIBIAWEB (2025)
10. AMPHIBIAWEB (2025)
11. AMPHIBIAWEB (2025)
12. ARCILAPEREZ L (2024)
13. AYOOLA V (2024)
14. BAC J (2021)
15. BAEVSKI A (2020)
16. BARCHIESI D (2015)
17.
18.
19. BROWNING E (2017)
20. CANAS J (2023)
21. CHEN M (2024)
22.
23. COOPER E (2024)
24. CUI X (2015)
25. DEARAUJO C (2024)
26. DENA S (2020)
27. EMMRICH M (2020)
28. GAN H (2021)
29. GOODFELLOW I (2014)
30. GOUTTE S (2013)
31. GUI A (2024)
32. HABA D (2023)
33. HAMER J (2023)
34. HE H (2013)
35. HERBST C (2024)
36. HEUSEL M (2017)
37. HO J (2020)
38. HUANG C (2014)
39. INTEGRATEDTAXONOMICINFORMATIONSYSTEMITIS (2025)
40.
41. KAUR P (2021)
42. KAY J (2022)
43. KILGOUR K (2019)
44. KIM E (2023)
45. KINGMA D (2013)
46. KONG Z (2020)
47. KUMARAPPAN J (2024)
48. LEVY O (2024)
49.
50. LUEDTKE J (2023)
51. LUO C (2022)
52. MATHWIN R (2024)
53. PARK S (2020)
54. PIJANOWSKI B (2024)
55. PRINCE S (2023)
56. RAJASEKAR E (2025)
57. REZENDE D (2015)
58. SABIA R (2020)
59. SHIRALISHAHREZA S (2018)
60. SIVAMAYILVELAN K (2024)
61. SOHLDICKSTEIN J (2015)
62. STROUT J (2017)
63. SWAMINATHAN B (2024)
64. TENENBAUM J (2000)
65. TIPPING M (1999)
66. TUIA D (2022)
67. VANDENOORD A (2016)
68. VANDERMAATEN L (2008)
69. VANHORN G (2022)
70. VIDAL M (2024)
71. VILLON S (2022)
72. WELLS K (2007)
73. XIE S (2017)
74. XIE J (2016)
75. XIE J (2016)
76. XU Y (2017)
77. YELLA N (2022)
|
10.1016_j.asej.2025.103294.txt
|
TITLE: Evaluating the role of critical success factors of Total quality management (TQM) implementation through SmartPLS in industrialized building projects (IBS)
AUTHORS:
- Alawag, Aawag Mohsen
- Alaloul, Wesam Salah
- Mohamad, Hisham
- Liew, M.S.
- Awang, Mokhtar
- Baarimah, Abdullah O.
ABSTRACT:
This study develops a model for Total Quality Management (TQM) in Malaysian Industrialized Building System (IBS) projects, examining the impact of critical success factors (CSFs) on TQM application. Within the scope of a survey carried out among construction experts, 371 valid answers were processed, which is regarded as adequate for performing Structural Equation Modeling (SEM) analysis. Through a survey among construction experts in Malaysia and subsequent PLS-SEM analysis, six CSFs were identified as significantly influencing TQM implementation in IBS projects. These factors underscore the importance of strategic focus on key areas to enhance TQM practices within the construction industry. Collectively, these CSFs account for 70.7 % of the effectiveness in implementing TQM, indicating their substantial role in the success of TQM strategies. The outcomes of this research have significant implications for the decision-making process in the construction field, particularly in the context of Malaysian Industrialized Building System (IBS) projects.
BODY:
1 Introduction The concept of Total Quality Management (TQM) was initially established in the 1950s by W. Edwards Deming in Japan [1]. It has garnered substantial recognition for its ability to enhance quality implementation; yet, it is predominantly used in large corporations, with little use in small and medium-sized businesses [2–4]. It is widely acknowledged as a driving force for improving performance in the construction sector [5]. In the 1990s, the Malaysian government recognized TQM as a viable method for improving the efficiency and effectiveness of the public sector [6]. At that point, TQM had been used and experienced across all of Malaysia's economic domains [7]. TQM was first used in larger Malaysian organizations, particularly in the electrical and electronic industries, and subsequently spread to other industrial and service sectors [8]. The Malaysian government, through the Malaysian Administrative Management and Modernization Unit (MAMPU), officially acts as the secretariat of the Prime Minister's Quality Awards [9]. MAMPU has made a substantial contribution to the evaluation and improvement of TQM procedures in Malaysia [7,8]. In the building sector, achieving acceptable standards of quality has always been difficult [10,11]. Construction organizations encounter several obstacles and an intricate TQM implementation procedure [12]. TQM encompasses all individuals at all levels and stages and requires a comprehensive understanding of each individual's behaviour [13]. To attain consistent and ongoing enhancement of quality, it is essential to undergo administrative and cultural reform [8]. A construction organization using TQM would not only thrive but also be able to compete in the global market [14]. In addition, the construction sector has a reputation for lower quality when compared to other industries, such as manufacturing and services [11,15]. Moreover, construction operations now often face challenges from claims, modifications, cost overruns, conflicts, and failures [16,17]. Projects to improve quality have been launched by corporate enterprises worldwide [18]. Later on, the quality trend spread to other sectors of the economy, including government, health care, insurance, nonprofit organizations, and the financial and educational sectors. TQM frameworks based on quality training often include many guiding principles or essential elements, such as customer focus, top-level management leadership, workforce engagement, education, tools for continuous improvement, and coordination [19]. The construction industry faces significant challenges in achieving satisfactory levels of quality, with the implementation of TQM being particularly complex and fraught with difficulties [20,21]. The issue is further exacerbated by the industry's reputation for worse quality in comparison with other industries, such as services and manufacturing. Common challenges encountered in the construction sector are complaints, modifications, excess costs, conflicts, and setbacks [22]. Despite the potential benefits of TQM, its successful implementation requires full awareness of every action, involvement of every individual at every level and phase, as well as cultural and administrative changes to achieve persistent and continuous improvement of quality [23–25]. The challenge lies in navigating the intricate journey of TQM adoption in the construction industry to enhance overall quality and competitiveness.
The construction sector is a vital part of the Malaysian economy, contributing extensively to its growth and development. However, the industry faces various challenges, including environmental concerns and the need for green construction practices [26]. In response, the concept of TQM has garnered attention as a means to improve project performance and sustainability [27]. Sustainable residential offsite building projects, which promote efficiency and reduce environmental impact, represent a significant improvement in the Malaysian construction sector [28]. The success of such projects is dependent on various factors, often referred to as CSFs. This research explores the relationship between TQM deployment and the CSFs for sustainable residential offsite construction projects. In the construction sector, TQM entails the systematic integration of quality concepts and practices into project management and execution [29]. The concepts of TQM place a strong emphasis on customer satisfaction, ongoing enhancement, committed participation of employees, effective process control, and decisions based on data analysis. TQM has shown promise in improving project performance, reducing defects, and enhancing overall quality [30]. The Malaysian construction industry is undergoing a transformative shift towards the adoption of IBS technologies, particularly in the construction of sustainable residential projects [31]. IBS techniques, which encompass pre-fabrication and offsite construction methods, are seen as a viable means of improving construction efficiency, sustainability, and quality [32]. However, to optimize the outcomes of these IBS building projects, a comprehensive understanding of the relationship between TQM deployment and the CSFs specific to sustainable residential IBS building projects is imperative. The present literature reveals a significant gap in knowledge regarding how TQM practices affect the CSFs essential for the accomplishment of IBS projects in the construction industry of Malaysia. Thus, there is a pressing need to explore and model this intricate relationship for the advancement of the construction industry. The main purpose of this article is to examine and model the relationship between TQM deployment and the CSFs for sustainable residential building projects in the Malaysian construction sector. The construction sector, especially in Malaysia, faces a continuing struggle to manage quality concerns, most especially in the implementation of IBS projects [33]. Even though TQM has demonstrated its effectiveness, a good number of projects do not achieve the optimal results that their conditions would suggest. Most such projects lack a clear strategic direction and do not apply any specific, detailed model for TQM application. Many factors are recognized as critical to the success of TQM in construction activities. However, there is an inadequate understanding of the factors that are most likely to affect TQM results within the scope of Malaysian IBS projects. In addition, existing models describe the CSFs in detail, but their relationships and effects on TQM success are not sufficiently explained. While earlier studies have recognized different facets contributing to the implementation of TQM in construction, such studies tend to ignore the peculiarities of IBS projects, especially in the context of construction in Malaysia.
It is also important to note that TQM is applied in any country without considering the country’s diversities and therefore, a framework that meets the Malaysian requirement but has capabilities of being modified to different countries has to be designed. The fact that there is limited knowledge with regards to TQM and its application in the IBS projects in relation to the critical success factors and the absence of a coherent model that is suitable universally give reasons for more investigation. In line with that, the current research aims to address the issue in relation to the Malaysian IBS sector by determining the most important critical success factors influencing the success of TQM and its implementation in different international settings. In recent years, Partial Least Squares Structural Equation Modeling (PLS-SEM), utilized in SmartPLS software, has garnered acclaim for its adaptability and efficacy in managing intricate models, particularly in domains such as management of construction and industrialized building systems (IBS) [34] . In contrast to other SEM methodologies, such as CB-SEM (Covariance-Based SEM), SmartPLS is particularly advantageous for exploratory research aimed at theory building rather than theory validation [35] . This study aims to assess CSFs in the implementation of TQM and to establish complex relationships throughout the context of Industrialized Building Projects, where data frequently display non-normal distribution and multicollinearity issues. Although TQM has been thoroughly examined in both the building and manufacturing sectors, the distinct problems and success determinants related to TQM in IBPs are still inadequately investigated, especially in developing countries such as Malaysia [36] . The current study seeks to address a significant deficiency in comprehending the impact of TQM adoption on project performance within the IBP framework, a domain of growing importance as countries globally embrace more industrialized and sustainable building methodologies. Moreover, while Malaysia's construction industry offers a significant context for examining the effects of TQM, the results of this study aim to be generalizable and useful for wider applications. Numerous countries, both advanced and developing, are adopting TQM concepts to augment quality, minimize waste, and boost overall project efficiency [37–39] . Consequently, the findings of this research about the key success factors (CSFs) of TQM may guide IBPs globally, where analogous industry transitions are taking place. This study identifies CSFs and their impact on TQM outcomes in IBPs, therefore filling the gap in Malaysian TQM literature and enhancing worldwide understanding of TQM's significance in industrialized construction. This expands the study's ramifications, ensuring the results are meaningful and applicable in both Malaysian and non-Malaysian situations. This study is original in establishing a theoretical model suited for the prevailing IBS environments in Malaysia. Consequently, while much of the literature on TQM focuses on a generalized model applicable in several industries, our models address the unique IBS sector propositions existent in Malaysia. In this regard, the study is more contextual in relation to TQM implementation in the construction of IBS by pinpointing and accentuating CSFs of the construction environment. 
In doing so, it guarantees the practical relevance and usefulness of the model in the Malaysian building context, unlike other models that may not be attuned to the conditions or needs of IBS projects. This investigation is very significant in both professional and academic contexts since it examines critical components of the Malaysian construction sector. The construction sector is crucial for the country's economic progress, and it is presently facing significant hurdles such as environmental concerns and the demand for green construction practices. Implementing TQM has become a strategic approach to improve project performance and long-term viability in the construction industry. TQM, as a management philosophy emphasizing continuous improvement and holistic organizational involvement, has demonstrated promise in improving project performance, reducing defects, and enhancing overall quality. The study recognizes the evolving landscape of the Malaysian building sector, specifically the adoption of IBS techniques, and aims to fill a notable knowledge gap regarding how TQM practices influence the CSFs essential for the success of IBS building projects. The outcomes of this study are expected to contribute considerably to the advancement of the construction business by providing insights into optimizing the connection between TQM deployment and CSFs in the context of sustainable offsite building projects. Thus, this exploration is not only relevant for practitioners in the construction sector seeking to improve project outcomes but also adds valuable knowledge to the academic discourse, bridging a critical gap in understanding the complexities of this relationship within the Malaysian construction context. The study primarily aims to assess the impact of CSFs on the effective implementation of TQM in IBS projects in Malaysia. The study intends to investigate and model the correlation between TQM practices and CSFs via SmartPLS, with the primary objective of improving both the effectiveness and sustainability of IBS projects. In addressing this aim, the study fills the knowledge gap about the relationship between TQM and CSFs, offering scholarly perspectives and practical suggestions to enhance project results in the construction sector. This study is directed by three fundamental research questions to accomplish its objective: What is the effect of TQM implementation on the critical success factors necessary for the success of IBS projects in the Malaysian construction industry? Which critical success factors are most substantially impacted by TQM approaches in the context of sustainable residential IBS projects?
How can the link between TQM implementation and CSFs be properly modeled using SmartPLS to provide meaningful information for practitioners and policymakers? The study establishes a few particular objectives based on these questions. The primary objective is to identify and examine the existing TQM procedures used in IBS projects within the Malaysian construction industry. Secondly, it aims to identify the essential important success factors vital for the success of sustainable residential IBS projects. The study assesses the correlation between TQM deployment and these critical success factors, using SmartPLS for an in-depth analysis. Fourth, it aims to create a prediction model that associates TQM implementation with CSFs, providing strategic insights for enhancing project performance. The research seeks to provide practical ideas for practitioners to improve the successful integration of TQM in IBS initiatives. 2 Literature review This section reviews the key success factors of TQM, focusing on logistics, sustainability, safety, communication, cost efficiency, project timeliness, long-term achievement, and adaptability. It highlights how these elements are crucial for TQM's effectiveness in boosting organizational performance and sustainability. 2.1 TQM background The construction sector is crucial to the development of any country [40] . Altayeb and Alhasanat identify this business as vital for advancing the infrastructure and economy of emerging nations [41] . Nonetheless, it is a multifaceted, competitive, and high-risk enterprise owing to the constraints of conventional project delivery methods [42] . TQM is a strategic concept that a business adopts and implements continually, even while awaiting the initiation of a new project. The overall quality culture differs among companies and industries; nonetheless, its objectives remain consistent: cost savings, reputation improvement, and increasing shareholder value. Total quality goals are inherently dynamic, necessitating continuous updates [43] . In recent years, TQM has acquired significant popularity among construction firms globally. The implementation of TQM in construction sectors presents more challenges due to its distinct developmental processes compared to TQM in the manufacturing sector [44] . The manufacturing industry is defined by a distinct steady-state process, whereas the construction industry typically comprises dynamic processes and possesses unique characteristics, including workforce mobility, diversity in project types and formats, geographical dispersion, contractual relationships, frequent project prototyping, and minor waste that is challenging to identify and manage [45] . Furthermore, TQM promotes effectiveness and attains outcomes that satisfy all stakeholders of the firm. The primary goals of TQM are to accomplish satisfaction among clients, enhance cost-effectiveness, and operate defect-free, promoting an incessant quest of waste elimination. The consumer is pleased only if the product exhibits an exceedingly low failure rate (almost none or zero) and is competitively priced compared to alternatives from other suppliers. TQM obtains consumer fulfilment by emphasizing the enhancement of processes, stakeholder participation, collaboration, and training. TQM reflects a culture dedicated to unwavering customer satisfaction by perpetual enhancement and innovations across all company dimensions [46] . 
In an optimal culture, the customer encompasses not only the final consumer of the firm's goods or services but also a person or department that acts as a stakeholder inside the corporation [47]. 2.2 TQM in the construction sector TQM was initially implemented by the manufacturing sector and is now being used across several other sectors [48]. The implementation of TQM in the construction industry presents significant challenges due to the distinct nature of each project, variability in personnel, and the involvement of multiple stakeholders, all influenced by factors such as atmospheric conditions and official project standards [49]. Clients expect construction firms to enhance service quality, expedite construction timelines, and implement technological advancements [50]. Motivated by client expectations for quality, a rising number of enterprises in this sector are adopting TQM to enhance product quality and elevate customer satisfaction [51]. Nonetheless, firms within the construction sector have encountered challenges in executing TQM in their projects [29]. According to [52], the application of TQM in the construction sector is viable; nevertheless, managers must address technical and human learning challenges. The literature distinctly illustrates the convergence of “Technique” and “Human,” whereby technical and technological awareness is compared with human concerns. Nevertheless, the majority of managers lack the preparedness and motivation to execute this merger, compounded by unfavourable circumstances that make learning in the construction sector unfeasible. The implementation and maintenance of TQM is a significant challenge [53]. The TQM idea was first created in the manufacturing sector, known for its steady processes, but the construction business has dynamic processes that might vary based on the specific project. TQM's culture varies among companies and industries; nevertheless, its overarching objectives are consistent: eliminating waste, reducing costs, enhancing credibility, and increasing shareholder value [54]. Tey and Ooi [5] identify six principal impediments to the application of TQM in the construction sector: insufficient qualified personnel, an insufficient supply mentality, insufficient interaction, additional costs and time demands, inadequate support from senior management, and challenges in assessment. The authors also indicate that poor interaction and insufficient support from upper management are not substantially correlated with the extent of TQM adoption. Furthermore, García Bernal and García Casarejos [55] indicate that the construction industry has been adversely impacted by the ongoing recession, resulting in a substantial decline in demand. In this context, recognizing critical success factors for TQM implementation is fundamental for achieving TQM success in the construction sector [56]. 2.3 TQM critical success factors The idea of Critical Success Factors (CSFs) originated in 1978 [20]. John F. Rockart, the originator of the concept, included the Critical Success Factors (CSF) framework throughout the entire pyramid of management techniques. Critical Success Factors (CSFs) are the characteristics that offer the greatest advantage to consumers and effectively distinguish rivals within a certain sector [21]. They are the factors that influence the varying degrees of success of firms in the competitive marketplace.
The determination of Critical Success Factors (CSFs) is essential in the manner of strategic planning, as they will dictate the extent of accomplishment of the defined goals. A corporation has a competitive edge when it excels in a particular Critical Success Factor (CSF) [22] . The IBS is a construction method which emphasizes the utilization of prefabricated components and systematic construction processes [57] . In the context of the Malaysian construction sector, the application of TQM critical success factors in IBS projects holds significant importance [25] . TQM can play a fundamental role in confirming the delivery of high-quality projects within budget and on schedule. The influence of TQM's key success factors in the construction field lies in their ability to enhance project performance, Enhance customer happiness and cultivate a culture of ongoing improvement [58] . Here are some key factors and their significance: 2.3.1 Logistics and operations management Effective supply chain management is crucial in construction to ensure the timely delivery of materials, equipment, and resources [59] . TQM in supply chain management involves selecting reliable suppliers, maintaining clear communication, and fostering collaborative relationships to enhance the overall quality of the construction process [60] . Well-managed supply chains contribute to project efficiency, reduce delays, and enhance the quality of construction projects [61] . 2.3.2 Sustainable Improvement TQM emphasizes the need for well-defined processes, continuous monitoring, and improvement [62] . In construction, this involves establishing clear construction processes, implementing quality control measures, and regularly reviewing and refining these processes for enhanced efficiency and quality [63] . Sustainable improvement helps in minimizing errors, optimizing resource utilization, and ensuring consistency in project outcomes [64] . 2.3.3 Safety and Compliance TQM promotes a strong focus on safety and compliance with industry standards and regulations [65] . This includes implementing safety measures, providing training for workers, and ensuring that construction practices adhere to legal and regulatory requirements. Prioritizing safety and compliance not only protects workers and the environment but also contributes to the complete quality of the project by minimizing risks and potential disruptions [66] . 2.3.4 Better Communication Clear and open communication is a cornerstone of TQM. In construction, efficient communication confirms that all stakeholders, involving clients, contractors, and workers, are on the same page concerning project requirements, changes, and expectations [67] . Improved communication reduces misunderstandings, enhances collaboration, and promotes a shared commitment to quality throughout the construction process [68] . 2.3.5 Cost Savings TQM seeks to optimize processes and reduce waste, leading to cost savings. In construction, this involves efficient resource utilization, preventing rework, and minimizing delays that could result in additional expenses [69] . Cost savings contribute to the financial success of construction projects and can be reinvested in quality improvement initiatives [70] . 2.3.6 Timely project Completion TQM emphasizes the importance of meeting deadlines and completing projects on time [71] . This involves effective planning, scheduling, and monitoring to ensure that construction milestones are achieved as per the project timeline [72] . 
Timely project completion not only meets client expectations but also enhances the reputation of the construction firm and contributes to overall project success [73]. 2.3.7 Long-Term Success TQM is a long-term philosophy that focuses on sustained improvement rather than short-term fixes [74]. In construction, this involves building a culture of continuous improvement, learning from past projects, and applying lessons to future endeavours. Long-term success in construction is achieved by consistently delivering high-quality projects, satisfying clients, and adapting to industry changes [75]. 2.3.8 Flexibility and Adaptability TQM encourages flexibility and adaptability to respond to changes in project requirements, technology, or external factors [76]. In construction, this involves the ability to adjust plans, processes, and resources as needed. The construction industry is dynamic, and being flexible and adaptable allows construction firms to navigate challenges, incorporate new technologies, and stay competitive [77]. In summary, applying TQM critical success factors in the Malaysian construction sector, specifically in relation to IBS projects, is crucial for ensuring the success and sustainability of modern construction methods. By emphasizing quality, collaboration, standardization, and continuous improvement, TQM contributes to the effective and efficient implementation of IBS, ultimately enhancing the total performance of the Malaysian construction industry. In this section, a comprehensive literature review was used to collect data in this area. This methodology was used to discover the structures and critical success factors (CSFs) of TQM adoption in IBS projects. The literature review focused on identifying the critical success factors for TQM in the context of IBS projects. Articles were obtained via the Web of Science and Scopus databases, books, journal articles, international conference papers, government seminars from the Construction Industry Development Board (CIDB), and online resources. After the identification of relevant CSFs, the data received additional refinement and validation to guarantee its correctness and relevance to the study objectives. Current studies on TQM in the construction sector have substantially aided the identification of Critical Success Factors that promote quality enhancements [78]. Nonetheless, the majority of this research has been concentrated on ordinary construction projects, offering few insights into IBPs, a sector characterized by unique constraints resulting from its dependence on prefabrication, standardization, and modern technology [79]. The distinctive requirements of this sector indicate that traditional TQM implementations may be inadequate or need modification, a topic that has so far received limited examination in the existing literature [80]. Moreover, despite the identification of CSFs, the inadequacies in comprehending the interrelations and impacts of these elements on overall TQM performance in dynamic and technologically sophisticated environments, such as IBPs, underscore a notable deficiency [58]. Limited research has used sophisticated modeling techniques, such as PLS-SEM, to elucidate these links, resulting in deficiencies in both methodological frameworks and practical applications for the successful implementation of TQM in IBPs [38,81]. 3 Research methods The study adheres to a four-step procedure, as illustrated in Fig. 1. The research has numerous critical components, which are outlined in the following stages.
Initially, a comprehensive examination of extant scholarly works reveals crucial determinants of achievement, facilitators, and variables associated with IBS. Furthermore, a pilot study is conducted to evaluate the reliability and comprehensiveness of the main survey. Thirdly, a thorough questionnaire survey is conducted to evaluate the significance of CSFs in achieving IBS. In conclusion, the use of partial least squares structural equation modelling (PLS-SEM) is utilized to analyze the correlation between CSFs and the successful deployment of TQM within IBS residential projects. 3.1 Identification of CSF and implementation of TQM in IBS projects The study used a broad literature review (BLR) technique to methodically find and assess relevant material in order to determine the main CSFs that influence the implementation of TQM. A variety of academic databases were used by the researchers, including “Web of Science”, “Taylor and Francis”, “ASCE Library”, “Science Direct”, and “Scopus”. They employed specific keywords such as “success factors”, “drivers”, “enablers”, “off-site building”, “precast building”, “Industrialized Building System Projects”, “sustainability”, and “sustainable construction” for making sure comprehensive coverage of pertinent literature. The aim was to ascertain a thorough selection of relevant articles. Table 1 presents the six CSFs together with their corresponding enablers. The research considers these enablers as independent variables (IDV), whereas the TQMI is regarded as the dependent variable (DV). 3.2 Hypothesis development This research examines the influence of key success factors (CSFs) on the successful implementation of TQM in IBS projects in Malaysia. Six hypotheses are offered to assess these interactions, grounded in theoretical underpinnings and a comprehensive literature assessment. The significance of Top Management Commitment (TMC) is universally acknowledged as crucial for the success of TQM procedures. Executive leadership establishes strategic direction, distributes essential resources, and cultivates a culture that emphasizes excellence. Research constantly underscores the substantial influence of leadership on the adoption of TQM, especially within intricate building systems like IBS (Author, Year). It is posited that Top Management Commitment favorably affects TQM implementation in IBS projects. Continuous Improvement (CI) is a fundamental idea of TQM, highlighting the perpetual enhancement of processes to align with changing standards and elevate organizational performance. Research indicates that firms using continuous improvement techniques get enduring advantages in quality management, especially in sectors such as construction, where flexibility is essential (Author, Year). This research posits that Continuous Improvement favorably impacts TQM implementation in IBS projects. The significance of Customer Satisfaction (CS) as a crucial factor in the success of TQM is extensively recorded in the literature. Customer satisfaction demonstrates an organization's capacity to fulfill or beyond client expectations, consequently improving project results and establishing enduring relationships (Author, Year). It is predicted that Customer Satisfaction favorably affects TQM adoption in IBS initiatives. Process Management (PM), defined by explicit processes and operational uniformity, is essential for attaining quality goals within TQM frameworks. 
Studies demonstrate that process optimization is crucial for improving the efficiency and efficacy of IBS initiatives, where accuracy and uniformity are essential (Author, Year). This research posits that Process Management favorably affects TQM implementation in IBS projects. The significance of leadership in the implementation of TQM is often highlighted in the literature. Effective leadership fosters excellent processes, inspires people, and guarantees alignment with company objectives. Nonetheless, data indicates that the impact of leadership on TQM results may differ according to organizational environment and external difficulties particular to IBS projects (Author, Year). Therefore, it is posited that leadership has a favorable impact on the implementation of TQM in IBS projects. Ultimately, teamwork is an essential facilitator of TQM performance, promoting cooperation, communication, and collective responsibility among project stakeholders. Research indicates that collaboration markedly improves problem-solving and fosters a unified strategy for attaining quality objectives, especially in intricate building endeavors such as IBS (Author, Year). Consequently, it is posited that teamwork has a beneficial impact on the implementation of TQM in IBS projects. These hypotheses jointly provide a comprehensive framework for analyzing the influence of critical success factors on the enhancement of TQM procedures in the context of Industrialized Building System (IBS) projects. The research seeks to empirically evaluate these assumptions to provide significant insights into the strategic planning and execution of TQM in the construction sector. 3.3 Pilot survey development A pilot study was performed to assess the utility, consistency, and thoroughness of the questionnaire [120,121]. According to [122], it is important to follow this technique as it ensures that the methodology used in the study effectively demonstrates the attainment of the research goals. A pilot study requires a minimum sample size of 10 contributors, as indicated by previous studies [123,124]. Consequently, a pilot survey was created and conducted among a group of thirty individuals, consisting of seventeen professionals from the construction sector and thirteen academics with over a decade of experience in their respective fields. The participants were assigned the responsibility of assessing the language of the questions, identifying questions that posed problems, verifying the survey's compilation of CSFs and facilitators, and offering comments. 3.4 Checking the consistency and reliability of the response The consistency of the pilot survey is assessed using Cronbach's alpha coefficient, a statistical measure of internal reliability. The computation is performed in order to guarantee the reliability of participant replies while analyzing the influence of CSFs, enablers, and TQMI factors on IBS projects. Furthermore, it aims to establish the reliability of the Likert scale testing procedures. The pilot sample had a Cronbach's alpha coefficient of 0.93, implying a high level of consistency. Cronbach's alpha coefficient is widely utilized in the social sciences to evaluate the internal consistency, or reliability, of a psychometric instrument. This coefficient is pivotal in confirming that a set of test items or scales consistently measures a single latent construct.
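As a concrete illustration of the reliability check described above, the sketch below implements the standard Cronbach's alpha formula, alpha = k/(k - 1) * (1 - sum(s_i^2) / s_total^2), for an item-response matrix. The response values and item counts are hypothetical and are not the study's data; they only show how a coefficient such as the reported 0.93 would be computed.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = Likert items of one scale.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses from six respondents on four items:
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])
print(round(cronbach_alpha(responses), 3))   # a high value indicates consistent items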
Cronbach's alpha values range between 0 and 1, with higher values denoting a stronger internal consistency among the test items. An alpha value within the range of 1 to 0.9 is indicative of excellent internal consistency [125] . This suggests that the test items are highly cohesive and effectively measure the underlying construct, rendering the instrument highly reliable for research applications. When Cronbach's alpha falls between 0.9 and 0.8, it signifies good internal consistency. Instruments within this range, while not reaching the pinnacle of excellence, are still deemed reliable and are appropriate for most research endeavours. A Cronbach's alpha value ranging from 0.8 to 0.7 is considered acceptable. This indicates a reasonable level of cohesion among the test items in their measurement of the latent construct. However, instruments falling within this range may be scrutinized more closely, especially in research demanding high precision. Values between 0.7 and 0.6 reflect questionable internal consistency, suggesting that the test items may not be uniformly measuring a single construct. Such instances typically necessitate a thorough examination and potential revision of the instrument to enhance its reliability. Poor internal consistency is indicated by alpha values ranging from 0.6 to 0.5. This suggests that the test items may not be effectively measuring the same construct, leading to recommendations against the use of such instruments in serious research without significant modifications and revalidation. Alpha values below 0.5 are deemed unacceptable, pointing to a severe lack of internal consistency. This implies that the items do not cohesively measure the same construct, thereby compromising the reliability of the instrument. Such instruments require extensive review and significant revisions before they can be considered suitable for research purposes. It is essential to recognize that Cronbach's alpha is merely one among various reliability measures. Its interpretation should be contextualized within the broader research framework, incorporating other validity and reliability assessments. Moreover, the coefficient presupposes uniform item reliability and can be influenced by the number of items in the scale, necessitating a nuanced interpretation that takes these factors into account. 3.5 Questionnaire survey development The key purposes of the questionnaire study were twofold: (1) to scrutinize the existing utilization and efficiency of different CSFs in IBS projects, and (2) to evaluate the impact of CSFs and their facilitators on TQM implementation in IBS residential projects inside Malaysia, a country undergoing development. The questionnaire had five components. The first component of the survey is dedicated to the provision of general information by the respondents. The subsequent portion comprises closed-ended inquiries that assess the participants' feedback about the suitability of several CSFs and their effectiveness in addressing the condition of TQM. The final segment comprises closed-ended inquiries that assess the participants' feedback about the influence of CSFs on IBS projects. Section four of the survey includes closed-ended questions that aim to assess participants' responses to various variables related to TQM implementation, namely those pertaining to the environment, society, and the economy. 
Section five of the survey presents an open-ended question, allowing participants to include any other CSFs or enablers that they deem relevant and believe should be considered. The participants were provided with three variations of Likert scales, each consisting of a maximum of five points, to assess their responses to the questionnaire. Initially, the term “grade of application” was employed to assess the current level of implementation of CSFs and their enablers in relation to the deployment of TQM in IBS projects. In this evaluation, a score of “5” denoted a high degree of applicability, while a score of “1” indicated a complete lack of applicability. Furthermore, the term “degree of efficiency” was employed to assess the extent of effectiveness of CSFs and their enablers in facilitating the implementation of TQM in IBS projects. In this evaluation, a rating scale ranging from “1”, denoting little effectiveness, to “5”, representing a high level of effectiveness, was utilized. Furthermore, the concept of “degree of impact” was used to measure the amount of influence that the CSFs had on the achievement of TQM implementation. On this scale, a ranking of “5” indicated a very prominent level of influence, while a ranking of “1” denoted a very minimal level of influence. 3.6 Targeted participants The survey was circulated among experts in the Malaysian construction sector via several methods, such as the mailing list for the cooperative network of the Construction Industry Development Board (CIDB) and the professional networking platform LinkedIn. Based on the criteria outlined in Reference [22], the participants of the survey were prequalified. To be considered, individuals are required to have at least a bachelor's qualification or a comparable degree in the field of civil engineering and construction management. Furthermore, respondents are required to possess a minimum of five years of experience relevant to construction management in Malaysia, either as practitioners or scholars. 3.7 Sample size In examining the qualifications, the majority of respondents in the construction industry possess master's degrees (35.6 %), indicating a high level of education among professionals. The distribution of experience levels reveals a significant portion with less than 5 years (34.2 %), suggesting a relatively young workforce, while 19.4 % have more than 20 years of experience, reflecting a seasoned segment. The nature of business highlights a predominant involvement in consulting (35.8 %), emphasizing the consultancy's pivotal role in the construction sector. In terms of participation in IBS projects, a substantial 79.0 % have contributed to 1 to less than 10 projects, indicating a widespread but potentially diverse engagement. Designations illustrate a varied workforce, with roles ranging from site/residential engineer to academician, underscoring the multidisciplinary nature of the industry. These findings contribute nuanced insights into the educational background, experience, professional roles, and project involvement of construction industry practitioners, offering a robust foundation for further academic exploration and practical implications. Given the relatively recent introduction of TQM deployment in IBS projects in Malaysia, this study utilized a random probability sampling methodology. Consequently, any qualified professional located in Kuala Lumpur, Johor Bahru, Penang, and Perak, Malaysia, had an equal opportunity of being selected for participation.
This methodology is used in the research to aid the authors in acquiring dependable and precise responses [126,127]. The determination of the sample size for the study is contingent upon the goals of the study, as stated in Reference [128]. Accordingly, it is advised that a minimum sample size of 100 cases or more be used when applying the SEM approach [129,130]. A total of 371 valid responses was obtained from construction specialists via the questionnaire, which is considered sufficient for conducting the SEM study. Table 2 displays the demographic characteristics of the participants. 3.8 Development of model The primary aim of Structural Equation Modelling (SEM) is to clarify and verify a hypothesized theoretical model, involving the simulation of interactions among multiple constructs represented by predefined components. SEM distinguishes itself from other modelling approaches by its capability to investigate both the direct and indirect impacts of hypothesized causal linkages [131]. SEM uses a two-step approach to validation. In the first stage, the validity of the measurement model is confirmed utilizing Confirmatory Factor Analysis (CFA), in which observable indicators give supporting evidence for the underlying constructs. The subsequent phase entails the development of the structural model, wherein study hypotheses are assessed via the use of path analysis [132]. In its most basic form, SEM is a robust analytical method that permits scholars to examine their hypotheses and assess causal connections in an effective and systematic manner. This strategy has been extensively utilized in several disciplines, such as administration, organizational behaviour, and construction management [131]. There are two primary approaches for implementing SEM: component-based SEM, also known as Partial Least Squares SEM (PLS-SEM), and covariance-based SEM (CB-SEM). PLS-SEM is often regarded as more advantageous than CB-SEM for many reasons. Firstly, empirical evidence suggests that PLS-SEM yields more accurate predictions. Secondly, PLS-SEM provides a more robust statistical approach for evaluating numerous components. Lastly, PLS-SEM has superior capability in accounting for experimental variation [34,133]. SmartPLS is especially advantageous for small to medium sample sizes, a common situation in construction research where collecting large datasets across projects poses difficulties [134]. PLS-SEM is beneficial for managing formative constructs, a characteristic pertinent to this study's emphasis on the critical success factors of TQM, which may lack shared indicators but together influence TQM results. Moreover, SmartPLS offers benefits compared to other software such as AMOS and LISREL, which require normally distributed data and emphasize confirmatory analysis [135]. This work utilizes SmartPLS to harness these advantages, permitting rigorous modeling of TQM's multifaceted effects on IBS projects, thus providing more practical insights for both practitioners and scholars. Therefore, the main purpose of this study is to evaluate and validate a hypothesized model by analyzing its predictive abilities and exploring the relationships between different variables; accordingly, the researchers opted to utilize the PLS-SEM methodology.
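Before turning to the measurement model, the hypothesized structure can be summarized compactly. The sketch below is a minimal, software-agnostic representation of the model implied by the six hypotheses in Section 3.2: six exogenous CSF constructs each pointing to the endogenous TQM-implementation construct. The construct abbreviations follow those used later in the text (TMC, CI, CS, PM, LD, TW, TQMI); the indicator numbering is an illustrative placeholder, not the questionnaire's actual item list.

# Inner (structural) model: each CSF -> TQMI.
CONSTRUCTS = {
    "TMC": "Top Management Commitment",
    "CI": "Continuous Improvement",
    "CS": "Customer Satisfaction",
    "PM": "Process Management",
    "LD": "Leadership",
    "TW": "Teamwork",
    "TQMI": "TQM Implementation",
}
structural_paths = [(csf, "TQMI") for csf in ["TMC", "CI", "CS", "PM", "LD", "TW"]]

# Outer (measurement) model: reflective indicators grouped by construct prefix
# (e.g., TMC1..TMC5); the counts here are illustrative only.
outer_model = {c: [f"{c}{i}" for i in range(1, 6)] for c in CONSTRUCTS if c != "TQMI"}
outer_model["TQMI"] = [f"TQM{i}" for i in range(1, 9)]

for source, target in structural_paths:
    print(f"Hypothesis: {CONSTRUCTS[source]} -> {CONSTRUCTS[target]}")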
The chosen methodology was used in order to construct a framework capable of prioritizing and evaluating the interrelationships between CSFs and their corresponding enablers in the context of obtaining TQMI for IBS projects. 3.9 Model of measurement The following tests must be run inside the PLS-SEM measurement model in order to appraise and analyze the constructs utilized in this investigation both discriminant and convergent validity [136] . There are four different approaches to calculating convergent validity, and all of them need the reflected measurements model. Outer Loading, Composite Dependability, Cronbach's Alpha Indicator Dependability, and AVE are the first four factors [133,137] . i. The external loadings represent the associations within the reflective measurement models, and they play a crucial role in assessing the extent to which every item contributes to its designated construct [137] . External loadings greatly influence the assessment of profound measurement models and can be applied for confirmatory measurement models [133] . The indication (i.e., item) which reaches an outside loading of 0.4 should be satisfactory, whereas a score of 0.5 is regarded acceptable [133] . While [65] stated that an indication with an exterior loading value of 0.7 or higher is deemed extremely good. Consequently, it is determined that in order for the exterior loading value to be approved, it must be more than 0.4 (outer loading >0.4) [138] . ii. Since Cronbach's Alpha (α) has been known to have performance issues and inconsistencies, the composite reliability (ρc) is a more effective technique [139] . Cronbach's Alpha assigns equal weight to each indication without taking into account their factor loadings. By comparison, Cronbach's Alpha is not as good as composite dependability as it takes into account the factor loadings of all indicators [140] . As a result, there is a need to use substitutes for Cronbach's Alpha, the confirmatory reliability metric [139] . As indicated in [141] , a composite reliability exceeding 0.6 is recommended for an exploratory model (ρc > 0.6), and for a confirmatory model, it should surpass 0.7 (ρc > 0.7). It is considered to have strong dependability if the compound dependability value is 0.8 or above.; if it is 0.9 or higher, it is termed perfect dependability [142] . iii. A statistical measure known as Cronbach's Alpha (α) is utilized to assess the dependability or internal consistency of every indication pertaining to the designated idea [143] . Despite its inconsistent and problematic performance, Cronbach alpha attains the equivalent value as compound dependability (α > 0.6) [141] . A substantial average inter-correlation implies that the indices are successfully evaluating the same core concept, pointing to a greater degree of scale dependability. In contrast, a minor average inter-correlation proposes that the indicators may need to be revised or removed from the scale since they are not adequately evaluating the same concept [144] . iv. The convergent validity of the model's constructs is assessed using the average variance extracted (AVE) [144] . Moreover, the AVE in an insightful model reflects the average cohesion for each component. Agreeing to Ref. [145] ; at least 0.5 (AVE >0.5) must be present in the average variance retrieved (AVE) in order to assess the convergent validity of the constructs. Nevertheless, all Composite Reliability tests were automatically assessed and computed in this study using SmartPLS software. 
SmartPLS has the capability to enhance efficiency and minimize mistakes by automating intricate computations and statistical analysis. Furthermore, it can provide graphical depictions of the outcomes, facilitating the comprehension and dissemination of the findings. In general, the use of SmartPLS can improve the effectiveness and precision of the study procedure. Furthermore, according to the concept of discriminant validity, each model construct is a unique, stand-alone entity that captures phenomena that the other constructs in the model do not address [144,146] . Additionally, each construct in a model is distinguished from the others by its indicators, which is evidence of discriminant validity [147] . 3.10 Structural model (path model) Upon establishing the validity and reliability of the construct measurements, the next stage is to assess the structural model. The study examined the impact of TQM’s CSFs on the attainment of TQM implementation for IBS projects. The findings were analyzed by collinearity analysis. As per [148,149] , this assessment aids in examining the intercorrelations among the theoretical elements in the study. This procedure involves inspecting the relations between variables, both direct and indirect, in order to ascertain their interdependence and their influence on the research's overall outcomes, as shown by the factors' explanatory power (R 2 values) and predictive significance (Q 2 value). (1) The proportion of variance in the dependent variable explained by the independent variables (IDVs) determines the explanatory capacity of the structural model, which is expressed in terms of R 2 values. In simpler terms, the higher the R 2 value, the greater the explanatory capacity of the structural model [150] . R 2 values greater than 0.02 are regarded as weak, and those greater than 0.13 as moderate. An R 2 value of 0.26 or above is deemed substantial [151,152] . (2) The predictive significance (Q 2 value). Evaluating the model's predictive relevance is an essential element of a structural model. The blindfolding procedure is employed to assess the predictive significance of dependent variables (DVs); this methodology analyzes the cross-validated redundancy measurements [150] . According to Ref. [153] , to be considered relevant, the value of Q 2 ought to be greater than zero (Q 2 > 0). 4 Data analysis and results This part of the article offers an extensive investigation and presentation of the data analysis and findings from the study's methods. To evaluate the reliability and precision of the measurement model and the relationships between the variables in the structural model, the data analysis includes a variety of statistical tests and evaluations. The study results are shown in tables and figures, and their significance is discussed. The results of the data analysis offer insightful information on the research questions and hypotheses, significantly adding to the body of knowledge already available on the topic. In summary, this part is crucial for comprehending the significance and consequences of the research. The study achieved a 93 % response rate, ensuring substantial data dependability and reducing non-response bias. The elevated rate indicates the efficacy of the survey methodology and the representativeness of the results.
A study by [154] examining TQM procedures in construction projects in developing countries reported a response rate of 72 %, which the authors attributed to strategic distribution and proactive follow-up methods. Another study conducted by [155] on the implementation of IBS in Malaysia attained a response rate of 68 %, underscoring issues associated with participant availability and survey fatigue. A recent study by [156] examining sustainable practices in construction achieved a response rate of 75 %, highlighting the industry's growing openness to research on sustainability-related topics. Compared with these past studies, the response rate in the present study indicates successful engagement tactics and the topic's pertinence to the target audience, which includes stakeholders from CIDB-registered construction enterprises (G5 and above) and regulatory bodies. The notably high response rate bolsters the reliability and generalizability of the results, thereby strengthening their value in identifying the critical success factors for TQM adoption within the construction industry. 4.1 Measurement model The purpose of measurement model evaluation is to assess the reliability and validity of the measurement model. In this study, this involved evaluating each construct and its indicators for convergent validity, discriminant validity, and composite reliability. 4.2 Convergent validity 4.2.1 Outer loading Table 3 and Fig. 2 show that measured variables with an outer loading of 0.7 or above, a substantial 81 % of the items, are deemed highly satisfactory, whereas variables with an outer loading of less than 0.7 should be eliminated [157,158] . Accordingly, the acceptable cut-off value for the outer loadings in this research was 0.70 and above. After removing the lower-loading items, the outer loadings ranged from 0.727 to 0.871, as shown in Fig. 2 . Four loadings were lower than 0.70 and were therefore deleted: TQM7 (0.138), TQM8 (0.467), TMC5 (0.653), and CI5 (0.583). After deleting these items, all remaining items loaded highly on their designated construct and low on the others; hence, construct validity was affirmed as well. 4.2.2 The Composite reliability (ρc) As displayed in Table 3 , all constructs met the acceptable requirement for composite reliability (ρc > 0.7), so all values are considered acceptable. The composite reliability scores for the constructs CI, PM, LD, and TW exceed 0.8, so these constructs are regarded as having excellent reliability. In contrast to TMC, both CS and TQM attained composite reliability scores equal to or above 0.9, indicating flawless reliability. 4.2.3 The Cronbach’s alpha (α) According to Table 3 , every construct is above the required threshold for Cronbach's alpha (α > 0.6); all values are therefore regarded as acceptable. 4.2.4 The average variance extracted (AVE) Table 3 shows that every construct satisfies the required threshold for the average variance extracted (AVE > 0.5). All constructs achieved high scores and were accepted. Together, these results suggest that the measurement model exhibits convergent validity and internal consistency, demonstrating that the variables in the model are dependable and precise representations of the corresponding concepts.
Additionally, a detailed investigation of the relations among the components of the research model produced evidence corroborating the theoretical connections put forward in the research. 4.2.5 Statistical tests Descriptive statistics were computed using SPSS to summarize the data. Measures of central tendency (mean) and dispersion (standard deviation) were calculated for all variables. The results indicated that the mean values ranged from 3.57 to 4.29, reflecting moderate to high views among respondents about the critical components, with top management commitment exhibiting the highest mean value of 4.29. Respondents agreed that the application of TQM is an essential method for enhancing IBS projects in Malaysia. Multicollinearity was evaluated in SPSS by examining tolerance values and the variance inflation factor (VIF). Tolerance values ranged from 0.264 to 0.326, while VIF values ranged from 3.067 to 3.782, all within the accepted limits of >0.1 for tolerance and <10 for VIF. The findings confirmed the absence of multicollinearity among the independent variables in the proposed model. The normality of the data was assessed using skewness and kurtosis in SPSS. The skewness and kurtosis values for all variables were within the permissible range of ±3, indicating that the data followed a normal distribution and supporting the use of PLS-SEM for further analysis. While this study did not apply a specific statistical test for common method bias (CMB), several procedural remedies were built into the research design to alleviate possible bias. First, the confidentiality of respondents was maintained throughout the data-gathering procedure to mitigate social desirability and response biases. Second, the survey instrument was carefully crafted to include clear, impartial, and non-leading questions, reducing the likelihood of priming effects. Moreover, temporal separation was introduced between the collection of predictor and criterion variables by designing the survey to promote deliberate responses rather than rapid, possibly biased ones. These measures were established to proactively mitigate CMB and safeguard the authenticity and veracity of the gathered data. Subsequent research may use statistical analyses, such as Harman’s single-factor test or the common latent factor method, to further substantiate these results [159] . 4.3 The discriminant validity The non-response bias assessment is an essential procedure in survey research to determine whether the replies accurately reflect the intended population. It investigates whether notable disparities exist between the replies of early respondents (those who reply quickly) and late respondents (those who respond after follow-ups). Late respondents are often used as substitutes for non-respondents, based on the presumption that their attributes may mirror those of non-respondents. This test evaluates the potential impact of non-response bias on the results. A comparative analysis of early and late respondents was performed on the key research variables, including TQM, Top Management Commitment (TMC), Continuous Improvement (CI), Customer Satisfaction (CS), Process Management (PM), Leadership (LD), and Teamwork (TW). The findings reveal negligible disparities in average scores between the two groups.
The mean TQM score was 4.0894 for early respondents and 4.0297 for late respondents, whereas the mean CI values were 4.2300 and 4.1688, respectively. In a similar vein, early respondents indicated a mean Customer Satisfaction (CS) score of 4.2937, while late respondents reported a mean of 4.2566. The standard deviations and standard errors were similar across groups, showing uniform variability. The results indicate little variation between early and late respondents, suggesting that non-response bias is improbable and reinforcing the data's reliability and representativeness. In the context of measurement, discriminant validity denotes the extent to which items differentiate among constructs or measure distinct notions. As indicated by Awang [160] , discriminant validity requires that the AVE for every latent construct be greater than that construct’s maximum squared correlation with any other latent construct (the criterion). Equivalently, the square root of the AVE of every latent variable should exceed its correlations with the other latent variables. For this research, the measures’ discriminant validity was assessed using this criterion. In the correlation matrix shown in Table 4 , the diagonal elements are the square roots of the average variance extracted for the latent constructs. If the diagonal elements are greater than the off-diagonal elements in their rows and columns, discriminant validity holds. The square root of the AVE for each of the seven latent constructs surpasses its correlation with every other construct in the study model; this was the case throughout the correlation matrix, confirming discriminant validity. Evidence from Table 4 thus indicated that all seven factors satisfied the discriminant validity requirements. Moreover, Table 5 presents the bootstrapping results, assessing the impact of the various factors on TQM implementation in IBS projects. The findings indicate that top management commitment (TMC), continuous improvement (CI), customer satisfaction (CS), process management (PM), and teamwork (TW) significantly contribute to TQM, as evidenced by their positive β-values and p-values below 0.05. Among these, CI exhibits the strongest effect (β = 0.302, p = 0.000), highlighting its critical role in sustaining quality improvements. Conversely, leadership (LD) does not show a statistically significant impact (p = 0.446), suggesting that other factors may play a more dominant role in driving TQM success. These insights emphasize the importance of management commitment, collaboration, and continuous improvement in enhancing TQM adoption in IBS projects. 4.4 The structural model (path analysis) 4.4.1 Bootstrapping analysis The PLS-SEM method was used to examine and assess the impact of employing the CSFs of TQM implementation on achieving TQM in IBS projects, as seen in Fig. 3 . The bootstrapping approach was used to assess the significance of the model's hypotheses. Bootstrapping generates additional data samples of the same size as the original dataset by random resampling with replacement, and it accounts for the statistical significance, the reliability of the dataset, and the imprecision of the regression coefficients [ 150 ].
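To make the resampling idea concrete, the short sketch below illustrates a bootstrap of a single path coefficient. It is a simplified stand-in only: SmartPLS re-estimates the full PLS model on every resample, whereas here an ordinary least-squares slope and synthetic data (not the study's dataset) are used for illustration.

```python
# Illustrative only: bootstrap confidence interval for one path coefficient.
# An OLS slope stands in for the PLS path estimate, and the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 371                                        # same order as the study's sample
x = rng.normal(size=n)                         # e.g., a CSF score such as CI
y = 0.3 * x + rng.normal(scale=0.8, size=n)    # e.g., a TQM implementation score

def path_estimate(x, y):
    return np.polyfit(x, y, 1)[0]              # slope of y on x

boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)           # resample rows with replacement
    boot[b] = path_estimate(x[idx], y[idx])

beta = path_estimate(x, y)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"beta = {beta:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
# The path is judged significant when the interval excludes zero.
```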
(1) Explanatory power (R 2 value) The study utilizes the smart-PLS algorithm approach to estimate the explanatory power value, as seen in Fig. 2 . In this model, the updated R2 for the dependent variable TQMI is 0.707. According to the findings, the six independent variables (CSFs) account for 70.7 % of the variance in the TQMI. Simply, the six key success factors (CSFs) influence TQMI significantly, accounting for 70.7 % of its entire effect. (2) The Predictive relevance (Q 2 value) Predictive relevance refers to the degree to which the independent variable can effectively forecast or anticipate changes in the dependent variable. Moreover, it offers additional confirmation that the predictive significance of the route model was adequate for the endogenic variable. The data presented in Table 6 indicates that the predictive relevance value for TQMI in this investigation is 0.459, which is a favourable outcome. Consequently, one might conclude that the model has a high degree of predictive significance. 5 Discussion The study outlines a comprehensive data analysis and results section from a study on the implementation of TQM in IBS projects. It covers the reliability and validity of the measurement model, including convergent validity, discriminant validity, and composite reliability. The study details the loading values of various constructs, such as Top Management Commitment, Continuous Improvement, Customer Satisfaction, Process Management, Leadership, Teamwork, and TQM Implementation in IBS projects. This indicates a thorough examination of these constructs' reliability and validity. The structural model analysis includes a bootstrapping analysis to assess the impact of critical success factors on TQM implementation in IBS projects, with the results indicating significant relationships for most paths except for Leadership. The explanatory power (R^2 value) shows that the identified critical success factors account for 70.7 % of the variance in TQM implementation. The predictive relevance (Q^2 value) further validates the model's predictive capability. Comparing these results with existing literature and similar studies will involve evaluating how these findings align with or differ from prior research on TQM in construction or similar industries, especially in the context of Malaysian IBS projects. It would include a discussion on the identified critical success factors, the statistical significance of the relationships between these factors and TQM implementation, and the overall model's explanatory and predictive power in the context of existing theories and empirical studies. In this study, convergent validity was assessed through outer loadings, with a cut-off value of 0.70, which is consistent with the recommendations by K Ismail [161] in their SEM methodologies. Similar studies, such as those by Fornell and Larcker [162] , also emphasize the importance of high outer loadings (0.70 or above) for establishing convergent validity. The act of removing items with loadings below 0.70 to improve the measurement model's validity is a common practice in SEM-based research. For example, a study by [153] on the use of PLS-SEM in marketing research also followed similar steps to ensure high-quality measurement models. Discriminant validity is another critical aspect often evaluated in similar studies. Fornell and Larcker [162] suggest comparing the square root of the AVE with the correlations among constructs to assess discriminant validity. 
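As an illustration of that criterion, the sketch below builds a small Fornell-Larcker matrix: the square roots of the AVEs are placed on the diagonal of a latent-variable correlation matrix, and each diagonal entry is checked against the off-diagonal entries in its row and column. The construct names, AVE values, and correlations are hypothetical and are not taken from Table 4.

```python
# Illustrative only: Fornell-Larcker check with hypothetical AVEs and
# latent-variable correlations for three constructs.
import numpy as np
import pandas as pd

constructs = ["TMC", "CI", "TQM"]
ave = np.array([0.62, 0.58, 0.66])              # hypothetical AVEs
corr = np.array([[1.00, 0.55, 0.60],
                 [0.55, 1.00, 0.63],
                 [0.60, 0.63, 1.00]])            # hypothetical correlations

fl = corr.copy()
np.fill_diagonal(fl, np.sqrt(ave))               # sqrt(AVE) on the diagonal
print(pd.DataFrame(fl, index=constructs, columns=constructs).round(3))

# Criterion: each diagonal entry must exceed every off-diagonal entry
# in its row and column.
ok = all(fl[i, i] > max(np.delete(fl[i, :], i).max(),
                        np.delete(fl[:, i], i).max())
         for i in range(len(constructs)))
print("Discriminant validity satisfied:", ok)
```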
It would be interesting to compare how the provided study addresses discriminant validity with such approaches. The study reports Cronbach’s Alpha and Composite Reliability values for each construct, which are crucial indicators of internal consistency. The benchmark for Cronbach’s Alpha is typically 0.70 or above for early-stage research, as suggested by Nunnally [163] , which is met by all constructs in the study. Composite Reliability values also exceed the commonly accepted threshold of 0.70, indicating good internal consistency. This approach aligns with the practices of other SEM studies, like those reported by Hair [164] , where both Cronbach’s Alpha and Composite Reliability are used to assess the reliability of constructs. The AVE values reported in the study are above 0.50 for all constructs, meeting the threshold suggested by [162] for adequate convergent validity. This is a common criterion used in SEM studies to ensure that, on average, the construct explains more than half of the variance of its indicators. The detailed presentation of data analysis results, including tables and figures to demonstrate the study's findings, is a hallmark of thorough research. Similar studies often utilize tables to summarize construct reliabilities, loadings, and AVEs, and use figures to illustrate the structural model and its path coefficients. The practice of discussing the significance of the results in the context of the research questions and hypotheses, as mentioned in the provided study, is also a common and essential aspect of reporting in SEM research. The study reports composite reliability values exceeding 0.7 for all constructs, with some constructs achieving values above 0.8 and 0.9, indicating excellent to flawless reliability. This aligns with the recommendations by [164] in their guidelines for SEM, where a ρc value above 0.7 is considered acceptable. Many SEM studies, such as those in marketing or organizational research, report similar findings, suggesting that the provided study's constructs are reliably measured. For instance, a study by Wong [165] on service quality in the hotel industry also reported composite reliability values above 0.8, reflecting high internal consistency similar to the current study's findings. The study's finding that all constructs exceed the threshold of 0.6 for Cronbach's alpha is somewhat lenient compared to the more commonly accepted threshold of 0.7, as suggested by [163] . However, it's important to note that some researchers accept a lower threshold for exploratory research. The values reported in the provided study suggest adequate internal consistency. Still, a comparison with studies adhering to a 0.7 threshold might indicate a slightly more conservative approach to reliability in those cases. With all constructs meeting the AVE threshold of >0.5, the study demonstrates good convergent validity, indicating that the constructs explain a significant portion of the variance in their indicators. This is consistent with the criteria established by [162] and is a common benchmark in SEM research to ensure that constructs are adequately measured by their indicators. For example, a study by [166] , also emphasized the importance of AVE values exceeding 0.5 to confirm the adequacy of construct measurements. The study assesses discriminant validity using the Fornell-Larcker criterion, which is a well-accepted method in SEM research. 
The criterion requires that the square root of the AVE of each construct be higher than its correlation with any other construct. This approach ensures that constructs are distinct and measure different concepts. The confirmation of discriminant validity in the study is consistent with best practices in SEM and is crucial for the validity of the research findings. Similar studies, such as by [167] , also employ this criterion to establish discriminant validity in their measurement models. The use of the PLS-SEM method and bootstrapping for hypothesis testing in the structural model is a robust approach to assessing the relationships between constructs. Bootstrapping, a non-parametric resampling method, allows for estimating the accuracy of sample estimates and is widely used in SEM to assess the statistical significance of path coefficients. This approach is consistent with SEM studies that aim to explore complex models and test hypotheses regarding the relationships between constructs. For example, a study by [35] on modelling strategies in PLS-SEM also utilized bootstrapping to assess the significance of path coefficients, similar to the provided study. The methodologies and findings of the provided study align well with established practices in SEM research. The study's approach to evaluating reliability, validity, and structural model relationships using PLS-SEM and bootstrapping is consistent with similar research in fields like marketing, organizational behaviour, and management. The high standards of reliability and validity reported in the study suggest robust and reliable constructs, contributing meaningfully to the research domain. The path coefficients in the provided study indicate the strength and significance of the relationships between the CSFs and the implementation of TQM in IBS. Notably, all paths except one (LD −> TQM in IBS) were found to be significant, indicating broad support for the hypothesized relationships. This is consistent with SEM studies in similar domains, where significant path coefficients validate theoretical models. The non-significance of the leadership (LD) path needs more investigation. A possible reason for this outcome is that leadership may not have a direct or predominant influence on TQM implementation in IBS initiatives within the Malaysian setting. The impact of leadership may be moderated by other aspects, including corporate culture, the dedication to continuous development, or the extent of collaboration and communication within the project team. Consequently, leadership may not be as pivotal in influencing TQM results as other factors, such as top management commitment or process management, which are often seen as more directly impactful in the realm of IBS projects. Future studies may investigate the interplay between leadership and these other elements, or explore various industrial contexts where leadership may assume a more significant role. For instance, a study by [168] on consumer engagement in brand communities also found significant path coefficients supporting their hypotheses, demonstrating the effectiveness of SEM in uncovering relationships between constructs. The non-significance of the leadership (LD) path is particularly noteworthy and warrants further comparison. In contrast, a study by [169] on the effects of innovation characteristics on firm performance found leadership to have a significant impact.
This discrepancy could suggest contextual differences in how leadership influences TQM implementation in IBS, warranting further investigation into the specific role of leadership in this context. The R 2 value of 0.707 in the provided study indicates that the model explains 70.7 % of the variance in TQM implementation in IBS, which is considered substantial in behavioural sciences research. This high level of explanatory power is indicative of a well-specified model that captures the key factors affecting TQM implementation. For comparison, a study by [134] , on the use of PLS-SEM in marketing research considered R2 values of 0.75, 0.50, and 0.25 as substantial, moderate, and weak, respectively. Thus, the R2 value in the provided study would be considered substantial, demonstrating strong model adequacy. The Q2 value of 0.459 for TQM implementation in IBS construction in the provided study indicates that the model has good predictive relevance. This metric assesses the model's ability to predict the dependent variable and a Q2 value greater than 0 indicates that the model has predictive relevance for the endogenous construct. This is in line with the guidelines provided by [170] for PLS-SEM analysis. For instance, a study by [158] , on the application of PLS-SEM in marketing found Q 2 values ranging from 0.2 to 0.5, indicating moderate to high predictive relevance, similar to the findings of the provided study. Comparing these findings with other studies, it is clear that the provided study demonstrates a robust use of PLS-SEM in evaluating the relationships between CSFs and TQM implementation in IBS, with significant path coefficients, high explanatory power, and good predictive relevance. The non-significance of the leadership path contrasts with some literature, suggesting a unique context for TQM in IBS that may differ from other settings. The substantial explanatory power and predictive relevance of the model underscore its effectiveness in capturing the critical factors influencing TQM implementation in IBS, aligning with best practices in SEM research. The uniqueness of the model presented lies in the fact that it provides a reasonable approach to examine the critical success factors of TQM, in relation to Malaysian IBS projects. Most TQM models are applicable in virtually all industries; however, this model is more concerned about the operational and managerial issues unique to the IBS industry. This brings forth a clearer picture of the conceptualization of the CSFs to successfully implement TQM in this environment. More importantly, the study uses PLS-SEM which is an advanced analytical approach that is used for relationships between the CSFs and TQM results quite accurately. Such an approach gives a high level of precision and integrity making the model not only relevant to the context but, also providing practical solutions for the enhancement of TQM in the IBS projects. The interplay of industry dynamics and sophisticated techniques of modeling further distinguishes our approach from similar ones in the literature. In summary, the provided study's findings contribute valuable insights into the factors influencing TQM implementation in IBS, with strong methodological rigour and comparability to other SEM studies in similar domains. The notable exception of leadership's influence invites further research into the contextual factors that might affect this relationship in the IBS context. 
The study's focus on the Malaysian construction industry, and IBS projects in particular, fills a critical gap in the literature, which has often been criticized for its Western-centric focus. The insights derived from this research can therefore serve as a valuable guide for both practitioners and researchers looking to implement TQM in similar contexts, potentially influencing policy and strategic planning in the construction industry not only in Malaysia but also in other countries with comparable industrial and cultural settings. By comparing the findings with the existing body of knowledge, the study not only contributes new insights but also prompts further investigation into the nuanced interplay of CSFs in TQM implementation across different cultural and industrial landscapes. 6 Conclusion The study focuses on the adoption of TQM in IBS projects in Malaysia, a developing country. While TQM has gained traction in developed countries, its implementation in Malaysia is in its early stages. The objective of this study is to propose a model for investigating the successful adoption of TQM in IBS projects, with a particular emphasis on its use in the realm of Malaysian construction projects. Using the PLS-SEM method, the researchers examined the relationships between CSFs and TQM, utilizing data from 371 construction project experts in Malaysia. The model examined six CSFs contributing to TQM in IBS projects: “top management commitment”, “continuous improvement”, “customer satisfaction”, “process management”, “leadership”, and “teamwork”. The study supports five of the six hypotheses, indicating a substantial impact of top management commitment, continuous improvement, customer satisfaction, process management, and teamwork on TQM, while the direct effect of leadership was not statistically significant. Notably, this research fills a gap by examining the connection between TQM CSFs and IBS in the Malaysian context, an area that has not been extensively studied. This research offers a detailed guide for the application of TQM in IBS projects, especially by narrowing down the CSFs that affect TQM implementation the most. The PLS-SEM results show that the above-mentioned CSFs account for 70.7 % of the variance in TQM implementation, which strongly affirms their importance to the success of TQM strategies and underlines the need for strategic emphasis on these critical areas in any initiative directed towards improving quality management in the construction sector. In addition, this research is also beneficial outside Malaysia, as it provides a flexible framework that can be used by policymakers worldwide to enhance TQM implementation in IBS initiatives. The study's theoretical contributions include its novel investigation within the Malaysian context, addressing the lack of attention to TQM implementation in construction engineering management and the relationship between TQM and IBS. The model proposed in the study could assist Malaysian professionals in adopting TQM practices, ensuring the success of IBS projects. Practical implications involve providing Malaysian construction firms with a reference point to integrate TQM into their projects, aiding effective project planning. The study offers a guide for building professionals, contributing to long-term viability in their endeavours. Stakeholders, including clients, developers, consultants, civil engineers, and construction management personnel, can benefit from the identified determinants enhancing TQM implementation in the construction sector and promoting sustainable building practices.
Overall, the study not only contributes to theoretical knowledge but also offers practical insights for industry professionals in Malaysia, guiding them towards successful TQM application in IBS projects. By grounding the findings and their implications in strong empirical data, this study offers useful recommendations on how to elevate TQM practices and strengthens the basis for subsequent research on, and implementation of, TQM in construction project management. 7 Implications The study acknowledges specific constraints that provide potential avenues for further investigation. The generalizability of the findings is limited by the modest sample size and the study's restricted geographic reach. To enhance the comprehensiveness and inclusivity of future research, it is suggested that a wider and more varied range of construction professionals be included, encompassing individuals from other states in Malaysia, including end-users and customer representatives. Furthermore, it is worth noting that the present study used a quantitative methodology. It is recommended that future investigations consider qualitative or mixed-methods approaches to gain a more comprehensive understanding of the CSFs for TQM. This is especially significant in the context of other developing countries where similar studies have not yet been performed. These methodologies have the potential to augment the completeness and depth of the results, offering broader perspectives on the factors that enable and contribute to the successful implementation of TQM in the construction sector. 7.1 Theoretical implications of this study Corporations in industrialized nations derive several advantages from the effective application of the TQM concept within their businesses. Conversely, SMEs in developing countries lag significantly in realizing the advantages of TQM [171–173] . The results obtained in this research add considerable value to the existing theory of TQM and its practice in the construction industry, with particular emphasis on IBS-oriented projects. The theory is extended by showing how the critical success factors (CSFs) make TQM effective and by connecting TQM to the distinctive characteristics of IBS. This study also elaborates on the existing literature by identifying the six most important CSFs and their role in the effective implementation of quality management systems in offsite modular and prefabricated construction systems. As such, it adds insight to the existing theoretical debate on the issue. In addition, this work further develops the existing CSF model by providing evidence of the significance of these factors within the IBS landscape, which leads to a better picture of the interplay of these factors in TQM implementation. The application of PLS-SEM improves the quality of the study and shows that complex interrelations within construction management can be addressed using sophisticated statistical methods. This methodological contribution provides an example that can be used in further efforts to study similar phenomena in construction and allied areas. Another important theoretical implication is the potential to apply the model in various settings.
Although this study was undertaken in Malaysia, the results serve as a basis for revising the model for other economies, cultures, and jurisdictions, thereby enabling comparative studies and making the results applicable in other contexts. Furthermore, the research explores the relationship between TQM strategies and sustainability in the case of IBS projects, which enriches the existing literature on sustainable construction. This integration further emphasizes the need for quality management to be seen as a means towards achieving efficient use of environmental and other resources in the construction sector. Lastly, the presented study offers a theoretical justification for treating the adoption of CSFs as a strategic priority in construction management. The research explains the relevance of applying these factors to align both internal and external relations with the organization's quality improvement efforts. Such benefits collectively strengthen the theoretical bases of TQM and IBS, which can aid the improvement of construction management research and practice across the world. 7.2 Practical implications of this study Insufficient understanding of the fundamental variables for TQM implementation results in the failed adoption of the TQM concept within an organization [174,175] . This study has many practical implications for the various players engaged in the construction sector, especially in IBS-type projects. By outlining and testing six CSFs regarded as critical for TQM implementation, the work presents useful recommendations for practitioners who wish to improve the quality and performance of projects. To begin with, the present study provides construction managers and decision-makers with a strategic framework that helps them understand where to focus their energies and resources across key processes such as improvement, resource allocation, and process alignment. The validated TQM elements can also serve as pragmatic checklists that practitioners can follow, adjusting them through targeted interventions aimed at improving quality. Second, the results point out the importance of TQM for the successful achievement of sustainability in IBS projects. By addressing the TQM critical success factors from a management perspective, this study enables organizations to become environmentally efficient, minimize waste, and optimize the use of resources in line with their sustainability objectives. This positions TQM not only as a quality-enhancing concept but also as a means to promote the green building revolution in the construction industry [176] . Third, the model outlined can be seen as an efficient decision-making system, which can be adapted to different regional and economic situations. To conclude, the research provides a solid structure for enhancing TQM practices in IBS projects, empowering practitioners to propel quality enhancement, foster sustainability, and engage in strategic decision-making. These contributions to practice also ensure that the results apply not only to scholars but also to the construction industry itself. CRediT authorship contribution statement Aawag Mohsen Alawag: Writing – original draft, Validation, Software, Methodology, Investigation, Formal analysis, Data curation, Conceptualization.
Wesam Salah Alaloul: Writing – review & editing, Visualization, Supervision, Resources, Project administration, Funding acquisition, Conceptualization. Hisham Mohamad: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Conceptualization. M.S. Liew: Writing – review & editing, Visualization, Methodology, Investigation, Conceptualization. Mokhtar Awang: Writing – review & editing, Visualization, Formal analysis, Data curation, Conceptualization. Abdullah O. Baarimah: Writing – review & editing, Visualization, Data curation. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements The authors would like to thank Universiti Teknologi PETRONAS (UTP) for the support (cost centre 015PBC-004) awarded to Wesam Salah Alaloul. Appendix Questions related to the success factors for implementing TQM in IBS construction projects in Malaysia. SECTION 4: Please tick (√) your level of agreement on the significance of the following factors towards the implementation of total quality management in IBS construction projects, using a five-point scale: Strongly disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly agree (5).
F1. Top Management Commitment
TMC1 Providing quality vision and creating culture change within the organization
TMC2 Encouraging education and training programs for employees
TMC3 Promoting quality culture among the organization team
TMC4 Keeping track of the entire progress of the project
TMC5 Detailed strategic directions of the plans
TMC6 Open communication and cooperation throughout the organization
F2. Continuous Improvement
CI7 Participation and feedback at all levels in the organization
CI8 Effective cost-benefit analysis at all stages of the projects
CI9 A coordination team representing all units in the organization
CI10 Smart data collection, monitoring, and controlling system
CI11 Appropriate limit of authority for all staff according to rules and responsibilities
CI12 Cooperation among stakeholders
CI13 Strategic planning during the early stages of a project
F3. Customer Satisfaction
CS14 Right mechanism to evaluate customer needs for maximum satisfaction
CS15 Effective communication for defining the project concept
CS16 Achieving competitiveness through quality
CS17 Adopting a quality-oriented strategy
CS18 Maintaining delivery level with well-timed information and speedy reaction to diminish complaints
CS19 The company responds to clients’ needs and complaints effectively
CS20 The organization motivates employees to satisfy clients/customers
CS21 Organizations consider customers’ requirements as the basis for quality
F4. Process Management
PM22 Smart processes of communication among the various stakeholders, consultants (engineers & architects), contractors, suppliers, and the client
PM23 Smart progress planning and scheduling, management, and monitoring system
PM24 System for performance analysis and rating for achieving the project objectives
PM25 Process of timely and correct communication to all related staff of the project
F5. Leadership
L26 The commitment of leadership at all levels to the implementation of the TQM program
L27 Communication of company policy for TQM to the employees
L28 Well-defined rules and responsibilities
L29 The organization provides policies for promoting customer satisfaction
L30 Managers and supervisors encourage employees and help them perform at a better level in their tasks
F6. Teamwork
TW31 Regular meetings among stakeholders involved in project delivery and other functions
TW32 Independent quality assurance committee to support staff in improving quality
TW33 The company ensures efficient coordination among the various departments of the project sites
TW34 The company establishes peer review teams on site
TW35 The team is supervised impersonally by its mission statement and by the quality improvement roadmap
REFERENCES:
1. KIRAN D (2016)
2.
3. EHIGIE B (2005)
4. LIMPIADA R (2016)
5. TEY L (2014)
6. FEI T (2003)
7. BON A (2012)
8. ROHAIZAN R (2011)
9. KHALID S (2008)
10. ALALOUL W (2020)
11.
12. ALALOUL W (2016)
13. SAHA P (2022)
14. MUSTAPHA M (2011)
15. EPHANTUS E (2015)
16. LESTARI I (2024)
17. ALI A (2010)
18. MITREVA E (2016)
19. DISTRICT K (2015)
20. TALAPATRA S (2019)
21. ANSAH S (2023)
22. JOHNSON R (2020)
23. TAYLOR W (2003)
24. ZEHIR S (2023)
25. ALAWAG A (2023)
26. ALSALAHEEN M (2024)
27. ALTAF M (2021)
28. ALAIDROUS A (2021)
29. ALAWAG A (2023)
30. ABOUELSOOD H (2022)
31. ALAWAG A (2023)
32. MOHSEN A (2019)
33.
34. HAIR J (2019)
35. SARSTEDT M (2020)
36.
37. KHOSRONIYA M (2024)
38. IBRAHIM M (2024)
39. MOHSIN M (2024)
40. THOMAS B (2017)
41. ALTAYEB M (2014)
42. NIXON P (2012)
43. MALJUGIC B (2024)
44. EGWUNATUM S (2021)
45. LI J (2024)
46. ERSHADI M (2019)
47. HARRINGTON H (2012)
48. JIMOH R (2019)
49. SHOSHAN A (2018)
50. ALSOLAMI B (2022)
51. TWUM K (2022)
52. DILAWO R (2019)
53. OTHMAN I (2020)
54. RIAZ H (2023)
55. GARCIABERNAL J (2014)
56. TAN Y (2014)
57. KASIM N (2019)
58. RIAZ H (2023)
59.
60. ZENG W (2018)
61. HAN Y (2022)
62. JUNG J (2009)
63. HARRIS F (2021)
64. OMER M (2024)
65. ALVAREZSANTOS J (2018)
66.
67. ALIAKBARLOU S (2018)
68. PREBANIC K (2021)
69. ALZOUBI H (2022)
70. KENG T (2016)
71. JONG C (2019)
72. MUBARAK S (2015)
73. ZAMAN U (2022)
74. BOURKE J (2017)
75. ROSAKSZYROCKA J (2022)
76. ALOLAYYAN M (2011)
77. JAMWAL A (2021)
78. ZARAY A (2023)
79. RAUF A (2023)
80. KHALAFALATEYYAT S (2024)
81. MAGNO F (2024)
82. KEINAN A (2018)
83. IQBAL J (2017)
84. AZMAN N (2018)
85. NEYESTANI B (2016)
86.
87.
88. KHENI N (2015)
89. KEINAN A (2018)
90. MASHHADI M (2008)
91. IQBAL J (2017)
92. ISORAITE M (2017)
93. HSIEH C (2012)
94. LAI K (2003)
95.
96.
97. MASHWAMA N (2017)
98. OLEMAT A (2014)
99. KERZNER H (2002)
100. HOLZER D (2016)
101.
102. NASSAR N (2014)
103.
104. TALIB F (2010)
105. MAZHER U (2015)
106. AHMED A (2021)
107. ALAWAG A (2023)
108. LOVE P (2004)
109.
110. MOHAMMADMOSADEGHRAD A (2006)
111. VANHOUCKE M (2011)
112. AKIMOVA E (2017)
113. RAHMAN S (2014)
114. LENGYEL P (2021)
115. HWANG B (2013)
116. BUNIYA M (2021)
117. HANDFORD M (2014)
118. SOBOTKA A (2016)
119. ABEYSINGHE N (2022)
120. KINEBER A (2021)
121. DAOUD A (2021)
122. TABATABAEE S (2022)
123. DAOUD A (2023)
124. ALI A (2023)
125. DAOUD A (2021)
126. SHURRAB J (2019)
127. OLANREWAJU O (2022)
128.
129. DOVALLE P (2016)
130. YIN R (2014)
131. FAN Y (2016)
132. XIONG B (2015)
133. HAIR J (2021)
134. HAIR J (2011)
135. HENSELER J (2017)
136. THAM K (2019)
137. LEGUINA A (2015)
138. MEMON A (2014)
139. DURDYEV S (2021)
140. CHO G (2023)
141. HOCK C (2010)
142. GARSON G (2016)
143. MOHANDES S (2022)
144. HAIR J (2013)
145. MEMON A (2014)
146. KINEBER A (2023)
147. ABHAMID M (2017)
148. CHILESHE N (2018)
149. SARSTEDT M (2014)
150. KINEBER A (2021)
151.
152. OLANREWAJU O (2021)
153. HAIR J (2014)
154. JONG C (2019)
155. NAWI M (2023)
156. KINEBER A (2022)
157. GOEHE R (2012)
158. HAIR J (2012)
159. NAJI G (2021)
160. ABDHAIRAWANG J (2013)
161. ISMAIL K (2020)
162. FORNELL C (1981)
163. NUNNALLY J (1978)
164. GRONEMUS J (2010)
165. WU H (2013)
166. HENSELER J (2009)
167. HAIR J (2020)
168. HAIR J (2017)
169. SADIKOGLU E (2010)
170. PURWANTO A (2021)
171. CARNERUD D (2018)
172. ZAID A (2024)
173. GIMENEZESPIN J (2013)
174. MCADAM R (2019)
175. RAMADHANTY D (2023)
176. ALAWAG A (2024)
|
10.1016_j.heliyon.2023.e19709.txt
|
TITLE: Simple solutions for improving thermal comfort in huts in the highlands of Peru
AUTHORS:
- Mejia-Solis, Enrique
- Arias, Jaime
- Palm, Björn
ABSTRACT:
In the Peruvian mountains, hundreds of thousands of rural households living in poverty live in cold indoor environments, close to 0 °C. Indoor cold causes thousands of respiratory diseases and excess of winter deaths. In this study, we numerically calculated the impact of simple low-cost refurbishments on discomfort time during a year.
Using EnergyPlus and Python, we modelled a typical one-room hut used as bedroom built with a metal-sheet roof, adobe walls, dirt floors, and high infiltration rates. Then, 9 individual solutions were studied, and their combination resulted in 215 different hut designs. The model was calibrated with field measurements to estimate the infiltration. All the numerical calculations included an uncertainty analysis based on Monte Carlo method, and a sensitivity analysis to assess the impact of reducing infiltration on discomfort time.
The base case had a discomfort time of 44% of the time. The calibration of infiltration resulted in a mean hourly air exchange rate of 29.1 h−1 (SD = 17.0 h−1). Five different designs formed the Pareto front that optimized discomfort time and cost. The solution with the lowest discomfort time during a year, 37% of the time, was adding insulation to the roof (U = 0.83 W/m2•K) and the door (U = 1.00 W/m2•K); its cost was 286 USD. In this solution, when infiltration was reduced to 4.1 h−1 (SD = 4.1 h−1), discomfort time decreased to 16%.
These results benefit those households that nowadays invest their limited resources to improve their living conditions but without technical guidance.
BODY:
1 Introduction In the Peruvian mountains, nighttime indoor temperatures in rural houses are as low as 1 °C [ 1–3 ]. Most houses are mud-brick huts with a thin metal roof, a thin metal door, and bare soil as a floor, and they exhibit high air leakage through holes in the windows, doors, and/or roofs [ 2 , 4–6 ] ( Fig. 1 & Fig. 2 ). Low indoor temperatures are related to cardiovascular problems, mental health disorders, respiratory infections, and excess winter mortality [ 3 , 7 ]. These effects are also related to other local risk conditions such as low socioeconomic status, vulnerability to outdoor winter temperatures, difficulty accessing health services, and indoor air contamination caused by dirt floors and cooking smoke [ 8 ]. All the risk conditions mentioned contribute to thousands of respiratory infections and hundreds of deaths among vulnerable people each winter [ 3 , 8 ]. To describe local demographics and housing conditions, we used data from Peru's 2019 National Demographic and Health Survey [ 9 ]. This survey was a sample survey designed according to an internationally known methodology [ 10 ]. We calculated that approximately 2.8 million people, a number which represents 9% of the total Peruvian population [ 11 ], lived in the rural mountains and were distributed in 990,000 households, which represents 12% of the total households in Peru [ 11 ]. 87% of these households lived in buildings that used mud or mud bricks as the main wall material, 66% lived in buildings that had a roof made of metal sheet roofing, fiber cement, or a similar material, 75% lived in buildings that had dirt floors, and 75% used biomass for cooking. 90% of the households that used biomass for cooking had a room specially designated for cooking, which is possible evidence that the use of biomass for heating is rare. Between 2017 and 2019, the Peruvian government invested close to 150 million USD to refurbish more than 6300 houses [ 12 ]. The government installed wooden floors, ceilings, new windows, small vestibules, and a Trombe wall for passive solar heating ( Fig. 3 b). In a refurbished house at 4 a.m., the indoor temperature was 16 °C and the outdoor temperature was −1 °C (building layouts and number of occupants were not reported) [ 13 ]. In addition, between 2019 and 2020, the government invested approximately 190 million USD to build 24,370 houses [ 8 ]. These houses had an area of 33 m 2 , with two bedrooms, one living room, an insulated envelope, double-glass windows, adobe or fired-brick walls, but no heating system [ 1 ] ( Fig. 3 a). In one house, nighttime indoor temperatures were between 6 °C and 9 °C, while outdoor temperatures were between 2 °C and 7 °C (the number of occupants during these measurements was not reported) [ 1 ]. Private institutions have also financed hundreds of house improvements [ 14 ]. Despite all these investments, thousands of households still need warmer houses. Local professionals have studied low-cost passive solutions using local materials to increase thermal insulation, airtightness, and solar heat gains (winter solar radiation is relatively high) ( Table 1 ) [ 15–18 ]. Weiser et al. [ 16 ] numerically calculated the impact of using different construction materials on indoor temperatures in a 14.8 m 2 one-room house. They tested: (1) fired clay brick walls with concrete roof; (2) adobe walls with wooden roof; (3) light earth wall (0.12 m thickness) with light earth roof; and (4) light earth wall (0.22 m thickness) with light earth roof. 
The most comfortable combination was light earth wall (0.22 m thickness) with light earth roof and it was comfortable during 94% of the time during winter. The researchers used the adaptive thermal comfort model defined by De Dear & Brager [ 19 ]. Other local professionals have studied low-cost mechanical heating systems. Molina et al. [ 6 ] tested a system made of a solar thermal collector, and a 65-L radiator made of 6-inch diameter PVC tube. Warm water flowed from the solar collector at 6 p.m. to the PVC tube and stayed there until 6 a.m. The test was done in a 9 m 2 × 2.2 m room with adobe walls, double doors, insulated floor, insulated roof, and double-glazed windows with shutters. Indoor temperatures went from 14 °C to 12 °C while nighttime outdoor temperatures went from 7 °C to 0 °C. Holguino et al. [ 20 ] studied latent heat storage in a floor made with four layers: a bottom layer of concrete; a layer of insulation; a layer of andesite stone with guano, as a heat-storage material; and an upper layer of wood. Warm water ran through tubes in the floor. The test was done in an empty 4.6 m 2 × 1.7 m room with adobe walls, double-glazed windows, plywood ceiling, insulated roof, and insulated walls. Nighttime indoor temperatures went from 10 °C to 6 °C while outdoor temperatures went from 5 °C to −2 °C. Similar housing problems affect people in other mountainous regions of the world with similar social conditions. In Ecuador, Miño-Rodriguez et al. [ 21 ] calculated that the indoor temperature in a low-cost house during a year dropped to a minimum of 3.3 °C and had a median temperature of 6.1 °C, while the minimum outdoor temperature was 1.9 °C. This one-story house had an area of 34 m 2 , a roof made of concrete tiles, a floor made of concrete with plaster, and walls made of compressed stabilized earth blocks. The researchers calculated that mounting a ceiling and reducing air infiltration increased the minimum temperature by 0.9 °C and the median by 1.7 °C. In Nepal, Thapa et al. [ 22 ] measured a mean nighttime indoor temperature of 10.3 °C in temporary shelters made of metal sheets while the mean outdoor temperature was 7.6 °C. The researchers numerically calculated that adding a 2 mm tarpaulin and 12 mm cellular polyethylene foam to walls and roofs raised indoor temperatures over 11 °C for 70% of the night. People living at high altitudes and with low indoor temperatures feel more satisfied with their indoor environment at lower temperatures than other populations living in more conventional conditions [ 23 ]. Mountainous populations living in cold houses have developed behavioral adaptations to the cold and expect to feel cold indoors [ 24 ]; these two characteristics help them to feel comfortable at lower temperatures [ 25 ]. For this reason, conventional thermal comfort models, like Fanger's Predictive Mean Vote (PMV) or ASHRAE's adaptive thermal comfort model, predict higher cold discomfort than the discomfort found in field studies focused on mountainous populations [ 23 , 26 , 27 ]. Furthermore, laboratory comfort models like the PMV do not include the influence of hypobaric conditions in the heat transfer between humans and the indoor environment [ 28 ]. In the Peruvian mountains, 46% of 10 people surveyed were satisfied in rooms with air temperatures between 5 °C and 12 °C [ 29 ]. However, the studied sample was arguably small, 10 people, to be used to generalize the results for the entire territory. 
In another study, 32 out of 43 households belonging to a public housing program ( Fig. 3 a) were satisfied with their indoor environment [ 30 ]. The researcher did not measure indoor temperatures but, according to another report, the mean nighttime indoor temperature could have been 7 °C [ 1 ]. Other mountainous populations also feel thermally satisfied at low indoor temperatures. In rural stone dwellings in Tibet, 76% of 327 people surveyed were satisfied with operative temperatures between 3.5 °C and 7.5 °C, while outdoor temperatures were between −5.6 °C and 10.3 °C [ 27 ]. In that survey, the calculated neutral temperature, defined using linear regression, was 12.9 °C. In Nepal, in temporary shelters made of zinc sheets, 46% of 695 people surveyed were satisfied with their indoor environment when indoor air temperatures were between 5 °C and 29 °C and outdoor temperatures were between 1 °C and 25 °C [ 31 ]. The winter neutral temperature, calculated using Griffiths’ method, was between 15.0 °C and 20.8 °C depending on the region. We have observed that some households in the Peruvian mountains have invested in refurbishments to increase indoor temperatures. These include reducing infiltrations, installing woven polyethylene ceilings, and/or installing semitransparent plastic sheets as skylights ( Fig. 4 ). A household will implement solutions based on empirical experience, but empirical evidence could be misleading. For example, skylights could be too small to cause a significant impact on nighttime temperatures. Thus, empirical decisions need to be supported by scientific evidence to guarantee that households can use their limited resources effectively. This scientific evidence must consider a variety of conditions such as house size, number of occupants, and weather. This evidence is rarely found in the literature. In this study, we evaluated simple low-cost refurbishments such as skylights and insulation, either as stand-alone or combined solutions. The main objective was to find which solution had the greatest impact on thermal comfort. The perceived comfort was numerically calculated including an uncertainty analysis. The impact of reducing infiltration in a refurbishment was calculated via a sensitivity analysis. Sensitivity analysis has been seldom applied in the thermal analysis of buildings located in tropical weathers and at high-altitude, like in our case of study. Two particular characteristics of this weather are high solar radiation during winter and high daily temperature oscillation during the whole year. Ordóñez et al. [ 32 ] did a sensitivity analysis of a 36 m 2 unconditioned dwelling located in Quito i.e. tropical weather and at 2800 m. They used a calibrated model and different boundary conditions to study the influence on thermal comfort of five parameters: windows to wall ratio, air infiltration, window orientation, window glazing, and envelope's thermal mass. The researchers found that window to wall ratio was the most sensitive parameter and a range from 20 to 40% resulted in the highest number of hours with thermal comfort. Window orientation and window glazing had the lowest influence. We consider this study to be a contribution not only to the inhabitants, for the reasons already explained, but also to science. Uncertainty and sensitivity analyses of building performance simulations is a research area that has been getting more attention in the last two decades [ 33 ]. However, these analyses are often done on conventional buildings. 
In our study, we focused on the uncertainties of construction materials and other conditions that are not common in the literature, and drew on secondary sources seldom used in other building performance simulations. 2 Methodology Numerical calculations were performed using EnergyPlus v22.2, a building energy simulation program [ 34 ]. First, a base case building was defined, and 9 refurbishment solutions based on passive techniques were selected from the literature. These solutions were then combined to create 215 new building designs. Nighttime operative temperatures and comfort indices were calculated for each design, with nighttime considered to be the period between 8 p.m. and 5 a.m. The calculations were performed with a frequency of 1 h for one year and included an uncertainty analysis using the Monte Carlo method. To assess the impact of including a refurbishing solution in a building design, a sensitivity analysis was performed based on linear regression and the calculation of standard regression coefficients (SRC). The Python programming language and the Eppy library ( https://pypi.org/project/eppy/ ) were used for the parametric analysis with EnergyPlus, for the Monte Carlo analysis, and for the multivariable linear regression and SRC calculations. 2.1 EnergyPlus's model EnergyPlus uses the heat balance method to perform transient heat transfer calculations, assuming that the indoor air is at the same temperature throughout the entire volume of a thermal zone. It also assumes uniform surface temperatures, uniform irradiation, diffuse radiation, and one-dimensional heat conduction [ 34 , 35 ]. This method analyzes four processes: heat balance at the walls' outdoor surfaces, heat conduction through the walls, heat balance at the walls' indoor surfaces, and the indoor-air heat balance. The indoor-air heat balance is the distribution of heat supplied to the thermal zone's indoor air. In a zone without mechanical heating or ventilation, the indoor-air heat balance is: energy stored in zone air [J/s] = sum of convective internal loads [J/s] + convective heat transfer from zone surfaces [J/s] + heat transfer caused by air infiltration [J/s], or [ 36 ]: (1) \rho_z V c_p \frac{dT_z}{dt} = \sum_{i=1}^{N_{sl}} \dot{Q}_i^t + \sum_{i=1}^{N_{surfaces}} h_i^t A_i (T_{si}^t - T_z^t) + \dot{m}_{inf}^t c_p (T_\infty^t - T_z^t) where A_i = area of zone surface i [m²]; c_p = specific heat of the air [J/(kg·K)]; h_i^t = convective heat transfer coefficient of surface i at time t [W/(m²·K)]; \dot{m}_{inf}^t = air infiltration mass flow rate at time t [kg/s]; N_{sl} = number of convective internal loads [−]; N_{surfaces} = number of surfaces in the thermal zone [−]; \dot{Q}_i^t = heat flux from internal load source i at time t [J/s]; T_{si}^t = temperature of surface i at time t [K]; T_z^t = indoor air temperature at time t [K]; T_\infty^t = outdoor air temperature at time t [K]; V = volume of the thermal zone [m³]; \rho_z = air density [kg/m³]. The air infiltration rate (V_{inf}) was calculated using: (2) V_{inf}^t = \frac{A_L}{1000} \sqrt{C_s \Delta T^t + C_w (U^t)^2} where V_{inf}^t = air infiltration rate at time t [m³/s]; A_L = effective air leakage area [cm²]; C_s = stack coefficient [(L/s)²/(cm⁴·K)]; \Delta T^t = indoor (T_i^t) minus outdoor (T_o^t) temperature difference at time t [K]; C_w = wind coefficient [(L/s)²/(cm⁴·(m/s)²)]; U^t = local wind speed at time t [m/s]. The values of C_s and C_w were 0.000319 and 0.000145, respectively, taking into account a one-story building with no obstacles around it [ 35 ]. 
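To make the infiltration model concrete, the following minimal Python sketch evaluates Eq. (2) for hypothetical hourly inputs. The stack and wind coefficients are the values quoted above; the leakage area, temperature differences, wind speeds, and room volume are illustrative assumptions rather than values produced by the study, which computed infiltration inside EnergyPlus.

import numpy as np

# Stack and wind coefficients for a one-story building with no obstacles (see text).
C_S = 0.000319   # (L/s)^2 / (cm^4*K)
C_W = 0.000145   # (L/s)^2 / (cm^4*(m/s)^2)

def infiltration_rate_m3_s(a_leak_cm2, delta_t_K, wind_m_s):
    """Air infiltration rate of Eq. (2) in m^3/s for each hourly value."""
    q_l_s = a_leak_cm2 * np.sqrt(C_S * np.abs(delta_t_K) + C_W * np.asarray(wind_m_s) ** 2)
    return q_l_s / 1000.0  # L/s -> m^3/s

# Hypothetical 24-hour series of indoor-outdoor temperature difference and wind speed.
delta_t = np.linspace(2.0, 15.0, 24)     # [K]
wind = np.full(24, 2.5)                  # [m/s]
v_inf = infiltration_rate_m3_s(4750.0, delta_t, wind)   # A_L = 4750 cm^2 (mean calibrated value)
aer_per_h = v_inf * 3600.0 / 20.0        # air exchange rate [1/h] for an assumed ~20 m^3 room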
The effective air leakage area A_L was randomly selected from a probability distribution explained in the Model calibration and infiltration section below. Foundation: the Kiva model was used to calculate heat transfer to the ground. It performs two-dimensional finite difference calculations using information about the building layout, weather, sun position, zone temperatures, and zone radiation [ 37 ]. An adiabatic boundary condition under the ground at a depth of 14 m was assumed [ 38 ]. The TARP model was used to calculate the convective heat transfer coefficients of internal surfaces; it considers surface orientation and temperature difference [ 36 ]. The DOE-2 model was used to calculate the convective heat transfer coefficients of external surfaces; it considers surface orientation, temperature difference, surface roughness, and wind speed [ 36 ]. 2.2 Base case, retrofitting solutions, and uncertainty analysis The base case model was a detached bedroom with a typical building layout ( Fig. 5 ) and construction materials [ 4 , 39 ] ( Fig. 6 ). To define the base case, EnergyPlus required inputs related to the physical properties of the construction materials, the building layout, infiltration, internal heat gains, and weather. Nine refurbishing solutions were selected to modify the base case ( Fig. 6 ): adding a skylight, adding a lightweight ceiling, adding window or skylight shutters that were closed during the night, and adding insulation to the roof, door, floor, interior surface of the walls, or exterior surface of the walls ( Table 2 ). For the solutions that added insulation to the envelope, insulation was added until the Peruvian standard's U-value limits for this climate were met: U_walls ≤ 1.00 W/(m²·K), U_roof ≤ 0.83 W/(m²·K), U_floor ≤ 3.26 W/(m²·K) [ 40 ]. The cost of each solution included construction materials and manufacturing ( Table 2 ). These solutions were combined using a full-factorial design, which resulted in 215 new building designs. Each of the 215 new house designs and the base case was defined by a set of inputs necessary for the calculations. The value of each input was randomly selected from probability distributions following the Monte Carlo methodology. According to this methodology, a random selection is performed for each input multiple times; thus, each design was represented by several sets of inputs with randomly selected values. All calculations were performed for each set of inputs. Several tests were run, and it was concluded that 200 sets of inputs per design resulted in consistent outputs. The probability distributions for inputs describing thermophysical properties were normal distributions with standard deviations (SD) of 5% ( Table 3 ), as is usually the case in uncertainty analyses of building performance simulations [ 33 , 41–46 ]. In this study, the maximum value for the absorptance properties was set to 0.95. For inputs related to building dimensions, building orientation, and material thickness, uniform probability distributions were used, and their values are indicated inside brackets in Fig. 5 & Table 2 . Uniform probability distributions are also common in uncertainty analyses of building performance simulations [ 33 , 41–46 ]. A schematic sketch of this sampling procedure is given below. Inputs needing further explanation were the weather, the adobe's thermophysical properties, the internal heat gains and mass, and the glazing materials' thermophysical properties. 
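The following Python sketch illustrates, under stated assumptions, how the design space and the Monte Carlo input sets described above could be generated: on/off refurbishment options combined factorially, thermophysical properties drawn from normal distributions with a 5% SD, and occupant-related inputs drawn from the discrete distributions of section 2.2(c). The option names and nominal values are placeholders, not the study's actual inputs; the study itself drove EnergyPlus through the Eppy library.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Placeholder names for on/off refurbishment options (illustrative only).
OPTIONS = ["skylight", "ceiling", "night_shutters", "roof_insulation",
           "door_insulation", "floor_insulation", "wall_insulation_int", "wall_insulation_ext"]

# Factorial combinations of the options (each subset of solutions is one candidate design).
designs = [dict(zip(OPTIONS, combo)) for combo in itertools.product([0, 1], repeat=len(OPTIONS))]

NOMINAL_PROPERTIES = {          # placeholder nominal thermophysical values
    "adobe_conductivity_W_mK": 0.50,
    "adobe_density_kg_m3": 1700.0,
    "adobe_specific_heat_J_kgK": 900.0,
}

def sample_input_set():
    """Draw one Monte Carlo input set: normal (5% SD) thermophysical properties
    plus the occupant-related inputs described in section 2.2(c)."""
    inputs = {name: rng.normal(value, 0.05 * abs(value))
              for name, value in NOMINAL_PROPERTIES.items()}
    inputs["sleepers"] = rng.choice([1, 2, 3])                     # people sleeping in the room
    inputs["metabolic_heat_W"] = rng.normal(70.0, 0.10 * 70.0)     # 70 W mean, 10% SD per person
    inputs["total_insulation_clo"] = rng.choice([3.3, 3.8, 4.3])   # clothing + bedding insulation
    return inputs

input_sets = [sample_input_set() for _ in range(200)]   # 200 sets per design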
a) Weather. The Peruvian mountains are classified into two regions according to altitude [ 52 ]; the highest-altitude region, above 3500 m a.s.l., was chosen for this study. According to the national census of 2017, there are 557 districts in this region with rural populations living in huts. Typical meteorological year weather files were created with Meteonorm 7.2 for these districts. These files show that, in this region, the outdoor temperature varies little over the year but strongly within a day, and that solar radiation is high throughout the year ( Fig. 6 ). During the Monte Carlo analysis, each weather file had the same probability of being selected as an input. b) Adobe's thermophysical properties. Adobe bricks have different thermal properties depending on their manufacturing process. In Peru, adobe bricks are made artisanally, and their thermal conductivities and specific heat capacities are lower than international references ( Table 4 ). During the Monte Carlo analysis, each of the sets of properties shown in Table 4 had the same probability of being included as an input. In each set, solar and thermal absorptances were 0.94 and 0.70, respectively. Each property was assigned a normal probability distribution with an SD of ±5%, but the maximum admitted value for absorptance was 0.95. c) Internal heat gains and mass. One person was assumed to be indoors all day and to open the door, which increased infiltration, whenever the indoor air temperature was higher than 14 °C. This temperature was chosen assuming that these people feel comfortable at relatively low temperatures, as reported in studies in Nepal [ 57 ]. In some cases, additional people were assumed to be indoors between 8 p.m. and 5 a.m. To estimate the number of people sleeping in the room, data from Peru's 2019 National Demographic and Health Survey [ 39 ] were used to calculate the number of bedrooms and household members per household ( Fig. 7 ). Most households had 1 or 2 bedrooms and 1 to 5 members. Thus, the input number of people sleeping in a bedroom had a uniform probability distribution with values of 1, 2, or 3. The heat production of each person was selected from a normal probability distribution with a mean of 70 W and an SD of 10%. The people's total thermal insulation due to clothing, bed clothes, and type of bed was selected from a uniform probability distribution over {3.3, 3.8, 4.3} clo. These values were selected taking into account our observations and results from experimental measurements in the literature [ 58 ]. One incandescent light bulb was assumed as an additional internal gain, turned on from 8 p.m. to 9 p.m., with a heat production of 50 W and radiant and visible fractions of 0.8 and 0.1, respectively [ 59 ]. Internal mass was considered to be furniture, modelled as a wood sheet measuring 4 m² × 0.013 m. d) Glazing material. The glazing materials' optical properties were assumed to be affected by soiling, decreasing transmittance to 70% and reflectance to 85% of their respective nominal values [ 60 , 61 ], thus increasing emissivity ( Table 5 ). The properties of polyvinyl chloride (PVC) with yellow pigment were derived from those of PVC without pigments [ 62–64 ], assuming that the yellow pigment reduced transparency by 96% [ 65 ] and mainly increased reflectance [ 66 ]. 2.3 Model calibration and infiltration A calibration of the EnergyPlus model against field measurements was used to find adequate values of air infiltration. 
Air infiltration in similar buildings has been reported in the literature, but over a broad range of values. Williams & Unice [ 5 ] measured infiltration in two houses in the Peruvian mountains using a tracer gas technique. These two houses were two-story buildings with higher-quality construction than our base case. Air exchange rates (AERs) ranged from 0.5 h−1 to 6.2 h−1 with closed windows. In the Guatemalan mountains, researchers studied a hut used as a kitchen and calculated AERs between 7.3 h−1 and 52.9 h−1 from the rate at which carbon monoxide was removed [ 68 , 69 ]. Considering that infiltration is one of the most influential parameters in the calculation of indoor temperatures in similar buildings [ 32 ], a model calibration was done using physical measurements. Two air temperature sensors (Elitech RC-4HC) were placed inside a hut in Langui district in Cusco, Peru ( Fig. 1 b). Outdoor environmental conditions were measured with a meteorological station (Onset HOBO RX3004 monitoring station, Onset HOBO S-LIB-M003 solar radiation sensor, Onset HOBO S-THC-M002 air temperature/relative humidity sensor, Davis S-WCF-M003 wind speed and direction sensor). Measurements were taken for 8 days (14–21 October 2022) and the data were averaged into 1-h bins to be consistent with our simulation results, which were computed at a 1-h frequency ( Fig. 7 ). The measured weather data were used to calculate indoor air temperatures inside the hut. This calculation included an uncertainty analysis of the thermophysical properties of the materials. The materials' thermophysical properties were randomly sampled 100 times, creating 100 different simulation files. Each file was simulated 20 times with different effective air leakage area values (A_L in Eq. (2)), from 2000 to 7000 cm² in increments of 250 cm². For each simulation, the Normalized Mean Bias Error (NMBE) and Normalized Root Mean Square Error (NRMSE) were calculated. For each file, the A_L that minimized the NMBE was selected; in all cases, the resulting NMBE and NRMSE values were lower than those recommended in the literature, 5% and 10% respectively [ 32 ]. 2.4 Thermal comfort model Rijal has extensively studied the thermal comfort of people living in high-altitude mountains in Nepal under housing conditions similar to those described in the present study [ 22 , 31 , 57 , 70–72 ]. An adaptive thermal comfort model developed by Rijal [ 57 ] was chosen for this study to calculate comfort temperatures (T_c), i.e., temperatures that cause a neutral thermal sensation. The model is: (3) T_c = 0.808 T_g + 4.4 °C where T_g is the globe temperature (°C). T_g was assumed to be equal to the operative temperature T_o calculated by EnergyPlus. The comfortable range for 80% acceptability was defined as T_c ± 3.5 °C [ 73 ]. 2.5 Optimal solutions and infiltration sensitivity analysis The optimal solutions were the set of retrofitting solutions on the Pareto front of two variables: investment cost ( Table 2 ) and discomfort time during a year. Discomfort time was defined as the total number of hours of the year when indoor air temperatures were outside the comfortable range; this total was calculated by adding the results for each of the 200 sets of inputs per design and expressed as a percentage of the total number of hours in a year (8760). A sensitivity analysis was applied to study the effect of reducing infiltration in the optimal solutions and the base case. 
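A short Python sketch of the calibration metrics and the per-file selection of A_L described in section 2.3 is given below. The paper does not spell out the NMBE and NRMSE formulas, so the ASHRAE-style definitions used here are an assumption, and the measured and simulated series are illustrative placeholders.

import numpy as np

def nmbe_percent(measured, simulated):
    """Normalized Mean Bias Error in % (assumed ASHRAE-style definition)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / (m.size * m.mean())

def nrmse_percent(measured, simulated):
    """Normalized Root Mean Square Error in % (assumed definition)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / m.mean()

def select_leakage_area(measured, simulated_by_al):
    """Return the effective leakage area A_L [cm^2] whose simulated hourly indoor
    temperatures minimize |NMBE| against the measurements (section 2.3)."""
    return min(simulated_by_al, key=lambda al: abs(nmbe_percent(measured, simulated_by_al[al])))

# Illustrative example: two candidate A_L values with made-up hourly series.
measured = [11.0, 10.5, 10.2, 9.8]
candidates = {4500: [11.3, 10.9, 10.4, 10.0], 4750: [11.1, 10.6, 10.2, 9.9]}
best_al = select_leakage_area(measured, candidates)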
Reducing infiltration was not included as a retrofitting solution because of the difficulty of controlling its implementation in real life and the lack of infiltration data for this kind of building. Thus, the sensitivity analysis was included to assess the potential reduction in discomfort if measures to reduce infiltration were possible. 3 Results The maximum and minimum measured indoor temperatures were 22.4 °C and 2.7 °C, respectively ( Fig. 8 ). The indoor temperature oscillation between day and night was close to the outdoor temperature oscillation ( Fig. 8 ). Around noon, indoor temperatures were higher than outdoor temperatures because of the high thermal transmittance of the metal roof. Model calibration in combination with the uncertainty analysis resulted in 100 values of effective air leakage area (A_L). These 100 values were approximately normally distributed with mean = 4750 cm² and SD = 750 cm². Indoor temperatures in the base case were calculated by adding this probability distribution to the methodology described above ( Fig. 8 ). Most of the measurements were not in the range of "calculated hourly indoor temperature ± 2 SD" ( Fig. 8 ). Despite this negative result, the calculated and measured values were arguably similar enough to consider the model calibration adequate. For the base case, a discomfort time of 44% of the year was calculated. Five non-dominated solutions in terms of investment cost and discomfort were found ( Fig. 9 ). The minimum percentage of discomfort time during a year was 37%, obtained after investing 286 USD in adding insulated door and adding insulated roof ( Table 6 ). 84 of the 215 tested designs had a discomfort time lower than the base case. The importance of decreasing the roof's transmittance is notable, because the adding insulated roof solution was present in 73% of these 84 designs. Investing in retrofitting did not necessarily decrease discomfort ( Fig. 9 ): 131 of the 215 tested designs had discomfort times equal to or higher than the base case. The highest discomfort time was 50%, belonging to the design combining adding a window shutter, adding interior insulation to the walls, adding an insulated floor, and adding a skylight. Adding an insulated floor was the most frequent solution among these 131 designs, appearing in 56% of them. However, its effect on discomfort time was not necessarily negative, because it was also present in a similar proportion (42%) of the designs with a lower discomfort time than the base case. The mean hourly air exchange rate (AER) in the base case after calibration was 29.1 h−1 (SD = 17.0 h−1), and in the solution with the lowest discomfort (combining adding insulated roof and adding insulated door) it was 29.3 h−1 (SD = 17.5 h−1) ( Fig. 10 ). These high AERs are a consequence of the high A_L = 4500 cm² (SD = 750 cm²) and the low room volume. An infiltration sensitivity analysis was performed: the mean value of A_L was progressively reduced until a mean AER of 4.1 h−1 (SD = 4.1 h−1) was reached. With this level of infiltration, the minimum discomfort time was 16%. The calculated comfort temperature (T_c in Eq. (3)) was 14.9 °C. Considering a comfortable range of ±3.5 °C, the design with adding insulated roof and adding insulated door had only cold discomfort ( Fig. 11 ). 4 Discussion Our study indicates that the lowest discomfort time is obtained after adding insulated roof and adding insulated door. 
The reduction in discomfort was 7% of the year (613 h) compared with the base case, and it required an investment of 286 USD. The base case can also be improved with a lower investment, albeit with a smaller reduction in discomfort. We have not found references discussing a desirable ratio of discomfort reduction per invested dollar, but we believe that the goal should be zero discomfort time. In the studied context, the reduction of discomfort is limited by the low investment capacity of the households. Measures to decrease infiltration reduced discomfort in both the base case and the design combining adding insulated roof and adding insulated door. Reducing infiltration in the base case reduced discomfort more than adding insulated roof and adding insulated door, by 14% and 7% respectively. However, suggesting measures to reduce infiltration and quantifying their impact on the air exchange rate was out of the scope of this study. In addition, the impact of low infiltration on human health and on the moisture content of adobe walls has to be considered. Other researchers have also recommended the combination of infiltration reduction and roof insulation, such as those in [ 15 , 16 , 74 ]. However, cold discomfort persisted at low infiltration levels, suggesting that other measures should be included to improve these houses, e.g., a roof with lower transmittance, passive heating techniques, or mechanical heating systems such as that in [ 6 ]. Some modifications to the base case increased discomfort time, and, in other cases, more investment did not decrease discomfort. Thus, our results raise awareness about the importance of combining well-intended actions with numerical calculations. This approach is especially important in public housing programs that refurbish or build thousands of houses. Unlike previous calculations of comfort in the Peruvian mountains [ 6 , 75 ], our work used an adaptive model developed for people living in the Nepalese mountains with low indoor temperatures. This model defines a comfortable range with lower temperatures than other adaptive thermal comfort models, such as De Dear & Brager's [ 19 ] model used in [ 75 ]. Using a different comfort model would lead to different optimal solutions. We chose the Nepalese research because of the similar living conditions, but there are other models developed for people living in high-altitude mountains, such as the aPMV model tested in field experiments in residential buildings [ 23 , 27 ]. It is important to carry out thermal comfort surveys of the rural Andean population in Peru to improve the evaluation of retrofitting solutions. The inclusion of a variety of conditions in our calculations made our results robust but also increased the uncertainty of the output. Some uncertainties are sparsely documented in the literature, such as effective air leakage areas, building layouts, internal heat gains, and the thermal properties of local adobes. Further research will help to decrease the output's uncertainty. One of the uncertainties with a significant influence on the calculations was infiltration. Two previous studies have modelled rural houses in the Peruvian mountains [ 6 , 16 ] using EnergyPlus. These researchers modelled houses with a constant infiltration of 1 h−1 . We consider that a transient calculation of the infiltration (Eq. (2)) is closer to reality. 
However, in one of those studies [ 6 ], indoor temperature calculations were compared with actual measurements and the difference was less than 1 °C during nighttime hours; nevertheless, that building appeared to be more airtight than traditional houses. 5 Conclusions In this study, we numerically calculated the impact of 9 refurbishing solutions to decrease the discomfort time in rural houses in the Peruvian mountains. The solutions were defined as modifications to a typical one-room hut, adding insulation to a surface or adding a skylight. A typical one-room hut served as the base case scenario. The base case had a metal-sheet roof, adobe walls, a dirt floor, and significant infiltration. Infiltration levels were approximated by calibrating an EnergyPlus model with field measurements. The application of one or more solutions to the base case transformed it into 215 new house designs. EnergyPlus was used to calculate the discomfort time during a year. The calculations included an uncertainty analysis of all inputs and a sensitivity analysis of air infiltration. In conclusion, to help the inhabitants affected by uncomfortable indoor temperatures, we recommend promoting the addition of insulation to the roof (U = 0.83 W/(m²·K)) and to the door (U = 1.00 W/(m²·K)), with an investment of 286 USD. This was the solution that produced the lowest discomfort during a year, 37% of the time, among the calculated optimal solutions in terms of discomfort time and cost. Four other solutions formed the Pareto front and are also recommended in cases where less expensive solutions are required. Our results showed that 61% of the tested designs had no impact or a negative impact on the discomfort time. By calibrating an EnergyPlus model with eight days of measurements in a typical hut, it was found that hourly air exchange rate (AER) levels with a mean of 29.1 h−1 (SD = 17.0 h−1) reduced the error between the measurements and the calculations to adequate levels. Reducing infiltration reduced discomfort in both the base case and the solution with added insulation to the roof and the door. In the latter solution, discomfort time was reduced to 16% when the mean hourly AER was equal to 4.1 h−1 (SD = 4.1 h−1). All this discomfort time was cold discomfort. Specific measures to reduce infiltration were not listed because of a lack of information about their impact. Although this was a theoretical study, the calculations offer relevant information for households in relation to current actions being implemented to improve their housing conditions. Most households have limited economic resources to invest in refurbishing, and this situation limits the number of refurbishing solutions that they can afford. Households should continue making an effort to reduce infiltration and prioritize investments in adding insulation to roofs and doors. Author contribution statement Enrique Mejia Solis: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Björn Palm and Jaime Arias: Analyzed and interpreted the data. Data availability statement No data was used for the research described in the article. Additional information During this study, Enrique Mejia-Solis had a scholarship for his PhD studies granted by Peru's National Council of Science, Technology, and Technological Innovation ( CONCYTEC ) with contract number 303-2014-FONDECYT . 
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. ALEJANDRO T (2019)
2. JIMENEZ C (2017)
3. GUTIERREZ A (2021)
4. GAYOSO M (2015)
5. WILLIAMS P (2013)
6. MOLINA J (2019)
7. LIMA F (2020)
8.
9.
10. RUTSTEIN S (2006)
11.
12.
13. SORIA J (2017)
14.
15. (2015)
16. WIESER M (2020)
17. WIESER M (2021)
18. MOLINA J (2021)
19. DE DEAR R (1998)
20. HOLGUINO A (2018)
21. MIÑO-RODRIGUEZ I (2016)
22. THAPA R (2019)
23. YU W (2017)
24. RIJAL H (2006)
25. BRAGER G (1998)
26. HUMPHREYS M (2013)
27. CHENG B (2018)
28. CHANG S (1996)
29. MOLINA J (2020)
30. REYNOSOACHAHUANCO V (2019)
31. THAPA R (2018)
32. ORDONEZ F (2021)
33. PANG Z (2020)
34. CRAWLEY D (2001)
35. (2017)
36. (2019)
37. KRUIS N (2015)
38. OKE T (1987)
39. (2019)
40. (2014)
41. HOPFE C (2011)
42. CHONG A (2015)
43. LOMAS K (1992)
44. SPITZ C (2012)
45. MAHAR W (2020)
46. CALLEJARODRIGUEZ G (2013)
47. COSTA V (2017)
48. PFUNDSTEIN M (2012)
49. OSSWALD T (2012)
50. SUEHRCKE H (2008)
51. (2000)
52. WIESERREY M (2011)
53. MAZRIA E (1979)
54. BALCOMB J (1983)
55. MINKE G (2006)
56. ABANTO G (2017)
57. RIJAL H (2021)
58. LIN Z (2008)
59. (2020)
60. SLEIMAN M (2011)
61. GURLICH D (2018)
62. NIJSKENS J (1985)
63. HU J (2019)
64. NIJSKENS J (1984)
65. YIN X (2017)
66. QI Y (2016)
67. INCROPERA F (1996)
68. SCHARE S (1995)
69. MCCRACKEN J (1998)
70. SHAHI D (2021)
71. GAUTAM B (2019)
72. RIJAL H (2010)
73. (2017)
74. NATIVIDADALVARADO J (2010)
75.
|
10.1016_j.jtemin.2025.100244.txt
|
TITLE: Chemometric assessment, seasonal variation and source apportionment of air pollutants in Islamabad's industrial area
AUTHORS:
- Anjum, Mavia
- Siddique, Naila
- Younis, Hannan
- Shafique, Munib Ahmed
- Munawar, Sadia
- Zubair, Mohsina
- Younas, Huzaifa
- Abbas, Ansar
- Faiz, Yasir
ABSTRACT:
Introduction
Global warming is intensified by atmospheric pollution, with industrial activities significantly contributing to this issue. This study investigates air pollution levels in the industrial area of Islamabad, the capital of Pakistan, a developing South Asian nation grappling with severe air quality threats.
Study area
This study was designed to assess the pollution levels in the air of the industrial area of Islamabad, Pakistan.
Methodology
Fine (size < 2.5 µm: PM2.5) and coarse (size between 2.5 and 10 µm: PM2.5–10) particulate matter samples were collected on polycarbonate air filters for three seasons in 2023. The elemental composition of PM was quantified using Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES).
Results
The average PM2.5 (40.42 µg m−3) and PM2.5–10 (221 µg m−3) concentrations exceeded the Pakistan Environmental Protection Agency (Pak-EPA) limits (PM2.5: 35 µg m−3; PM2.5–10: 150 µg m−3). In PM2.5, Na showed the highest mean concentration (8670 ng m−3) and As the lowest (40 ng m−3); in PM2.5–10, Ca was highest (5476 ng m−3) and Zr lowest (28 ng m−3). Seasonal variations revealed that, for PM2.5–10, Ca peaked at 7800 ng m−3 in autumn, with Mg, Si, Fe, and Al fluctuating significantly, while the toxic elements As, Pb, Co and Cr decreased from spring to autumn. In PM2.5, Ca peaked at 7043 ng m−3 and Na remained elevated in spring, with crustal and toxic metal concentrations decreasing from spring to autumn. Depositional flux was high for Ca and Ba in PM2.5–10, and for Na, Cr, and Cu in PM2.5. The enrichment factor and pollution index indicated higher contamination by Cr, Cu, Pb, S, Zn, Ni, As, Li, Mo, Sn, and Ag. Environmental Protection Agency Positive Matrix Factorization (EPA-PMF) identified steel mills, marble processing, e-waste incineration, industrial dust, battery processing and vehicular emissions as primary sources. The National Oceanic and Atmospheric Administration's Hybrid Single-Particle Lagrangian Integrated Trajectory model (NOAA HYSPLIT) confirmed local and transboundary contributions to elevated PM levels.
Conclusion
This study concludes that the air of Islamabad's industrial area has high levels of pollution arising from various sources, and that mitigation can be achieved by strict enforcement of regulations and laws.
BODY:
1 Introduction A healthy environment is vital for human comfort, physical well-being, and overall health. The profound impact of air pollution on both human health and the environment poses a significant risk for developed and developing countries alike. Degradation of air quality has emerged as a critical concern worldwide, driven by a combination of natural and human-induced factors [ 1 ]. Anthropogenic (human-induced) pollution sources release a complex mix of pollutants into the atmosphere, compromising environmental health and human comfort [ 2 ]. Central to this issue is particulate matter (PM), composed of solid particles suspended in the air. These particles act as atmospheric contaminants when present at elevated concentrations [ 3 ]. Particulate matter is classified by particle size: PM 2.5–10 comprises particles with aerodynamic diameters between 2.5 and 10 µm, while PM 2.5 comprises particles with diameters of 2.5 µm or less. Elevated levels of PM in the atmosphere pose significant risks to the environment, ecosystems, and human health [ 4 ]. Particulate matter can come from human activities, such as burning of fuels and biomass, operating oil refineries, driving automobiles, running power plants, emissions from various industries, etc. [ 5 , 6 ]. Particulate matter may also originate from natural sources such as sea salts, volcanic emissions, forest fires, and airborne dust [ 7 ]. Atmospheric PM can be produced by gas-to-particle conversion, leading to the development of secondary aerosols, or it can be directly released into the atmosphere as primary aerosols. Pollution due to elevated PM levels is widely recognized for its connection with lowered atmospheric visibility, increased health hazards, its contribution to global climate change, and increases in regional as well as local pollution levels [ 8 ]. It is a well-established fact that industries around the world emit toxic and heavy metals into the environment, threatening the local as well as the global climate and ecology, with adverse effects on human health [ 9 , 10 ]. Studying the dynamics of PM in industrial areas is therefore of utmost importance, and researchers around the world have conducted exhaustive research on this topic [ 11–16 ]. Research suggests that when the level of PM 2.5 surpasses 10 µg m −3 , there is an increase of 4 % in deaths related to lung diseases, a 9 % increase in respiratory disorders, and a 17 % increase in cardiovascular diseases [ 17 , 18 ]. These airborne particulates also perturb the radiation balance and reduce visibility [ 19 ]. Furthermore, there is a correlation between elevated levels of PM 2.5 and a rise in asthma attacks and the worsening of pre-existing respiratory diseases. Extended exposure to PM 2.5 and PM 2.5–10 can result in chronic bronchitis, reduced lung function, and an elevated likelihood of heart attacks in the long run [ 20–22 ]. Heavy metals bound to atmospheric particulate matter, such as PM 2.5–10 and PM 2.5 , pose significant health risks due to their carcinogenic potential and widespread presence in urban and industrial environments [ 23 , 24 ]. Epidemiological studies have found that the concentration of airborne particles, as well as numerous meteorological conditions, are closely associated with a rise in respiratory and cardiovascular disorders [ 25–27 ]. 
Furthermore, heavy metals bound to these particles can be ingested, causing significant health risks because these toxic metals are difficult to break down within biological systems [ 28 , 29 ]. Specific metals, such as As, can cause neurological and cardiovascular problems, while inhalation or direct skin contact can lead to cancer [ 30 ]. Additionally, metals such as Cr, Ni, Cu, Pb and Zn have been linked to kidney damage, abnormal cellular levels and subsequent development of cancer, neurological and digestive system impairments, and numerous cardiovascular problems [ 28 , 31 ]. Although a significant portion of heavy metals is bound to PM 2.5–10 , PM 2.5 -bound heavy metals are more hazardous than those bound to PM 2.5–10 because their smaller size allows them to penetrate deeper into the respiratory system, reaching the alveoli, the tiny air sacs in the lungs where they can be more efficiently absorbed into the bloodstream [ 32 , 33 ]. As a result, it is critical to study the composition, spatiotemporal distribution, and health effects of heavy metals in airborne particles. The newest air quality database from the World Health Organization (WHO) indicates that 98 % of global cities experiencing air pollution are located in low- and middle-income nations [ 34 ]. Being a developing nation with a significant degree of urbanization compared to other South Asian nations, Pakistan suffers from chronic air pollution. Comprising 36.37 % of Pakistan's total population, the urban population is 77.42 million and is rising at a rate of 2.52 % annually [ 35 , 36 ]. Pakistan is among the three states worldwide with the highest number of deaths caused by air pollution, due to aerosol mass concentrations (PM 2.5 ) continuously exceeding the air quality criteria set by the WHO. If air pollution in Pakistan is not effectively managed, it is projected that by 2030 the average lifespan could be reduced by 100 months [ 35 ]. This alarming statistic underscores the urgent need for comprehensive environmental policies and interventions to curb pollution levels. Quantifying the contribution of multiple sources provides crucial information for policymakers to develop successful emission control policies. Various statistical models have been employed successfully to ascertain the sources of air pollutants, specifically PM and PM-bound heavy metals, around the globe [ 37–40 ]. Positive Matrix Factorization (PMF) is a source apportionment model for air pollutants widely used by researchers [ 38 , 41 ]. Several source apportionment studies have been undertaken in Pakistan using a variety of techniques for PM characterization. According to Mansha et al., PM 2.5 levels in Karachi were found to be three times higher than the limits established by the USEPA. Using the PMF model, they identified secondary aerosols, industrial emissions, soil/road dust, and vehicular emissions as the major contributors to pollution. The city is also affected by humidity from the Arabian Sea. Trace metals Cr, Mo, and Ni from industrial sources show higher concentrations in the daytime. To address air pollution, mass transit systems should be implemented to curb private car demand and fossil fuel use [ 42 ]. Waheed et al. found that Islamabad's location, topography, and climate contribute to pollution in the city's atmosphere, reducing visibility and increasing allergies and respiratory problems. Gravimetric mass, black carbon (BC), and 29 trace elements were quantified in PM from a rural site. 
Three main pollution sources were identified using Principal Component Analysis (PCA): suspended soil particles, automobile-related sources (petrol and diesel combustion, traffic, tire wear), and combustion (wood and coal) [ 43 ]. Alam et al. found that the concentrations of PM (PM 2.5 and PM 10 ) in Peshawar, Pakistan, are rising due to increased urbanization and industrialization. The 24-h average PM 2.5 and PM 10 concentrations were 6 to 9 times higher than WHO guidelines. The sources of aerosol pollution were identified using PMF as re-suspended road/soil dust, automobile emissions, small industry emissions, brick kiln emissions, and household combustion. NOAA HYSPLIT4 was also used to determine local and non-local sources [ 44 ]. The Environmental Protection Agency PMF model in conjunction with statistical analysis is a well-proven and useful approach for identifying heavy metal and pollution sources in a given area [ 45 ], and NOAA HYSPLIT may be used to distinguish between local and transboundary sources. Cities are densely populated areas that generate significant amounts of aerosols and trace gases through industrial and anthropogenic activities. These emissions can have a detrimental effect on air quality, visibility, and the chemical composition of the atmosphere at both local and regional levels. Pakistan has industrial areas spread across different cities. Islamabad, the capital city of Pakistan, has an array of sectors, including a dedicated industrial zone. Pollution levels have been assessed in the soil and water of Islamabad's industrial area [ 46 ], but there is no long-term, systematic study of heavy metal contamination, pollution levels and source apportionment in the air of Islamabad's industrial region. This study seeks to fill this gap by conducting a detailed analysis of the elemental composition of the air in Islamabad's industrial sector over three seasons (spring, summer and autumn). It also examines pollution levels and heavy metal source apportionment in Islamabad's industrial sector. The findings of this study are critical for understanding and addressing the presence of heavy metals in the air of Islamabad's industrial region, as well as their sources. These insights are essential for effectively planning, monitoring, and managing the area's environmental resources. 2 Material and methods 2.1 Study area Islamabad, Pakistan's capital city, is located at coordinates 33°28′ to 33°48′ N and 72°48′ to 73°22′ E. It has a population of more than two million. Islamabad lies in the northern portion of the Potohar Plateau. The Surghar, Makarwal, Cherat, Rawalpindi, and Siwalik Groups are a collection of rock formations created between the Jurassic and Pleistocene eras. These formations include shale, claystone, limestone, sandstone, and iron or phosphate deposits. The soil in Islamabad is a mix of sand and loam [ 46 ], mostly derived from sedimentary rocks with a calcified subsoil and formed by the action of water and wind [ 47 ]. The city has a semi-arid climate, with warm and humid summers from May to June, a monsoon season in July and August, and frigid winters from November to March. Islamabad is divided into eight main sectors: administrative, diplomatic enclave, residential, educational and industrial sectors, commercial zones, and rural and suburban areas. Islamabad's industrial region consists mostly of three sectors: I-8, I-9, and I-10. 
The majority of industries are concentrated in sectors I-9 and I-10. These include steel mills, marble processing facilities, chemical manufacturing plants, battery manufacturers, auto repair shops, pipe manufacturers, polymer industries, pharmaceutical industries, and diesel generator manufacturers. Islamabad is situated on a plateau and is enclosed by mountains on two sides, which effectively contain pollutants within its boundaries. The PM 2.5 levels in the air are elevated due to these factors, resulting in reduced visibility and various types of allergies [ 43 ]. Fig. 1 shows the study area map while Fig. 2 shows the spatial distribution of several industries around the sampling site. 2.2 Air sampling A total of 180 (90 coarse and 90 fine) filters were collected from Islamabad's industrial area between April and November 2023 using a Gent air sampler, a size-fractionating aerosol sampler with a stacked filter unit (SFU). Filter pairs were collected seasonally: 24 during spring, 45 in summer, and 21 in autumn. Employing a progressive filtering method, the airborne PM was divided into two distinct size categories: coarse (particulate matter with a diameter ranging from 2.5 to 10 µm) and fine (particulate matter with a diameter <2.5 µm). The International Atomic Energy Agency (IAEA) supplied the polycarbonate membrane filters (Nucleopore, provided by Costar Corp., Cambridge, MA, USA) used for sample collection. Two to three samples were collected each week, with the air sampler run for twenty-four hours to collect each pair of fine and coarse samples. The flow rate of the Gent sampler was between 16 and 20 L min −1 . Sampling was stopped if the flow rate dropped below ∼15 L min −1 to avoid clogging. Before installation, each filter was weighed. The masses of PM 2.5 and PM 2.5–10 particles collected on the filters were obtained by reweighing the filters after exposure (gravimetric analysis). The loaded and unexposed filters were weighed following the previously specified recommended protocols, after conditioning in a controlled environment with regulated temperature and humidity for a minimum of 24 h. The gravimetric mass of suspended particles in both the coarse and fine fractions was obtained by comparing the weights before and after exposure, and the PM concentration in µg m −3 was calculated. 2.3 Black carbon measurement Elemental carbon, also known as black carbon (BC), is responsible for most of the light absorption in the atmosphere. It significantly contributes to pollution in urban, regional, and global ecosystems, requiring accurate assessments. The significance of fine-particle BC is further emphasized by its impact on health, visibility, and climate change. The concentration of BC on each loaded filter was measured using the reflectance method, utilizing an Evans Electro Selenium Limited EEL 43-D Smoke Stain Reflectometer. The reflectometer was calibrated using a blank filter from the same set as the loaded filters. Standard filters were used for calibration and for QA/QC purposes. The attenuation of light by each filter was used to approximate its black carbon content. A fixed mass attenuation coefficient (5.27 m 2 g −1 ) was used for all seasons to calculate the black carbon content of each filter [ 43 , 48 ]. 2.4 Elemental analysis, filter digestion & quality assurance The present investigation involved the utilization of inductively coupled plasma optical emission spectrometry (ICP-OES) for elemental analysis. 
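As a minimal illustration of the gravimetric calculation described in section 2.2, the Python sketch below converts the pre- and post-exposure filter masses and the sampled air volume into a PM concentration. The masses, flow rate, and duration are hypothetical values, not measurements from this study.

def pm_concentration_ug_m3(mass_before_mg, mass_after_mg, flow_l_min, duration_h=24.0):
    """Gravimetric PM concentration: collected mass divided by sampled air volume."""
    collected_ug = (mass_after_mg - mass_before_mg) * 1000.0   # mg -> µg
    sampled_m3 = flow_l_min * 60.0 * duration_h / 1000.0       # L -> m^3
    return collected_ug / sampled_m3

# Example: a filter gaining 1.0 mg over 24 h at 18 L/min gives ~38.6 µg/m^3.
print(pm_concentration_ug_m3(250.0, 251.0, 18.0))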
The specific instrument employed for this purpose was the ICAP 6500, manufactured by Thermo-Scientific in the United Kingdom. The experimental parameters for ICP-OES in the present study are presented in Table S1. For elemental analysis, the polycarbonate filters were digested using the following methodology. First, the filters were submerged in 10 mL of deionized water in a Teflon beaker and heated at 100 °C for 10 min. Then 4 mL of HNO 3 and 2 mL of H 2 SO 4 were added to the beaker and the solution was heated at 150 °C to near dryness. After cooling the solution to room temperature, 3 mL of concentrated HNO 3 , 1.5 mL of concentrated H 2 SO 4 and 2 mL of concentrated HF were added to the Teflon beakers. The solution was then transferred to plastic test tubes and sealed for 24 h to facilitate the breakdown of silicates by HF. The next day, the solution was again heated for 20 min at 150 °C; after cooling, 1 mL of 10 % boric acid solution was added to neutralize any fluoride ions still present. The final solution was made up to 10 mL with deionized water and subsequently transferred to the Plasma Spectroscopy Laboratory for ICP-OES analysis. A blank fine and coarse filter was digested alongside each batch of samples using the same protocol, and the final elemental data were obtained by subtracting the elemental contribution of the blank filter. Organic materials were removed during sample digestion through the sequential addition of concentrated HNO 3 and H 2 SO 4 , which act as oxidizing and dehydrating agents under high-temperature heating. This process effectively decomposes organic matter into volatile byproducts (e.g., CO 2 , H 2 O) while dissolving inorganic components. Subsequent treatment with HF targeted silicate minerals, and residual fluoride ions were neutralized with boric acid to prevent interference. Three replicate measurements were taken for each filter, and the mean value was used to calculate the final reported concentration. The coefficient of determination was calculated from the calibration curve for each element; the values ranged from 0.9980 to 0.9998. These high R² values indicate a strong linear relationship between signal intensity and concentration, confirming the reliability of the calibration process. The calibration curves were constructed using multiple standard solutions across a wide concentration range, ensuring robust and accurate measurements. The limit of detection (LOD) and limit of quantification (LOQ) values were calculated, and the details are given in Table S8. For quality assurance, SRM NIST-2710a (Montana soil) was digested using the same four-acid digestion method alongside the filters [ 49 ]. The elemental results obtained were then compared with the standard values given by NIST, and the percentage recovery was calculated for each element. The percentage recoveries of each element are plotted in Fig. S1 and are given (in ppm) in Table S10. 2.5 Enrichment factor The enrichment factor (EF) is a useful tool to quantify the enrichment of elements found in environmental samples with respect to the background value of a major reference element (Al, Fe, Ca, etc.). In this work, Al is used as the reference to calculate the EF: (1) EF = \frac{(C/\mathrm{Al})_{sample}}{(C/\mathrm{Al})_{background}} where C represents the concentration of a specific element and Al the concentration of aluminum, in the sample and in the background, respectively. The enrichment levels of elements are classified into five groups based on the value of the EF. 
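A small Python sketch of the EF calculation in Eq. (1) follows. The PM2.5 concentrations are the mean values reported in the Results, while the upper-continental-crust background values are rounded figures used here only for illustration; the study takes its background values from Wedepohl (Table S4).

# Illustrative UCC background concentrations (mg/kg); the study uses Wedepohl's values.
UCC_BACKGROUND = {"Al": 77000.0, "Pb": 17.0, "Zn": 52.0}

def enrichment_factor(sample, element, background=UCC_BACKGROUND, reference="Al"):
    """EF = (C/Al)_sample / (C/Al)_background, with Al as the crustal reference."""
    sample_ratio = sample[element] / sample[reference]
    background_ratio = background[element] / background[reference]
    return sample_ratio / background_ratio

# Mean PM2.5 concentrations (ng/m^3) from the Results section.
pm25_means = {"Al": 1376.0, "Pb": 85.5, "Zn": 1579.0}
for el in ("Pb", "Zn"):
    print(el, round(enrichment_factor(pm25_means, el), 1))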
The classification can be found in Table S2 [ 50 ]. The background values for the upper continental crust (UCC), taken from Wedepohl [ 51 ], are given in Table S4. 2.6 Pollution index The degree of heavy metal pollution in the air of Islamabad's industrial site was evaluated by computing the pollution index (PI) based on the background value of each element, using the following formula: (2) PI = \frac{C_n}{B} where C_n denotes the concentration of each element and B the corresponding background value. Table S3 presents the categorization of pollution levels based on the PI [ 52 ]. The background values for both EF and PI were taken from Wedepohl [ 51 ] and are given in Table S4. 2.7 Statistical analyses The Pearson correlation method is a widely used approach for quantitatively assessing the relationship between variables in elemental data. It assigns a numerical value ranging from −1 to 1, where 0 indicates no correlation, 1 indicates a strong positive correlation, and −1 indicates a strong negative correlation. In this work, Pearson's correlation matrix was calculated using the following formula: (3) r = \frac{\sum_{i=1}^{n} (a_i - b)(c_i - d)}{\sqrt{\sum_{i=1}^{n} (a_i - b)^2 \sum_{i=1}^{n} (c_i - d)^2}} Here, a_i is the value of one variable in a sample and b is its mean, while c_i is the value of the other variable with which we want to correlate the first variable, and d is the mean of the second variable [ 53 , 54 ]. Data normality was assessed using the Shapiro-Wilk test, executed via OriginPro's built-in function, which showed that the distributions of the datasets did not significantly deviate from normality. This analysis was applied separately to the PM 2.5 and PM 2.5–10 fractions as well as to the elemental data collected from Islamabad's industrial area. The p-values obtained confirmed the suitability of the data for the subsequent statistical analyses. 2.8 Seasonal depositional flux The seasonal deposition flux (SDF) of elements was determined by summing the daily flux values, as described by the following equation: (4) SDF = \sum (C_n \times V_d) Here, SDF is the amount of an individual elemental species deposited in a season, obtained by summing the daily fluxes (ng m −2 day −1 ) over that season; C_n refers to the concentration of the trace element in the daily samples, measured in ng m −3 ; and V_d denotes the velocity at which the PM particles are deposited. The average deposition velocities for dust-originated, anthropogenic, and other constituents in PM are 2.0 cm s −1 (1728 m day −1 ), 1.0 cm s −1 (864 m day −1 ), and 0.5 cm s −1 (432 m day −1 ), respectively [ 55–57 ]. Detailed values of V_d are given in Table S5. 2.9 Environmental protection agency positive matrix factorization (EPA-PMF) Receptor modeling software is frequently employed to identify and quantify the sources of air pollutants at a specific site. Receptor modelling allows for the determination of the multi-elemental composition (profile) of each identified source, as well as its proportionate contribution to the sample at a location during the study period. In this work, we used Positive Matrix Factorization for the source apportionment of PM. EPA-PMF is a powerful tool in air quality analysis for the source apportionment of elements in the atmosphere. Positive Matrix Factorization involves decomposing a given matrix into two or more matrices with non-negative elements which, when multiplied together, approximate the original matrix. 
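The seasonal deposition flux of Eq. (4) reduces to a sum of daily concentration-times-velocity products. The Python sketch below shows that arithmetic using the deposition velocities quoted above; the daily concentration series and the element-to-class assignment are hypothetical, for illustration only.

import numpy as np

# Deposition velocities from section 2.8, converted to m/day.
V_D_M_PER_DAY = {"crustal": 1728.0, "anthropogenic": 864.0, "other": 432.0}

def seasonal_deposition_flux(daily_conc_ng_m3, element_class):
    """Eq. (4): sum over the season of the daily fluxes C_n * V_d [ng m^-2]."""
    daily_flux = np.asarray(daily_conc_ng_m3, float) * V_D_M_PER_DAY[element_class]
    return daily_flux.sum()

# Hypothetical daily concentrations (ng/m^3) for a crustal element such as Al.
print(seasonal_deposition_flux([1100.0, 950.0, 1250.0], "crustal"))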
It is a modified version of factor analysis in which the factors are constrained to have non-negative values. This approach has been utilized in multiple studies conducted worldwide to estimate the sources of PM [ 58 ]. In the EPA-PMF framework, the observed concentration matrix X is modeled as the product of a source contribution matrix G and a source profile matrix F, plus a residual matrix E, expressed mathematically as (5) X = GF + E Here, X is an n × m matrix (with n samples and m species), G is an n × p matrix representing the contributions from p sources, and F is a p × m matrix containing the source profiles. The matrix E accounts for the unexplained variance or residual errors [ 59 ]. The model is solved by minimizing the objective function (6) Q = \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{e_{ij}^2}{\sigma_{ij}^2} where e_{ij} is the residual for the ith sample and jth species, and \sigma_{ij} is the uncertainty (standard deviation) associated with the corresponding measurement. In practice, EPA-PMF reports two variants of Q: 1. Q(True): computed using the full weight of all data points. 2. Q(Robust): computed with a down-weighting factor w_{ij} ≤ 1 for observations with excessively large residuals. The down-weighting in robust mode can be expressed as: (7) Q_{robust} = \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij}^2 \frac{e_{ij}^2}{\sigma_{ij}^2} The normalized metric Q/DOF is the most critical for assessing model performance. The degrees of freedom (DOF) quantify the amount of independent information available for model fitting. For a dataset organized in an n × m matrix, the total number of observations is: (8) N = n \times m If we estimate the elements of G (an n × p matrix) and F (a p × m matrix), the total number of free parameters is approximately: (9) k = p \times (n + m) Thus, the degrees of freedom of the model are given by: (10) DOF = N - k = nm - p(n + m) The normalized fit statistic is often interpreted as: (11) \chi^2 = \frac{Q}{DOF} A value of \chi^2 ≈ 1 indicates that the residuals are consistent with the assumed measurement uncertainties and that the model has converged. In practical applications, a reduction of Q(Robust) relative to Q(True) (typically by 10–15 %) suggests that the down-weighting procedure is effectively reducing the influence of outliers or high-leverage points. The factors in EPA-PMF are extracted via an iterative optimization procedure that alternates between estimating G and F under non-negativity constraints. Once the factorization converges, the source profiles in F are compared with known chemical signatures (e.g., heavy metals for industrial emissions, or high organic content for biomass burning) to identify the corresponding emission sources [ 60–62 ]. In this study, EPA-PMF version 5.0 was used for source apportionment. The concentration and uncertainty Excel files were prepared carefully, keeping the number of cells and significant figures the same for each file. Elements with >50 % missing values were excluded. The missing values of the elements included in the data were replaced by half of the LOD (limit of detection). A detailed breakdown of the EPA-PMF running parameters for this work can be found in the supplementary data [ 60 ]. 2.10 NOAA HYSPLIT The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, developed by NOAA, is a highly effective tool for analyzing back trajectories. It provides valuable information on the movement of air masses and the possible origins of air pollutants. 
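The following Python sketch computes the fit statistics of Eqs. (6), (10), and (11) for a given factorization X ≈ GF. It is only a diagnostic illustration on synthetic data; the actual factorization in this study was performed with the EPA-PMF 5.0 software.

import numpy as np

def pmf_fit_statistics(X, G, F, sigma):
    """Return Q (Eq. 6) and the normalized Q/DOF (Eqs. 10-11) for X ~ GF."""
    n, m = X.shape
    p = G.shape[1]
    residuals = X - G @ F                      # e_ij
    q = np.sum((residuals / sigma) ** 2)       # Eq. (6)
    dof = n * m - p * (n + m)                  # Eq. (10)
    return q, q / dof

# Synthetic example: 6 samples, 4 species, 2 factors (all values hypothetical).
rng = np.random.default_rng(1)
G = rng.random((6, 2))
F = rng.random((2, 4))
X = G @ F + rng.normal(0.0, 0.01, size=(6, 4))
sigma = np.full_like(X, 0.01)
print(pmf_fit_statistics(X, G, F, sigma))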
This method employs a hybrid Lagrangian and Eulerian approach to compute the dispersion of particles, simulate deposition, and track the trajectories of air parcels [ 63 ]. This allows for the analysis of the movement, merging, chemical changes, and settling of pollutants and hazardous substances. The model has played a crucial role in modeling investigations involving the discharge of atmospheric tracers [ 64 ]. HYSPLIT4 has been able to accurately track various pollutants such as radionuclides, smoke from wildfires, volcanic ash, mercury, and wind-blown dust [ 65 ]. In this study, we used NOAA HYSPLIT to calculate back-trajectories for episodes of high PM and elemental concentrations, to determine whether the sources are local or transboundary. 3 Results The elemental composition and PM concentrations of the fine and coarse samples are given in Tables 1 and 2 , respectively. The mean PM 2.5 and PM 2.5–10 values were 40.4 ± 32 and 221 ± 68 µg m −3 , respectively, over three seasons (April–November 2023). The total mean value of PM 2.5 is above the limit (35 µg m −3 ) set by Pak-EPA, while the PM 2.5–10 value exceeded the threshold value of 150 µg m −3 . The elemental concentrations in ng m −3 for PM 2.5 varied in the order: Na (8670), Ca (6261), S (5092), Fe (2991), Sn (2586), Zn (1579), Mg (1519), Ni (1467), Si (1465), Cr (1274), Al (1376), Ag (1004), K (765), Pb (85.5), Co (98.2), Ti (194), Mo (143), Mn (144), Zr (135), Cu (68.8), Ba (57.7), Sr (52.7), Li (46.8), As (40.5). The elemental concentrations in ng m −3 for PM 2.5–10 varied as: Ca (5476), Na (2300), S (2245), Si (1332), Mg (1200), Fe (1113), Zn (1060), Al (1005), Li (252), Cr (155), Ni (435), K (369), Ti (321), Sn (540), Cu (29.4), Pb (103), Mn (94.4), Ag (196), Ba (24.4), Co (9.1), Mo (39.4), Sr (38.7), As (13.5), Zr (27.6). In fine PM, Na (8670 ng m −3 ) concentrations are the highest, followed by Ca (6261 ng m −3 ) and S (5092 ng m −3 ). Significant levels of Fe (2991 ng m −3 ), Sn (2586 ng m −3 ), and Zn (1579 ng m −3 ) are also present in fine PM. Coarse PM, on the other hand, is dominated by Ca (5476 ng m −3 ) and Na (2300 ng m −3 ), although these are present in lower quantities than in fine PM. Sulfur (2245 ng m −3 ) and silicon (1332 ng m −3 ) are present in significant amounts in coarse PM, with silicon having a greater concentration compared to fine PM. Additional elements such as Al (1376 ng m −3 in fine PM and 1005 ng m −3 in coarse PM), Fe (1113 ng m −3 in coarse PM), and Mg (1519 ng m −3 in fine PM and 1200 ng m −3 in coarse PM) exhibit notable levels in both PM fractions, but with varying concentrations. Notably, elements such as Cr and Ni are higher in fine PM, indicating higher concentrations in finer particles. The PM 2.5 values were compared with the threshold values given by the USEPA [ 66 ]. The PM 2.5 elemental composition data for Islamabad's industrial area indicate significant exceedances of USEPA regulatory limits for several toxic elements. Specifically, the average concentration of As is approximately 6.75 times higher than the 6 ng m −3 limit, Cr exceeds its 12 ng m −3 threshold by over 100 times, and Ni surpasses its 0.24 ng m −3 guideline by >6000 times. In contrast, the levels of Cu, Fe, and Pb remain well below their respective limits, suggesting that the primary concerns in ambient fine particulate matter stem from elevated concentrations of As, Cr, and Ni. The health implications of these elevated concentrations are considerable. 
Chronic exposure to As is linked to carcinogenic and systemic toxic effects, while Cr, particularly in its hexavalent form, is associated with severe respiratory issues and an increased risk of cancer [ 30 , 67 ]. The extreme overexposure to Ni is alarming given its role as a respiratory sensitizer and carcinogen, which could lead to higher incidences of lung and nasal cancers among the local population [ 68 ]. 3.1 Seasonal variation The elemental, PM and BC concentrations for PM 2.5–10 in Islamabad show complex variations over the seasons, influenced by both major and toxic elements. Coarse PM levels vary significantly, with concentrations ranging from 180.00 µg m −3 in summer to 255.00 µg m −3 in autumn, indicating a peak in pollution during the cooler months. BC concentrations also vary, gradually increasing from 0.92 µg m −3 in spring to 1.34 µg m −3 in autumn. The fine PM levels were 36.5 µg m −3 in spring, 28 µg m −3 in summer, and 56 µg m −3 in autumn. The variability observed indicates that seasonal factors affect the emissions and dispersion of fine particles, which may be influenced by industrial activities, weather patterns and contributions from other sources. Fine BC, a marker of combustion emissions, exhibits a consistent pattern of increase, rising from 1.29 µg m −3 in spring to 1.41 µg m −3 in summer, and further to 1.78 µg m −3 in autumn. The seasonal variation of PM 2.5 and PM 2.5–10 is given in Fig. 3 . Since the industrial area of Islamabad is closed only on national holidays, the PM fluctuations on these holidays are also marked in Fig. 3 . On national holidays such as Eid-ul-Fitr (21–23 April), Eid-ul-Adha (28–29 June), Muharram (29 July) and Independence Day (14 August), the PM values decreased significantly because of the closure of industries. On 6 September (National Defense Day), the PM values increased, which may be due to the military parade held near the industrial area. Thermal inversions occur in winter when a layer of warm air aloft traps cool air and pollution near the ground, impeding the normal dispersion of pollutants. This occurrence is especially prevalent in urban areas situated in mountain basins or valleys. The thermal inversion continues into spring and the onset of summer [ 69 ]. A study conducted in Fairbanks, Alaska, revealed that during surface-based temperature inversions, pollutants such as PM 2.5 gathered at lower altitudes [ 70 ]. Islamabad lies in a basin surrounded by the Margala Hills; hence, the higher PM concentrations during spring (end of winter) and autumn (start of winter) are attributed to thermal inversions. Moreover, during the monsoon season the PM values are low due to the heavy rains, which remove these particles from the air [ 71 ]. The percentage seasonal variations of each element for fine and coarse PM are plotted in Fig. 4 and the values are given in Table S6. For PM 2.5–10 , Ca showed significant seasonal fluctuations, reaching its highest level of 7800 ng m −3 during the autumn season. Mg and Si also exhibit significant seasonal fluctuations. Fe reached its highest concentration of 1807 ng m −3 during the spring season and decreased to 699 ng m −3 in autumn. Al, on the other hand, ranges from 1160 ng m −3 in spring to 870 ng m −3 in autumn. Turning to toxic elements, As and Pb show sporadic presence. As is detected at 13.5 ng m −3 only in spring, suggesting localized sources or seasonal shifts in atmospheric transport. 
The concentration of Pb was 155 ng m−3 in spring but 49.7 ng m−3 in summer, suggesting variability in the sources of emissions. Co showed levels of 13.85 ng m−3 in spring, 8.26 ng m−3 in summer, and 5.08 ng m−3 in autumn. Cr exhibited notable variations in concentration, with levels of 284 ng m−3 in spring, 112 ng m−3 in summer, and 68.1 ng m−3 in autumn. These shifts reflect the combined influences of industrial activities and atmospheric conditions. In the fine fraction, Ca had the highest concentration of 7043 ng m−3 in spring, decreasing to 4008 ng m−3 in autumn. High values of Na were observed in the fine fraction for all three seasons, highest in spring and with comparable values in summer and autumn. Mg and Si also exhibited significant seasonal variations, which can be attributed to road dust and crustal sources. Toxic elements such as Pb and As showed an irregular presence. The concentration of Pb was recorded at 106 ng m−3 in spring and dropped to 65.3 ng m−3 in summer; it was not detected in autumn, suggesting variability in the sources of emissions. The concentration of As in spring was 56.27 ng m−3, falling to 24.8 ng m−3 in summer. Other elements such as Fe, Cr, and Ni also displayed notable variations across seasons. The concentration of Fe reached its highest value of 6982 ng m−3 in spring and decreased to 652 ng m−3 in autumn. Similarly, Cr and Ni exhibited comparable patterns, with higher concentrations observed in spring than in summer and autumn. To assess the correlation between the seasonal elemental data, Pearson's correlation matrix was calculated (Fig. 5). The elemental composition of spring in both PM2.5 and PM2.5–10 showed a moderate correlation with summer and autumn, whereas the elemental compositions of summer and autumn were strongly correlated with each other. These results suggest that the elemental composition of the air in Islamabad's industrial area is distinct in spring compared with summer and autumn, which is attributed to the higher accumulation of PM during the winter season in Islamabad's industrial area. 3.2 Depositional fluxes Significant differences in the deposition fluxes of major and trace elements over the sampling period were mainly associated with the concentration levels of individual elements in conjunction with the seasonal pattern of the apportioned emission sources. The seasonal depositional fluxes are plotted in Fig. 6 and given in Table S7. In PM2.5–10 during the spring season, the deposition fluxes of elements such as Al, As, Ba, and Ca were relatively high, with values of 2 × 10⁶, 5.82 × 10³, 14.7 × 10³, and 5.7 × 10⁶ ng m−2 day−1, respectively. During summer, Al and Ba showed slightly lower fluxes of 1.7 × 10⁶ and 11 × 10³ ng m−2 day−1, while the Ca flux rose to 9.1 × 10⁶ ng m−2 day−1. In autumn, a distinct shift occurred, with Al, Ba, and Ca deposition fluxes of 1.5 × 10⁶, 5.82 × 10³, and 13 × 10⁶ ng m−2 day−1, respectively, with Ca peaking at 132 × 10⁶ ng m−2 day−1, the highest value among all measured elements in PM2.5–10. The variations across seasons were influenced by changes in emission sources, dry deposition velocities, and meteorological factors. Spring saw elevated fluxes of toxic metals such as As, Ba, and Co, while Cr and Cu were more prominent in summer. Autumn, on the other hand, showed higher levels of Ni and Pb, reflecting seasonal emission and deposition dynamics in the area.
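Depositional fluxes of this kind are typically estimated as the product of the airborne concentration and a size-dependent dry-deposition velocity. The sketch below illustrates only that basic calculation with an assumed placeholder velocity; it is not a reproduction of the study's own method, which follows the approach referenced in the methods (cf. [57]).

```python
# Minimal illustration of a dry-deposition flux estimate:
# flux (ng m-2 day-1) = concentration (ng m-3) x deposition velocity (m s-1) x 86400 s day-1.
SECONDS_PER_DAY = 86_400

def deposition_flux(conc_ng_m3: float, v_d_m_s: float) -> float:
    """Return the dry-deposition flux in ng m-2 day-1."""
    return conc_ng_m3 * v_d_m_s * SECONDS_PER_DAY

# Hypothetical example: a coarse-mode element at 5476 ng m-3 with an assumed
# deposition velocity of 0.01 m s-1 (placeholder value for illustration only).
print(f"{deposition_flux(5476, 0.01):.2e} ng m-2 day-1")
```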
The depositional flux data for PM2.5 in Islamabad's industrial area show significant seasonal variation. Al has the highest flux in spring at 5.26 × 10⁶ ng m−2 day−1, while Ca is notably high across all seasons, peaking in summer at 1.34 × 10⁷ ng m−2 day−1. As and Pb exhibit higher levels in spring, with fluxes of 2.43 × 10⁴ and 4.57 × 10⁴ ng m−2 day−1, respectively, but decrease in summer, with arsenic not detected in autumn. The depositional fluxes of Fe and Na display elevated spring values, and Ni and Cr are also much higher in spring compared with the other seasons, indicating a strong seasonal influence on metal deposition. Overall, spring tends to have the highest elemental fluxes, suggesting increased PM during this period due to factors such as wind and industrial activity. Deposition fluxes of major and trace elements differ between the PM2.5–10 and PM2.5 fractions. PM2.5–10 generally shows higher fluxes, particularly for elements such as Ca and Ba, indicating their prevalence in the larger particle sizes. In contrast, PM2.5 typically exhibits lower fluxes, except for elements such as Na, Cr and Cu, which exhibit higher fluxes than in PM2.5–10. Seasonal variability affects both fractions, with PM2.5–10 often reflecting higher fluxes during seasons with increased particulate emissions and environmental conditions conducive to particle deposition, which depend on meteorological conditions (temperature, wind speed, wind direction, etc.) and on local and transboundary sources [57]. 3.3 Pollution level assessment For the pollution level assessment, the EF and PI were calculated. The results of EF and PI are plotted in Fig. 7(a-b). The EF values for Cr, Cu, Pb, S, Zn, Ni, As, Li, Mo, Sn and Ag in both PM fractions showed extremely high enrichment, indicating a strong anthropogenic input of these elements. The PI values showed a similar trend for Cr, Cu, Pb, S, Zn, Ni, As, Li, Mo, Sn and Ag. Both indices showed high levels of pollution for these elements and confirm the strong anthropogenic contribution to their dispersion in Islamabad's industrial area. To assess the seasonal variation of elemental pollution in the PM of Islamabad's industrial area, the PI was calculated for each season (Fig. 8(a-d)). For each season, the pollution levels of Pb, Li, Zn, Ni, Sn, Mo and Ag were high for both PM2.5 and PM2.5–10. This suggests a local and seasonally invariant source of these metals in Islamabad's industrial area, most likely the steel mills and smelting plants, as they run throughout the year. During the spring season, Co, Cu, S, As and Cr had elevated pollution levels in the PM2.5–10 size fraction, whereas during the summer only As and Cr had high pollution levels, suggesting decreasing pollution levels of Co, Cu and S in summer. In PM2.5 during summer, Co, Cu and S showed higher pollution levels compared with the other seasons.
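The EF and PI follow their standard definitions (the exact reference element and background values used in this study are given in the methods). As an illustration only, the sketch below evaluates both indices for a sample built from the Table 1 PM2.5 means, assuming Fe as the crustal reference element and approximate upper-crust abundances; the numbers it prints are indicative and not the study's reported EF or PI values.

```python
# Sketch of enrichment factor (EF) and pollution index (PI) calculations.
# EF_x = (C_x / C_ref)_aerosol / (C_x / C_ref)_crust; PI_x = C_x / B_x,
# where C_ref is a crustal reference element (Fe assumed here) and B_x a
# background concentration. Crustal abundances below are approximate
# upper-crust values (ppm) standing in for the reference data actually used.
crust_ppm = {"Fe": 30_900, "Cr": 35, "Ni": 19, "Pb": 17, "Zn": 52}

def enrichment_factor(sample: dict, element: str, ref: str = "Fe") -> float:
    """EF relative to the chosen crustal reference element."""
    return (sample[element] / sample[ref]) / (crust_ppm[element] / crust_ppm[ref])

def pollution_index(concentration: float, background: float) -> float:
    """Simple pollution index: measured concentration over background."""
    return concentration / background

# PM2.5 mean concentrations from Table 1 (ng m-3), used here as an example input.
pm25 = {"Fe": 2991, "Cr": 1274, "Ni": 1467, "Pb": 85.5, "Zn": 1579}
for el in ("Cr", "Ni", "Pb", "Zn"):
    print(el, round(enrichment_factor(pm25, el), 1))
```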
3.4 Source apportionment For source apportionment, the PMF model was used, and the results are plotted in Figs. 9 and 10. Five convergent factors were extracted for both PM2.5 and PM2.5–10. 3.4.1 Sources of PM2.5 For PM2.5, the five factors are given in Fig. 9 and their percentage contributions are given in the supplementary data under Section 10 (Fine PM2.5). For the fine PM analysis, species were classified as strong or weak. Al, Ba, Cr, Fe, Mn, Na, and Zn were designated strong due to robust signal-to-noise (S/N) ratios and consistent detection. In contrast, Ca, Cu, K, Mg, Ni, S, and Sr were classified as weak owing to lower detection frequencies. Sulphur (S) required special consideration, as its uncertainty was derived from an externally sourced limit of detection (LOD), necessitating conservative down-weighting. This approach minimized bias from sporadically detected or poorly constrained species while preserving the influence of the dominant pollutants. The four-fold concentration-based uncertainty assigned to PM and BC, a common choice for bulk components, further ensured model stability. The EPA PMF 5.0 analysis of fine PM identified five robust sources: Vehicular Emissions (Factor 1, 80% mapped), Industrial Dust (Factor 2, 72%), Soil (Factor 3, 84%), Steel Mills (Factor 4, 93%), and Mixed Sources (Factor 5, 99%). These results were selected based on three key criteria: (1) high bootstrap mapping confidence (>90% for Steel Mills and Mixed Sources), confirming source stability across 100 runs; (2) minimal rotational ambiguity (zero factor swaps at dQmax = 4 in the DISP analysis), ensuring interpretable profiles; and (3) low variability in Q(robust) (median: 1727, IQR: 1531–1907), indicating model convergence. The first factor is distinguished by BC loadings, reflecting the combustion of fossil fuels, and by elevated concentrations of Zn, Cu, and Mn. Zn commonly arises from tire wear, Cu from brake linings, and, in vehicular emissions, Mn primarily originates from the use of methylcyclopentadienyl manganese tricarbonyl (MMT) as a gasoline additive [72]. Together, these elements are indicative of traffic-related activity. The second factor (Industrial Dust) is characterized by elevated loadings of Al, Ba, Fe, K, Mn, and S, along with positive contributions to both PM and BC. The prominent presence of Al, Ba, and Fe suggests that this source originates from the disturbance and handling of raw materials and from construction-related activities [73]. Mn indicates contributions from mineral sources disturbed by industrial operations [74]. The soil factor exhibits loadings of Al, Ba, Na, and S, but with comparatively small contributions to PM and minimal BC. This profile represents PM2.5 primarily derived from natural soil and crustal materials, lifted into the atmosphere through wind erosion or mechanical disturbance, with limited influence from combustion or industrial processes [69]. The steel mills factor stands out with high loadings of Cr, Cu, Fe, Mn, Ni, and Zn. This distinct combination of elements provides a clear fingerprint of metallurgical processes, where high-temperature operations and the use of various alloying elements result in significant emissions. The elemental profile confirms that steel production is a major contributor to heavy metals in PM2.5 in the industrial area [46]. The mixed-source factor is characterized by high loadings of Ca, Mg, Na, Sr, BC, PM, S, Cu, and Ba, coupled with relatively low loadings of Cr, Fe, Ni, and Zn. The high levels of Ca and Mg are typical of mineral processing, particularly in marble plants, while the elevated Na and Sr suggest a significant crustal component. The presence of BC, PM, and S indicates additional contributions from combustion-related sources, notably diesel vehicles. The lower loadings of Cr, Fe, Ni, and Zn help differentiate this factor from steel mill emissions, indicating that it represents a composite source resulting from the combined influence of marble processing, diesel emissions, and other industrial activities [75].
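Before turning to the coarse fraction, the uncertainty treatment outlined above can be made concrete. The sketch below applies the equation-based uncertainty recommended in the EPA PMF 5.0 User Guide together with the four-fold uncertainty for the bulk variables (PM and BC) mentioned in the text; the MDL and error-fraction values are placeholders, not the study's actual inputs.

```python
import math

def pmf_uncertainty(conc: float, mdl: float, error_fraction: float) -> float:
    """Equation-based uncertainty as recommended in the EPA PMF 5.0 User Guide."""
    if conc <= mdl:
        return (5.0 / 6.0) * mdl
    return math.sqrt((error_fraction * conc) ** 2 + (0.5 * mdl) ** 2)

def bulk_uncertainty(conc: float, factor: float = 4.0) -> float:
    """Conservative uncertainty for total variables (PM, BC): a multiple of concentration."""
    return factor * conc

# Placeholder example values (ng m-3); the MDL and error fraction are assumptions.
print(pmf_uncertainty(conc=1274.0, mdl=5.0, error_fraction=0.10))  # an elemental species
print(bulk_uncertainty(conc=40_400.0))                             # PM2.5 mass as a weak total variable
```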
3.4.2 Sources of PM2.5–10 For PM2.5–10, the five factors are given in Fig. 10 and their percentage contributions are given in the supplementary data under Section 9 (Coarse PM2.5–10). For coarse particulate matter, the species were likewise classified as strong or weak. Elements such as Al, Ca, Fe, and Mg were designated strong due to high S/N ratios and consistent detection. In contrast, K, Na, Ni, Pb, S, and Sn were classified as weak owing to lower S/N ratios and detection frequencies. Sulphur (S) was further down-weighted because its uncertainty was derived from an external LOD value, as it was not reported in the ICP-OES analysis. PM and BC were also treated as weak species, as their uncertainties were conservatively assigned as four times their measured concentrations, a common approach for poorly constrained total mass or carbonaceous components to prevent overfitting. This classification ensured that species with higher measurement uncertainty or sporadic detection did not disproportionately influence the model, while preserving the robustness of the dominant industrial and dust-related sources. The EPA PMF 5.0 analysis of the coarse PM samples from Islamabad's industrial area resolved five dominant pollution sources. Bootstrap diagnostics (100 runs, R ≥ 0.6) demonstrated strong factor stability: Factor 1 (Battery Recycling, 76% mapped), Factor 2 (Steel Mills, 95% mapped), Factor 3 (E-Waste Burning, 80% mapped), Factor 4 (Resuspended Dust, 94% mapped), and Factor 5 (Marble Processing, 100% mapped). The model's robustness was further confirmed by the DISP analysis, which revealed no rotational ambiguity (zero swaps at dQmax = 4); additionally, the tight distribution of Q(robust) values (median: 2158; IQR: 1967–2274) suggested good model convergence. These results were selected for their high statistical confidence, as evidenced by the low variability in Q values and near-perfect factor mapping. The identified sources align with known industrial activities in the region, particularly highlighting steel mills, resuspended dust and marble processing as potential sources of coarse particulate matter. In the first factor, attributed to battery recycling, the profile is dominated by high loadings of Ba, Mn, and Pb, with additional, yet lower, contributions from Ca, Cr, Fe, Li, Mg, Ni, Sn, Sr, and Zn. The minimal BC signal further distinguishes this source from those driven by combustion [76, 77]. The second factor, associated with steel mills, displays high loadings of Cr, Cu, Ni, Mn, and Zn. Lower contributions of Ba, Fe, Li, Pb, and Sn, along with moderate levels of PM, BC, and S, are also present. This pattern is consistent with the high-temperature metallurgical processes that occur in steel production. The composition suggests that the primary source of these emissions is the core activities of steel mills, with only minor contributions from other industrial operations. Such a profile is robust evidence of the link between steel production and coarse particulate matter emissions [9, 73]. The third factor, attributed to e-waste burning, is characterized by elevated loadings of Al, Cu, Fe, K, Li, Sn, and Pb and by higher levels of PM and BC. The combination of these elements and combustion markers points to the incineration of electronic waste [78]. The lower contributions of Cr, Mn, and Zn help distinguish this source from those associated with high-temperature metal production.
The fourth factor, related to resuspended dust, shows significant loadings of Al, K, Ba, and Pb, with moderate amounts of Ca, Fe, Cr, Cu, Mg, Mn, Sn, and Zn, as well as noticeable PM and BC levels. This mixture reflects contributions from both natural soil dust and industrial activities. The strong presence of crustal elements combined with metals of anthropogenic origin indicates that dust is lifted from urban and industrial surfaces [79, 80]. The fifth factor, connected to marble processing, is defined by very high loadings of Mg, Na, S, Sn, Sr, and Ca, and by lower levels of Ba, Cu, K, Li, Zn, PM, and BC. This elemental signature is clearly mineral in nature, reflecting the raw material properties of marble. The high loadings of Mg, Na, S, Sn and Sr indicate that they originate from the dust generated during marble cutting and processing [75, 81, 82]. This distinct profile provides a clear fingerprint of emissions from marble processing plants. 3.5 NOAA HYSPLIT back-trajectory analysis The National Oceanic and Atmospheric Administration's (NOAA) Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT) was used to compute back-trajectories for episodes of elevated PM. Three-day back-trajectories were computed, and only the important elemental and PM outliers are discussed in this section. Many high PM and elemental episodes revealed local sources, as shown in Fig. 11(a), while the remaining similar trajectories are given in the supplementary data (Fig. S2(a-i)). The NOAA HYSPLIT back-trajectory analysis revealed most of the sources to be of local origin, with some exceptions in which the PM originated from Russia, Afghanistan and India. Out of all the back-trajectories, 70% were of local origin. The backward trajectories showed that the air on 23-04-2023 (high PM and sulphur) originated from the Russia–Ukraine border (Fig. 11(c)). The war has caused large amounts of PM, SO2, NO2, and various toxic metals to be released into the air [83], and the Russia–Ukraine war has adversely impacted the air quality of the entire European continent [84]. There is insufficient literature available on the long-range transport of pollutants from the Russo-Ukrainian war to other continents. It is recommended that a detailed study be conducted on the long-range transport and effects of air pollution caused by the Russia–Ukraine, Armenia–Azerbaijan and Israel–Palestine conflicts on Asia and Africa. These transboundary events not only worsen pollution levels locally but could also affect vulnerable and low-income countries downwind. Back-trajectories for 29 and 27 April 2023 (high PM2.5), shown in Fig. 11(b and d), indicated that the air mostly arrived from India's Punjab province and from Afghanistan, respectively. Industrial activities and the burning of crop residue in Indian Punjab could also contribute to the PM mass [85]. The rest of the back-trajectories of the outliers are given in the supplementary data.
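The local-versus-transboundary attribution above (about 70% local) comes from inspecting the computed trajectories. A simple way such a classification could be automated, assuming the HYSPLIT endpoints have already been exported as latitude/longitude arrays, is sketched below; the receptor coordinates and the 300 km "local" radius are illustrative assumptions, not the criteria used in the study.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

RECEPTOR = (33.66, 73.08)  # approximate receptor coordinates (illustrative)

def is_local(trajectory, radius_km=300.0):
    """Classify a back-trajectory (list of (lat, lon) points) as local if it
    never leaves the given radius around the receptor."""
    return all(haversine_km(lat, lon, *RECEPTOR) <= radius_km for lat, lon in trajectory)

# Hypothetical usage with trajectories exported from HYSPLIT endpoint files:
trajectories = [[(33.7, 73.1), (33.9, 72.8), (34.2, 72.5)]]
local_fraction = sum(is_local(t) for t in trajectories) / len(trajectories)
print(f"{100 * local_fraction:.0f}% of trajectories classified as local")
```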
3.6 Comparison with other studies 3.6.1 PM2.5 comparison The PM2.5, fine BC, and elemental content of the fine PM samples collected from Islamabad's industrial area reveal interesting patterns when compared with the corresponding parameters from studies conducted in other regions of the world (Table 3). The mean PM2.5 concentration in this study (40 µg m−3) significantly exceeds the 24-h guideline values recommended by the WHO (15 µg m−3) and Pak-EPA (35 µg m−3), as well as the annual average standards of the WHO (10 µg m−3) and Pak-EPA (25 µg m−3). The present PM2.5 levels were also higher than those reported from Brazil (13 µg m−3) and California (11 µg m−3), and comparable to those from Vietnam (40 µg m−3). However, the current average PM2.5 levels were relatively lower than those reported from China (66 µg m−3), India (313 µg m−3), Lahore (183 µg m−3), Karachi (101 µg m−3), and Nigeria (300 µg m−3). The comparison of the average elemental levels in the particulates revealed that the present concentrations of most toxic elements were considerably higher than the levels reported from other regions. For instance, the concentration of Cr in this study (848 ng m−3) was significantly higher than those reported from Brazil (2 ng m−3), China (16 ng m−3), and California (2 ng m−3), but lower than Lahore (2400 ng m−3). Similarly, the levels of As (51 ng m−3) were higher than those reported from China (21 ng m−3) and Vietnam (4 ng m−3), but lower than California (470 ng m−3). For Ni, the levels in this study (1188 ng m−3) were significantly higher than those reported from Brazil (6 ng m−3), China (15 ng m−3), and California (0.7 ng m−3). Pb levels (89 ng m−3) were also higher than those reported from Brazil (20 ng m−3), China (33 ng m−3), and California (2.2 ng m−3), but significantly lower than those reported from Nigeria (6000 ng m−3). The levels of Co (66 ng m−3) in this study exceeded those from Vietnam (0.15 ng m−3), California (0.09 ng m−3) and Karachi (25 ng m−3). 3.6.2 Comparison of PM2.5–10 The elemental composition of PM2.5–10 in Islamabad's industrial area was compared with data from other regions, revealing distinct differences (Table 4). The mean PM2.5–10 concentration in this study (221 µg m−3) significantly exceeds the 24-h PM2.5–10 guideline values recommended by the WHO (50 µg m−3) and Pak-EPA (150 µg m−3), as well as the annual average standards of the WHO (20 µg m−3) and Pak-EPA (120 µg m−3). The present PM2.5–10 levels were also higher than those reported from Brazil (33 µg m−3), China (93 µg m−3), and California (10 µg m−3). However, the current average PM2.5–10 levels were relatively lower than those reported from Vietnam (508 µg m−3), India (445 µg m−3), Karachi (438 µg m−3), Nigeria (381 µg m−3), and Lahore (284 µg m−3). The comparison of the average elemental levels in the particulates revealed that in the present study the concentrations of most toxic elements were considerably higher than the levels reported from other regions. For instance, the concentration of Cr in this study (150 ng m−3) was significantly higher than those reported from Brazil (2 ng m−3), China (5 ng m−3), and California (1.1 ng m−3), but lower than Lahore (2700 ng m−3). The levels of As (13 ng m−3) were higher than those reported from Vietnam (11 ng m−3), but lower than those from China (21 ng m−3) and Nigeria (630 ng m−3). For Ni, the levels in this study (462 ng m−3) were significantly higher than those reported from Brazil (4 ng m−3), China (3.4 ng m−3), and California (0.6 ng m−3). Pb levels (98 ng m−3) were also higher than those reported from Brazil (13 ng m−3), China (25 ng m−3), and California (0.9 ng m−3), but significantly lower than those reported from Nigeria (8700 ng m−3). The levels of Co (10 ng m−3) in this study exceeded those from Russia (0.5 ng m−3) and California (0.11 ng m−3) and were comparable to Karachi (10 ng m−3).
A more comprehensive context is established by comparing these results with data from various regions worldwide. Differences in industrial practices, fuel composition, regulatory standards, and local meteorological conditions may influence variations in elemental concentrations and PM levels [86]. Additionally, these discrepancies emphasize the complex relationship between natural factors and anthropogenic activities. For instance, regional industrial infrastructure can involve a variety of facilities, including modern plants with sophisticated filtration systems and outdated facilities with inadequate emission controls, each of which contributes to ambient pollutant levels in a unique manner [87]. The emission profiles of trace metals are also significantly influenced by differences in fuel types, which range from high-sulfur coal to cleaner alternatives. Collectively, these factors underscore the necessity of specialized air quality management strategies designed to address the distinctive challenges presented by the environmental and industrial characteristics of each region. 4 Strengths, limitations, and recommendations This study's primary strengths lie in its comprehensive analysis of air quality parameters and the detailed source apportionment of heavy metals, accounting for seasonal variations within Islamabad's industrial area. Our findings provide a robust baseline and a solid foundation for future environmental assessments in the region. However, a notable limitation is the study's inability to precisely link air quality dynamics with meteorological parameters such as wind speed, wind direction, and other climatic factors. We recommend that future research incorporate high-resolution meteorological data to better elucidate the relationship between atmospheric conditions and pollutant dispersion. By integrating these elements, subsequent studies could build upon our work and offer a more complete understanding of the factors influencing air quality in Islamabad's industrial environment. To improve air quality in Islamabad's industrial area, it is recommended to implement stricter emission controls on local industries, especially steel mills and smelting plants. Increasing green buffer zones and implementing dust suppression measures can reduce particulate emissions. Enhancing the monitoring and regulation of diesel vehicle emissions is crucial, along with promoting cleaner fuel alternatives. Strengthening waste management practices, particularly for electronic waste, will help mitigate pollution. The introduction of a comprehensive public transport system would decrease the number of vehicles on the road, resulting in better air quality. Deploying state-of-the-art sensor networks and remote monitoring systems to collect real-time data on pollutant levels can help pinpoint emission sources and track compliance with air quality standards. Encouraging industries to adopt the best available control technologies, such as continuous emissions monitoring systems and advanced filtration, can significantly reduce emissions. Facilitating local workshops, school programs, and interactive platforms to share air quality data can empower communities to participate in pollution reduction efforts and adopt healthier practices. Additionally, international cooperation is necessary to address transboundary pollution, and public awareness campaigns can educate the community on reducing air pollution.
Forming cross-border initiatives and data-sharing agreements can help manage transboundary pollution effectively. This cooperation can lead to joint research projects and harmonized environmental standards. Investing in renewable energy sources will also contribute to long-term air quality improvement. 5 Conclusion The study analyzed the elemental composition and PM concentrations of PM2.5 and PM2.5–10 in Islamabad's industrial area from April to November 2023. The mean PM2.5 and PM2.5–10 values were 40.4 ± 32 µg m−3 and 221 ± 67 µg m−3, respectively, exceeding the Pak-EPA limits of 35 µg m−3 for PM2.5 and 150 µg m−3 for PM2.5–10. The measured PM2.5 and PM2.5–10 levels exceed local and international standards, signaling a serious air quality concern. The elemental profiling revealed significantly elevated levels of the toxic metals As, Cr and Ni in PM2.5, surpassing USEPA thresholds and posing severe carcinogenic and respiratory health risks. Seasonal variability underscored higher PM pollution during the cooler months (autumn and spring), driven by thermal inversions in the basin-like topography and intensified industrial activity, while monsoon rains and holiday-related industrial closures temporarily mitigated pollution. Source apportionment identified vehicular emissions, steel mills, battery recycling, e-waste burning, and marble processing as dominant contributors, with HYSPLIT trajectories implicating both local industries and transboundary sources. Deposition flux analysis further highlighted the seasonal dynamics and the contributions of both local and transboundary sources. In conclusion, these findings provide a solid baseline for future studies and underscore the need for integrated pollution control strategies. Future work should incorporate high-resolution meteorological data to better understand pollutant dispersion and long-range transport. Ethics approval Not applicable. Consent to participate Not applicable. Consent for publication All the authors have provided consent for publication. Funding Funding was provided by the IAEA for this project (RCA Research Contract No. RCARP02/RC07 for the project “Distribution and Source Apportionment of Industrial Pollution & its Health Impact Using NATs”, under the research contract “Air Quality and Environmental Impact Assessment of Industrial Activities in Asian Region, 2021–2023”). CRediT authorship contribution statement Mavia Anjum: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Methodology, Investigation, Data curation, Conceptualization. Naila Siddique: Writing – review & editing, Validation, Supervision, Project administration, Funding acquisition, Formal analysis, Conceptualization. Hannan Younis: Writing – review & editing, Validation, Supervision, Project administration. Munib Ahmed Shafique: Validation, Methodology, Formal analysis. Sadia Munawar: Resources. Mohsina Zubair: Resources. Huzaifa Younas: Software, Resources. Ansar Abbas: Writing – review & editing, Software. Yasir Faiz: Writing – review & editing, Validation, Supervision, Resources, Project administration, Investigation, Conceptualization. Declaration of competing interest We declare that we have no conflict of interest. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jtemin.2025.100244.
REFERENCES:
1. SICARD P (2023)
2. SAKTI A (2023)
3. BARMPADIMOS I (2012)
4. CHIFFLET S (2024)
5. NAN N (2023)
6. PING L (2023)
7. SINGH N (2017)
8. MUKHERJEE A (2017)
9. OWOADE K (2015)
10. GALVAO E (2023)
11. LEE S (2025)
12. NTESAT U (2025)
13. GJERGJIZINALLBANI B (2025)
14. ZHANG J (2025)
15. SUVARAPU L (2017)
16. SUVARAPU L (2014)
17. LIANG R (2014)
18. FENG S (2016)
19. AYUA T (2024)
20. YADAV V (2024)
21. ROY D (2024)
22. CHAUHAN B (2024)
23. KHAN M (2016)
24. AYUA T (2020)
25. GAN W (2011)
26. DEBONT J (2022)
27. LEE B (2014)
28. WU X (2016)
29. REHMAN K (2018)
30. PALMALARA I (2020)
31. JAISHANKAR M (2014)
32. SCHRAUFNAGEL D (2020)
33. ZHANG X (2018)
34. (2023)
35. BILAL M (2021)
36.
37. VIANA M (2008)
38. HOPKE P (2020)
39. NGOCTHUY V (2024)
40. KARAGULIAN F (2015)
41. SUN X (2020)
42. MANSHA M (2012)
43. WAHEED S (2012)
44. ALAM K (2015)
45. SIDDIQUE N (2024)
46. ANJUM M (2024)
47. SHEIKH I (2008)
48. HORVATH H (1997)
49. LICHTE F (1987)
50. ANJUM M (2024)
51. WEDEPOHL K (1995)
52. SHAH Z (2022)
53. KESHAVKRISHNA A (2016)
54. ANJUM M (2024)
55. SIUDEK P (2024)
56. NIRMALKAR J (2023)
57. ZHANG L (2012)
58. GONG C (2024)
59. PAATERO P (1994)
60.
61. LEE D (1999)
62. WATSON J (2002)
63. DRAXLER R (2010)
64. STEIN A (2015)
65. MOROZ B (2010)
66. MORAKINYO O (2021)
67. YATERA K (2018)
68. GENCHI G (2020)
69. WEGER M (2023)
70. CESLERMALONEY M (2022)
71. SHUKLA J (2008)
72. DIGIROLAMO M (2021)
73. OGUNDELE L (2016)
74. SI R (2019)
75. KHAN A (2024)
76. VIECELI N (2016)
77. ZHAO Z (2022)
78. STEWART E (2003)
79. WANG X (2016)
80. ALSWADI H (2022)
81. ODOKUMAALONGE O (2019)
82. THAKUR A (2018)
83. PROTOPSALTIS C (2012)
84. MENG X (2023)
85. AWASTHI A (2011)
86. KAYGUSUZ K (2012)
87. GENG Y (2016)
88. DONG D (2023)
89. DAS R (2015)
90. KRUPNOVA T (2021)
91. KHAN S (2024)
92. LURIE K (2019)
93. OROUMIYEH F (2022)
94. BUI T (2023)
95.
|
10.1016_j.ipej.2016.02.010.txt
|
TITLE: Change in P wave morphology after convergent atrial fibrillation ablation
AUTHORS:
- Shrestha, Suvash
- Chen, On
- Greene, Mary
- John, Jinu Jacob
- Greenberg, Yisachar
- Yang, Felix
ABSTRACT:
Convergent atrial fibrillation ablation involves extensive epicardial as well as endocardial ablation of the left atrium. We examined whether it changes the morphology of the surface P wave. We reviewed electrocardiograms of 29 patients who underwent convergent ablation for atrial fibrillation. In leads V1, II and III, we measured P wave duration, area and amplitude before ablation, and at 1, 3 and 6 months from ablation.
After ablation, there were no significant changes in P wave amplitude, area, or duration in leads II and III. There was a significant reduction in the area of the terminal negative deflection of the P wave in V1, from 0.38 mm2 to 0.13 mm2 (p = 0.03). There was also an acute increase in the amplitude and duration of the positive component of the P wave in V1, followed by a reduction in both by 6 months. Before ablation, 62.5% of the patients had biphasic P waves in V1; by 6 months, only 39.2% of them had biphasic P waves.
Hybrid ablation causes a reduction of the terminal negative deflection of the P wave in V1 as well as temporal changes in the duration and amplitude of the positive component of the P wave in V1. This likely reflects the reduced electrical contribution of the posterior left atrium after ablation as well as anatomical and autonomic remodeling. Recognition of this altered sinus P wave morphology is useful in the diagnosis of atrial arrhythmias in this patient population.
BODY:
Introduction Atrial fibrillation (AF) is a common form of supraventricular tachyarrhythmia affecting more than 5 million people in the US alone [1] . Current understanding suggests that AF results from an anatomic substrate capable of both initiation and perpetuation of fibrillatory waves [2,3] . Most of these foci are found at the orifices of the pulmonary veins and the posterior atrial wall [4] . Thus, ablation of these foci has emerged as the treatment for persistent and paroxysmal AF not responding to medical therapy. Convergent AF ablation employs both trans-diaphragmatic epicardial and catheter-based endocardial ablation to electrically isolate the posterior wall of the left atrium and the area around the pulmonary veins. Ablation is associated with changes in the architecture and the electrical progression across the LA. Additionally, autonomic ganglionated plexi are concomitantly ablated as they lie on the epicardial surface. These changes translate into changes in the 12-lead surface electrocardiogram (ECG) and may correlate with freedom from AF. Current literature shows that P wave duration (PWD) significantly decreases after circumferential pulmonary vein isolation (CPVI), and that the decrease in PWD correlates with freedom from AF [5–11] . Some studies have assessed the change in P wave area post CPVI; however, the results are inconsistent. Convergent ablation results in more extensive scarring and may lead to more distinct ECG changes. Data on ECG changes after this procedure are scarce. One study involving 41 patients undergoing convergent ablation reported a reduction in PWD consistent with the studies involving CPVI [12] . We assessed changes in P wave duration and P wave area (PWA) at 1, 3 and 6 months after convergent ablation. Materials and methods Twenty-nine patients who underwent convergent ablation were included in the study. These patients had persistent AF or paroxysmal AF with enlarged atria, which did not respond to medical therapy including rate control or antiarrhythmic therapy. Six of them had undergone CPVI as well in the past. Ablation technique The convergent ablation procedure utilizes a transdiaphragmatic cannula and the EPi-Sense™ Guided Coagulation System with VisiTrax (nContact, Inc.). The EPi-Sense device is used to create epicardial lesions along the entire posterior wall and around the pulmonary veins (PV). The pericardial reflections limit the epicardial ablation of the superior aspect of the veins and the region below the right inferior PV. The lesion set is therefore completed endocardially by an electrophysiologist and bidirectional block is confirmed across the veins ( Fig. 1 ). A right cavotricuspid isthmus line is also created. ECG parameters measurement We analyzed ECGs before ablation (up to 6 months prior) and at 1, 3 and 6 months after ablation. P wave duration, area and amplitude were measured in leads V 1 , II and III, as previous studies found ECG changes in these leads. Measurements were performed using SketchAndCalc, an online tool designed for precise two-dimensional measurements [13] . We uploaded the electronic copy of the standard 12-lead ECG onto the software, magnified it 25 times, and made the measurements. We set the scale in the software so that one small box of the ECG strip after magnification would be equivalent to 1 mm, which is the standard. We used the free drawing tool in the software and drew under the curve of the P wave to get the area ( Fig. 2 ). For each parameter, two consecutive P waves in the ECG were measured and the average value was used. During the six-month period after ablation, any episode of AF lasting more than 30 s during a follow-up ECG, 24–48 h Holter, or 30-day event monitoring was considered recurrence of AF.
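The area measurement above is performed graphically in SketchAndCalc. As an illustration of what that quantity represents, the sketch below computes a P wave area from a hypothetical digitized trace by numerically integrating the deflection from the isoelectric baseline, using the standard ECG scaling of 25 mm/s and 10 mm/mV; this is only a numerical analogue of the same measurement, not the software used in the study.

```python
import numpy as np

# Standard ECG paper scaling (assumed): 25 mm/s horizontally, 10 mm/mV vertically.
MM_PER_SECOND = 25.0
MM_PER_MV = 10.0

def p_wave_area_mm2(t_s: np.ndarray, v_mv: np.ndarray, baseline_mv: float = 0.0) -> float:
    """Area (mm^2) between a digitized P wave and the isoelectric baseline."""
    deflection_mm = np.abs(v_mv - baseline_mv) * MM_PER_MV
    return float(np.trapz(deflection_mm, t_s * MM_PER_SECOND))

# Hypothetical 80-ms positive deflection peaking at 0.1 mV (about 1 mm tall).
t = np.linspace(0.0, 0.08, 81)
v = 0.1 * np.sin(np.pi * t / 0.08)
print(round(p_wave_area_mm2(t, v), 2), "mm^2")
```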
Statistical analysis We used repeated measures analysis of variance (ANOVA) to evaluate the change in P wave duration, amplitude and area before ablation and at 1 month, 3 months and 6 months after ablation. Results Twenty-nine patients who underwent convergent ablation of atrial fibrillation were included in the study ( Table 1 ). The mean age was 62 years, and 21 of the patients were male. Five patients did not have any ECG demonstrating normal sinus rhythm within 6 months prior to ablation. After convergent ablation, only lead V 1 showed statistically significant changes. P wave area decreased significantly in the negative component of V 1 (0.38 ± 0.4 mm 2 to 0.13 ± 0.3 mm 2 , p = 0.03) over six months. Likewise, P wave duration also decreased significantly in the positive component of V 1 (39.1 ± 19 ms to 29.42 ± 25 ms, p = 0.004), and the positive P wave amplitude decreased from 0.46 ± 0.6 mm to 0.33 ± 0.5 mm (p = 0.05) ( Table 2 ). Discussion Main findings We found that after convergent ablation there is a statistically significant reduction in the P wave area of the terminal portion in lead V 1 , from 0.38 mm 2 to 0.13 mm 2 . Furthermore, the proportion of patients with a biphasic P wave morphology in V 1 decreased from 62.5% before ablation to 39.2% at 6 months, with a shift towards a monophasic morphology, suggesting a decrease in the negative terminal portion of the P wave as a result of the electrical debulking of the left atrium ( Table 3 ). We noticed qualitatively that patients would frequently have very tall, upright P waves in V 1 immediately post-ablation, and this increase was statistically significant. At elevated heart rates, this made it more challenging to distinguish these sinus P waves from an atrial tachycardia or atrial flutter, given that the ECG differed from the pre-ablation baseline ECG. Interestingly, this increase in amplitude decreases over the 6-month follow-up period and ultimately becomes smaller than the pre-ablation amplitude. A similar temporal pattern was also noted for the duration of the positive component of the P wave in V 1 ( Table 2 ). Change in P wave area Convergent ablation involves both epicardial and endocardial ablation. It encompasses the posterior wall and therefore removes the AF triggers. This approach increases the chance of durable trans-mural isolation. In addition, the epicardial ablation across the posterior wall creates an effective roof line and posterior box, and eliminates rotors in the posterior wall and around the pulmonary veins. Because of the extensive ablation, this procedure has a higher success rate. In a study of 73 patients, Gersak et al. reported that 80% remained in sinus rhythm one year after convergent ablation [14] . Other studies have reported similar results [15,16] . The terminal portion of the P wave is generally accepted as a reflection of left atrial activation. After convergent ablation, the P wave area in the negative terminal portion of V 1 decreased significantly from 0.38 mm 2 to 0.13 mm 2 . This reduction in the PWA could be explained by the fact that convergent ablation eliminates a significant portion of LA electrical activity. After ablation, reverse morphological remodeling occurs, which may further decrease the LA dimension [17] , and this is reflected as the decrease in P wave area.
In a study by Beeumen et al., a significant decrease in PWA in lead II (2.32 to 1.62 mV·ms) was reported after VATS pulmonary vein isolation. The decrease was more pronounced in the terminal portion of the P wave [5] . Change in P wave duration P wave duration has been reported to correlate with LA size, and studies have shown an association between prolonged P wave duration and the presence of paroxysmal AF [4,5] . In one study, a cut-off PWD of 140 ms had a sensitivity and specificity of 71% and 61%, respectively, in the prediction of AF [7] . Previous studies have reported a decrease in PWD, mostly in lead II, with both convergent and conventional PVI [5–12] . In one study involving convergent ablation, Kumar et al. reported a decrease in PWD from 104.4 ms to 84.7 ms after the procedure [12] . In our study, over a follow-up of six months, there was a trend towards a decrease in PWD in lead II before and after ablation (91.6 ms to 78.7 ms). However, it did not reach statistical significance. The difference in outcome can be explained by the fact that the mean pre-ablation PWD in our group was 91 ms, considerably shorter than the pre-ablation durations reported previously. Furthermore, 20% of our patients had already undergone traditional catheter ablation, whereas in Kumar et al.'s study all the participants were first-time ablation patients. This history of prior intervention may have lessened the P wave shortening effect of ablation in our patient group. Limitations of the study Our study is limited by the relatively small sample size. However, using a pre-validated and precise method of evaluating P wave parameters, our findings were in line with prior studies. Conclusions Convergent ablation causes a reduction of the terminal negative deflection of the P wave in V 1 . This reflects the reduced electrical contribution of the ablated posterior left atrium. The majority of patients no longer met criteria for left atrial enlargement in V 1 after ablation. The duration and amplitude of the positive segment of the P wave in V 1 also acutely increased following ablation and then decreased over 6 months. In addition to anatomical remodeling, there may be autonomic remodeling given the collateral ablation of ganglionated plexi. The normal sinus P wave in these patients is significantly different from what is expected and may exhibit temporal changes in morphology. Recognizing the change in sinus P wave morphology post-ablation has implications for the diagnosis of atrial arrhythmias, since many algorithms involve V 1 morphology. Conflicts of interest None.
REFERENCES:
1. COLILLA S (2013)
2. SHIROSHITATAKESHITA A (2005)
3. CALKINS H (2007)
4. CHEN Y (2006)
5. BEEUMEN K (2010)
6. CHANG S (2007)
7. OGAWA M (2007)
8. NASSIF M (2013)
9. PAPPONE C (2004)
10. ZHAO L (2013)
11. UDYAVAR A (2009)
12. KUMAR N (2015)
13.
14. GERSAK B (2014)
15. GILLIGAN D (2013)
16. ZEMBALA M (2012)
17. CHOI J (2008)
|
10.1016_j.jobab.2023.06.002.txt
|
TITLE: Recent advances in thermochemical conversion of woody biomass for production of green hydrogen and CO2 capture: A review
AUTHORS:
- Pang, Shusheng
ABSTRACT:
Hydrogen as a clean energy carrier has attracted great interest world-wide as a substitute for fossil fuels and for abatement of climate change concerns. However, green hydrogen from renewable resources currently accounts for less than 0.1% of world hydrogen production, and this is largely from water electrolysis, which is beneficial only when renewable electricity is used. Hydrogen production from diverse renewable resources is therefore desirable. This review presents recent advances in hydrogen production from woody biomass through biomass steam gasification, producer gas processing and H2/CO2 separation. The producer gas processing includes steam-methane reforming (SMR) and water-gas shift (WGS) reactions to convert CH4 and CO in the producer gas to H2 and CO2. H2 storage using a liquid carrier through hydrogenation is also discussed. CO2 capture prior to the SMR is investigated to enhance the H2 yield in the SMR and WGS reactions.
BODY:
1 Introduction Hydrogen as a clean energy carrier has attracted great interest world-wide as a substitute for fossil fuels and a means to reduce greenhouse gas (GHG) emissions. A recent report by the International Energy Agency (IEA) predicts that world hydrogen demand will increase from 94 million tonnes in 2021 to 115 million tonnes by 2030 ( International Energy Agency, 2022 ). Current hydrogen use is dominated by oil refining, ammonia and methanol synthesis, as well as iron and steel manufacturing. However, the emerging use of hydrogen for energy and transport has been increasing rapidly, including fuel cell vehicles, energy storage for power generation and substitution for natural gas, and these will be the major areas of hydrogen demand increase in the future. Using hydrogen to abate climate change is only possible when the hydrogen is generated from renewable resources, termed green hydrogen. Unfortunately, present hydrogen production is predominantly from fossil fuels such as natural gas, oil and coal, which resulted in almost 900 million tonnes of CO 2 being emitted ( International Energy Agency, 2021 ). Installed water electrolysis capacity reached 510 MW by the end of 2021, and electrolysis represented only 0.1% of current global hydrogen production ( International Energy Agency, 2022 ). A rapid increase in hydrogen production is predicted in the future, which will also increase the electricity demand. The benefits of water electrolysis for hydrogen production can only be realised when the electricity is renewable ( Ji and Wang, 2021 ; Terlouw et al., 2022 ). It is clear that future green hydrogen should be produced from various renewable resources, and thus the corresponding technologies need to be developed and implemented ( Ji and Wang, 2021 ; Olabi et al., 2021 ; Terlouw et al., 2022 ). Among these is the utilisation of biomass resources, which include woody biomass such as residues from forest harvesting and wood processing, agricultural residues and purpose-grown coppice. However, the potential pathways to convert the biomass resources to hydrogen vary broadly from thermochemical processes to biotechnology processes, and the conversion efficiencies, energy efficiencies and production costs correspondingly span wide ranges. The objective of this paper is to provide a state-of-the-art review on green hydrogen production from woody biomass through advanced biomass gasification, gas processing and hydrogen/CO 2 separation. CO 2 capture is also included for achieving negative GHG emissions and enhancing the hydrogen production. Based on technoeconomic analysis and assessment of the environmental impacts, green hydrogen from biomass through advanced biomass gasification is one of the most promising technologies in the medium to long term ( Shayan et al., 2018 ; Valente et al., 2019 ; Navas-Anguita et al., 2020 ; Wu et al., 2023 ). The proposed pathway is illustrated in Fig. 1 , which consists of biomass steam gasification using a dual fluidised bed system, gas cleaning, CO 2 capture, gas processing and hydrogen/CO 2 separation. Pretreatment of the biomass prior to the gasification, which can include chipping and drying, is not shown in Fig. 1 . If the original form of the biomass is in large sizes such as logs and branches, the biomass needs to be chipped to particle sizes of 20–50 mm to ensure consistency in subsequent drying and gasification.
Wet biomass can be dried using fluidised bed, moving bed or rotary dryers, and more details can be found elsewhere ( Xu and Pang, 2008 ; Pang and Mujumdar, 2010 ; Pang and Xu, 2010 ). However, biomass drying needs to be considered in light of the subsequent gasification requirements. As the gasification uses steam as the gasification agent, the optimum moisture content of the feed biomass is 15%–20%. Too high a moisture content will require excessive heat in the gasifier for moisture vapourisation, which reduces the gasification efficiency. On the other hand, over-drying of the biomass will result in a significant amount of heat loss in the drying process. 2 Biomass steam gasification using a dual fluidised bed gasifier system and gas cleaning It is well known that biomass steam gasification enhances the hydrogen yield and increases the hydrogen content in the producer gas. However, biomass steam gasification is overall endothermic and thus requires an external heat supply. As mixed solids (biomass, char and bed materials) and gases are involved inside the gasification reactor, conventional heat transfer technologies cannot be employed due to the low transfer rate and possible blockage. Recently, a dual fluidised bed (DFB) gasification system has been developed for biomass steam gasification. In the DFB gasifier, circulating bed material is used as the heat carrier to supply the required heat for the biomass steam gasification. There are essentially two reactors in the DFB gasification system: a gasification reactor (gasifier) and a combustion reactor (combustor). The biomass steam gasification reactions occur in the gasifier, in which hydrogen-rich producer gas and char are generated. The producer gas flows out of the gasifier from the top, while the bed materials and the solid char move from the gasifier bottom to the combustor by gravity. By injecting a controlled amount of air into the combustor, the char is combusted, heating the bed material to 50–100 °C above the gasification temperature; the hot bed material is carried upwards and flows out of the combustor. After separation from the flue gas in a cyclone, the hot bed material returns to the gasifier bed and provides the required heat to the gasification process. A more detailed description of the DFB gasification technology can be found in the literature ( Saw et al., 2012 ; Saw and Pang, 2013 ; Pang, 2019 ). The DFB gasification system also provides the opportunity to use catalytic bed material to further enhance the hydrogen yield. The working principle is shown in Fig. 2 , in which a blend of silica sand and CaO is used as the bed material ( Pfeifer et al., 2004 ; Pang, 2019 ). In this process, the added CaO in the bed material in effect shifts CO 2 from the gasifier to the combustor, thus increasing the hydrogen content in the producer gas. In addition, to improve energy efficiency, the sensible heat of the hot flue gas can be recovered in a boiler to generate steam, which is used as the gasification agent in the biomass gasifier. As the gasifier operates at a relatively low temperature of 700–750 °C, the exothermic carbonation reaction occurs: (1) CaO + CO 2 → CaCO 3 , ΔH 298 = −178.8 kJ/mol. When the formed calcium carbonate (CaCO 3 ) moves to the combustor, it is heated to 800–850 °C, which favours the calcination (reverse carbonation) reaction. The released CO 2 in the combustor enriches the CO 2 concentration in the flue gas and can be captured to reduce GHG emissions.
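As a back-of-the-envelope illustration of the heat-carrier role of the circulating bed material described above, the heat delivered to the gasifier can be estimated as the product of the circulation rate, the bed material heat capacity and the temperature drop of the bed material. All numbers in the sketch below are illustrative assumptions, not design values from the studies cited.

```python
# Rough heat-balance sketch for a DFB heat carrier. The heat delivered to the
# gasifier equals circulation rate x heat capacity x temperature drop of the
# bed material. All values are assumed for illustration only.
cp_bed = 0.9e3           # J/(kg K), approximate heat capacity of silica sand
circulation_rate = 10.0  # kg of bed material per kg of dry biomass (assumed)
delta_t = 75.0           # K, assumed temperature drop in the gasifier (mid-range of 50-100 K)

heat_per_kg_biomass = circulation_rate * cp_bed * delta_t  # J per kg of dry biomass
print(f"Heat delivered by the bed material: {heat_per_kg_biomass / 1e6:.2f} MJ per kg of dry biomass")
```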
In the gasifier, the reduction of CO 2 through the application of CaO as bed material also enhances the water-gas shift (WGS) reaction and the steam methane reforming (SMR) reaction as follows: (2) H 2 O + CO → H 2 + CO 2 , ΔH 298 = −41 kJ/mol; (3) CH 4 + H 2 O → 3H 2 + CO, ΔH 298 = 206 kJ/mol. Experiments have been performed in our previous studies to investigate the effect of CaO loading on the producer gas yield and composition, and the results are shown in Fig. 3 ( Saw and Pang, 2012 ; Pang, 2019 ). From Fig. 3 , it is observed that a hydrogen content of 60% can be achieved with pure calcite as the bed material. However, calcite attrition is the key issue in this case, and thus a 50% calcite loading was considered more practical for long-term operation; this loading can still achieve 50% hydrogen in the producer gas. Under these conditions, the producer gas from steam gasification of woody biomass consists of about 50% H 2 , 20% CO, 20% CO 2 and 10% CH 4 . Another challenge is the reduction in the reactivity of the calcite with an increasing number of cycles. This will be discussed in detail in the subsequent section on CO 2 capture. Further studies are being conducted to increase the mechanical strength of the calcite-based bed material and to retain its reactivity over long-term operation. The producer gas from the biomass gasification also contains some undesirable species such as tars and gaseous contaminants (H 2 S, NH 3 and HCl). The tar compounds exist in the form of vapour at the high temperatures immediately following the gasification unit but condense when the temperature is reduced. Once this occurs, the condensates will block the pipes and contaminate the inner parts and materials of equipment processing the producer gas. The tar content in the producer gas from fluidised bed gasifiers varies from 1.5 to 10 g/Nm 3 . However, the required tar contents for down-stream processing such as liquid fuel synthesis and gas processing are less than 1 mg/Nm 3 ( Asadullah, 2014 ; Nakamura et al., 2016 ; Zhang and Pang, 2019 ). Extensive studies have been conducted world-wide on developing efficient and cost-effective technologies to remove the tars in the producer gas, which can be classified into two categories: wet methods and dry methods. In the wet methods, the producer gas is cleaned using a solvent in a scrubber. A well-known technology is the OLGA process developed by the Energy Research Centre of the Netherlands (ECN, now TNO) ( Boerrigter et al., 2005a , 2005b ; Drift et al., 2005 ; Rabou and Almansa, 2015 ). In this system, a scrubber is used for absorption of tar compounds from the producer gas, and the tar-loaded solvent then flows to a stripper where the tar compounds are released at higher temperatures into a hot carrier gas. In this way, the solvent can be reused. Dry methods for tar removal include adsorption by solid sorbents and hot cracking by catalysts, or combined adsorption and catalytic cracking ( Juutilainen et al., 2006 ; Kostyniuk et al., 2019 ; Zeng et al., 2020 ). The advantage of the dry methods is that the operation can be carried out at high temperatures, so there is no need to cool down the gas from the gasification. This is energy efficient, particularly when the down-stream operation also runs at elevated temperatures, such as steam reforming to convert methane in the producer gas to hydrogen. In the study of Juutilainen et al. (2006) , various catalysts were tested for cracking and adsorption of toluene as a model tar compound.
From this study, it was found that MEZR0500 (87% ZrO 2 , 13% Al 2 O 3 ) can reduce the tar content by 80% at 600 °C, while pure Al 2 O 3 can achieve a similar tar removal efficiency at 800–900 °C. These catalysts can also remove ammonia in the producer gas. In a separate study ( Oemar et al., 2014 ), more complex catalysts were investigated and it was found that the tar model compound, toluene, was reformed effectively at reaction temperatures of 650–750 °C. For subsequent gas processing with application of catalysts, the gaseous contaminants (H 2 S, NH 3 , HCl) need to be removed as well; otherwise, these contaminants poison the catalysts. The most common method for removing these contaminants is catalytic cracking and adsorption, which may be integrated with the dry methods for tar removal. One of the key objectives of the contaminant removal studies is to find cost-effective and readily available catalytic materials. A series of studies has been conducted by Hongrapipat et al. (2014 ; 2016 ) and Wang and Pang (2018a ; 2018b ) to use iron sand (titanomagnetite) to remove these gaseous contaminants. It was found that H 2 -reduced titanomagnetite can achieve 94% removal efficiency for NH 3 at 750 °C ( Wang and Pang, 2018b ), while both the reduced and the unreduced titanomagnetite can effectively remove H 2 S at 600 °C ( Wang and Pang, 2018a ). Recent studies by Dashtestani et al. (2021a ; 2021b) on CO 2 capture found that a CaO-Fe 2 O 3 based sorbent material can also effectively adsorb these gaseous contaminants. More details on the CaO-Fe 2 O 3 based sorbent are presented in the following section. 3 The CO 2 capture from biomass gasification producer gas The producer gas from biomass steam gasification consists of four major species, H 2 , CO, CO 2 and CH 4 , as shown in Fig. 3 . The gas composition depends on the biomass feedstock, gasification technology, operation conditions and gasification agent. In order to enhance the hydrogen yield, CH 4 and CO should be converted through reactions with steam (H 2 O), which will be discussed in the following section. Such conversion can be promoted if CO 2 is captured and removed from the gas. CO 2 capture, storage and reuse have become a key topic in reducing GHG emissions and abating climate change. The CO 2 can be captured through pressure swing adsorption (PSA), CO 2 absorption by aqueous monoethanolamine (MEA) solution, CO 2 adsorption by CaO-based sorbents through carbonation-calcination looping, and membrane separation. Most of the previous studies focused on CO 2 capture from post-combustion flue gas, where the objective was to reduce GHG emissions. Considering that the producer gas from biomass gasification and hot gas cleaning processes is still at high temperatures (600–700 °C) and that the downstream steam methane reforming process also operates at high temperatures, CO 2 adsorption by CaO-based sorbents through carbonation-calcination looping is considered the most promising technology. This technology can achieve a CO 2 removal efficiency of more than 90% ( Duhoux et al., 2016 ; Hu et al., 2017 ; Dashtestani et al., 2020 ; Haaf et al., 2020 ). In the CaO-based CO 2 adsorption process, the carbonation reaction is exothermic and thus operates at temperatures between 600 and 650 °C, at which CaO reacts with CO 2 to form carbonate (CaCO 3 ).
After the adsorption reaches a certain level, where most of the loaded CaO has been converted to CaCO 3 , the carbonation operation stops and the reactor is switched to calcination mode. The calcination reaction is the reverse of carbonation and is endothermic; it therefore operates at high temperatures, between 850 and 950 °C, where CO 2 is released. There are three challenges in applying the CaO-based sorbent for CO 2 capture. The first is the selection of operation conditions for both the carbonation and calcination reactions. The second challenge is maintaining the reactivity with cycling, and the third is attrition due to the low mechanical strength. For determination of the operation conditions, the CO 2 transfer between the gas stream and the sorbent material needs to be considered. In early studies, the equilibrium pressure of CO 2 at different temperatures was proposed for the carbonation-calcination reactions ( Baker, 1962 ; Manovic and Anthony, 2007 ), and the results can be expressed by the following equation and are illustrated in Fig. 4 : (4) ln P CO2, eq = 7.079 − 8 308/ T , where P CO2, eq is the partial pressure of CO 2 at equilibrium on the material (CaO/CaCO 3 ) surface (kPa) and T is the material temperature (K). At a given temperature, if the CO 2 partial pressure in the gas is lower than the equilibrium pressure of CO 2 , then CO 2 will diffuse from the material surface into the gas stream. On the other hand, if the CO 2 partial pressure in the gas stream is higher than P CO2, eq , CO 2 will transfer from the gas stream to the material surface. Alternatively, if the CO 2 partial pressure in the gas stream is known, which may be derived from the gas composition and the total pressure in the system, the selection of the operation temperature will depend on whether the process is carbonation or calcination. As shown in Fig. 4 , if the CO 2 partial pressure in the gas stream is 0.101 kPa and the operation is carbonation, then the temperature can be 750 °C with an equilibrium pressure of CO 2 of 0.010 1 kPa. In this case, the driving force for CO 2 transfer from the gas stream to the material surface is 0.090 9 kPa. On the other hand, if the operation is calcination, then the temperature can be 1 000 °C with an equilibrium pressure of 10.1 kPa, which provides a driving force of 9.999 kPa for CO 2 transfer from the material surface to the gas stream. However, in practical operation, the reaction kinetics, material reactivity and the physical and mechanical properties of the material need to be taken into account. It is reported that CaO-based sorbents tend to lose reactivity over repeated carbonation-calcination cycles ( Florin and Fennell, 2011 ; Materić et al., 2014 ; Valverde et al., 2014 ; Perejón et al., 2016 ). From the results shown in Fig. 5 , it can be seen that the conversion of the CaO-based sorbent material through the carbonation-calcination looping was reduced from an initial 0.7 to less than 0.1 after 30 cycles ( Florin and Fennell, 2011 ). In the meantime, the CO 2 capture capacity of the sorbent decreased from 0.52 to 0.08 g CO 2 per gram of sorbent. Similar trends were also reported by Perejón et al. (2016) and Valverde et al. (2014) . This can be improved by the addition of support and binder materials, which could double the sorbent conversion and CO 2 capture capacity over repeated carbonation-calcination cycles.
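To make the carbonation/calcination selection concrete, the sketch below reproduces the driving-force comparison from the worked example above; the equilibrium pressures are simply the values quoted in the text (as read from Fig. 4) rather than recomputed from Eq. (4).

```python
def co2_transfer(p_gas_kpa: float, p_eq_kpa: float):
    """Direction and driving force for CO2 transfer between the gas and the CaO/CaCO3 surface.

    If the gas-side CO2 partial pressure exceeds the equilibrium pressure, CO2
    moves to the solid (carbonation); otherwise it is released (calcination).
    The driving force is the absolute pressure difference."""
    direction = "gas -> solid (carbonation)" if p_gas_kpa > p_eq_kpa else "solid -> gas (calcination)"
    return direction, abs(p_gas_kpa - p_eq_kpa)

# Worked examples from the text (equilibrium pressures read from Fig. 4):
print(co2_transfer(p_gas_kpa=0.101, p_eq_kpa=0.0101))  # carbonation, driving force ~0.0909 kPa
print(co2_transfer(p_gas_kpa=0.101, p_eq_kpa=10.1))    # calcination, driving force ~9.999 kPa
```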
(2014) investigated the effects of hydration-dehydration and superheated treatments to enhance and retain the reactivity of CaO-based sorbent for CO2 capture in a dual fluidised bed system. In the hydration-dehydration treatment, the limestone sorbent was initially calcined and then hydrated in 20% steam in N2 for 60 min, followed by dehydration before it was used for the CO2 capture. The superheated treatment included an additional heating stage, following the hydration-dehydration, to 520 °C in 100% CO2, with that temperature maintained for 25 min. The results are illustrated in Fig. 6 for comparison between untreated and hydrated sorbent. From the figure, the CO2 capture efficiency for the untreated sorbent fell from about 65% in the early stage of operation to less than 30% after 5 h of operation. With the hydration-dehydration treatment, the sorbent achieved a CO2 capture efficiency of about 85% in the early stage and retained a CO2 capture efficiency of about 50% after 5 h. The corresponding CO2 capture efficiencies for the sorbent with the superheated treatment were about 92% and 60%, respectively. However, significant attrition was found in the treated sorbents. Recent studies by Dashtestani et al. (2020; 2021a; 2021b) investigated the performance of a new CaO-Fe2O3 sorbent, developed by Hot Lime Labs, New Zealand, for CO2 capture from a stream of simulated producer gas from biomass gasification. The sorbent material is composed of 70% CaO, 20% Fe2O3 and 10% binder, which were mixed and pelletized into composite pellets of about 5 mm. Selected results from this study are shown in Fig. 7 ( Dashtestani et al., 2020 ), in which the operation temperatures are read from the right-hand y-axis and the outlet gas compositions are read from the left-hand y-axis. In the carbonation stage, the inlet gas consisted of 25% CO2, 30% H2 and 50% Ar, whereas the inlet gas in the calcination stage was air. Effects of carbonation temperature and carbonation-calcination looping cycles were also examined, and the results are illustrated in Fig. 8 . It was found that the optimum carbonation temperature was 620 °C and the CO2 capture efficiency was over 85%, although the efficiency tended to decline with cycling. An important property of the above CO2 sorbent material is that it can tolerate and adsorb gaseous contaminants such as H2S, NH3 and HCl in the inlet gas stream ( Dashtestani et al., 2021a ; 2021b ). Relatively high CO2 capture efficiency can be achieved from inlet gas containing these contaminants, which is an advantage for using the sorbent for CO2 capture from biomass gasification producer gas. The released CO2-rich air can be injected into greenhouse nurseries to enhance crop growth and yield. When a sufficiently high calcination temperature is used, pure CO2 is produced, which can be used as a chemical feedstock for synthesis of urea and liquid fuels and for food processing. This is an area for future research. Theoretically, the CO2 reduction in the producer gas through CO2 capture can enhance the subsequent water-gas shift reaction, which in turn promotes the steam methane reforming reaction. However, quantitative analysis of these effects has not been reported in the literature and is therefore being investigated by our research team. 4 Processing of producer gas to convert CH4 and CO The producer gas from the biomass steam gasification contains H2, CO, CO2 and CH4. 
Most of the CO2 can be captured following the gasification operation; in addition, CH4 in the gas can be converted to H2 and CO through the steam-methane reforming reaction, while CO in the gas can be converted to H2 and CO2 through the water-gas shift reaction. 4.1 Steam methane reforming of biomass gasification producer gas Steam methane reforming (SMR) is a standard process in hydrogen production from natural gas ( Boretti and Banik, 2021 ; Hydrogen Production: Natural Gas Reforming, 2023 ) with the following reaction: (5) CH4 + H2O → 3H2 + CO ΔH 298 = 206.8 kJ/mol As the SMR reaction is endothermic, the reactor is operated at temperatures of 700–1000 °C with application of appropriate catalysts (metal-based, such as nickel). The SMR process is followed by the water-gas shift (WGS) reaction, which is exothermic and thus operates at relatively lower temperatures. The WGS reaction is discussed in the following section, but it also occurs as a side reaction during SMR, although the conditions in the SMR reactor are not optimal for it. Considering the dominant SMR reaction with WGS as the side reaction, CH4 conversion at equilibrium was calculated by Joensen and Rostrup-Nielsen (2002) and the results are shown in Fig. 9 . From the figure, low pressure favours CH4 conversion; however, considering that the downstream processes normally operate at high pressures and that the reactor size can be reduced by increasing the operation pressure, the operation pressure for the SMR process is normally higher than 300 kPa. Effects of the steam-to-carbon ratio were also examined by Zhang et al. (2002), and the results are illustrated in Fig. 10 for 850 °C and pressures from 2 to 7 MPa. As the producer gas from biomass gasification consists of more gaseous species than natural gas, the gas processing is more complicated, and no studies were found in the literature on the combined SMR and WGS processing of biomass gasification producer gas. Previous studies have been reported on gas processing of biogases generated from digestion of agricultural residues and bio-solid wastes. The biogas from these resources contains CH4 (40%–75%) and CO2 (20%–50%) with trace species of N2, O2 and H2S ( Song and Pan, 2004 ; Zhang et al., 2014 ; Zhao et al., 2020 ). For biogas processing, tri-reforming reactions were proposed as a combination of steam methane reforming, CO2 methane reforming and methane oxidation ( Song and Pan, 2004 ; Zhang et al., 2014 ). The product gas composition from the tri-reforming reactions at equilibrium was predicted through theoretical analysis. The results are shown in Fig. 11 a for gas composition as a function of operation temperature at an operation pressure of 100 kPa, in Fig. 11 b for the effect of operation pressure on H2 content in the gas, and in Fig. 11 c for the effect of pressure on CH4 concentration at an operation temperature of 850 °C ( Zhang et al., 2014 ). 4.2 Water-gas shift reaction of biomass gasification producer gas The water-gas shift (WGS) reaction is the reaction of CO with water to form H2 and CO2 as follows ( Newsome, 1980 ; Baraj et al., 2021 ): (6) CO + H2O → H2 + CO2 ΔH 298 = −41.2 kJ/mol As the WGS reaction is exothermic, lower temperature favours the reaction towards H2 production. However, low temperatures result in slow reaction kinetics and thus long reaction times. 
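Before turning to the two-stage arrangement used in practice, it is worth quantifying the equilibrium limit behind this trade-off. The sketch below is not from the paper: it uses the commonly quoted Moe approximation for the WGS equilibrium constant, K ≈ exp(4577.8/T − 4.33) with T in kelvin, and assumes an idealised feed of only CO and steam, so the numbers are indicative rather than a reproduction of Fig. 12 or Fig. 13.

```python
import math

def k_wgs(T_kelvin: float) -> float:
    """Approximate WGS equilibrium constant (Moe correlation), T in kelvin."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def equilibrium_co_conversion(T_kelvin: float, steam_to_co: float) -> float:
    """Equilibrium CO conversion x for a feed of 1 mol CO and steam_to_co mol H2O.

    For CO + H2O <-> H2 + CO2 with no products in the feed,
    K = x^2 / ((1 - x)(S - x)), i.e. (K - 1) x^2 - K (1 + S) x + K S = 0.
    The smaller root is the physical one when K > 1 (temperatures below roughly 780 C).
    """
    K, S = k_wgs(T_kelvin), steam_to_co
    a, b, c = K - 1.0, -K * (1.0 + S), K * S
    disc = math.sqrt(b * b - 4.0 * a * c)
    return (-b - disc) / (2.0 * a)

for temp_c in (220, 350, 450):
    x = equilibrium_co_conversion(temp_c + 273.15, steam_to_co=1.0)
    print(temp_c, round(x, 2))   # conversion falls as temperature rises (about 0.92 at 220 C)
```

Raising the steam-to-CO ratio pushes the equilibrium conversion higher at any temperature, which, together with the slow kinetics at low temperature, is the rationale for the staged operation discussed next.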
Two-stage processes are therefore commonly employed, with the high-temperature WGS reaction operated at 320–360 °C and the low-temperature WGS reaction at 190–250 °C ( Montenegro Camacho et al., 2017 ; Hallac et al., 2018 ; Palma et al., 2019 ). In the high-temperature WGS stage, fast conversion of CO is targeted, and in the low-temperature WGS stage, high conversion of CO is achieved. Other advantages of the two-stage WGS reaction are that the first stage can also remove gaseous contaminants, so the more expensive precious catalysts used in the second stage last longer. In addition, less steam is consumed in the two-stage WGS than in a single-stage WGS reaction ( Newsome, 1980 ). The CO conversion at equilibrium for various steam to dry gas (S/G) ratios is illustrated in Fig. 12 as a function of temperature ( Mendes et al., 2010 ). This was simulated for the exit gas from steam methane reforming, which consists of H2, CO, CO2 and traces of CH4 and N2. In a separate study, the CO conversion was measured at low temperatures by Choi and Stenger (2003), who investigated the effect of the steam (H2O) to CO ratio as shown in Fig. 13 . It is interesting to note that the measured CO conversion increased with reaction temperature, which is due to the reaction rate increasing with operation temperature. It was also found that the measured CO conversion is lower than the equilibrium value. For example, at a steam to CO ratio of 1 and a reaction temperature of 220 °C, the measured CO conversion was 70% while the calculated equilibrium CO conversion was 80%. 5 Separation of H2 and CO2, and H2 storage by hydrogenation After the processing of the biomass gasification producer gas, the gas mixture consists of H2 and CO2 with traces of other species. Conventional separation technologies may be applied for the separation of H2 and CO2, including absorption, adsorption and membrane separation. However, in recent studies, non-conventional separation technologies have been proposed and some have been commercially employed. CO2 absorption using aqueous monoethanolamine (MEA) based solutions is a well-established technology and numerous studies have been reported ( Conway et al., 2015 ; Monteiro et al., 2015 ; Khan et al., 2011 ). The absorption process is described by the following overall reaction: (7) CO2 + 2RNH2 ↔ RNHCOO− + RNH3+ in which R represents the hydroxyethyl group (CH2CH2OH), RNH2 is the MEA with chemical formula HOCH2CH2NH2 (C2H7NO), RNHCOO− is the carbamate ion, and RNH3+ is the protonated amine (MEA) ion. The above process is reversible depending on the operation temperature. At low temperatures, CO2 dissolves into the solvent and reacts with MEA to form a salt. The salt can be converted back to the acid and the amine at higher temperatures and reduced pressures, regenerating the solvent and releasing CO2 ( Conway et al., 2015 ; Monteiro et al., 2015 ; Afkhamipour et al., 2019 ). In the experimental study of Tontiwachwuthikul et al. (1992), the CO2 capture process in the pilot absorber was operated at 14–20 °C for the solvent inlet temperature and 101.15 kPa for the pressure. The gas flow rate was 11.1–14.8 mol/(m3⋅s) and that of the liquid was 9.5–13.5 m3/(m2⋅s). The MEA concentration in the liquid was 1.2–3.8 kmol/m3. From this study, the CO2 capture efficiency was reported to be 100% from the air-CO2 mixture at CO2 concentrations from 11.5% to 19.5%. The change of CO2 concentration in the gas is shown in Fig. 14 . 
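As a rough sense of what reaction (7) implies for solvent demand, the following sketch (not part of the original study) uses the 2:1 MEA:CO2 stoichiometry, i.e. a maximum carbamate loading of 0.5 mol CO2 per mol MEA, to estimate the minimum solvent circulation for a given CO2 flow; the 30 wt% MEA strength and the example CO2 flow are illustrative assumptions only.

```python
# Minimum MEA circulation implied by reaction (7): CO2 + 2 RNH2 -> RNHCOO- + RNH3+
M_MEA = 61.08       # g/mol, monoethanolamine (C2H7NO)
MAX_LOADING = 0.5   # mol CO2 per mol MEA at the carbamate stoichiometric limit

def min_solvent_flow(co2_mol_per_s: float, mea_wt_fraction: float = 0.30) -> float:
    """Minimum solvent mass flow (kg/s) needed to absorb the given CO2 flow.

    Assumes the solvent is loaded all the way to the stoichiometric limit; real
    absorbers cycle between a lean and a rich loading, so actual rates are higher.
    """
    mea_mol_per_s = co2_mol_per_s / MAX_LOADING       # 2 mol MEA per mol CO2
    mea_kg_per_s = mea_mol_per_s * M_MEA / 1000.0
    return mea_kg_per_s / mea_wt_fraction             # dilute MEA to a 30 wt% solution

print(round(min_solvent_flow(1.0), 2), "kg/s of 30 wt% MEA solution per 1 mol/s of CO2")  # ~0.41
```

The regeneration step then has to heat this entire liquid stream, which is one reason the process is energy-intensive, as noted next.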
However, this process is costly, and the effective regeneration of the MEA solution requires heat input. In addition, some other gaseous species may also be absorbed by MEA or MEA-based solutions, which is undesirable. Solid sorbents may also be used to capture CO2, such as the CaO-based materials discussed in Section 3 . Other sorbents such as activated carbon and zeolite materials have also been employed for CO2 capture and require much lower temperatures ( Siqueira et al., 2017 ; Gonzalez-Olmos et al., 2022 ; Majchrzak-Kucęba et al., 2022 ). These sorbents are used with pressure swing adsorption (PSA), a cyclic adsorption/desorption process driven by changes in operation pressure and temperature. This is based on the fact that the sorption capacity of a sorbent for a specific gas species varies with pressure and temperature, as shown in Fig. 15 , in which activated carbon was used to adsorb CO2 from a CO2 and N2 mixture ( Siqueira et al., 2017 ). The PSA normally operates using dual bed columns alternately, one for charging (loading) and the other for discharging. From comparison studies, zeolite was found to have better environmental impacts than carbon-based sorbent materials ( Gonzalez-Olmos et al., 2022 ). From Fig. 15 , it is found that the CO2 sorption capacity is high at higher pressures and lower temperatures, the conditions in the adsorption column. However, the sorbent also adsorbs a small quantity of N2, which is released during the desorption process at low pressure and high temperature. This reduces the purity of the target gas (CO2) for reuse. A combination of PSA adsorption followed by MEA absorption was proposed for industrial applications to capture CO2 from the tail gas of steam reforming for hydrogen production ( Pellegrini et al., 2020 ). It is well known that hydrogen storage is still a challenge, as hydrogen has a very low density; the hydrogen therefore needs to be compressed to high pressures (20,000–30,000 kPa) or liquefied at very low temperatures (−252.8 °C at atmospheric pressure) ( Züttel, 2004 ; Cao et al., 2021 ). These methods are costly and inconvenient; therefore, other hydrogen storage methods have also been investigated, including solid metal hydrides, chemical storage carriers such as NH3 and methanol, and liquid storage carriers through hydrogenation and dehydrogenation ( Murray et al., 2009 ; Gianotti et al., 2018 ; Modisha et al., 2019 ; Niermann et al., 2019 ). Important factors to be considered include cost, the mass ratio of hydrogen stored to carrier material, hydrogen volumetric density, energy input per kg of H2 stored, and the ease of storage and transport. Considering the combination of these factors, liquid organic hydrogen carriers have attracted great interest ( Abdin et al., 2021 ), and thus liquid-carrier hydrogen storage is the focus of this review. Recently, selected organic liquids have been used as hydrogen carriers through hydrogenation and dehydrogenation ( Ferreira-Aparicio et al., 2002 ; Oda et al., 2010 ; Modisha et al., 2019 ; Wijayanta et al., 2019 ; Abdin et al., 2021 ). Taking toluene as an example, during the hydrogenation process toluene reacts with hydrogen to form methylcyclohexane (MCH): (8) C6H5CH3 + 3H2 → C6H11CH3 ΔH 298 = −204.8 kJ/mol The above reaction is reversible at different temperatures and pressures, and the conversion of MCH at equilibrium as a function of pressure and temperature is given in Fig. 16 . 
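The hydrogen-carrying capacity and energy penalty quoted in the next paragraph follow directly from reaction (8); the short sketch below (not part of the original paper) reproduces them from molar masses, the dehydrogenation energy demand of 68.3 kJ per mol H2 quoted in the text, and a hydrogen LHV of 120 kJ/g.

```python
# Gravimetric H2 capacity and dehydrogenation energy penalty of the toluene/MCH carrier.
M_H2 = 2.016        # g/mol
M_TOLUENE = 92.14   # g/mol, C6H5CH3

# Reaction (8): 3 mol H2 stored per mol toluene.
mass_ratio = 3 * M_H2 / M_TOLUENE
print(round(mass_ratio, 3))          # ~0.066 g H2 per g toluene (the text quotes about 0.065)

dehydrogenation_kj_per_mol_h2 = 68.3          # value quoted in the text
lhv_kj_per_mol_h2 = 120.0 * M_H2              # LHV of H2: 120 kJ/g
print(round(dehydrogenation_kj_per_mol_h2 / lhv_kj_per_mol_h2, 2))   # ~0.28, i.e. ~28% of the H2 LHV
```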
From Fig. 16 , the hydrogenation can be operated at 400 kPa and temperatures lower than 150 °C. Conversely, dehydrogenation occurs when the temperature is increased above 300 °C at a pressure of 100 kPa. The above hydrogenation and dehydrogenation using toluene as the hydrogen carrier is illustrated in Fig. 17 ( Modisha et al., 2019 ). In this process, every mole of toluene can take up 3 mol of H2, so the mass ratio of H2 stored to toluene is about 0.065, while the energy demand for dehydrogenation is 68.3 kJ/mol H2, or 28% of the energy the H2 possesses (LHV = 120 kJ/g). As the hydrogenation occurs at lower temperatures, the energy released is difficult to recover; therefore, the energy demand of the toluene-MCH system is relatively high. However, due to the ease of handling of the hydrogen carrier, this system has been employed by Mitsui & Co., Ltd. ( The World's First Global Hydrogen Supply Chain Demonstration Project, 2017 ). Alternative systems have also been reported, and their performance is compared in Table 1 ( Modisha et al., 2019 ). From the above discussion, the key advantages of using organic liquids as hydrogen storage carriers include high hydrogen-carrying capacity and ease of H2 storage and transport. However, the energy demand for dehydrogenation and the cost of the liquids are major challenges. Further studies are needed to assess different organic liquid carriers with respect to economic and environmental impacts. 6 Conclusions and recommendations Biomass is an abundant and renewable resource which can be used for green hydrogen production. However, biomass has a biological origin with complicated chemical composition and physical structure. Consequently, the conversion efficiency is low and the production cost is high. In order to improve the conversion efficiency and reduce the production cost, extensive studies have been reported in the literature. This review presents recent advances in hydrogen production from woody biomass through thermochemical conversion processes. The first step in the process is advanced biomass steam gasification, which produces a hydrogen-rich gas mixture consisting of H2, CO, CO2 and CH4. Following the gasification, CO2 in the gas is captured for reuse and the gas is further processed to convert CH4 and CO to H2 and CO2 through reactions with steam. Eventually, H2 in the H2/CO2 mixture is separated for storage using an organic liquid carrier through hydrogenation. The key objective of this review is to explore the opportunities for enhancing H2 yield through steam inputs to the biomass gasification and gas processing operations. This effectively transfers hydrogen atoms in the steam to hydrogen gas in the final product through chemical reactions. In addition, the separated pure CO2 in the final stage, together with the CO2 captured prior to the gas processing, can be collected for reuse. In this way, negative carbon emissions can be achieved, considering that CO2 is absorbed from the environment by the growing trees through photosynthesis. Further research and development are required for implementation of the hydrogen production technologies from woody biomass, which include: 1) Technology proof at demonstration scale, particularly for SMR and WGS reactions of the producer gas from biomass steam gasification. Although SMR and WGS have been employed in hydrogen production from natural gas, the producer gas from biomass gasification is more complicated and may contain trace contaminants. 
2) Process modelling for feasibility studies and environmental impact assessment. The operation conditions for the major operation units can be optimised through system modelling. Analysis of capital and operational costs can be conducted based on the modelling results. Life cycle assessment can also be performed based on the modelling results to verify the overall environmental benefits. Declaration of Competing Interest The authors declare no conflict of interest. Acknowledgements The present work was carried out as part of the research project UOCX1905 funded by the New Zealand Ministry of Business, Innovation and Employment (MBIE).
REFERENCES:
1. ABDIN Z (2021)
2. AFKHAMIPOUR M (2019)
3. ASADULLAH M (2014)
4. BAKER E (1962)
5. BARAJ E (2021)
6. BOERRIGTER H (2005)
7. BOERRIGTER H (2005)
8. BORETTI A (2021)
9. CAO Y (2021)
10. CHOI Y (2003)
11. CONWAY W (2015)
12. DASHTESTANI F (2021)
13. DASHTESTANI F (2021)
14. DASHTESTANI F (2020)
15. DRIFT A (2005)
16. DUHOUX B (2016)
17. FERREIRAAPARICIO P (2002)
18. FLORIN N (2011)
19. GIANOTTI E (2018)
20. GONZALEZOLMOS R (2022)
21. HAAF M (2020)
22. HALLAC B (2018)
23. HONGRAPIPAT J (2016)
24. HONGRAPIPAT J (2014)
25. HU Y (2017)
26.
27.
28.
29. JI M (2021)
30. JOENSEN F (2002)
31. JUUTILAINEN S (2006)
32. KHAN F (2011)
33. KOSTYNIUK A (2019)
34. MAJCHRZAKKUCEBA I (2022)
35. MANOVIC V (2007)
36. MATERIC V (2014)
37. MENDES D (2010)
38. MODISHA P (2019)
39. MONTEIRO J (2015)
40. MONTENEGROCAMACHO Y (2017)
41. MURRAY L (2009)
42. NAKAMURA S (2016)
43. NAVASANGUITA Z (2020)
44. NEWSOME D (1980)
45. NIERMANN M (2019)
46. ODA K (2010)
47. OEMAR U (2014)
48. OLABI A (2021)
49. PALMA V (2019)
50. PANG S (2010)
51. PANG S (2019)
52. PANG S (2010)
53. PELLEGRINI L (2020)
54. PEREJON A (2016)
55. PFEIFER C (2004)
56. RABOU L (2015)
57. SAW W (2012)
58. SAW W (2012)
59. SAW W (2013)
60. SHAYAN E (2018)
61. SIQUEIRA R (2017)
62. SONG C (2004)
63. TERLOUW T (2022)
64.
65. TONTIWACHWUTHIKUL P (1992)
66. VALENTE A (2019)
67. VALVERDE J (2014)
68. WANG Y (2018)
69. WANG Y (2018)
70. WIJAYANTA A (2019)
71. WU N (2023)
72. XU Q (2008)
73. ZENG X (2020)
74. ZHANG Y (2021)
75. ZHANG Y (2014)
76. ZHANG Z (2019)
77. ZHAO X (2020)
78. ZUTTEL A (2004)
|
10.1016_j.jmh.2025.100344.txt
|
TITLE: Proportional representation and incidence rate of repeat visits in ethnic minorities compared to native Dutch people under the age of 25 years in the Netherlands
AUTHORS:
- Evers, Y.J.
- Verhaegh, A.
- Ibrahim, A.
- Peters, C.
- Dukers-Muijrers, N.H.T.M.
- Reijs, R.
- Hoebe, C.J.P.A.
ABSTRACT:
Introduction
Migration is a growing phenomenon and has impact on sexual and reproductive health outcomes, such as an increased burden for STIs, sexual violence and unintended pregnancies. Equitable access to sexual health care is of great importance for young people from ethnic minorities (EMs). In this study, we aimed to determine the proportional representation of first- and second generation EMs under 25 years at Dutch Sexual Health Centers (SHCs) compared to native Dutch citizens.
Methods
In this retrospective cohort study, coded health records data of 270,927 persons in the age group of 15 till 24 years visiting SHCs between 2016 and 2021 were included. Health records data were combined with census tract data (Statistics Netherlands) to calculate average annual consultation rates, i.e., dividing the 6-year average of the number of first consultations of patients in the study period belonging to a specific EM by the total number of citizens in the age group of 15 till 24 years belonging to that EM in the Netherlands in 2021, multiplied by 1000.
Results
The consultation rate for native Dutch patients was 22.0 per 1000 persons (95 %CI: 21.8–22.2), 18.9 for all EMs, 19.8 (95 %CI: 19.8–20.4) for first-generation EMs and 18.4 (95 %CI: 18.0–18.8) for second-generation EMs. In both first- and second-generation EMs, consultation rates for patients from Turkey, Morocco, Eastern Europe and Asia were lower than for native Dutch patients. Consultation rates among patients from Africa were lower than for native Dutch patients for first-generation EMs. Consultation rates among patients from Indonesia, Suriname/Dutch Antilles, Latin America and other western countries were equal to or higher than among native Dutch patients.
Discussion
Our study showed that several EMs were underserved in Dutch sexual health care, suggesting lower access to care and indicating the need for culturally sensitive approaches to increase access. Using consultation rates is informative to indicate inequalities in access to sexual health care among EMs.
BODY:
Introduction During the last decades, migration has been a growing phenomenon. Migrant stock data showed that nearly 87 million international migrants lived in Europe, an increase of nearly 16 percent since 2015 ( IOM. World Migration Report 2024 ). Migration can have an impact on sexual and reproductive health outcomes in young people, leading to a higher burden of sexually transmitted infections (STIs), unintended pregnancies and sexual violence ( Botfield et al., 2016 ). This vulnerability arises from diverse cultural, political and psychosocial factors, such as stigma and discrimination, poor travel, living and working conditions, and increased exposure to violence and exploitative conditions ( Egli-Gany et al., 2021 ). Migrants also face unequal access to sexual healthcare and a lack of continuity of care, affecting STI testing and treatment, contraceptive services, and vaccinations, such as those for human papillomavirus (HPV) ( Botfield et al., 2016 ; Davies et al., 2006 ). European and American studies have shown a higher STI burden among several first-generation and second-generation migrant groups. A UK study established a significant association between ethnic origin and reported STIs in the previous five years ( Aicken et al., 2020 ). A USA study found that African Americans are nearly five times more likely to be infected with an STI compared to other ethnic groups ( Laumann and Youm, 1999 ). In the Netherlands, several studies have also shown that STI positivity rates were higher among individuals with an ethnic minority (EM) background compared with the native Dutch population, especially among EMs under the age of 25 years ( van Oeffelen et al., 2017 ; Ostendorf et al., 2021 ; Kayaert et al., 2023 ). Individuals under the age of 25 years from regions or continents such as Morocco, Turkey, the Dutch Antilles, Suriname, Africa and Eastern Europe are particularly at higher risk of Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) (9) . This disparity in STI risk is partly attributed to the endemic presence of STIs in these countries, transnational sexual networks, and unequal access to healthcare services ( Laumann and Youm, 1999 ; Hughes and Field, 2015 ). The higher burden of sexual health problems underpins the importance of equal access to sexual health services among EMs. A national survey study from the UK demonstrated that health services are less used by EMs compared with resident populations ( Saunders et al., 2021 ). A literature review demonstrated that structural challenges associated with health system navigation and knowledge of services, expected and perceived stigma, and insufficient culturally safe and language-specific services posed significant barriers to healthcare ( Machado et al., 2021 ). Underutilization of healthcare services was particularly relevant in cases of specialized care, such as sexual health care. In the Netherlands, sexual health care is organized by general practitioners and Centers for Sexual Health (SHC). SHC offer free services, including STI/HIV testing, contraceptive care, vaccinations, and counselling to high-risk groups (e.g. young people under the age of 25 years). Several Dutch studies reported lower consultation rates at SHC among EMs aged under 25 years from Turkey, North-Africa, Eastern Europe, and Asia compared to the Dutch native population of this age group, indicating a lower proportional representation of several young EMs at sexual health services ( van Oeffelen et al., 2017 ; Ostendorf et al., 2021 ). 
Both studies were performed in specific Dutch regions, limiting their generalizability. Equitable access to sexual healthcare is especially relevant to young people due to their relatively higher risk of STIs and unintended pregnancies. Culturally sensitive prevention and care activities tailored to EMs could help to increase access, reduce the STI burden and improve sexual and reproductive health ( Furegato et al., 2016 ). To effectively target these initiatives, updated and more broadly generalizable data on the representation of young EMs in sexual healthcare across diverse urban areas are needed. Therefore, the current study aims to determine the proportional representation, using consultation rates, of several first-generation and second-generation EMs aged under 25 years visiting Dutch SHCs in both urban and rural areas and compare these to consultation rates among native Dutch individuals. Additionally, this study aims to gain insight into continued use of sexual health care by assessing incidence rates of repeat visits in EMs compared to native Dutch individuals. Methods 2.1 Study design In this retrospective cohort study, coded electronic health records of patients under the age of 25 years were included from all outpatient Public Health Sexual Health Centers (SHCs; also known as public health STI clinics) in the Netherlands (from 25 Public Health Services with 38 STI clinic locations spread across the country). Reporting of coded patient consultation data to the National Institute for Public Health and the Environment is standardized and mandatory for all SHCs, thereby covering virtually all consultations performed. The publicly funded Dutch SHC serve high-risk groups, including young people under the age of 25 years, men who have sex with men, and sex workers, offering free STI testing and sexual health counselling on, for example, condom use, use of contraceptives, sexual pleasure, sexuality and sexual identity, unwanted sexual experiences, and vaccinations (e.g. HPV). We extracted person-level data on sociodemographic characteristics (i.e. region of birth of patient and parents, age, gender and urbanity of the visited SHC). We used first-time consultations of patients and first repeat visits, using a patient identifier, in the timeframe of January 1st, 2016, to December 31st, 2021. First-time consultations include the first visit of a patient during the study period using a unique patient number. In addition to SHC data, we used nationwide census tract data from Statistics Netherlands (CBS) to calculate and compare consultation rates ( www.cbs.nl ). 2.2 Outcomes 2.2.1 EM groups and consultation rates First-generation EMs were defined as individuals born outside of the Netherlands. Second-generation EMs were defined as individuals born in the Netherlands of whom one or both of their parents were born outside of the Netherlands. If both parents were born in different EM groups, the classification was determined by the mother’s birth country. 
The classification of EMs was based on the classification of Statistics Netherlands, and the following EM groups were identified: western (West, North and South Europe, United States, Canada, Virgin Island, Newfoundland, Greenland and Oceania), and non-western regions, including Asian (Asia, excluding Turkey and Indonesia), East European (Bulgaria, former Soviet Union, former Yugoslavia, Hungary, Poland, Romania, Slovakia, Czech Republic), African (Africa, excluding Morocco), Latin American (North, Central and South America, excluding the United States and Canada), Surinamese and Dutch Antillean (Suriname and the Dutch Antilles), Turkish (Turkey), Indonesian (Indonesia) and Moroccan (Morocco). 2.3 Data analysis Descriptive statistics were used to describe the sociodemographic characteristics (age, gender, urbanity) of the study population. To calculate average annual consultation rates (hereafter referred to as consultation rates) per 1000 persons for each EM group, we first summed the number of first consultations of patients belonging to a specific EM in the age group of 15 till 24 years in the 6-year study period. This number was then divided by 6 to obtain an average annual number of consultations. We used a 6-year average rather than data from a single year to ensure more stable and reliable estimates, particularly for migrant groups with low consultation numbers. Subsequently, this average of first consultations of patients belonging to a specific EM was divided by the total number of inhabitants in the age group of 15 till 24 years belonging to that EM in the Netherlands in 2021 (January 1st till December 31st), multiplied by 1000. Only the most recent year for the total number of inhabitants per EM group in the census tract data was chosen, as these numbers remained relatively stable over time during the study period. Age of patients in the SHC data was determined as the date of visit minus the birth date. The information used on inhabitants belonging to an EM group can be found in Appendix I. We calculated 95 % confidence intervals (95 %CI) for these rates using standard methods for single rates ( http://vassarstats.net/prop1.html ). Consultation rates for EM groups were calculated separately for male and female EMs. General consultation rates were calculated separately for patients living in highly urban and less urbanized areas. As postal codes were largely missing for EMs in SHC data, level of urbanization was determined by the location of the consulted SHC. SHC in the provinces of Noord-Holland, Zuid-Holland and Utrecht were classified as highly urban, considering the highest environmental address density and also known as the urban conglomerate ‘Randstad’. The provinces of Noord-Brabant, Zeeland, Limburg, Gelderland, Overijssel, Friesland, Groningen, Drenthe and Flevoland were classified as less urban, considering the lower environmental address density. Consultation rates were compared between different first- and second-generation EM groups and the native Dutch population (as a reference group) using the absence or presence of confidence interval overlap. Furthermore, incidence rates of first repeat visits were calculated by dividing the number of first repeat visits after the initial STI clinic visit during the study period by the total number of person years (exposure time) in EMs, multiplied by 1000. Exposure time was defined as the time between the first consultation and the first repeat consultation or the end of the study period. 
We again calculated 95 %CI for these rates, and rates were compared by the absence or presence of overlap between the intervals. Statistical analyses were performed using IBM SPSS Statistics (Version 27.0, Armonk, NY, USA). 2.4 Ethical approval After official review, the Medical Ethics Committee of Maastricht University Medical Centre (MUMC+ METC 2017–0251) waived the requirement for formal ethical approval under prevailing laws in the Netherlands and confirmed that written informed consent was not needed because the data originated from standard care, were deidentified, and were analysed anonymously. Results The comparative national population of 15 to 24 year olds was 2,139,691 in 2021, based on the nationwide census tract data from Statistics Netherlands (CBS) (Appendix I). The proportion of EMs in the population was 28.5 % (609,902), and the proportion of non-western EMs was 22.5 % (481,676). The proportion of native Dutch people was 71.5 % (1,529,772), 10.5 % (225,411) for first-generation EMs, and 18.0 % (384,508) for second-generation EMs. The SHC study population consisted of 270,927 first-time consultations among patients under the age of 25 years between 2016 and 2021, after exclusion of 859 consultations (0.3 %) in which information on region of birth was missing. Of the patients included in the study population ( N = 270,927), two-thirds were women (63.7 %), 36.2 % were men and 0.1 % were transgender. Median age was 21 years with an IQR of 20 to 22 years. More than half visited an SHC in highly urban areas (55.0 %; 149,054) and 45.0 % (121,873) in less urban areas. The proportion of EMs among SHC patients under the age of 25 years was 25.5 % (69,179) and the proportion of non-western EMs was 18.7 % (50,737). The proportion of native Dutch patients was 74.5 % (201,748), 9.9 % (26,730) for first-generation EMs, and 15.7 % (42,449) for second-generation EMs. 3.1 Consultation rates for total EMs The consultation rate for native Dutch people was 22.0 per 1000 persons (95 %CI: 21.8–22.2). The consultation rate for all EMs, both first- and second-generation, was 18.9 (95 %CI: 18.6–19.2), for first-generation EMs 19.8 (95 %CI: 19.8–20.4) and for second-generation EMs 18.4 (95 %CI: 18.0–18.8). In non-western EMs, the consultation rate was 17.6 (95 %CI: 17.2–18.0), 17.1 (95 %CI: 16.5–17.7) in first-generation EMs and 17.8 (95 %CI: 17.3–18.3) in second-generation EMs. In highly urban areas, the consultation rate for all EMs was 19.0 (95 %CI: 18.6–19.5), 19.0 (95 %CI: 18.5–19.5) for non-western EMs, and 27.7 (95 %CI: 27.3–28.1) for native Dutch people. In less urban areas, the consultation rate for all EMs was 12.6 (95 %CI: 12.2–13.0), 14.8 (95 %CI: 14.2–15.4) for non-western EMs, and 18.1 (95 %CI: 17.8–18.4) for native Dutch people. 3.2 Consultation rates for different EM groups In both first- and second-generation EMs, consultation rates for people from Turkey, Morocco, Eastern Europe and Asia were lower than for native Dutch people. Consultation rates among people from Africa were lower than for native Dutch people for first-generation EMs, but not for second-generation EMs. Consultation rates among people from Indonesia, Suriname/Dutch Antilles, Latin America and other western countries were equal to or higher than among native Dutch people ( Fig. 1 ). 3.3 Consultation rates for male and female EM groups The consultation rate for native Dutch male patients was 14.5 (95 %CI 14.2–14.7) and for native Dutch female patients 29.8 (95 %CI 29.4–30.2). 
The consultation rate for all first-generation male EMs was 18.3 (95 %CI 17.5–19.1) and for first-generation female EMs 21.1 (95 %CI 20.3–22.0). For all second-generation EMs, the consultation rate was 17.4 (95 %CI 16.8–18.1) for men and 13.1 (95 %CI 12.9–13.2) for women. The consultation rate for first-generation non-western male EMs was 11.4 (95 %CI 11.1–11.7) and for non-western female EMs 19.8 (95 %CI 18.6–21.0). For second-generation non-western EMs, the consultation rate was 23.9 (95 %CI 23.0–24.9) for men and 19.9 (95 %CI 19.1–20.6) for women. For women, consultation rates among patients from Turkey, Morocco, Asia and Africa, or who had a parent from these areas, were lower than among native Dutch women. The consultation rate among men from Asia or with a parent from Asia was lower than among native Dutch people ( Fig. 2 ). 3.4 Incidence rates of first repeat visits for total EMs The incidence rate of first repeat visits for native Dutch people was 116 (95 %CI: 115–117) per 1000 person years. The incidence rate was 124 (95 %CI: 121–127) for all first-generation EMs and 146 (95 %CI: 144–149) for second-generation EMs. Among non-western EMs, the incidence rate was 139 (95 %CI: 135–142) in first-generation EMs, and 153 (95 %CI: 150–155) in second-generation EMs. Discussion This study shows that consultation rates at SHC were generally lower among non-western EMs than among native Dutch patients under the age of 25 years, in both first- and second-generation EMs. Consultation rates were especially lower among patients with an Asian, Eastern European, Moroccan or Turkish first- or second-generation migration background. For first-generation EMs, the consultation rate was also lower among patients with an African migration background. Although consultation rates in highly urban areas were generally higher than in less urban areas for both native Dutch patients and EMs, consultation rates among EMs were lower than among native Dutch people in both urban and less urban areas. Consultation rates among men were generally lower than among women, but lower consultation rates among non-western EMs were observed for both men and women. Lower consultation rates among patients with specific EM backgrounds were more pronounced in women. Repeat visits were higher among (non-western) EMs compared to native Dutch patients, suggesting that EMs might not be disadvantaged in continued use of sexual health care. This suggests that first access to sexual health care might be suboptimal for young EMs and indicates the importance of a culturally sensitive approach to sexual health promotion and service provision in both highly urban and less urban areas. Previous studies have already indicated disparities in access to sexual healthcare among migrants and EMs. The current study results are comparable to previous Dutch studies in SHCs in specific highly urban areas and in a less urban area showing lower consultation rates for patients with an Asian, Eastern European, Moroccan, Turkish and African ethnic background and higher consultation rates for patients from Latin America and Suriname/Netherlands Antilles ( van Oeffelen et al., 2017 ; Ostendorf et al., 2021 ). However, in these previous studies, no distinction was made between first- and second-generation EMs. Although our study showed some differences in the proportional representation of specific EMs at SHCs, the pattern of EMs with lower and higher consultation rates was comparable for first- and second-generation EMs, in both men and women. 
This suggests that barriers in accessing sexual health care are still present in persons born in the Netherlands with parent(s) born in the above-listed countries. A previous Dutch study of general practitioner (GP) data showed higher consultation rates for EMs, except for Turkish and Moroccan women, suggesting that some EMs prefer consulting the GP over SHCs ( Woestenberg et al., 2015 ). Our study shows that consultation rates were generally lower among men than among women, which is in line with national surveillance reports ( Kayaert et al., 2023 ). The lower consultation rates for patients with an Asian, Eastern European, Moroccan, Turkish and African ethnic background were more pronounced among women than among men. However, as men generally already tend to access the SHCs less and the consultation rates were also low for patients with the above-mentioned ethnic backgrounds, attention to improving access to SHCs should be given to both men and women. A scoping review ( Adrian Parra et al., 2024 ) has shown that social and structural determinants play a major role in explaining inequalities in sexual health outcomes and use of sexual health services among EMs. On the political level, economic crises can lead to migrants being framed as a threat to the economy and can create fear among EMs of being seen as ‘using too many resources’ ( Bloemraad et al., 2019 ). During economic crises in Europe, there is evidence of a higher STI risk among EMs ( Kentikelenis et al., 2015 ). In addition, unclear and contradictory policies on requirements such as proof of residence and insurance to access health care create barriers for both EMs and healthcare providers ( Adrian Parra et al., 2024 ; Jones et al., 2021 ; Martinez and Ormel, 2024 ; Bil et al., 2019 ; Nöstlinger et al., 2022 ; Pérez-Sánchez et al., 2024 ). On the socioeconomic level, discrimination, poverty and poor travel, living and working conditions for some EMs could lead to poor sexual health outcomes, such as sexual violence ( Adrian Parra et al., 2024 ; Alessi et al., 2021 ). Furthermore, there are cultural, religious and linguistic barriers related to healthcare provision and access ( Nöstlinger et al., 2022 ; Barrio-Ruiz et al., 2024 ; Dillon et al., 2024 ; Souleymanov et al., 2023 ). On the individual level, knowledge of and attitudes towards sexual health services and experienced or expected stigma regarding migrant status and sexual preferences, such as homosexuality, play a role in explaining disparities ( Adrian Parra et al., 2024 ; Martinez and Ormel, 2024 ; Nöstlinger et al., 2022 ; Pérez-Sánchez et al., 2024 ). Furthermore, experiences and familiarity with healthcare systems in departure countries might also shape migrants’ practices in sexual healthcare seeking in transition and destination countries ( Kamenshchikova et al., 2024 ). Based on previous research, participative and culturally tailored approaches taking into account the specific healthcare needs and the barriers on these different levels ( Kamenshchikova et al., 2024 ; Bouaddi et al., 2023 ; Inthavong and Pourmarzi, 2024 ; Candeias et al., 2021 ; Lozano et al., 2023 ; Cordel et al., 2022 ) should probably be used to tackle disparities in access to sexual health care among people with an Asian, Eastern European, Moroccan, Turkish and African ethnic background. 
The use of a large dataset including diverse EMs visiting sexual healthcare centers in both highly urban and less urban areas, combined with national population numbers, provides insight into the representation of vulnerable groups at sexual healthcare centers. However, a general limitation of this study is that we were only able to use SHC data and no data from other sexual healthcare providers, such as general practitioners (GPs). It might be possible that several EMs more frequently visit GPs for sexual healthcare ( Woestenberg et al., 2015 ), possibly due to a lack of knowledge of SHCs, and thus might not use sexual healthcare less frequently but rather in other care settings. Nevertheless, access to the free and anonymous SHCs should be equal for all young people regardless of migration background. Another limitation of this study is that the population numbers do not allow differentiation between men who have sex with men (MSM) and men having sex with women. Discrimination and persecution surrounding homosexuality exist in several cultures worldwide, such as in a large number of African countries, Southeast Asia, and Eastern Europe ( Poushter and Kent, 2020 ). This might lead to even lower consultation rates and unequal access to sexual healthcare among MSM from these countries. In conclusion, our study showed that several EMs were underrepresented at the Dutch sexual healthcare centers, including first- and second-generation EMs with an Asian, Eastern European, Moroccan and Turkish ethnic background. Using consultation rates by combining population numbers and sexual health center patient data helps to identify the population groups for which targeted approaches are needed to increase access. More research is needed into the factors related to lower consultation rates to inform culturally tailored approaches. Data sharing The sexual health center data used were explicitly made available for this study by the National Institute for Public Health and the Environment. Any data sharing requests should be directed to the National Institute for Public Health and the Environment (soap@rivm.nl). The nationwide census tract data are publicly available from Statistics Netherlands (CBS) ( www.cbs.nl ). Funding This work was supported by the Public Health Service South Limburg, the Netherlands. CRediT authorship contribution statement Y.J. Evers: Writing – review & editing, Writing – original draft, Visualization, Supervision, Methodology, Formal analysis, Data curation, Conceptualization. A. Verhaegh: Writing – review & editing, Writing – original draft, Visualization, Methodology, Conceptualization. A. Ibrahim: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Data curation, Conceptualization. C. Peters: Writing – review & editing, Conceptualization. N.H.T.M. Dukers-Muijrers: Writing – review & editing, Supervision. R. Reijs: Writing – review & editing, Supervision, Conceptualization. C.J.P.A. Hoebe: Writing – review & editing, Writing – original draft, Supervision, Conceptualization. Declaration of competing interest The authors declare no conflicts of interest. Acknowledgements The authors want to thank the Dutch Centres for Sexual Health (also referred to as STI clinics) and the National Institute of Public Health and the Environment (RIVM) for providing the national STI health records data used in this study. 
Appendix I: Frequencies of first visits and number of inhabitants per first- and second-generation EM group to calculate consultation rates
Columns: EM group; number of first visits in the 6-year period 2016–2021 (Sexual Health Centers electronic health records data, total N = 270,927); number of inhabitants in 2021 (census tract data from Statistics Netherlands^, total N = 2,139,691); consultation rate (95 %CI)*
Dutch; 201,748; 1,529,772; 22.0 (21.8–22.2)
First generation EMs
Turkey; 432; 6179; 11.7 (9.3–14.7)
Morocco; 326; 3846; 14.0 (10.7–18.2)
Eastern Europe; 3398; 48,259; 11.7 (10.8–12.7)
Asia; 3900; 53,482; 12.2 (11.3–13.2)
Africa; 2367; 25,117; 15.7 (14.2–17.3)
Other western countries; 10,025; 62,500; 26.7 (25.5–28.0)
Indonesia; 245; 2199; 18.6 (13.7–25.1)
Suriname/Dutch Antilles; 3797; 15,468; 40.9 (37.9–44.1)
Latin America; 2240; 8361; 44.6 (40.4–49.2)
Second generation EMs
Turkey; 3546; 62,033; 9.5 (8.8–10.3)
Morocco; 4249; 69,536; 10.2 (9.5–11.0)
Eastern Europe; 1700; 24,394; 11.6 (10.3–13.0)
Asia; 3936; 42,944; 15.3 (14.2–16.5)
Africa; 4749; 33,298; 23.8 (22.2–25.5)
Other western countries; 8417; 65,743; 21.3 (20.2–22.4)
Indonesia; 1972; 10,698; 30.8 (27.7–34.2)
Suriname/Dutch Antilles; 11,756; 63,263; 31.0 (29.7–32.4)
Latin America; 2124; 12,599; 28.1 (25.4–31.1)
^ https://opendata.cbs.nl/statline/#/CBS/nl/dataset/37325/table?ts=1678195852121 .
*To calculate the consultation rates, the number of first visits is first divided by 6 to obtain an average annual number of consultations.
Appendix II: Frequencies of first visits among men and number of inhabitants per first- and second-generation male EM group to calculate consultation rates
Columns: EM group; number of first visits in the 6-year period 2016–2021 (Sexual Health Centers electronic health records data); number of inhabitants in 2021 (census tract data from Statistics Netherlands^); consultation rate (95 %CI)*
Dutch; 67,960; 782,520; 14.5 (14.2–14.7)
First generation EMs
Turkey; 258; 3143; 13.7 (10.2–18.4)
Morocco; 212; 1798; 19.5 (14.0–27.0)
Eastern Europe; 1221; 13,265; 15.4 (13.4–17.6)
Asia; 2039; 28,816; 11.8 (10.6–13.1)
Africa; 1232; 13,695; 15.0 (13.1–17.1)
Other western countries; 4172; 28,253; 24.6 (22.9–26.5)
Indonesia; 90; 1014; 14.8 (9.0–24.3)
Suriname/Dutch Antilles; 1931; 5443; 59.2 (53.2–65.7)
Latin America; 1029; 4067; 42.3 (36.5–48.9)
Second generation EMs
Turkey; 2010; 31,958; 10.5 (9.4–11.7)
Morocco; 2489; 35,394; 11.7 (10.7–12.9)
Eastern Europe; 605; 3545; 28.5 (23.5–34.5)
Asia; 1456; 22,149; 11.0 (9.7–12.4)
Africa; 2130; 17,041; 20.8 (18.8–23.1)
Other western countries; 2872; 33,823; 14.2 (13.0–15.5)
Indonesia; 630; 5525; 19.0 (15.7–23.0)
Suriname/Dutch Antilles; 4954; 9430; 87.6 (82.1–93.5)
Latin America; 825; 6502; 21.2 (18.0–25.0)
^ https://opendata.cbs.nl/statline/#/CBS/nl/dataset/37325/table?ts=1678195852121 .
*To calculate the consultation rates, the number of first visits is first divided by 6 to obtain an average annual number of consultations. 
Appendix III: Frequencies of first visits among women and number of inhabitants per first- and second-generation female EM group to calculate consultation rates
Columns: EM group; number of first visits in the 6-year period 2016–2021 (Sexual Health Centers electronic health records data); number of inhabitants in 2021 (census tract data from Statistics Netherlands^); consultation rate (95 %CI)*
Dutch; 133,608; 747,252; 29.8 (29.4–30.2)
First generation EMs
Turkey; 168; 3036; 9.2 (6.4–13.3)
Morocco; 113; 2048; 9.3 (5.9–14.4)
Eastern Europe; 2167; 14,618; 24.7 (22.3–27.3)
Asia; 1844; 24,666; 12.4 (11.1–13.9)
Africa; 1131; 11,422; 16.5 (14.4–19.1)
Other western countries; 5820; 34,146; 28.4 (26.7–30.2)
Indonesia; 154; 1185; 21.9 (15.0–32.0)
Suriname/Dutch Antilles; 1853; 5500; 56.2 (50.4–62.6)
Latin America; 1178; 4294; 45.6 (39.8–52.3)
Second generation EMs
Turkey; 1532; 30,075; 8.5 (7.5–9.6)
Morocco; 1757; 34,142; 8.6 (7.7–9.6)
Eastern Europe; 1092; 3395; 53.6 (46.5–61.7)
Asia; 2474; 20,795; 19.8 (18.0–21.8)
Africa; 2616; 16,257; 26.8 (24.4–29.4)
Other western countries; 5534; 31,859; 28.9 (27.2–30.8)
Indonesia; 1341; 5173; 43.3 (38.1–49.2)
Suriname/Dutch Antilles; 6780; 9180; 123.1 (116.5–130)
Latin America; 1297; 6097; 35.4 (31.1–40.4)
^ https://opendata.cbs.nl/statline/#/CBS/nl/dataset/37325/table?ts=1678195852121 .
*To calculate the consultation rates, the number of first visits is first divided by 6 to obtain an average annual number of consultations.
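To make the calculation described in the Methods and footnoted in the appendices reproducible, the sketch below (not part of the paper) computes an average annual consultation rate per 1000 persons and a 95 % confidence interval from a count of first visits and the corresponding population. The paper cites the VassarStats single-proportion calculator for its intervals; the Wilson score interval used here is an assumption that approximately reproduces the reported values, with small differences possibly arising from the exact interval method and rounding.

```python
import math

def consultation_rate(first_visits: int, inhabitants: int,
                      years: int = 6, z: float = 1.96):
    """Average annual consultation rate per 1000 persons with a Wilson 95% CI.

    first_visits: number of first consultations over the whole study period.
    inhabitants:  population of the same group in the reference year (2021).
    """
    annual_visits = first_visits / years              # 6-year average, as in the paper
    p = annual_visits / inhabitants                   # proportion consulting per year
    n = inhabitants
    # Wilson score interval for a single proportion.
    centre = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    lo, hi = (centre - half) / denom, (centre + half) / denom
    return 1000 * p, 1000 * lo, 1000 * hi

# Native Dutch patients (Appendix I): 201,748 first visits, 1,529,772 inhabitants.
print([round(v, 1) for v in consultation_rate(201_748, 1_529_772)])
# -> [22.0, 21.7, 22.2]; the paper reports 22.0 (95 %CI 21.8-22.2)

# First-generation Turkish patients (Appendix I): 432 first visits, 6,179 inhabitants.
print([round(v, 1) for v in consultation_rate(432, 6_179)])
# -> [11.7, 9.3, 14.6]; the paper reports 11.7 (95 %CI 9.3-14.7)
```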
REFERENCES:
1. ADRIANPARRA C (2024)
2. AICKEN C (2020)
3. ALESSI E (2021)
4. BARRIORUIZ C (2024)
5. BIL J (2019)
6. BLOEMRAAD I (2019)
7. BOTFIELD J (2016)
8. BOUADDI O (2023)
9. CANDEIAS P (2021)
10. CORDEL H (2022)
11. DAVIES A (2006)
12. DILLON F (2024)
13. EGLIGANY D (2021)
14. FUREGATO M (2016)
15. HUGHES G (2015)
16. INTHAVONG A (2024)
17. (2024)
18. JONES B (2021)
19. KAMENSHCHIKOVA A (2024)
20. KAYAERT L (2023)
21. KENTIKELENIS A (2015)
22. LAUMANN E (1999)
23. LOZANO A (2023)
24. MACHADO S (2021)
25. MARTINEZMARTINEZ V (2024)
26. NOSTLINGER C (2022)
27. OSTENDORF S (2021)
28. PEREZSANCHEZ M (2024)
29. POUSHTER J (2020)
30. SAUNDERS C (2021)
31. SOULEYMANOV R (2023)
32. VANOEFFELEN A (2017)
33. WOESTENBERG P (2015)
|
10.1016_j.atech.2023.100231.txt
|
TITLE: Performance evaluation of YOLO v5 model for automatic crop and weed classification on UAV images
AUTHORS:
- Ajayi, Oluibukun Gbenga
- Ashi, John
- Guda, Blessed
ABSTRACT:
For sustainability and efficiency in maintaining high crop yields and less chemically polluted agricultural lands, precise weed mapping is essential for the full implementation of site-specific weed management, which currently stands as a major challenge in present-day agriculture. In this research, the effect of the number of training epochs on You Only Look Once (YOLO) v5s, a Convolutional Neural Network (CNN) model, was evaluated for the development of an automatic crop and weed classification scheme using UAV images. The images were annotated using bounding boxes, and the model was trained on Google Colaboratory for 100, 300, 500, 600, 700 and 1000 epochs. The model detected and categorized five different classes: sugarcane (Saccharum officinarum), banana trees (Musa), spinach (Spinacia oleracea), pepper (Capsicum), and weeds. To find the optimal performance on the test set, the model was trained across several epochs, and training was stopped when the test performance (classification accuracy, precision, and recall) began to drop. The obtained results show that the performance of the classifier improved significantly as the number of training epochs rose from 100 through to 600. Meanwhile, a slight decline was observed as the number of epochs was increased to 700, where a classification accuracy, weed precision and recall of 65%, 43% and 43%, respectively, were recorded, as against the 67%, 78% and 34% obtained for classification accuracy, weed precision and recall, respectively, at 600 epochs. This decline continued when the number of epochs was increased to 1000, where a classification accuracy, weed precision and recall of 65%, 45% and 40%, respectively, were obtained. The results showed that the number of training epochs significantly affects the robustness of YOLOv5s in automatic crop and weed classification and identified 600 epochs as the setting for optimal performance.
BODY:
1 Introduction One of the primary goals of the United Nations is to eradicate all kinds of hunger and malnutrition by ensuring that everybody, particularly adolescents and disadvantaged individuals within society, enjoys access to a consistent supply of sufficient and nutritious food by 2030. In order to promote sustainability in agricultural methods and ensure that this goal is achieved, local farmers' livelihoods and skills must be improved while farmers are provided with equitable access to resources, which include land, technological innovations, and markets [ 1 ]. There is evidence to suggest that each 1% increase in agricultural output reduces the percentage of truly impoverished families by between 0.6% and 1.2% worldwide [ 2 ]. In the meantime, it has been estimated that the world population will reach 9.7 billion by 2050, necessitating an increase of roughly 70% in agricultural productivity to satisfy the rise in demand [ 3 ]. Consequently, the achievement of the SDGs, especially as they pertain to zero hunger and agro-production, is strongly affected by weeds, which have a considerable impact on crops, posing a serious threat to farms and reducing yields when they are not properly controlled and monitored [ 4 ]. According to Vilà et al. [ 5 ], yield damage caused by non-native weeds may account for 42% of crop production. These undesired, invasive, and harmful plants hinder the emergence of other crops, which has an impact on anthropogenic practices, biological forces, and the nation's economy [ 6 ]. Some of the most popular weed management techniques since the advent of agriculture have been human (hand) weeding, mechanized weeding, and herbicide sprays, with hand weeding predating the fabrication of hand implements for soil cultivation and weed eradication [ 7 , 8 , 9 , 10 , 11 , 12 ]. With the utilization of these weed control and management strategies, weed infiltration levels have been kept low and agricultural yields have increased globally, but these strategies are not without their share of difficulties and disadvantages. The main difficulties in hand weeding include declining availability of labor, rising labor costs, and uneven weed control [ 13 , 14 ]. In a related manner, mechanical weed management necessitates increased soil turnover, which can disrupt the morphology of the soil and decrease its nutrients [ 15 ]. The time and cost implications of mechanical weed control are also not always favourable [ 16 ]. Some of the impacts of chemical use include herbicide-resistant weeds, adverse health consequences, and ecological pollution [ 17 , 18 , 19 ]. Therefore, it is crucial to diversify existing contemporary weed management techniques due to the difficulties of traditional weed control strategies. Pixel-based techniques are among the most frequently used methods in image processing. The typical pixel-based method performs computational evaluation at the pixel level and largely focuses on spectral data, ignoring the potential of textural and spatial variables to increase algorithm accuracy [ 20 ]. Convolutional Neural Networks (CNNs), by contrast, have recently evolved to take into account the textural, spatial and spectral features of photographs, enabling improved or optimized classification, handling large amounts of data concurrently, and effectively decreasing the ambiguity of the algorithm [ 21 ]. 
Applications involving text synthesis, object classification, autonomous machine translation, and other fascinating topics all use deep learning techniques. In Precision Agriculture, deep learning has been deployed to distinguish weeds from the crop zone and anticipate their occurrence [ 6 , 22 , 23 ]. The findings of the performance evaluation of a deep learning algorithm known as You Only Look Once (YOLO) v5s in the development of an automated weed classification scheme are presented in this research. 2 Literature review Site-Specific Weed Management (SSWM) is an approach which involves modifying weed control within a farm to allow for variations in weed population size, distribution, and diversity [ 24 , 25 ]. In farmlands, weed populations are frequently distributed erratically. As a result, the foundation of this core strategic approach is to offer a weed geographical information map that will aid the application of agrochemicals in a controlled system, such that the chemicals are applied directly where needed, while also using other strategies, such as applying plant derivatives with an allelopathic effect (i.e., natural weed killers) to reduce chemical contamination [ 26 ] and soil, water, and air pollution. The detection and mapping of weeds are the fundamental steps in executing a SSWM methodology. This entails merging sensors, processing methods, and actuation systems for weed map creation. Though they may examine larger regions, conventional remote sensing technologies like piloted aircraft and earth observation satellites offer lower temporal and spatial imaging resolution [ 27 ]. Due to their increased cost-effectiveness and ease of use, unmanned aerial vehicles (UAVs) have demonstrated great prospects for operating at lower altitudes, thereby providing improved image spatial resolution in agricultural production [ 28 , 29 ]. The complete automation of the weed management system will lead to a considerable reduction in the amount of human input or effort needed for the execution of different tasks. To achieve this, there have been recent proposals for classification schemes to recognize crops and distinguish crops from weeds [ 30 , 31 ]. The traditional object detection methods, which include Support Vector Machine [ 32 ], Random Forest (RF) [ 33 ], Decision tree [ 34 ], k-Nearest Neighbour (kNN) [ 35 ], and Artificial Neural Network [ 36 , 32 ], require the effort of specially skilled experts to extract handcrafted features (such as SIFT features, leaf digital features, and texture features). The adaptability of these techniques has to be significantly enhanced due to the complicated modeling process and challenging implementation. Also, because the accuracy of the weed identification procedure heavily relies on the classifier and training features utilized for the user-defined classes, particularly with regard to the spatial and spectral pixel density of the images, conventional techniques cannot undertake weed identification and classification from crops optimally. Several CNN-based architectures have been proposed for object detection, which include AlexNet [ 37 ], Faster-RCNN [ 38 ], GoogLeNet [ 39 ], LeNet-5 [ 40 ], and single-stage algorithms such as SSD [ 41 ], VGG [ 42 ], You Only Look Once (YOLO) [ 43 ], etc. Various researchers have used the region-based detection classifiers for object identification. 
For example, Liu & Chahl [44] identified weeds among soybean seedlings with the aid of a k-means model and a CNN. To enhance the accuracy of weed recognition, improved parameter values were obtained using the k-means training approach for uncontrolled indicators. The suggested approach achieved 92.89% accuracy, outperforming detection results from two-layer networks without parameter tuning and from convolutional neural networks with random initialization. The authors also demonstrated that parameter tuning has a considerable impact on the gain in accuracy. In developing a weed detection scheme, Lavania & Matey [31] used image segmentation based on the 3D Otsu approach to distinguish between farm produce and weeds, performing classification by compressing 3D image vectors with Principal Component Analysis (PCA). Urmashev et al. [45] developed a method for the segmentation and identification of weeds based on Mask R-CNN. A ResNet-101 network was used to extract semantic and spatial information related to weeds, and output modules computed the classification, regression and segmentation losses. On average, the Mask R-CNN accuracy was 0.853, outperforming the Sharp Mask and Deep Mask models. The concept of automating weed removal using machine learning algorithms was investigated by Sarvini et al. [46]. The data sample comprised four (4) varieties of economic crops and two (2) varieties of weeds, and the effectiveness of the developed classification models was compared for both ANNs and CNNs. Although region-based identification classifiers achieve a high degree of precision, they require a lot of processing and perform poorly in real time. dos Santos Ferreira et al. [47] developed an algorithm that detects weeds in crops and distinguishes between broadleaf and grass weeds; convolutional networks provided highly accurate classification results for all weed types. However, the classification models used by the authors were not evaluated in terms of either speed or performance. Therefore, to address the challenges of high computational complexity, poor real-time performance, and long processing times, which are the major gaps observed in the reviewed literature, a single-stage target detection algorithm, You Only Look Once (YOLO) v5s, was evaluated in this research. The research involved acquiring overlapping aerial photographs of a mixed-crop farm using a UAV fitted with a Red Green Blue (RGB) camera. These images were labelled, resized, and then used to train the YOLOv5s model for the detection and classification of both crops and weeds. Furthermore, the research analysed the impact of different numbers of training epochs on the overall accuracy of the weed classification scheme. The model splits a photograph into regions and forecasts bounding boxes and probabilities for each region using the full picture within a single evaluation [48]. In summary, this research investigates the performance of YOLOv5s in the automatic extraction and mapping of weeds and crops in a mixed-crop farmland.
The performance was evaluated at six different numbers of training epochs (100, 300, 500, 600, 700, and 1000) to identify the point at which over-fitting begins to set in during training, and to monitor how accuracy varies with the number of epochs in order to determine the setting that yields optimal accuracy for automatic weed detection. 3 Materials and methods 3.1 Research area The test site for this research is a mixed-crop farmland of approximately 2.8 hectares located in the Lapan Gwari neighborhood of Minna, the state capital of Niger State, Nigeria. The site lies between (9°31′33″N, 6°30′02″E) and (9°31′37″N, 6°30′05″E), at an elevation of 250.626 m (see Fig. 1). The soil on the farmland is primarily loamy, and the farm is equipped with a pumping machine connected to a river for irrigation. The crops cultivated on the farm include banana (Musa), pepper (Capsicum), spinach (Spinacia oleracea) and sugarcane (Saccharum officinarum). 3.2 General methodological outlook The methodological approach adopted for this research comprises four main steps: data acquisition, pre-processing, training, and evaluation. These steps are described in Sections 3.2.1-3.2.5. Feature extraction was performed with the CNN architecture, whose deep convolutional layers draw out features from the acquired input photographs. The process flow diagram of the YOLOv5 architecture used in designing the automatic crop and weed classification model is presented in Fig. 2. 3.2.1 Data acquisition A DJI Phantom 4 drone equipped with an RGB camera sensor with a 5.74 mm focal length and a resolution of 12 MP was used for image data acquisition. The drone was flown at a height of around 30 m above ground level to capture overlapping image pairs of the test site. The flight mission was conducted at around 12:00 noon to ensure proper illumination and minimize shadow formation in the images. About 254 images with side and front overlaps of 75% and a 0.5 m spatial resolution were collected. To georeference the data and support post-processing, eight (8) field markers were placed evenly throughout the test site to serve as Check Points (CPs) and Ground Control Points (GCPs), as presented in Fig. 3. The center of each field marker was marked and used to generate the spatial coordinates of the GCPs and CPs with a dual-frequency Global Navigation Satellite System (GNSS) receiver operating in Real-Time Kinematic (RTK) mode. 3.2.2 Image pre-processing The image pre-processing steps performed on the acquired images in preparation for classification were as follows (see Fig. 4): (a) Image resizing: Each raw image in the dataset, originally 4000 × 3000 pixels, was scaled to 416 × 416 pixels to match the input layer of YOLOv5s, because the original dimensions exceeded the memory of the system used for processing. An adaptive interpolation technique was used to implement the resizing. (b) Annotation of data: Labeling marks the features in the images that show particular objects or crops by surrounding each one with a bounding box. The downscaled images were annotated to delineate the proposed regions, each corresponding to an individual crop.
The weed patches were annotated in each individual sub-image with rectangular bounding boxes using the Python labeling tool LabelImg. The tagging procedure covered the five classes of crops and weeds; throughout the training phase, the labeler enclosed the various crop and weed zones in the field with rectangular bounding boxes. (c) Data splitting: The whole collection of data (254 images) was partitioned into three subsets of 70%, 10% and 20%, representing the training, validation, and test datasets, respectively. This split was necessary to avoid overfitting and to assess the model's performance: the validation dataset was used to monitor the performance of the model during training, and the test dataset was used to assess the model's performance after training. 3.2.3 Implementation architecture The object detection task involves the two-step process of object localization and classification. The localization step detects the presence of objects of interest in the image and delineates them with bounding boxes, while the classification step assigns each detected object to one of the 5 classes corresponding to the crop types cultivated in the study area. The YOLOv5 model works end-to-end, carrying out both localization and classification directly. It comprises three parts: the backbone, neck, and head [49]. Features are extracted from the aerial photographs by the model's backbone; the Cross Stage Partial (CSP) network was used as the backbone to quickly retrieve characteristic features from the photographs, while pyramids of the extracted features are generated by the neck component. The Path Aggregation Network (PANet) was implemented as the model's neck, and the model's head performs the final detection by applying anchor boxes to features such as weeds and crops. A primary contribution of YOLOv5 is the implementation of fast training processes in Darknet, an architecture compatible with GPU computation, and the classifier achieves remarkable accuracy even for small elements. The architectural design of YOLOv5, which depicts the identification procedure for the crops and weeds, is presented in Xu et al. [50]. 3.2.4 Training process The YOLOv5 model was implemented using PyTorch, a well-known open-source deep learning framework. Because of the computing requirements for training the algorithm to high accuracy, the code was run on cloud GPUs provided through a Pro subscription. A GPU (R-80) with 16 GB of RAM (NVIDIA GeForce GTX TITAN X) was used for the training and detection operations, a virtual machine running Ubuntu 18.04 with GPU acceleration served as the computing environment, and Python was used for the coding. The dataset was assembled from the images with labelled bounding boxes around the weeds and crops to be detected. In training the YOLOv5s model, a number of arguments were passed, including an image size of 416 × 416 and a batch size of 16 photographs. Separate datasets for training, validation, and testing were then created, containing 70%, 10% and 20% of the total data, respectively. Training epochs were set at 100, 300, 500, 600, 700, and 1000. The output layer of the network has 5 nodes corresponding to the 5 classes of crops and weed set for the model's classification; the dataset location was specified, and training was carried out using the pre-trained weights made available by the YOLO developers. The image data were first passed through the CSP backbone, in which a focus module retrieves informative features of weeds and crops, and the head component ultimately reports outputs such as class, confidence score, position, and object size. Training losses and performance data were saved to TensorBoard and to a logfile to evaluate the effectiveness of the YOLOv5s model. After this, inference was run with the trained weights on the test images, which is how real-world predictions and classifications are made.
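To make the pre-processing and training set-up above concrete, the following minimal Python sketch illustrates the resizing to 416 × 416, the 70/10/20 split, and a typical YOLOv5s training invocation. It is an illustrative reconstruction rather than the authors' code: the folder names and the data-configuration file (weedcrop.yaml) are hypothetical, OpenCV is assumed for the resizing step, and the train.py flags are those documented for the public Ultralytics YOLOv5 repository.

import random
from pathlib import Path

import cv2  # OpenCV, assumed available for the resizing step

RAW_DIR = Path("raw_images")      # hypothetical folder of 4000 x 3000 drone photographs
OUT_DIR = Path("dataset/images")  # hypothetical YOLO-style output folder

def resize_and_split(seed: int = 0) -> None:
    """Scale every image to 416 x 416 and split 70/10/20 into train/val/test folders."""
    images = sorted(RAW_DIR.glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    bounds = {"train": (0, int(0.70 * n)),
              "val":   (int(0.70 * n), int(0.80 * n)),
              "test":  (int(0.80 * n), n)}
    for split, (lo, hi) in bounds.items():
        split_dir = OUT_DIR / split
        split_dir.mkdir(parents=True, exist_ok=True)
        for src in images[lo:hi]:
            img = cv2.imread(str(src))
            img = cv2.resize(img, (416, 416), interpolation=cv2.INTER_AREA)  # match the YOLOv5s input size
            cv2.imwrite(str(split_dir / src.name), img)
            # The matching LabelImg annotation file (.txt, YOLO format) would be copied alongside each image.

if __name__ == "__main__":
    resize_and_split()
    # Training with the public Ultralytics YOLOv5 repository script, e.g. for the 600-epoch run:
    #   python train.py --img 416 --batch 16 --epochs 600 --data weedcrop.yaml --weights yolov5s.pt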
3.2.5 Performance evaluation The YOLOv5 algorithm was assessed for performance and speed on both the validation and testing datasets using several metrics, which include Accuracy (A), Precision (P), Recall (R), and F1-score (F1). Kamilaris & Prenafeta-Boldú [51] affirmed that these metrics are frequently used in deep learning applications. (1) Accuracy = (True positive + True negative) / (True positive + True negative + False positive + False negative) (2) Precision = True positive / (True positive + False positive) (3) Recall = True positive / (True positive + False negative) (4) F1 score = 2 × (Average Precision × Average Recall) / (Average Precision + Average Recall) (5) Average Precision = [P(Weed) + P(Banana) + P(Sugarcane) + P(Pepper) + P(Spinach)] / 5 (6) Average Recall = [R(Weed) + R(Banana) + R(Sugarcane) + R(Pepper) + R(Spinach)] / 5 where True Positive (TP) is the number of positive samples correctly categorized as positive; True Negative (TN) is the number of negative samples correctly identified as negative; False Positive (FP) is the number of negative samples mistakenly identified as positive by the algorithm; and False Negative (FN) is the number of positive samples incorrectly categorized as negative. The accuracy (Eq. (1)) gives the fraction of samples across all classes that were correctly classified, but it does not show how the model performs on each crop or on the weeds. The per-class precision shows the fraction of correct classifications out of all predictions made for that class, while the per-class recall shows the fraction of correct classifications out of all the ground-truth instances of that class; Eqs. (5) and (6) average these over the five classes. The acceptable level of performance for these metrics varies with the application scenario: some use cases are more tolerant of low recall, others of low precision. For example, when spraying hazardous herbicides, low precision on the weed class may be disastrous, because crops may be mistakenly destroyed.
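As a concrete illustration of Eqs. (2)-(6), the short Python sketch below computes per-class precision and recall and the macro-averaged precision, recall and F1 score from confusion-matrix counts. It is a minimal sketch for clarity, not the authors' evaluation code, and the example counts at the bottom are hypothetical.

def classification_metrics(counts):
    """counts: {class_name: {"tp": ..., "fp": ..., "fn": ...}} read off the confusion matrix.
    Returns per-class precision/recall plus the macro averages and F1 of Eqs. (2)-(6)."""
    precision, recall = {}, {}
    for c, n in counts.items():
        precision[c] = n["tp"] / (n["tp"] + n["fp"]) if (n["tp"] + n["fp"]) else 0.0
        recall[c] = n["tp"] / (n["tp"] + n["fn"]) if (n["tp"] + n["fn"]) else 0.0
    avg_p = sum(precision.values()) / len(precision)  # Eq. (5)
    avg_r = sum(recall.values()) / len(recall)        # Eq. (6)
    f1 = 2 * avg_p * avg_r / (avg_p + avg_r) if (avg_p + avg_r) else 0.0  # Eq. (4)
    return precision, recall, avg_p, avg_r, f1

# Hypothetical counts for two of the five classes, for illustration only:
demo = {"weed":   {"tp": 40, "fp": 14, "fn": 85},
        "pepper": {"tp": 70, "fp": 10, "fn": 30}}
print(classification_metrics(demo))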
4 Results and discussions The training loss per epoch was determined in order to evaluate how well the network training process performed. The network was trained for 100, 300, 500, 600, 700 and 1000 epochs. The results showed that the training losses decreased throughout training in Fig. 5(a) to Fig. 5(f), meaning that the model was learning; as the model passes through more epochs, the training loss in later epochs becomes smaller. After 300 epochs (Fig. 5(b)), a mostly constant loss is observed from Fig. 5(c) to Fig. 5(f). This indicates that the network was learning with increasing accuracy and that the training loss was close to its minimum, as illustrated graphically in Fig. 5(a)-(f). From Fig. 5(d) and (e), the loss curve showed no significant improvement, meaning that the training curve flattened out at 600 epochs (Fig. 5(d)). The result also shows that the classifier's performance improves as the loss decreases. The number of epochs is displayed on the x-axis and the loss value on the y-axis; the graphs were extracted from the TensorBoard visualization. 4.1 Validation graphs from YOLOv5s The developed model's performance was evaluated using the validation dataset. The step-by-step validation losses of the model are shown in Fig. 6(a) to (f). The results show that the model converged poorly at 100 epochs (Fig. 6(a)), converged better as the loss decreased through training up to 600 epochs (Fig. 6(d)), and remained essentially constant up to 1000 epochs (Fig. 6(f)). This is consistent with the model continually updating its parameters and learning representative features of the crops and weeds without "overfitting" (when the network learns the training data very well but performs poorly on unseen data) or "underfitting" (when the algorithm is unable to model either the training data or the testing data). Fig. 6(a)-(f) indicates a near-optimal case. The y-axis represents the validation loss values and the x-axis the range of epochs. The confusion matrix obtained at 100 epochs for the multi-class classification (five classes) is shown in Fig. 7. The proportion of TP elements for each class is displayed on the diagonal (top left to bottom right): 53% of all objects in the banana class, 32% of all objects in the spinach class, 10% of all objects in the sugarcane class, and 1% of all objects in the weed class were properly categorized. Additionally, 13% of all banana-class objects were mistakenly predicted as sugarcane-class objects and another 33% of the banana-class objects were tagged as unidentified/unclassified. 3% of all spinach-class objects were misclassified as sugarcane and 65% were classified as unknown; 5% of all sugarcane-class objects were misclassified as spinach while 86% were classified as unknown; and 99% of all weed-class objects were classified as unknown and not assigned to any class by the classifier. In Table 1, the Precision values range from 0 (no precision) to 1.0 (perfect precision). The most precise class was 'pepper' (approximately 0.947), followed by 'spinach', 'banana', and 'sugarcane', with 'weed' the least precise (approximately 0.0504). The Recall values likewise range from 0 (no recall) to 1.0 (ideal recall).
The class with the best Recall was 'banana' (approximately 0.615), followed by 'spinach', 'sugarcane', 'weed' and 'pepper' (approximately 0.0167) as the least, meaning that the classifier found fewer positive samples of 'weed' and 'pepper' than of 'spinach', 'banana', and 'sugarcane'. Fig. 8(a) depicts the model's precision curve, which measures the proportion of accurate bounding box predictions, while Fig. 8(b) depicts the recall curve, which measures the percentage of true bounding boxes that were accurately predicted. In Fig. 8(a), the precision improved quickly from around 40 epochs through to 98 epochs, where it dropped slightly. In Fig. 8(b), the recall curve improved from 15 epochs onward, which means the model was gradually learning. The y-axis displays the precision value for (a) and the recall value for (b), while the x-axis displays the range of epochs for both (a) and (b). The confusion matrix obtained at 300 epochs for the multi-class classification is displayed in Fig. 9. The proportion of TP elements for every class is displayed on the diagonal, from top left to bottom right. The results show that 92% of all objects in the banana class, 70% in the pepper class, 97% in the spinach class, 84% in the sugarcane class, and 45% in the weed class were properly classified. Also, 8% of all banana-class features were identified by the model as unclassified; 2% of all pepper-class objects were misclassified as spinach and 28% were classified as unknown; 3% of all spinach-class objects were grouped as unknown; 16% of all sugarcane-class objects were categorized as unknown; and 55% of all weed-class objects were classified as unknown and not assigned to any class by the classifier. Table 2 presents the Precision and Recall values, with Precision ranging from 0 to 1. The class 'spinach' had the highest Precision and Recall (approximately 0.809 and 0.927, respectively), followed by 'banana', 'sugarcane', and 'pepper', while 'weed' yielded the lowest Precision and Recall (around 0.458 and 0.319). This indicates that while the algorithm was capable of detecting positive samples of 'spinach', 'banana', and 'sugarcane', it recognized fewer positive samples of the 'weed' class. Fig. 10(a) depicts the model's precision curve, which shows the percentage of precise bounding box predictions, while Fig. 10(b) depicts the recall curve, which shows the percentage of true bounding boxes that were accurately predicted. In Fig. 10(a), the precision improved quickly from 100 epochs through to 300 epochs, and in Fig. 10(b) the recall curve also improved quickly up to 300 epochs, which means that the model was learning. Precision and recall capture the model's performance, so the higher they are, the better the classifier. The y-axis displays the Precision values for (a) and the Recall values for (b), while the x-axis displays the range of epochs for both (a) and (b). The confusion matrix obtained at 500 epochs for the multi-class classification is shown in Fig. 11.
The proportion of TP elements for each class is displayed on the diagonal (top left to bottom right): 85%, 77%, 94%, 83% and 54% of all items in the banana, pepper, spinach, sugarcane and weed classes, respectively, were correctly categorized. Additionally, 15%, 23%, 6%, 17% and 45% of all objects of the banana, pepper, spinach, sugarcane, and weed classes, respectively, were classified as unknown, while 1% of the weed-class objects were classified as sugarcane. As shown in Table 3, the class with the best Precision and Recall was 'spinach' (approximately 0.902 and 0.875), followed for Precision by 'sugarcane', 'weed', and 'banana', with 'pepper' the least precise (approximately 0.560). The Recall values followed in the order 'sugarcane', 'banana', 'pepper', with the 'weed' class producing the lowest Recall (approximately 0.269); this indicates that while the classifier was capable of identifying samples of 'spinach', 'banana', and 'sugarcane', it did not identify as many positive samples of 'weed', although the 'weed' Precision improved markedly from 0.458 at 300 epochs to 0.747 at 500 epochs. Fig. 12(a) depicts the model's precision curve, which measures the percentage of precise bounding box predictions, while Fig. 12(b) depicts the recall curve, which measures the percentage of true bounding boxes that were accurately predicted. In Fig. 12(a), the precision improved quickly from around 100 epochs through to 500 epochs, and the recall curve (Fig. 12(b)) also improved quickly up to 500 epochs, which means the model was learning. The y-axis displays the precision value for (a) and the recall value for (b), while the x-axis displays the range of epochs for both (a) and (b). The confusion matrix obtained at 600 epochs for the multi-class classification is displayed in Fig. 13. The TP proportions for each class are displayed on the diagonal, from top left to bottom right: 100, 72, 94, 81 and 56% of all items in the banana, pepper, spinach, sugarcane, and weed classes, respectively, were properly classified. Also, about 28, 6, 19 and 44% of all objects of the pepper, spinach, sugarcane, and weed classes, respectively, were classified as unknown and could not be assigned to any class by the classifier. Table 4 shows that the 'spinach' class had the highest Precision (approximately 0.932), followed by 'banana', 'sugarcane', and 'weed', which improved to 0.782, with 'pepper' the least. The class with the highest Recall was 'banana' (approximately 1.000), followed by 'spinach', 'sugarcane', 'pepper', and 'weed' (approximately 0.338), which also improved from 0.269 at 500 epochs, meaning that the classifier was increasingly effective at identifying positive samples of 'weed'. Fig. 14(a) depicts the model's precision curve, which measures the percentage of precise bounding box predictions, while Fig. 14(b) depicts the recall curve, which measures the percentage of true bounding boxes that were accurately predicted. In Fig. 14(a), the precision improved quickly from around 100 epochs through to 600 epochs, and in Fig. 14(b) the recall curve also improved quickly up to 600 epochs, which means that the model was learning.
The y-axis displays the precision value for (a) and the recall value for (b), while the x-axis displays the range of epochs for both (a) and (b). The confusion matrix obtained at 700 epochs for the multi-class classification is shown in Fig. 15. The proportion of TP elements for every class is displayed on the diagonal (top left to bottom right): 92% of all items in the 'banana' class, 73% in the 'pepper' class, 94% in the 'spinach' class, 81% in the 'sugarcane' class, and 52% in the 'weed' class were properly classified. Also, 8%, 27%, 6%, 19%, and 48% of all objects of the banana, pepper, spinach, sugarcane, and weed classes, respectively, were classified as unknown and could not be assigned to any class by the classifier. The data in Table 5 reveal that the classifier was starting to level off: precision began to decline for every class except 'spinach', which saw a slight increase of 0.001. The Precision of 'weed' decreased from 0.782 to 0.433, although its Recall improved marginally from 0.338 to 0.429, indicating that the classifier had reached saturation at this epoch and that the loss curve had flattened out. This means that training for more than 700 epochs will probably not yield a meaningful improvement in the detection and classification of weeds. Fig. 16(a) depicts the model's precision curve and Fig. 16(b) the recall curve. In Fig. 16(a), the precision improved quickly from around 100 epochs through to 700 epochs, and in Fig. 16(b) the recall curve also improved up to 700 epochs, which means the model was still learning. The y-axis displays the precision value for (a) and the recall value for (b), while the x-axis displays the range of epochs for both (a) and (b). The confusion matrix obtained at 1000 epochs for the multi-class classification is shown in Fig. 17. The proportion of TP elements for every class is displayed on the diagonal (top left to bottom right): 97, 86, 73, 85 and 47% of all items in the spinach, sugarcane, pepper, banana, and weed classes, respectively, were properly categorized. Also, 15, 27, 3, 14 and 53% of all objects of the banana, pepper, spinach, sugarcane, and weed classes, respectively, were classified as unknown and could not be assigned to any class by the classifier. As shown in Table 6, the 'weed' Precision improved marginally from 0.433 to 0.449 while its Recall dropped from 0.429 to 0.403, implying that the classifier had already reached saturation and that there was no need to continue the iterations further. Fig. 18(a) depicts the model's precision curve and Fig. 18(b) the recall curve, which measure the percentage of precise bounding box predictions and of accurately predicted true bounding boxes, respectively. In Fig. 18(a), the precision improved from around 100 epochs through to 1000 epochs, and in Fig. 18(b) the recall curve also improved up to 1000 epochs, which means the model was learning. The y-axis displays the precision value for (a) and the recall value for (b), while the x-axis displays the range of epochs for both (a) and (b). The overall accuracy obtained for 100, 300, 500, 600, 700 and 1000 epochs was 16, 65, 66, 67, 65 and 64%, respectively.
Thus, a steady improvement in accuracy was observed as the number of epochs increased from 100 to 600, with the YOLOv5s batch size held constant. The run at 100 epochs gave an average Precision, Recall and F1 score of 33%, 19% and 24%, respectively. At 300 epochs, an average Precision, average Recall and F1 score of 63%, 71%, and 67%, respectively, were achieved. At 500 epochs, 69%, 76%, and 64% were obtained as the F1 score, average Precision, and average Recall, respectively, while at 600 epochs an F1 score, average Precision and average Recall of 75%, 82%, and 69%, respectively, were achieved. The prediction accuracy kept increasing with the number of training epochs, but at 700 epochs it dropped to 70%, 67%, and 70% for the F1 score, average Precision, and average Recall, respectively. This decline indicates that the model had reached a saturation point, and the trend continued at 1000 epochs, with an F1 score, average Precision, and average Recall of 71%, 71%, and 72%, respectively. The summary of these results is presented in Table 7. All epochs were processed on the free tier of Google Colab with a K80 GPU, 16 GB of RAM and a 12-hour runtime limit. The time expended was 4.62 minutes for 100 epochs, 11.88 minutes for 300 epochs, 18.48 minutes for 500 epochs, 22.92 minutes for 600 epochs, 25.86 minutes for 700 epochs, and 38.22 minutes for 1000 epochs. Table 7 compares the average precision, accuracy, average recall and F1 scores of the different epochs. Compared with the other settings, the model displayed the best outcomes in terms of accuracy, F1 score and precision at 600 epochs, making it the optimal training epoch for accurate crop and weed recognition. 4.1.1 Result of the automatic weed classification The performance of the automatic weed classification was evaluated; the run at 100 epochs gave a classification accuracy, weed precision, and weed recall of 16%, 5%, and 13%, respectively. For 300 epochs, 65%, 46% and 32% were obtained for classification accuracy, weed precision and weed recall, respectively, while 66%, 75% and 27% were obtained as the classification accuracy, weed precision and weed recall at 500 epochs. Furthermore, at 600 epochs a classification accuracy of 67% was achieved, together with a weed precision of 78% and a weed recall of 34%. On increasing the epochs to 700, the weed precision declined from 78% at 600 epochs to 43% at 700 epochs, while a classification accuracy of 65% and a weed recall of 43% were obtained. Finally, at 1000 epochs, 65%, 45%, and 40% were obtained as the classification accuracy, weed precision, and weed recall. In addition, the maximum weed Precision of the automatic weed classifier (78%) was attained at 600 epochs, and the weed detection accuracy began to decline at 700 epochs, when it fell to 63%. No considerable improvement in the accuracy of the weed classification was observed when the number of epochs was increased to 1000. The pattern of the weed visualization outcomes, showing how weeds of various sizes were identified within the mixed farm containing sugarcane, pepper, banana, and spinach, is presented in Appendices 1 to 6.
In Appendix 1, at 100 epochs, the YOLOv5s model was able to identify and classify weeds within the mixed-crop farm with a precision of approximately 63%; the number of epochs was too small for the model to learn fully. As observed in Appendix 2, at 300 epochs the algorithm likewise classified the weeds with a precision of only 63%, while in Appendix 3 the model classified the weeds with a precision of 74% at 500 epochs. At 600 epochs, as shown in Appendix 4, the weeds were classified with a precision of approximately 78% within the farmland. The weed class was identified and classified with a precision of approximately 63% in Appendix 5 at 700 epochs, and finally, at 1000 epochs (Appendix 6), weeds were classified with a precision of 51%. From the results at the different epochs, the weed precision reached its optimum of 78% at 600 epochs and began to decline as the number of epochs increased, to 63% at 700 epochs and then to 51% at 1000 epochs. Additionally, no significant improvement was observed beyond 700 epochs. 5 Conclusion Deep learning algorithms, and CNNs in particular, have proved to be very robust for highly accurate crop type classification in a mixed-crop farmland and for automatic weed recognition, and they can enhance the efficiency of real-time weed management. In this study, YOLOv5s, a CNN model, was implemented for crop classification and weed detection, and the model's performance was evaluated over different numbers of training epochs. Different metrics were used to evaluate the accuracy of the developed model, including the loss function graphs, precision and recall graphs, the confusion matrix, and other validation metrics such as the F1 score, accuracy, recall and precision. The findings of the study showed a steady improvement in the accuracy of the crop and weed classification scheme with increasing training epochs, though the model attained its optimal performance at 600 epochs, after which it became saturated and began to decline. The training losses also decreased throughout the training process, implying that the model was learning, and eventually flattened out at around 700 epochs, where the weed precision began to drop significantly. In summary, the YOLOv5s model demonstrated a significant advantage in computation time while achieving comparable detection accuracy in the identification and classification of weeds in a mixed-crop farmland. Since only about 254 images were used for this experiment, future research will consider investigating the performance of the model with more weed images and with images of better spatial resolution. CRediT authorship contribution statement Oluibukun Gbenga Ajayi: Conceptualization, Methodology, Supervision, Visualization, Resources, Data curation, Formal analysis, Writing – original draft, Writing – review & editing. John Ashi: Data curation, Funding acquisition, Formal analysis, Writing – review & editing. Blessed Guda: Methodology, Formal analysis, Writing – review & editing. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendices Appendix 1, Appendix 2, Appendix 3, Appendix 4, Appendix 5, Appendix 6
REFERENCES:
1. GIL J (2019)
2. LILIANE T (2020)
3. HUNTER M (2017)
4. ABOUZIENA H (2016)
5. VILA M (2004)
6. AJAYI O (2022)
7.
8. CHAUVEL B (2012)
9. GRIEPENTROG H (2010)
10. JABRAN K (2015)
11. RUEDAAYALA V (2011)
12. YOUNG S (2014)
13. CARBALLIDO J (2013)
14. GIANESSI L (2013)
15. SMITH R (2015)
16. BOND W (2001)
17. ANNETT R (2014)
18. HOPPIN J (2014)
19. STARLING A (2014)
20. BLASCHKE T (2010)
21. RAO A (2015)
22. RAO A (2013)
23. RASHID M (2012)
24. SOMERVILLE G (2020)
25. WESTWOOD J (2018)
26. ALSAMARAI G (2018)
27. RASMUSSEN J (2020)
28. MALAMIRI H (2021)
29. RODRIGUEZ J (2021)
30. DICICCO M (2017)
31. LAVANIA S (2015)
32. BAKHSHIPOUR A (2018)
33. DECASTRO A (2018)
34. WAHEED T (2006)
35. KRAMER O (2013)
36. AJAYI O (2022)
37. KRIZHEVSKY A (2017)
38. REN S (2015)
39. SZEGEDY C (2015)
40. LECUN Y (1998)
41. LIU W (2016)
42.
43. REDMON J (2016)
44. LIU H (2018)
45. URMASHEV B (2021)
46. SARVINI T (2019)
47. DOSSANTOSFERREIRA A (2017)
48. RADOVIC M (2017)
49. ZHU X (2021)
50. XU R (2021)
51. KAMILARIS A (2018)
|
10.1016_S2095-3119(21)63851-0.txt
|
TITLE: Protective effect of high-oleic acid peanut oil and extra-virgin olive oil in rats with diet-induced metabolic syndrome by regulating branched-chain amino acids metabolism
AUTHORS:
- Zhi-hao, ZHAO
- Ai-min, SHI
- Rui, GUO
- Hong-zhi, LIU
- Hui, HU
- Qiang, WANG
ABSTRACT:
High-oleic acid peanut oil (HOPO) and extra-virgin olive oil (EVOO) have been reported previously to have an attenuating effect on metabolic syndrome (MS). This study aimed to evaluate the metabolic effect of HOPO and EVOO supplementation in attenuating MS and the role of gut microbiota in regulating the metabolic profile. Sprague-Dawley rats were continuously fed with a normal diet, high-fructose and high-fat (HFHF) diet, HFHF diet containing HOPO, or a HFHF diet containing EVOO for 12 weeks. The metabolomics profiles of feces and serum samples were compared using untargeted metabolomics based on UPLC-Q/TOF-MS. Partial Least Squares Discriminant Analysis (PLS-DA) was used to identify the potential fecal and serum biomarkers from different groups. Correlation between gut microbiota and biomarkers was assessed, and pathway analysis of serum biomarkers was conducted. Differences in metabolic patterns in feces and serum were observed among different groups. There were 8 and 12 potential biomarkers in feces and 15 and 6 potential biomarkers in serum of HOPO group and EVOO group, respectively, suggesting that HOPO and EVOO supplementation mainly altered amino acids, peptides, and their analogs in feces and serum. The branched-chain amino acids (BCAAs) biosynthesis pathway was identified as a major pathway regulated by HOPO or EVOO. This study suggests that HOPO and EVOO supplementation ameliorate diet-induced MS, mainly via modulation of the BCAAs biosynthesis pathway.
BODY:
1 Introduction Metabolic syndrome (MS) is characterized by the co-occurrence of at least three of five clinical disorders: abdominal obesity, dyslipidemia, hypertension, hyperglycemia, and insulin resistance. It is widely acknowledged that MS raises the risk of many chronic metabolic diseases, such as cardiovascular disease (CVD) and type 2 diabetes (T2DM) (Alberti et al. 2006). Patients suffering from MS have twice the death risk and three times the heart attack or stroke risk of those without MS. The incidence of MS has been growing; it is estimated that a quarter of people worldwide suffer from MS (Saklayen 2018). Moreover, developing countries face a rapid rise in MS cases, which can be accounted for by changes in dietary habits and lifestyles. The current MS prevalence among adults in China is 24.2%, a huge rise from one decade ago (9.8%) using the same diagnostic criteria (Gu et al. 2005; Li et al. 2018). Accordingly, the global epidemic of MS has already become a huge threat, and it is urgent to find effective intervention strategies. Virgin olive oil is widely used as the main dietary oil in Mediterranean countries. Importantly, an association has been demonstrated between regular olive oil consumption and a decreased risk of MS (Martínez-González and Martín-Calvo 2013). The benefits of virgin olive oil have been attributed to its high monounsaturated fatty acid (MUFA) content and its bioactive components (Cristina et al. 2013). Similar to olive oil, high-oleic acid peanut oil (HOPO) is also a dietary vegetable oil with a high MUFA content, although the fatty acid profile and the composition and content of active ingredients of these two oils are different. The protective effects of HOPO and extra-virgin olive oil (EVOO) on MS rats have been described in our previous study (Zhao et al. 2019). HOPO and EVOO supplementation significantly suppressed body weight gain, the decrease in the HDL/LDL ratio, and the progression of insulin resistance: the HOPO and EVOO groups showed 59.83 and 81.08 g less body weight gain, 2.21 and 2.45 lower HOMA-IR, and 0.55 and 0.25 higher HDL/LDL ratios than the M group, respectively. Additionally, 16S rDNA sequencing confirmed that HOPO and EVOO supplementation prevented HFHFD-induced gut microbiota disturbance and promoted the proliferation of probiotics (Zhao et al. 2019). However, the metabolic effects of HOPO and EVOO supplementation in alleviating diet-induced MS remain unknown. Metabolomics is the qualitative or quantitative analysis of all small-molecule metabolites in tissues or biological fluids through high-throughput detection technology (Wu et al. 2018). It has become a promising analytical technique, widely used in disease diagnosis, toxicity screening, and drug efficacy assessment (Kirbaeva et al. 2014; Segers et al. 2019). Hence, a comparison of the metabolic effects of HOPO and EVOO supplementation in MS rats was conducted in the present study. The metabolomics profiles of feces and serum samples were compared using untargeted metabolomics based on ultra-performance liquid chromatography quadrupole/time-of-flight mass spectrometry (UPLC-Q/TOF-MS). The aim of the present study was to determine the changes in the fecal and plasma metabolome of rats fed HOPO and EVOO diets. This study is significant for understanding the protective mechanism of HOPO and EVOO against MS. 2 Materials and methods 2.1 Materials HOPO was kindly supplied by Luhua Group Co., Ltd.
(Laiyang, China), while EVOO was purchased from a local supermarket (Mueloliva Co., Ltd., Córdoba, Spain). Fatty acid compositions of HOPO and EVOO are shown in Table 1. Fructose was obtained in edible grade from SIWANG SUGAR Co., Ltd. (Binzhou, China). Chromatographic-grade solvents acetonitrile and methanol were obtained from Thermo Fisher Scientific (Beijing, China). Chromatographic-grade formic acid was obtained from Sigma-Aldrich (Beijing, China). 2.2 Animals and treatments Six-week-old male Sprague-Dawley rats were provided by Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). After acclimation for one week, 48 rats, randomly divided into four groups (n=12), were fed the designed experimental diets ad libitum for 12 weeks: (A) NC group (normal control, normal chow diet and normal water); (B) M group (model, high-fat diet and 10% fructose drinking water); (C) HOPO group (high-oleic acid peanut oil diet, the high-fat diet containing 10% HOPO, and 10% fructose drinking water); and (D) EVOO group (extra-virgin olive oil diet, the high-fat diet containing 10% EVOO, and 10% fructose drinking water). The diets were supplied by the Laboratory Animal Diet Platform of Trophic Animal Feed High-tech Co., Ltd. (Nantong, China). The diets' ingredients are shown in Table 2. The rats were kept in a well-ventilated room with a diurnal 12 h light cycle. The room temperature was maintained at (25±2)°C. 2.3 Sample preparation Forty rats (n=10 for each group) were selected randomly for fecal and serum sample preparation. Fresh fecal samples were collected during the final three days. Each fecal sample (100 mg) was mixed with 1.0 mL cold methanol in a 2.0-mL microcentrifuge tube, and ultrasound-assisted extraction was carried out in an ice bath for 60 min. The mixtures were centrifuged for 10 min (4°C, 12 000 r min−1). Then 800 μL of supernatant was transferred and concentrated by vacuum evaporation, and each extract was redissolved in 200 μL methanol and used for HPLC-MS. After 12 weeks, rats were anesthetized with sodium pentobarbital (2%, 50 mg kg−1) and blood samples were obtained from the carotid artery. The blood samples were then centrifuged at 2 000×g and 4°C for 15 min to collect the serum. Each serum sample (200 μL) was mixed with 600 μL cold methanol in a 1.5-mL microcentrifuge tube, then ultrasound-assisted extracted for 30 min. The mixtures were centrifuged for 10 min (4°C, 12 000 r min−1), and the supernatant was used for HPLC-MS analysis. 2.4 Gut microbiota analysis Bacterial genomic DNA was extracted from fecal samples using a QIAamp DNA Stool Mini Kit from Qiagen (Hilden, Germany) following the manufacturer's instructions. The V3-V4 variable region was amplified and then assessed by agarose electrophoresis and a NanoDrop spectrophotometer for integrity and quantity inspection, respectively. The 16S rDNA was paired-end sequenced and analyzed on the Illumina MiSeq 2500 System by Biomarker Technologies Co., Ltd. (Beijing, China). The details of gut microbiota sequencing and analysis were described in Zhao et al. (2019). 2.5 UPLC-Q/TOF-MS analysis The untargeted metabolomics technique was based on UPLC-Q/TOF-MS. Separation of metabolites was carried out by ultrahigh-performance liquid chromatography (UHPLC; Nexera UHPLC LC-30A System, Shimadzu Technologies, Japan) and screened with ESI-MS. The LC system included a C18 column (Agilent ZORBAX RRHD Eclipse Plus, 100 mm×3 mm, 1.8 μm).
The mobile phase was a mixture of solvent A (0.1% CH3COOH in water) and solvent B (pure acetonitrile) with gradient elution (0-0.5 min, 5% A and 95% B; 0.5-30 min, 5-95% A and 95-5% B; 30-33 min, 95% A and 5% B; 33-35 min, 95-5% A and 5-95% B). The flow rate of the mobile phase was 0.5 mL min−1. The temperatures of the column and sample manager were set at 45 and 4°C, respectively. A quadrupole time-of-flight mass spectrometer (Q/TOF-MS, G6540B, Agilent Technologies Inc., USA) equipped with a Dual Agilent Jet Stream ESI source was used for mass spectrometry. The scanning range of the mass-to-charge ratio (m/z) was from 50 to 1 500. The operating parameters were as follows: capillary voltage in positive mode, 4 000 V; capillary voltage in negative mode, 3 500 V; fragmentor voltage, 175 V; nebulizer pressure, 35 psi; drying gas temperature, 325°C; drying gas flow rate, 5 L min−1. The instrument was set to extended dynamic range mode. The quality control (QC) samples consisted of mixed fecal samples and mixed serum samples, and one QC sample was injected after every five individual sample runs. 2.6 Statistical analysis The Mass Hunter Workstation (Agilent Technologies, USA) was used to obtain and align raw data based on the m/z values and retention times of ion signals. Raw data were imported into MS-DIAL Software, and data preprocessing comprising peak extraction, noise reduction, deconvolution, and peak alignment was conducted in MS-DIAL (Tsugawa et al. 2015). The detected peak information was then compared with three databases, MassBank, ReSpect and GNPS, to identify metabolites; 339 and 205 metabolites were identified in feces and serum, respectively. The data were normalized using R 3.5.2 and the MetNormalizer package based on the QC samples (Shen et al. 2016). Partial least squares discriminant analysis (PLS-DA) was used to identify potential biomarkers. The screening criteria for biomarkers consisted of Student's t-test (P<0.05) and variable importance in projection (VIP) values (>1.0). Enrichment and pathway analyses based on the KEGG database were then performed. Identification of potential biomarkers and the enrichment and pathway analyses were conducted using the online software MetaboAnalyst 4.0 (http://www.metaboanalyst.ca) (Chong et al. 2018). Finally, the biomarker data of the 40 samples in the four groups and their corresponding gut microbiota data (at phylum level and for the top 10 genera) were used in Spearman correlation analysis.
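The following minimal Python sketch illustrates the biomarker screening logic just described (PLS-DA combined with VIP > 1.0 and a Student's t-test at P < 0.05) for a two-group comparison such as HOPO vs. M. It is an illustrative reconstruction, not the authors' code: scikit-learn and SciPy are assumed, the VIP calculation follows a commonly used formula based on PLS weights and the per-component explained variance of y, and the input arrays are placeholder intensity matrices (samples x metabolites).

import numpy as np
from scipy.stats import ttest_ind
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import scale

def vip_scores(pls, n_features):
    # Variable Importance in Projection for a fitted PLSRegression model.
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    ssy = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)  # y variance explained per component
    return np.sqrt(n_features * (w ** 2 @ ssy) / ssy.sum())

def screen_biomarkers(group_a, group_b, metabolite_names, vip_cut=1.0, p_cut=0.05):
    # Two-group PLS-DA screening: keep metabolites with VIP > 1.0 and Student's t-test P < 0.05.
    X = scale(np.vstack([group_a, group_b]))                   # autoscaled intensity matrix
    y = np.r_[np.zeros(len(group_a)), np.ones(len(group_b))]  # binary class membership
    pls = PLSRegression(n_components=2).fit(X, y)
    vip = vip_scores(pls, X.shape[1])
    pvals = np.array([ttest_ind(group_a[:, j], group_b[:, j]).pvalue
                      for j in range(X.shape[1])])
    return [name for name, v, p in zip(metabolite_names, vip, pvals)
            if v > vip_cut and p < p_cut]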
3 Results 3.1 Metabolic profiling of feces samples PLS-DA is a supervised multivariate data analysis method that reduces the dimensionality of the data matrix and provides a preliminary view of the main patterns and trends in the data. The PLS-DA analysis of the metabolite profiles in feces samples is shown in Fig. 1, where M vs. NC, HOPO vs. M and EVOO vs. M were compared, respectively. All groups were clearly separated in the PLS-DA scores plot. These results suggested that the different diets induced different fecal metabolic profiles. HFHF diets containing different dietary fats also induced different fecal metabolic profiles, even though the nutritional compositions of the three HFHF diets were the same. 3.2 Metabolic profiling of serum samples The PLS-DA analysis of the metabolite profiles in serum samples is shown in Fig. 2, where M vs. NC, HOPO vs. M and EVOO vs. M were compared, respectively. All groups were clearly separated in the PLS-DA scores plot. These results suggested that the different diets induced different serum metabolic profiles. As with the fecal metabolic profiling described above, HFHF diets containing different dietary fats induced different serum metabolic profiles, although the nutritional compositions of the three HFHF diets were the same. 3.3 Identification of potential biomarkers in feces A total of 32 metabolites were selected as potential biomarkers, of which 12, 8 and 12 metabolites distinguished NC vs. M, HOPO vs. M and EVOO vs. M, respectively (Table 3). Glycerophospholipids (LPC 18:2, phosphatidylethanolamine alkenyl 16, phosphatidylethanolamine alkenyl 18, phosphatidylethanolamine 16 and phosphatidylcholine 15), steroids and derivatives (deoxycholic acid, dehydrocholic acid and cholic acid), and amino acids, peptides, and analogues (glutamic acid and glycyl-L-proline) were the major compound categories for NC vs. M (5, 3 and 2 of 12 metabolites, respectively). Glycerophospholipids (phosphatidylethanolamine alkenyl 16, phosphatidylethanolamine alkenyl 18, phosphatidylethanolamine 16, phosphatidylcholine 15 and phosphatidylcholine alkyl 16), and amino acids, peptides, and analogues (valine and glycyl-L-proline) were the major compound categories for HOPO vs. M (5 and 2 of 8 metabolites, respectively). Glycerophospholipids (phosphatidylethanolamine alkenyl 16, phosphatidylethanolamine alkenyl 18, phosphatidylethanolamine 16 and phosphatidylcholine 15), amino acids, peptides, and analogues (proline, valine, citrulline and glutathione (reduced)), and fatty acids and derivatives (oleic acid, FA 18:4+1O and FA 18:0+2O+SO4) were the major compound categories for EVOO vs. M (4, 4 and 3 of 12 metabolites, respectively). The relative abundances of glycyl-L-proline, phosphatidylethanolamine alkenyl 16, phosphatidylethanolamine alkenyl 18 and phosphatidylcholine 15 were significantly higher (P<0.05) in M than in NC, and supplementation with HOPO reversed this increase. Similar changes were observed for phosphatidylethanolamine alkenyl 16, phosphatidylethanolamine alkenyl 18 and phosphatidylcholine 15 in EVOO compared to M. Valine, one of the branched-chain amino acids (BCAAs) (leucine, isoleucine and valine), was a potential biomarker in both HOPO vs. M and EVOO vs. M; the relative level of valine in HOPO or EVOO increased significantly compared with that in M. However, oleic acid was identified as a potential biomarker only for EVOO vs. M, although the HOPO diet also contains oleic acid.
Amino acids, peptides, and analogues (leucine, Ala-Ile and folic acid) were the major compound categories for EVOO vs. M (3 of 6 metabolites). Amino acids, peptides, and analogues were altered in all three comparisons. It is worth noting that BCAAs and oleic acid were identified as potential biomarkers among NC vs. M, HOPO vs. M and EVOO vs. M. The relative levels of the BCAA biomarkers in HOPO or EVOO increased significantly compared with those in M, while the HFHF diet induced a significantly lower relative abundance of the BCAA biomarkers (leucine and valine). Among the three BCAAs, leucine was the only potential biomarker common to NC vs. M, HOPO vs. M and EVOO vs. M. Compared with HOPO vs. M (15 biomarkers), the number of potential biomarkers for EVOO vs. M was small (6), which might indicate that the serum metabolic profile of EVOO was more similar to that of M than the HOPO profile was. 3.5 Spearman correlation between gut microbiota and potential biomarkers in feces A total of 13 correlations were detected at the phylum level between gut microbiota and potential fecal biomarkers (Table 5). These correlations involved 5 phyla and 11 potential biomarkers. The top 10 genera were selected to analyze the correlations between gut microbiota and potential fecal biomarkers at the genus level; a total of 22 correlations were detected between bacterial genera and potential fecal biomarkers (Table 5), involving 7 bacterial genera and 13 potential biomarkers. The biomarkers involved in these correlations (at both phylum and genus level) mainly included glycerophospholipids, amino acids and their derivatives. 3.6 Spearman correlation between gut microbiota and potential biomarkers in serum As shown in Table 6, a total of 8 correlations were detected at the phylum level between gut microbiota and potential serum biomarkers, involving 3 phyla and 8 serum biomarkers. A total of 28 correlations were found between the top 10 bacterial genera and potential serum biomarkers, involving 7 bacterial genera and 19 potential biomarkers. Among these biomarkers, amino acids and their derivatives, including the three BCAAs, accounted for a large proportion. Correlations between gut microbiota and amino acid biomarkers were more pronounced in serum than in feces. 3.7 Pathway analysis of metabolites in serum All 33 serum biomarkers were imported for pathway enrichment and analysis. In the present study, 0.10 was set as the threshold of pathway impact, and significantly enriched pathways with impact values above this threshold were identified as differential metabolic pathways between two groups. As shown in Fig. 3, three differential pathways (the D-glutamine and D-glutamate metabolism pathway; the valine, leucine and isoleucine biosynthesis pathway; and the alanine, aspartate and glutamate metabolism pathway) were identified in the comparison of the M and NC groups. Five differential pathways (the valine, leucine and isoleucine biosynthesis pathway; the phenylalanine, tyrosine and tryptophan biosynthesis pathway; the phenylalanine metabolism pathway; the alanine, aspartate and glutamate metabolism pathway; and the arginine and proline metabolism pathway) were identified between the HOPO and M groups. Only one differential pathway (the valine, leucine and isoleucine biosynthesis pathway) was identified between the EVOO and M groups. Interestingly, the valine, leucine and isoleucine biosynthesis pathway was found in M vs. NC, HOPO vs. M and EVOO vs. M.
Accordingly, these results illustrate that amino acid biosynthesis and metabolism-related pathways are closely related to MS, and the valine, leucine and isoleucine biosynthesis pathway (BCAAs biosynthesis pathway) has great promise as a potential key target pathway. 4 Discussion Metabolic syndrome is one of the most serious global public health challenges of the 21st century, contributing to the occurrence and development of diverse serious metabolic diseases, including CVD and T2DM (Alberti et al. 2006; O'Neill and O'Driscoll 2015). According to a study from the Diabetes UK-James Lind Alliance Priority Setting Partnership, the role of dietary fats in the management of metabolic diseases has become one of the top ten research priorities (Finer et al. 2017). Unhealthy eating patterns are one of the leading causes of MS, and both the source and the amount of dietary fat intake can affect the MS phenotype (Clifton 2019). As a main feature of the Mediterranean diet, olive oil is rich in oleic acid and widely regarded as a healthy cooking oil. Oleic acid, known as a healthy fatty acid, has basic nutritional functions and health benefits, including anti-inflammatory and anti-hypertensive effects and the prevention of cardiovascular disease and diabetes (Sales-Campos 2013). Our previous study found that HOPO and EVOO could alleviate HFHFD-induced MS and significantly inhibit obesity and insulin resistance, with the gut microbiota involved in this regulation (Zhao et al. 2019). However, the metabolic mechanism of HOPO and EVOO supplementation in attenuating diet-induced MS remains unclear. The present study investigated the metabolic effects of HOPO and EVOO supplementation in alleviating MS in rats. A total of 32 and 33 differential metabolites, involved in many metabolic pathways, were identified in the feces and plasma of rats, respectively, and regulation of the BCAAs metabolic pathway was identified as playing a crucial role in alleviating MS after HOPO and EVOO supplementation. Interestingly, it has been reported that the major metabolic changes associated with insulin sensitivity involve not lipid metabolism but amino acid metabolism, especially BCAAs, aromatic amino acids (phenylalanine and tyrosine), glutamic acid, glutamine, and alanine (Zhao et al. 2016). Several case-control and cross-sectional studies from various geographic and ethnic groups have indicated that BCAAs and their related metabolites are closely related to insulin resistance (Tillin et al. 2015; Chen et al. 2016). BCAAs, including leucine, isoleucine, and valine, are special amino acids containing branched side chains at a carbon atom. These three essential amino acids act as important nutritional and metabolic signal molecules in the body. Previous studies have shown that BCAAs help control body mass, promote muscle protein synthesis and maintain blood glucose homeostasis (Wang et al. 2011). However, some studies have found that an increase in BCAAs is closely related to increased insulin resistance and T2DM risk in humans and rodents (Giesbertz and Daniel 2015; Cummings et al. 2018). Therefore, the effect of BCAAs on metabolic disorders such as diabetes, obesity and MS is still controversial. The present study found that the HFHF diet induced a decreased abundance of BCAAs, and that HOPO and EVOO supplementation reversed this decline. It also found differences in the valine, leucine and isoleucine biosynthesis pathway (BCAAs biosynthesis pathway) in M vs. NC, HOPO vs. M and EVOO vs. M.
This trend was consistent with the changes in the relative abundance of BCAAs. The occurrence and development of diet-induced MS and the preventive effect of HOPO and EVOO supplementation were thus associated with the BCAA biosynthesis pathway. Circulating BCAAs depend mainly on food intake and the degradation of proteins in tissues. It has been found that the dietary protein intake of insulin-resistant patients is comparable to that of insulin-sensitive people, which indicates that other mechanisms might be responsible for the abnormal increase in BCAA levels ( Bloomgarden 2018 ). However, the underlying mechanisms are unknown, warranting further investigation. In the present study, the calorie proportions of protein in the different diets were the same. An increasing body of evidence suggests that the gut microbiota plays a fundamental and crucial metabolic role in nutrient processing and in maintaining host health ( Clemente et al. 2012 ; Henao-Mejia et al. 2012 ; Marcinkevicius and Shirasu-Hiza 2015 ; Nehra et al. 2016 ). Accordingly, the present study hypothesized that the gut microbiota could regulate BCAA biosynthesis and metabolism and thereby attenuate HFHFD-induced MS in rats. Spearman correlation analysis showed that, at the phylum level, Tenericutes was positively correlated with serum leucine levels, while Bacteroidetes showed a negative correlation with serum isoleucine levels. Moreover, more correlations between gut microbiota and amino acid levels, including the three BCAAs, were observed in serum at the genus level. These correlations provide indirect evidence that BCAA biosynthesis and metabolism are regulated by the gut microbiota in HFHFD-fed rats. Lipids and their derivatives were markedly disturbed among the different groups. Lipid metabolism disorder is one of the main symptoms and triggers of MS, and multiple glycerophospholipids are considered prominent biomarkers of MS ( Comte et al. 2021 ). The glycerophospholipid metabolism pathway has been observed to be the most relevant lipid metabolism pathway in T2DM patients ( Yan et al. 2021 ). The role of the interaction between glycerophospholipids and gut microbiota in metabolic health may be underestimated ( Peng et al. 2020 ; Gao et al. 2021 ). Spearman correlation analysis in this study also showed a link between gut microbiota and the levels of lipids and their derivatives. The differing fatty acid and functional-ingredient compositions of the fats restricted an in-depth evaluation of the role of gut microbiota in glycerophospholipid metabolism. The past decades have seen great achievements in the breeding of high-oleic acid peanuts ( Janila et al. 2015 ). High-oleic acid peanuts have been documented to have stronger oxidation stability, higher nutritional value, better flavor, and greater health benefits than traditional peanuts ( Vassiliou et al. 2009 ; Riveros et al. 2010 ; Barbour et al. 2017 ; Lim et al. 2017 ; Paula et al. 2018 ). High-oleic acid peanuts are increasingly popular globally and are expected to replace ordinary peanuts in the future. HOPO has similar or even higher MUFA levels than olive oil and is rich in minor bioactive components, including phytosterols and polyphenols. It is worth mentioning that the peanut is one of the most important oil plants globally and is cultivated more widely than the olive ( Wang 2017 ). Research on the health efficacy of HOPO is important to provide evidence for consumers to select HOPO as an economically healthy fat source, especially in regions unsuitable for olive cultivation.
In summary, HOPO has huge prospects as a healthy dietary fat source. 5 Conclusion This study investigated the metabolic effects of HOPO and EVOO supplementation on attenuating MS, which mainly involved the BCAA biosynthesis pathway. Moreover, significant correlations were found between gut microbiota and amino acid metabolism, especially BCAAs and aromatic amino acids. Given the limitations of this study, further research exploring different dietary interventions in humans is essential to substantiate the efficacy of HOPO consumption in MS prevention, and further in vitro and in vivo studies are required to confirm the robustness of these findings. The authors declare that they have no conflict of interest. Ethical approval All animal-related procedures were checked and approved by the Institutional Animal Care and Use Committee of Beijing Vital River Laboratory Animal Technology Co., Ltd. (approval number: P2018036). Acknowledgements This research was supported by the Agricultural Science and Technology Innovation Project of Chinese Academy of Agricultural Sciences (CAAS-ASTIP-201XIAPPST), the Top Young Talents of Grain Industry in China (LQ2020202), and the National Natural Science Foundation of China (32172149).
REFERENCES:
1. ALBERTI K (2006)
2. BARBOUR J (2017)
3. BLOOMGARDEN Z (2018)
4. CHEN T (2016)
5. CHONG J (2018)
6. CLEMENTE J (2012)
7. CLIFTON P (2019)
8. COMTE B (2021)
9. CRISTINA G (2013)
10. CUMMINGS N (2018)
11. FINER S (2017)
12. GAO X (2021)
13. GIESBERTZ P (2015)
14. GU D (2005)
15. HENAO-MEJIA J (2012)
16. JANILA P (2015)
17. KIRBAEVA N (2014)
18. LI Y (2018)
19. LIM H (2017)
20. MARCINKEVICIUS E (2015)
21. MARTINEZ-GONZALEZ M (2013)
22. NEHRA V (2016)
23. O'NEILL S (2015)
24. PAULA M (2018)
25. PENG W (2020)
26. RIVEROS C (2010)
27. SAKLAYEN G (2018)
28. SALES-CAMPOS H (2013)
29. SEGERS K (2019)
30. SHEN X (2016)
31. TILLIN T (2015)
32. TSUGAWA H (2015)
33. VASSILIOU E (2009)
34. WANG Q (2017)
35. WANG T (2011)
36. WU T (2018)
37. YAN L (2021)
38. ZHAO X (2016)
39. ZHAO Z (2019)
10.1016_j.celrep.2023.113438.txt
TITLE: Bodies in motion: Unraveling the distinct roles of motion and shape in dynamic body responses in the temporal cortex
AUTHORS:
- Raman, Rajani
- Bognár, Anna
- Nejad, Ghazaleh Ghamkhari
- Taubert, Nick
- Giese, Martin
- Vogels, Rufin
ABSTRACT:
The temporal cortex represents social stimuli, including bodies. We examine and compare the contributions of dynamic and static features to the single-unit responses to moving monkey bodies in and between a patch in the anterior dorsal bank of the superior temporal sulcus (dorsal patch [DP]) and patches in the anterior inferotemporal cortex (ventral patch [VP]), using fMRI guidance in macaques. The response to dynamics varies within both regions, being higher in DP. The dynamic body selectivity of VP neurons correlates with static features derived from convolutional neural networks and motion. DP neurons’ dynamic body selectivity is not predicted by static features but is dominated by motion. Whereas these data support the dominance of motion in the newly proposed “dynamic social perception” stream, they challenge the traditional view that distinguishes DP and VP processing in terms of motion versus static features, underscoring the role of inferotemporal neurons in representing body dynamics.
BODY:
Introduction The visual processing of dynamic bodies is vital for reproduction, survival, and social behavior, as it conveys information about action and affect. 1 , Previous research in the macaque visual temporal cortex found single cells selectively responding to bodies. 2 3 , Using static images, monkey fMRI studies identified body-category-selective patches (body patches) in the ventral bank of the superior temporal sulcus (STS) and ventral to the STS, 4 3 , both part of the inferotemporal (IT) cortex. Despite the social relevance of moving bodies, their visual processing remains poorly understood due to the focus on static images. 5 Recently, employing fMRI to map patches that are activated specifically by dynamic monkey bodies, 3 we observed patches in the dorsal-bank STS that were activated less by static images. 6 It has been proposed that the ventral visual stream, which includes IT, can be distinguished not only from the dorsal (parietal) stream but also from a third stream that processes dynamic social information, accentuating motion. The latter “dynamic social perception” stream 7 has been linked to the human STS and likely corresponds to the dorsal bank/fundus of the macaque STS. This proposal and our recent fMRI findings 7 underscore the importance of assessing and comparing the contributions of dynamic and static features to body responses in and between IT and dorsal-bank STS, which is the aim of this study. 6 Single-unit studies, recording randomly in the macaque STS, showed responses to acting humans 4 , 8 , 9 , 10 , 11 , 12 , 13 , and “stick” figures. 14 These studies suggested that some STS neurons respond to motion or, at least, are sensitive to the image sequence, showing less response to static images than to moving human bodies or animated stick figures. 15 8 , Studies using moving stick figures 15 15 , observed motion- or sequence-sensitive neurons mainly in the dorsal-bank STS, in agreement with older work that demonstrated motion selectivity in the dorsal-bank STS. 16 17 , 18 , However, selective responses to static stimuli have also been observed in the dorsal-bank STS, 19 11 , 14 , 15 , 20 , raising the question of how static and dynamic feature selectivity interact in the dorsal-bank STS. Furthermore, the contribution of motion to the responses of IT neurons to naturalistic body stimuli is unclear. Neurons responding to a hand interacting with an object were reported in the ventral middle STS, 21 but evidence for motion-related responses is lacking for other body parts and in anterior IT. We 22 found stronger fMRI activations also in the ventral STS in response to dynamic naturalistic body stimuli compared with static ones. These fMRI data raised the possibility that body dynamics also contribute to the IT neurons’ responses. 6 Here, we conducted a comparative study of single-unit responses between the dorsal bank/fundus of the STS and the regions within and ventral to the ventral bank of the rostral STS, using video stimuli featuring moving monkeys. To increase the probability of finding neurons that responded to naturalistic monkey videos, recordings were guided by fMRI mapping of dynamic body patches, targeting patches in the anterior IT (ventral patch; VP) and the dorsal bank/fundus STS (dorsal patch; DP). We selected anterior patches because these are supposed to be related more to invariant perception than posterior patches. 
6 We compared their responses to static frames of the same videos and to time-reversed versions of the videos, assessing their motion or sequence sensitivity. 3 We assessed the contributions of motion and static features at the population level in VP and DP using regression analysis. To assess the contribution of motion, we related differences in motion among the body videos and the neural responses. To estimate the contribution of static features to the body video responses, we related neural responses and convolutional neural network (CNN) features. Recently, CNNs have emerged as improved models of IT responses to static images. 23 , 24 , 25 , 26 , 27 , It is unclear whether CNNs can model the selectivity for dynamic stimuli of STS neurons, particularly of DP neurons that may demonstrate a strong motion-driven response. 28 Overall, we expected that the responses of DP neurons to dynamic bodies would be dominated by motion and, to a lesser extent, by static features, while the VP responses would be determined by their selectivity for static body features. Results We targeted fMRI-defined rostral STS and IT body patches ( Figure 1 ). The targeted IT body patches were either in the anterior ventral STS (monkeys G and O; ASB in Bognár et al. ) or ventral to the anterior STS (monkey N; AVB in Bognár et al. 6 ). We will label neurons from both ventral IT patches as VP neurons. In the main analyses, we pooled the data of both IT patches since results were similar for both patches (single-patch data are in the supplemental figures). As dorsal STS region, we targeted in two animals a body patch in the rostral dorsal bank/fundus of the STS, and these neurons are labeled DP neurons. DP corresponded to the most anterior patch in the medial upper bank/fundus of the STS in the fMRI mapping (AMUB in Bognár et al. 6 ) of these two monkeys. 6 We selected neurons with a response to at least one body video ( STAR Methods ). The majority of the neurons responded on average stronger to the body videos than to the face and object videos in VP and DP ( Figures 2 A and 2B ). The body-category selectivity was higher for VP neurons (median body-category-selectivity index [BSI; STAR Methods ]: VP, 0.39, n = 149; DP, 0.23, n = 175; Wilcoxon rank-sum test p = 4.043e−07; Figure 2 C). As reported earlier when targeting body patches, 5 , 29 , many body-responsive neurons responded to some extent also to faces or objects. The neurons encoded differences in body shape and/or movement: they responded to some but not all of the 20 body videos ( 30 Figure 2 E), with the effective videos differing among neurons. To quantify the (within-category) body-video selectivity, we computed for each neuron the Sparseness of the response to the 20 body videos ( STAR Methods ). The Sparseness can range from 0 (equal response to the body videos) to 1 (response to a single body video). The median Sparseness was high in VP (median = 0.65) and DP (median = 0.61), with no significant difference between regions (Wilcoxon rank-sum test p = 0.556; Figure 2 D). The sparse body responses reduce the BSI, as demonstrated by the negative correlation between Sparseness and BSI (Spearman rank ρ = −0.15, p = 0.009, n = 324). In the main analyses, we will consider all body-responsive neurons, including the minority of neurons with low BSI, because even the latter can encode dynamic bodies, irrespective of their response to non-body stimuli. 
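The two selectivity measures used here, the BSI and the Sparseness, reduce to a few lines of NumPy once trial-averaged net firing rates are available. The sketch below is a plain reimplementation of the definitions given in the STAR Methods (with negative net responses clipped to zero for the Sparseness); the input values are made up for illustration.

```python
# Plain reimplementation of the BSI and Sparseness definitions from STAR Methods.
# Inputs are mean net firing rates (spikes/s); values below are made up.
import numpy as np

def bsi(r_body, r_face, r_object):
    """Body-category Selectivity Index from mean net rates to the three categories."""
    r_nonbody = (r_face + r_object) / 2.0
    return (r_body - r_nonbody) / (abs(r_body) + abs(r_nonbody))

def sparseness(net_responses):
    """Sparseness over the body videos; negative net responses are clipped to zero.
    0 = equal responses to all videos, 1 = response to a single video."""
    r = np.clip(np.asarray(net_responses, dtype=float), 0.0, None)
    n = len(r)
    return (1.0 - r.mean() ** 2 / (r ** 2).mean()) / (1.0 - 1.0 / n)

print(bsi(r_body=12.0, r_face=3.0, r_object=5.0))      # -> 0.5
print(sparseness(np.r_[20.0, np.zeros(19)]))            # single effective video -> 1.0
```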
The VP population response was relatively constant during the video, except for an initial onset response, whereas marked variations in the DP population response were present during some of the body videos ( Figure S1 ). The latter might be related to differences in body dynamics during the video. Responses to body dynamics and static bodies To assess whether the neurons’ responses were driven by motion (or a changing image sequence), we tested neurons in a “snapshot test” in which we presented an effective body video, a time-reversed version of that video, and 10 snapshots of the video. The effective body video (“original video”) was selected based on the responses in the preceding test in which we presented the body videos. The snapshots were selected to be representative of the variety of poses and viewpoints that occurred during the video and were presented for 300 ms each in random order with an interstimulus interval of at least 1000 ms. In both regions, some neurons responded with similar peak firing rates to the original video and a snapshot (example VP neurons in Figures 3 A and 3C ; example DP neurons in Figures 3 D and 3F). Other neurons failed to respond to the snapshots, although they showed a sizable response when the corresponding frame occurred in the video (example neurons in Figures 3 B [VP] and 3E [DP]). We quantified the difference in peak firing rate between the video and the snapshots for each neuron by a snapshot index (SSI; STAR Methods ). A positive SSI corresponds to a smaller response to the static snapshot than to the video, whereas an SSI of zero corresponds to identical peak firing rates for the video and the static presentations. Both regions showed a wide range of SSI values, but the median SSI was higher for DP than for VP (median SSI: VP, −0.01; DP, 0.18; Wilcoxon rank-sum test p = 8.287e−06; Figure 4 A; individual monkey and patch data in Figure S2 A 1-2 ). This difference in SSI between regions remained significant for neurons with BSI > 0.33 ( Figure S2 A 3 ) and when controlling for the BSI difference between regions (ANCOVA with BSI as covariate: Table S1 ), in line with a stronger contribution of motion to DP responses. However, we also observed VP neurons that did not respond to static snapshots, despite a strong response to the video that included the same frames ( Figure 3 B). In fact, 11% of VP neurons had an SSI > 0.50 (a 3-fold difference in response), whereas the same held for 17% of the DP neurons. The majority of DP neurons showed a response to static presentations, although typically less than to the video. Ranking (using leave-one-trial-out cross-validation) of the snapshots based on the responses of each snapshot-responsive DP neuron (n = 124; STAR Methods ) showed a significant effect of snapshot rank on the mean DP responses, showing selectivity for still body images (Friedman ANOVA; p = 5.239e−17; Figure 4 D). The same ranking analysis showed snapshot selectivity for the snapshot-responsive VP neurons (Friedman ANOVA; p = 3.000e−75, n = 126; Figure 4 D). The average response to the most effective snapshot (rank 1) was higher in VP than in DP (net average firing rate (computed with a window of 400 ms) of 17 versus 7 spikes/s), but the regions did not differ in the temporal course of the averaged responses to their most effective snapshot ( Figure 4 E). Sensitivity to time reversal of body movements The time-reversed version of the original video allowed us to assess the effect of frame sequence on the video responses. 
VP and DP neurons showed a range of responses to the time-reversed video. Some neurons responded with similar average firing rates to the original and the time-reversed video ( Figures 3 A [VP] and 3D and 3E [DP]). Other neurons showed a marked difference in response between the two sequences ( Figures 3 B and 3C [VP] and 3F [DP]). To quantify the difference in response between the two sequences, we computed the video reversal index (VRI; STAR Methods ). A VRI of zero indicates equal responses to both sequences, whereas a VRI of 1 indicates a response to only one of the two sequences. The median VRI was significantly larger for DP (0.317) than for VP neurons (0.215; Wilcoxon rank-sum test; p = 3.061e−04; Figure 4 B; individual monkey and patch data in Figure S2 B 1-2 ). This difference in VRI between the two regions remained significant for neurons with BSI > 0.33 ( Figure S2 B 3 ) and when controlling for BSI (ANCOVA; Table S2 ), demonstrating a higher sequence sensitivity in DP compared with VP neurons. However, we observed VP neurons that responded to only one of the video sequences, although these consisted of the same frames, the only difference being the frame order ( Figure 3 C). In fact, 21% of the VP (and 38% of the DP) neurons had a VRI > 0.50, i.e., a 3-fold difference in response between the original and the time-reversed video. Furthermore, 41% of the VP neurons showed a significant difference in response between the two sequences (Wilcoxon rank-sum test; p < 0.05). Neurons of both regions with VRI of 1 showed an inhibitory response to the non-preferred movement ( Figure S2 B 4 ) without an excitatory response to the video onset. Relating responses to dynamic and static bodies The VRI did not correlate with the SSI (Spearman correlation; DP, ρ = 0.041, p = 0.61, n = 146; VP, ρ = 0.103, p = 0.21, n = 133; Figure 4 C). Indeed, several neurons had a high VRI but were responding well to static snapshots (SSI close to 0; Figures 3 C, 3F, and 4 C). These neurons were sequence sensitive but did not require motion to produce a response. This raises the question of how the responses to the static images relate to the responses to these frames during the video. To assess this, we selected those neurons that showed a significant excitatory response to at least one of the snapshots (split-plot ANOVA; p < 0.05). Then, for each selected neuron, we computed the correlation between the snapshot response and the response following the same frame during the video ( STAR Methods ). For the original video, the median correlation was 0.53 and 0.19 for the VP and DP neurons ( Figure 4 F), respectively, both significantly greater than zero (Wilcoxon test; VP, p = 3.449e−15; DP, p = 1.881e−06; individual monkey and patch data in Figure S2 C 1-2 ) and significantly higher in VP compared with DP (Wilcoxon rank-sum test; p = 1.226e−05; Figure 4 F). This difference between the regions was unrelated to regional differences in BSI ( Figure S2 C 3 ; ANCOVA; Table S3 ). This shows that one can predict the responses to the video from static snapshot responses better for VP than for DP. For the time-reversed video, the median correlation was 0.40 and 0.05 for VP and DP, respectively, both significantly larger than zero (Wilcoxon test; VP, p = 1.711e−10; DP, p = 0.0156; Figure 4 F; individual monkey and patch data in Figure S2 D 1-2 ). 
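Both the SSI and the VRI are simple contrast ratios. The following sketch applies the definitions given in the STAR Methods, smoothing a trial-averaged rate trace with a 25 ms Gaussian before taking the peak rate for the SSI; the simulated traces and the 1 ms bin size are assumptions made only for the example.

```python
# Reimplementation of the SSI and VRI definitions from STAR Methods.
# Rate traces are simulated; the 25 ms Gaussian SD for smoothing follows the
# text, the 1 ms bin size and all response values are made up.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def peak_net_rate(stim_trace, base_trace, sd_ms=25.0, bin_ms=1.0):
    """Peak of the smoothed net rate (smoothed stimulus trace minus smoothed baseline mean)."""
    smoothed = gaussian_filter1d(stim_trace, sigma=sd_ms / bin_ms)
    baseline = gaussian_filter1d(base_trace, sigma=sd_ms / bin_ms).mean()
    return (smoothed - baseline).max()

def ssi(pr_video, pr_best_snapshot):
    """Snapshot Selectivity Index: > 0 when the video evokes the higher peak rate."""
    return (pr_video - pr_best_snapshot) / (abs(pr_video) + abs(pr_best_snapshot))

def vri(r_original, r_reversed):
    """Video Reversal Index: |VS|, where VS contrasts original and time-reversed responses."""
    vs = (r_original - r_reversed) / (abs(r_original) + abs(r_reversed))
    return abs(vs)

rng = np.random.default_rng(0)
t = np.arange(1100)                                   # stimulus window, 1 ms bins
video_rate = 10 + 30 * np.exp(-0.5 * ((t - 400) / 50.0) ** 2) + rng.normal(0, 2, t.size)
baseline_rate = 10 + rng.normal(0, 2, 200)            # pre-stimulus window
pr_video = peak_net_rate(video_rate, baseline_rate)
print(ssi(pr_video, pr_best_snapshot=15.0), vri(r_original=20.0, r_reversed=5.0))
```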
The median correlations between the snapshot responses and those for the time-reversed video were (marginally) significantly lower than those for the original video only for VP and not for DP (Wilcoxon signed rank test; VP, p = 0.043; DP, p = 0.196; Bonferroni corrected p values). Nonetheless, the correlations between the responses to the snapshot and the frames in the original video correlated with those computed for the reversed video for VP neurons (Spearman ρ = 0.46, p = 5.437e−08, n = 125; significant for neurons with BSI > 0.33; Figure S2 D 3 ). However, no such correlation was present for the DP neurons (Spearman ρ = −0.008, p = 0.927, n = 121; Figures 4 F and S2 E). This demonstrates that the response to a frame of a video can depend on the sequence in which that frame is presented, and this is to a larger extent in DP than in VP. This is in line with the stronger sequence sensitivity of DP neurons. In sum, for most VP neurons that respond to static presentations, the selectivity for static images predicts the responses to those images when presented as frames in a video. This holds less for DP neurons, showing a weaker association between responses to static images and videos. Neural responses to dynamic bodies are predicted by velocity pattern Next, we examined to what extent the body video responses can be explained by motion and static features. First, we computed a neural distance metric for all body video pairs, based on the lock-step Euclidean distance ( STAR Methods ) between the response trajectories to the videos in neural space ( Figure 5 A). Each dimension of the neural space corresponds to a neuron and each point in the neural space represents a response for a 20 ms bin and video. Note that the neural distance metric reflects the moment-by-moment difference in neural response between videos, unlike a distance metric computed on the responses averaged across the whole stimulus duration. To relate the neural distances to differences in motion among the body videos, we computed two velocity-based distance metrics: one in which we computed pairwise, pixel-wise velocity differences between videos and a second in which we computed pairwise differences between the frequency distributions of the velocities. Unlike the first metric, the velocity distribution metric does not consider the spatial location of the velocity vectors but only their frequency distribution per frame and hence is a position-invariant metric. To compute the “velocity space” distances (metric 1), we defined a velocity space in which the dimensions correspond to the velocity component for a particular pixel in the horizontal or vertical axis ( Figure 5 B). The velocity of each pixel per frame ( STAR Methods ) corresponded to a point in velocity space. Then, we computed the lock-step Euclidean distance for all video pairs using the same procedure as for the neural responses, thus providing a velocity-based distance for all video pairs. For the second metric, we computed the 2D frequency distribution of velocity (direction and speed) per frame ( Figure 5 C; STAR Methods ). Then, we computed the pairwise distance between the velocity distributions using the chi-squared distance metric ( Figure 5 C; STAR Methods ). We thresholded the speed before computing the distances by requiring a minimum speed. The reported effects were quite robust with respect to differences in threshold speed ( Figure S3 B), and we report results with a threshold of 0.2 (arbitrary units). 
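The two ingredients of this population analysis, a lock-step Euclidean distance between time-binned trajectories and a chi-squared distance between velocity histograms, can be prototyped as follows. The array shapes, bin counts, and the summation over time bins are assumptions for illustration rather than the exact implementation used in the study.

```python
# Sketch of the two distance computations: lock-step Euclidean distance between
# time-binned trajectories (neural responses or pixel-wise velocities) and a
# chi-squared distance between velocity histograms. Shapes and normalization
# are assumptions for illustration.
import numpy as np

def lockstep_euclidean(traj_a, traj_b):
    """Compare corresponding time bins of two (n_timebins, n_dims) trajectories
    one-to-one and sum the per-bin Euclidean distances."""
    assert traj_a.shape == traj_b.shape
    return np.linalg.norm(traj_a - traj_b, axis=1).sum()

def chi2_distance(hist_a, hist_b, eps=1e-12):
    """Chi-squared distance between two (e.g., 2D direction-by-speed) histograms."""
    a = hist_a.ravel() / (hist_a.sum() + eps)
    b = hist_b.ravel() / (hist_b.sum() + eps)
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

rng = np.random.default_rng(1)
resp_a, resp_b = rng.random((50, 100)), rng.random((50, 100))  # 50 bins x 100 dims
print(lockstep_euclidean(resp_a, resp_b))
print(chi2_distance(rng.random((8, 10)), rng.random((8, 10))))
```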
The velocity-based distance metrics correlated to some extent (Pearson r = 0.85; Figure S3 A). The neural distances for bodies correlated significantly with both velocity-based distance metrics for the DP neurons ( Figures 5 D and 5E; stimulus label permutation test ). The correlation between the VP neural distances and the velocity space distances was also significant (p = 0.022; 31 Figure 5 D), but the correlation between the VP neural distances and the velocity distributions failed to reach significance (p = 0.07; Figure 5 E). The correlations for both velocity-based distances were significantly lower in VP than in DP (velocity space p = 3.600e−03; velocity distribution p < 0.001; bootstrapping neurons [1,000 resamplings]; STAR Methods ). This suggests that motion contributes more to DP than to VP dynamic body responses. To assess whether the speed distribution is sufficient to determine the neural distances, we binned the velocity vector magnitude (speed), ignoring motion direction. The correlation of the pairwise speed distribution and the neural distances was significant for DP, although the correlation was lower than computed for both speed and direction ( Figure S3 B), suggesting a contribution of speed and, to a smaller extent, motion direction, to DP body responses. Responses to dynamic bodies are predicted by CNN shape features in VP but not DP To assess the contribution of static features to dynamic body responses, we presented the frames of the body videos to CNNs: AlexNet, VGG16, 32 ResNet50, 33 and ResNet50_SIN. 34 We employed networks pretrained in the classification of ImageNet 35 data (AlexNet, VGG-16, and ResNet50) and stylized ImageNet 36 (ResNet50_SIN) and untrained ones as control. We computed for each video pair the lock-step Euclidean distance in the space in which each dimension corresponds to a unit of a layer ( 35 STAR Methods ). We correlated the CNN-based distances for each layer with the neural distances. For VP neurons, the correlations between neural and trained CNN feature distances increased with the layer, reaching significance for AlexNet and ResNet50 ( Figure 6 A; stimulus label permutation test). Correlations for the untrained AlexNet were smaller than for the trained versions for the deeper layers. The correlations between trained CNN-based distances and neural distances did not differ significantly from zero for the DP neurons ( Figure 6 A). The correlations of CNN feature distances of layer 5 and the VP neural distances were larger than those for the DP neural distances, the difference being significant for AlexNet (bootstrapping neurons; p < 0.001). This provides some evidence that VP responses to dynamic bodies are more related to static features than DP responses. Ventral middle STS body patch neurons preserve their selectivity when static natural images are transformed into silhouettes, indicating that shape features underlie their body selectivity. We measured the responses of 128 VP and 61 DP body-responsive neurons to silhouette versions of the body videos. We presented to each neuron two silhouette videos, one corresponding to the original video that produced the strongest response (“best”) and a second one that corresponded to an original video that produced no or a weak response (“worst”). We computed for each neuron a best-worst preference index (BWPI), which contrasted the responses to the best and worst silhouette video ( 37 STAR Methods ). The best and worst silhouette videos were defined based on the responses to the original videos. 
A BWPI of 1 indicates no excitatory response to the worst silhouette video, while 0 corresponds to an equal response to the best and worst silhouette videos. The median BWPI was high and very similar for both regions (VP median 0.97; DP, 0.95; Wilcoxon rank-sum test; p = 0.65; Figure 6 B; single-patch data and for neurons with BSI > 0.33; Figure S4 B 1-2 ). Hence, the body-video selectivity was preserved for silhouette versions in both regions. The preserved selectivity for silhouettes raised the question of whether the neural responses for the original body videos correlate with CNN responses to the silhouettes. Hence, we employed the same procedure as above to compute pairwise distances between the silhouette videos for each CNN layer and correlated these with the neural distances of VP and DP neurons for the original videos. The correlations of the VP distances with the silhouette feature distances were higher than those for the original videos ( Figure 6 A), despite the fact that we correlated CNN activations to silhouettes and neural responses to the grayscale videos. The CNNs that did not have a significant correlation between CNN and neural distance for the original videos showed significant correlations for the deeper layers when silhouette videos were used as input. The silhouette video distances of the deeper layers of the untrained CNNs also correlated significantly with the VP distances, although they were lower than for the trained CNNs. A parsimonious explanation for the increased correlations of VP and CNN silhouette responses is that VP neurons respond primarily to shape features that are preserved when transforming grayscale images to silhouettes. The DP responses did not show significant correlations with CNN silhouette features. Furthermore, for layer 5 of each CNN, the correlation of CNN silhouette feature distances and VP neural distances was significantly larger than those for DP (AlexNet, p = 0.012; VGG16, p < 0.001; ResNet50, p = 2.000e−03; ResNet50_SIN, p = 8.000e−03; bootstrapping neurons). This suggests that the DP responses to dynamic bodies are mainly driven by motion, whereas the VP responses are driven more by spatial features. The dissociation of the contribution of motion versus static features to DP and VP dynamic body responses is summarized in Figure 6 C. The differences between VP and DP in the correlations between the neural distances and the velocity-based/static CNN distances remained after equating the BSI distribution of VP and DP neurons ( Figure S4 D 1-2 ) and thus did not result from DP neurons having a lower average BSI. Also, trends were similar when comparing ASB with DP neurons ( Figure S4 D 3 ). Further analysis showed that the correlation between velocity space distances and neural distances was significant only for VP neurons with a Sparseness lower than the median of the population of neurons ( Figures 6 D and S4 C). This effect of Sparseness on the contribution of motion to the neural distances was intensified for VP neurons with BSI > 0.33 ( Figure S4 C). The correlations between neural distances and velocity-based metrics were significant and similar for high- and low-sparseness DP neurons ( Figures 6 D and S4 C). The low, non-significant correlations between velocity and distances for the high-sparseness VP neurons could be because, for those neurons, the response differences among the bodies were strongly driven by static features. 
This aligns with the significant correlations between the static feature and the high-sparseness VP distances for all networks (and neurons with BSI > 0.33), whereas the correlation for the static feature distances was significant only for one network for the low-sparseness neurons ( Figures 6 D and S4 C). Alternatively, high-sparseness VP neurons may show motion pattern selectivity that is not captured by our velocity-based metrics. Notably, motion sensitivity as such, as measured by SSI and VRI, did not decrease with Sparseness (correlation between sparseness and SSI, Spearman ρ = 0.18 (p = 0.04), and VRI, ρ = 0.05 [p = 0.57]). To assess whether motion and static features explain a common portion of the response variance of VP neurons, we employed commonality analysis ( STAR Methods ). This was done for the layer 5 distances of each CNN (silhouettes as input) and the velocity distribution-based distances ( Figure 7 A). Multiple regression produced significant correlations when using as predictors the CNN feature and the velocity distribution-based distances (stars in Figure 7 A). For VP, the velocity distribution and layer 5 feature-based distances explained each a unique part of the neural distances (the slight negative commonalities for some networks reflect small negative correlations between the velocity- and the feature-based distances). For DP, only the velocity distribution-based distances explained a unique variance component of the neural distances. We hypothesized that neurons that respond more strongly to videos than static images and neurons that are sensitive to time reversal of videos rely more on motion features than those that are insensitive to time reversal. Thus, we distinguished “static” neurons with SSI and VRI < 0.2, and “motion” neurons with SSI or VRI > 0.2 (stippled lines in Figure 5 C). The commonality analysis showed a stronger contribution of the velocity-based than the CNN-feature distances for the “motion” VP neurons, while the opposite was the case for the “static” VP neurons ( Figure 7 B). For DP, there was a reduction of the contribution of motion for the “static” compared with the “motion” neurons, but this could be due to the smaller number of “static” DP neurons. Even for the “static” DP neurons, the CNN-feature distances showed little or no correlation with neural distances ( Figure 7 B). Correlating neural distances with a spatiotemporal network Earlier, to assess the contribution of static features, we correlated neural responses with activations of CNNs pretrained with static images, which included animals (ImageNet). We compared the neural distances also with distances computed from spatiotemporal network units pretrained to recognize human action videos (X3D ; 38 STAR Methods ), encoding sequence information. The significant correlations between the VP neural and the X3D distances increased with layer, yielding slightly larger values than the “static” CNNs. Although X3D-DP correlations were higher than “static” CNN-DP correlations, none reached significance ( Figure S5 ). Discussion We showed that some anterior ventral STS/IT (VP) neurons required motion to respond to a monkey body, while others responded to static bodies but were highly selective for the temporal sequence of images in a video of an acting monkey. Hence, the response of some VP neurons to body actions cannot be solely predicted from their selectivity to static images. 
In addition, other VP neurons responded equally to static presentations of bodies and the same frames during a video, and their response during the video could be predicted by their static image selectivity. Dorsal-bank STS (DP) neurons exhibited a stronger effect of motion and stronger sequence sensitivity than VP neurons. A population analysis showed that the dynamic body responses of DP neurons could be predicted from the velocity distributions present in the videos, but not from static CNN-based features. In contrast, the responses of VP neurons to the body videos were predicted by both static CNN features and velocity differences. Our findings suggest a revision of the traditional view that distinguishes dorsal and ventral STS processing in terms of motion versus static features. First, we found that DP neurons, although dominated by motion, can respond to static stimuli. However, their response to dynamic bodies is not well predicted by the response to the same images presented statically. Second, the responses of VP neurons to dynamic bodies can be driven by motion and static shape features. Overall, our study highlights the diversity of the neural mechanisms involved in the processing of body movements and points toward the need for a more nuanced understanding of how ventral and dorsal STS neurons contribute to this process. A recent study that recorded face-selective neurons in a dorsal STS face patch also suggested a strong contribution of motion to responses to dynamic faces. However, they did not examine responses to dynamic faces in IT face patches. Furthermore, our study shows that the correlation between neural and velocity-based distances is not determined merely by differences in motion energy because neurons responded differently to videos and their time-reversed versions with the same motion energy. 39 The two velocity-based distance metrics yielded similar results, although the velocity distribution distances tended to show lower correlations than the velocity space distances for VP but not DP. One possible explanation is that VP neurons, especially those with a high sparseness, might be more sensitive to the spatial layout of the motion pattern, which is reduced in the velocity distribution, but this requires direct testing. We believe the responses of the DP neurons to dynamic bodies could not be predicted by static CNN features because the responses to the body videos were strongly dominated by motion. At the single neuron level, we found a correlation between the selectivity for static presentations of body images and the responses to the same frames during the video, but these correlations were low and depended strongly on the frame sequence. This agrees with the suggestion that DP neurons are strongly driven by the motion component in the videos and that the motion response overrides their static feature selectivity. The most effective stimuli of the DP neurons were threatening actions directed toward the observer ( Video S1 ). Such threatening displays, which are important in the social life of monkeys, include jerky movements that drive the DP neurons effectively. In contrast, socially neutral actions such as walking and grasping elicited smaller responses. This highlights the potential importance of threat-related stimuli in driving the response of dorsal-bank STS neurons, suggesting that they may play a role in the detection of potential threats. This supports the proposal that the dorsal STS functions as a “dynamic social stream.” 7 Video S1. 
Body videos ranked according to mean peak firing rate, averaged across DP neurons, related to discussion. The averaged responses differed significantly among the stimuli (Friedman ANOVA; p < 0.00001) VP neurons did not merely respond to the momentary body shape in dynamic displays. First, some VP neurons weakly responded to static bodies while responding strongly to the same frames in the context of the video. Second, some VP neurons were highly sensitive to the frame sequence. This sequence sensitivity was even present for neurons that responded to static presentations of the frames. Third, our population response analysis showed that velocity-based distances correlate with VP neural distances, suggesting the presence of motion information in anterior IT. Russ et al. suggested that ventral face patch AM responses to 5-min-long videos differ from those to static presentations or short dynamic snippets. However, the AM experiments did not control for eye fixation differences between these conditions, which makes the AM responses difficult to relate to dynamics per se. It is unlikely that the same mechanisms underlie the putative sequence effects during a long movie in AM 40 and the sequence sensitivity we observed in VP, because our movies were only 1 s long, and the dynamics we examined occurred at a shorter time scale. 40 Selectivity for motion-defined shapes has been reported in ventral stream areas V4 and IT, 41 but in those studies, motion served as a segmentation cue. Long-range motion direction selectivity has been demonstrated in V4 42 and may have contributed to the sequence sensitivity observed here in VP and DP. Apart from short- and long-range motion, other mechanisms can contribute to the sequence sensitivity in VP and DP. One candidate is adaptation, which is prevalent in IT. 43 Adaptation can induce differences in response depending on the sequence in which effective and ineffective frames are presented, but cannot explain the strong sequence sensitivity in which no excitatory response was present for the time-reversed video. Another candidate mechanism is expectation suppression, in which responses to expected stimuli are reduced compared with unexpected stimuli. 44 45 , Such a mechanism predicts a stronger response to the unexpected, less familiar, time-reversed videos than to the familiar original videos. However, this is opposite to the typical stronger response to the original than to the time-reversed video. A third mechanism is suggested by the observed association of strong sequence sensitivity and inhibition for the ineffective sequence. This is in line with models 46 that propose that sequence-selective neurons have asymmetric lateral connections with other neurons that encode individual snapshots from the motion sequence. Neurons that encode a sequence will receive excitatory input from neurons that encode the preceding snapshot of the sequence, but inhibitory input from neurons that encode the preceding snapshot of the time-reversed sequence. The VP neurons that encode snapshots can contribute to the sequence sensitivity observed in other VP neurons through such a network mechanism. Other mechanisms involving recurrent processing within temporal cortex or feedback from other regions, e.g., prefrontal cortex, 2 may underlie the sequence sensitivity. Also, the sequence sensitivity may have been inherited from middle STS body-responsive regions. 
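A toy simulation can illustrate the class of model referred to here, in which asymmetric lateral connections between snapshot-encoding units produce sequence selectivity. The sketch below is only a schematic of that idea, not the published model: a "sequence" unit accumulates excitation when snapshots arrive in the preferred order and is inhibited when they arrive in the reversed order.

```python
# Toy schematic (not the published model): a "sequence" unit receives excitation
# from the snapshot unit that precedes it in the preferred order and inhibition
# from the snapshot that precedes it in the time-reversed order.
import numpy as np

N_SNAPSHOTS = 10

def sequence_drive(order):
    """Accumulated drive of a sequence-selective unit for a given snapshot order."""
    drive, trace = 0.0, []
    for t in range(1, len(order)):
        current, previous = order[t], order[t - 1]
        excitation = 1.0 if previous == current - 1 else 0.0  # preferred predecessor
        inhibition = 1.0 if previous == current + 1 else 0.0  # reversed-order predecessor
        drive = max(0.0, drive + excitation - inhibition)      # rectified accumulation
        trace.append(drive)
    return np.array(trace)

forward = sequence_drive(list(range(N_SNAPSHOTS)))
reversed_ = sequence_drive(list(range(N_SNAPSHOTS - 1, -1, -1)))
print("forward drive:", forward[-1], "| time-reversed drive:", reversed_[-1])
```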
47 3 Although some VP neurons showed sequence-related responses, the responses of many VP neurons to the videos were well predicted by their selectivity for static frames. Previous studies with static images showed a correlation between deep-layer CNN features and IT responses. 23 , 24 , 25 , 26 , 27 , Here, we show that the correlation between CNN features and VP neurons extends to dynamic body stimuli. Interestingly, the correlation between CNN features and VP responses was stronger when silhouettes served as input to the CNN. This can be related to our observation that VP neurons keep their selectivity when the video is reduced to its silhouette version, suggesting that VP responds primarily to shape features. We speculate that the original images produce strong texture-driven responses in the CNN units, which reduced the correlations with the shape-driven neural responses. When using silhouettes as input to the CNN, its units will be driven by shape instead of by the absent texture features, enhancing the correlation between CNN unit activations and the shape-driven VP responses. The higher correlation with silhouette input was also present for ResNet-50-SIN, 28 which was trained on images in which the original texture of a shape was replaced by random textures, forcing the network to utilize shape for categorization. This suggests that this network still contains units that are texture selective. 35 We correlated the body responses with a spatiotemporal network pretrained for human action recognition, which produced numerically somewhat higher correlations compared with the “static” CNNs for both regions. Adding temporal information to the CNN did not produce a significant increase in correlations for DP, which was unexpected, because other analyses showed that motion dominated the responses of DP neurons to the videos. However, this might be due to the peculiarities of the employed CNN. Investigating spatiotemporal networks as a model for temporal cortical processing holds promise for future research. We showed that, whereas the body responses of DP neurons are well predicted by velocity patterns, this is less the case for VP neurons, which are driven more, but not exclusively, by static shape features. The commonality analysis showed that velocity and shape features explain non-overlapping portions of the response variance of the VP neurons. Moreover, VP neurons that responded equally well to static and dynamic stimuli and/or showed low or no sequence sensitivity have a relatively stronger contribution of static features than neurons that respond more to the dynamics, suggesting that anterior IT contains a heterogeneous population of neurons that vary in their motion versus shape processing. Limitations of the study The response selectivity of DP neurons for dynamic bodies was not correlated with CNN features. Since we did not examine DP responses to a wide range of static stimuli, we do not know whether the static feature selectivity of DP neurons relates to CNN features or is fundamentally different from IT. Although the DP region corresponded to the most anterior dorsomedial STS fMRI-defined body patch, it was posterior to the VP. It is unlikely that the differences in response properties between DP and VP are related to their anterior-posterior locations, as DP’s motion sensitivity is in line with prior dorsal-bank STS studies 8 , 9 , 10 , 4 , 11 , 12 , 13 , 14 , . Future studies should examine dynamic body responses in more posterior ventral and dorsal STS regions. 
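The CNN-feature comparison discussed above can be prototyped with an off-the-shelf pretrained network. The sketch below extracts activations of AlexNet's convolutional block for each frame, computes pairwise lock-step distances between videos in feature space, and correlates them with (here randomly generated) neural distances; the preprocessing, the layer choice, and the use of a Spearman correlation are illustrative assumptions rather than the study's exact pipeline.

```python
# Illustrative sketch (not the study's pipeline; requires torchvision >= 0.13):
# extract AlexNet convolutional-block activations per frame, compute pairwise
# lock-step distances between videos in feature space, and correlate them with
# placeholder neural distances.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from scipy.stats import spearmanr

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224)),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def video_features(frames):
    """frames: list of HxWx3 uint8 arrays -> (n_frames, n_units) activations
    of AlexNet's last convolutional block (a stand-in for 'a deep layer')."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        acts = alexnet.features(batch)            # (n_frames, C, H, W)
    return acts.flatten(start_dim=1).numpy()

def lockstep(a, b):
    return np.linalg.norm(a - b, axis=1).sum()

rng = np.random.default_rng(0)
videos = [[rng.integers(0, 255, (128, 128, 3), dtype=np.uint8) for _ in range(10)]
          for _ in range(5)]                       # 5 toy "videos" of 10 frames
feats = [video_features(v) for v in videos]
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
cnn_d = np.array([lockstep(feats[i], feats[j]) for i, j in pairs])
neural_d = rng.random(len(pairs))                  # placeholder neural distances
print(spearmanr(cnn_d, neural_d))
```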
Because we targeted a limited number of patches, we do not know whether our data generalize to neurons outside the recorded regions in IT and dorsal STS. Examining the relation between responses and neural networks trained on video data for monkey action recognition could offer valuable insights, but this falls beyond the scope of our study. 15 STAR★Methods Key resources table REAGENT or RESOURCE SOURCE IDENTIFIER Deposited data Electrophysiological data This paper https://osf.io/jsm5z/ Chemicals, peptides, and recombinant proteins Molday ION BioPAL Inc.; Worcester, USA http://www.biopal.com/molday-ion.htm Experimental models: Organisms/strains Rhesus macaque ( Macaca mulatta ) Biomedical Primate Research Centre https://bprc.nl/en (BPRDC), Rijswijk, the Netherlands https://bprc.nl/en Software and algorithms MATLAB R2020a MathWorks RRID: SCR_001622 Python Programming Language Python RRID: SCR_008394 PyTorch PyTorch RRID: SCR_018536 SPM12 SPM, University College London, UK RRID: SCR_007037 Other EyeLink SR-Research https://www.sr-research.com Tungsten Microelectrode FHC www.fh-co.com Resource availability Lead contact Further information and requests for resources should be directed and will be fulfilled by the Lead Contact, Rufin Vogels ( Rufin.vogels@kuleuven.be ). Materials availability This study did not generate new unique reagents. Data and code availability • The data are available at https://osf.io/jsm5z/ . • Original code is available at https://github.com/RajaniRaman/dynamic_body as of the date of publication. • Any additional information required to analyze the data is available from the lead contact on request. Experimental model and study participant details Subjects and surgery Three male rhesus monkeys ( Macaca mulatta; 5-6 years old) served as subjects. The monkeys were housed in pairs or triplets. The monkeys were implanted with a plastic headpost, using ceramic screws and dental cement following standard aseptic procedures and full anesthesia. They were trained to fixate continuously on a small target point for juice rewards. After the fMRI scanning, we implanted a custom-made plastic recording chamber, allowing a dorsal approach to temporal body patches. In each animal, the location of the recording chamber was guided by the fMRI body localizer. Animal care and experimental procedures complied with the regional (Flanders) and European guidelines and were approved by the local Animal Ethical Committee. Method details Stimuli Main stimuli We employed 60 achromatic videos: 20 dynamic body videos, 20 dynamic objects, and 20 dynamic faces. These stimuli were identical to those employed by in the fMRI mapping, except for their somewhat smaller size (5 instead of 6 deg), and are described in that paper. The duration of each video was 1 sec. The dynamic body videos show a rhesus monkey performing different natural actions like grasping, picking, turning, walking, threatening, throwing, wiping, and initiating jumping. The face of the monkey was blurred, making its facial expression and identity unrecognizable. The translational component of the movement of the monkey across the display, when present (e.g. during walking), was removed and the monkey’s body was centered. The 20 face videos included 12 movies of monkey faces that showed frontal face movements such as chewing, lip-smacking, fear grin, and threat. The other 8 face videos showed a moving monkey head with visible facial features, e.g. a head rotating from a frontal to a profile view. 
The 20 object movies included computer-rendered objects that depicted movement, e.g. an object with its parts making non-rigid movements, a rotating airplane, and cars with different motion patterns (e.g. rocking or “jumping”). For each category, the maximal extent of the centered moving stimuli fitted a 5 by 5 deg square for the singe-unit recordings. All movies were rendered with a 60 Hz frame rate. Bodies, faces, and objects were presented on top of a dynamic white noise background (size = 11 deg). The gray level of each noise background pixel was randomly sampled from a uniform distribution at a rate of 30 Hz. 6 In the fMRI mapping design (see below), we included also mosaic-scrambled videos, as described in . These scrambled conditions were not employed to define the body patches in the present study and were not presented in the single-unit study. 6 Silhouette stimuli For each body video, we prepared a silhouette version in which the pixels corresponding to the monkey were rendered black. Thus, the overall motion and shape of the monkey were preserved while the inner features of the body, its texture, and shading pattern were eliminated. Snapshot stimuli For each body video, we selected 10 frames, including the white noise background, that sampled different postures and views of the body during the video. Note that we sampled frames of some of the videos at an irregular temporal interval to ensure that we obtained a representative sample of the variety of postures and views present in a video and that the same posture was not presented more than once. fMRI body patch localizer Body patches activated by dynamic bodies were localized with fMRI preceding the recordings. The scanning procedure and details of the fMRI data analysis have been described in and will be summarized here briefly. 6 During scanning, the monkeys sat in a horizontal sphinx position with their heads fixed in an MRI-compatible chair. The chair was positioned in front of a translucent screen on which the stimuli were projected. Eye position was monitored (120 Hz; Iscan) and the animals were performing a fixation task during scanning for a juice reward. The monkeys were scanned with a 3T Siemens Trio scanner following standard procedures ; 48 49 , . We obtained high-resolution anatomical MRI images for each monkey in a separate session. To increase the signal-to-noise ratio 50 , we injected intravenously the contrast agent Monocrystalline Iron Oxide Nanoparticle (MION) in each daily scanning session. 51 To localize body patches, we employed a block design: the 1-sec videos (n = 20) of a category were presented in a block back to back in random order. A run consisted of 7 conditions, each repeated twice using a palindromic sequence. The 7 conditions were: (1) a baseline fixation block in which the fixation target (size = 0.2 deg) was shown together with a dynamic white noise background, (2) moving bodies, (3) moving faces, (4) moving objects, (5) mosaic-scrambled moving bodies, (6) mosaic-scrambled moving faces, and (7) mosaic-scrambled moving objects. The order of the blocks was randomized across runs with a balanced Latin square design. The maximal extent of the stimuli was 6 by 6 deg. We used only runs in which the monkeys were fixating (fixation window size 2-3 deg) at least 90% of the run (monkey O: 38, N: 39, G: 32 valid runs). 
They were analyzed with a general linear model with 7 regressors (6 stimulus conditions + baseline fixation condition), plus 9 additional head-motion and eye-movement regressors per run (see for more details: ). We employed the following contrast to identify the body patches for the single-unit recordings: moving bodies – moving objects (threshold Family-wise error (FWE) rate corrected; p < 0.05), exclusively masked with moving faces – moving objects (p < 0.001; uncorrected). The resulting t-maps of each monkey were co-registered to their anatomical MRIs so that the body patches could be identified in each monkey’s native space. 6 Single-unit recordings Single-unit recordings were performed with epoxylite-insulated tungsten microelectrodes (FHC), with an impedance of around 1 MOhm, using techniques as described previously . Briefly, the electrode was lowered with a Narishige microdrive into the brain using a stainless-steel guide tube that was fixed in a custom-made grid that was positioned within the recording chamber. After amplification and bandpass filtering, spikes of a single unit were isolated online using a custom amplitude- and time-based discriminator. 29 The position of one eye was continuously tracked using an infrared video-based tracking system (SR Research EyeLink). Stimuli were displayed on an LCD (Iiyama; 2560 x 1440 screen resolution; 1 ms GtG) at a distance of 57 cm from the monkey’s eyes. The on- and offset of the stimuli was signaled by a photodiode detecting luminance changes of a small square in the corner of the display (but invisible to the animal), placed in the same frame as the stimulus events. A Digital Signal Processing-based computer system developed in-house controlled stimulus presentation, event timing, and juice delivery while sampling the photodiode signal, vertical and horizontal eye positions, spikes, and behavioral events. Time stamps of the recorded spikes, eye positions, stimulus, and behavioral events were stored for offline analyses. In each monkey, the recording grid locations were defined so that the electrode targeted the selected body patches. Before the recordings, we performed a structural MRI in each monkey (3T Siemens Trio; Magnetization-Prepared Rapid Acquisition with Gradient Echo (MPRAGE) sequence; 0.6 mm resolution) and visualized long glass capillaries filled with the MRI opaque copper sulfate (CuSO 4 ) that were inserted into the recording chamber grid (until the dura) at 5 positions. In addition, the recording chamber was filled with a Gadoteric Acid (Dotarem) solution to visualize the borders and orientation of the recording chamber. The functional maps of each monkey were co-registered to this structural MRI, using SPM12, and the co-registration was verified by visual examination. We could visualize guide tube tracks on structural MRIs taken after the recordings, showing that we targeted within the coronal and sagittal plane the selected body patches ( Figure 1 ). The ventral-dorsal location of the recordings was verified in each recording session using the transitions of white and gray matter and the silence marking the sulcus between the banks of the STS. Tests The monkeys were performing a passive fixation task during stimulus presentation. They were rewarded with apple juice to fixate a small fixation target (size: 0.17 deg). Juice rewards were given with a fixed interval, that was titrated for each monkey, as long as the monkeys were maintaining fixation within a window of 2 by 2 deg. 
We examined the responses of each neuron in a “Search test” in which we presented the 20 body videos, 20 face videos, and 20 object videos in a pseudo-random order. Fixation was required in a period from 200 ms pre-stimulus to 200 ms post-stimulus onset, including the 1000 ms long stimulus presentation. A trial was aborted when the monkey interrupted fixation in this interval. In the pseudo-randomization procedure, all 60 videos were presented randomly interleaved in blocks of 60 unaborted trials. Aborted stimulus presentations were repeated within the same block in a subsequent randomly chosen trial. Neural responses of aborted trials were not analyzed. All neurons were tested with at least 3 unaborted trials per stimulus, with the large majority of neurons (90% and 93% of VP and DP neurons, respectively) tested with 5 unaborted presentations of each video. During the pre-stimulus period, a static white noise background pattern (size = 11 deg), randomly chosen from 10 patterns, was presented on top of a uniform gray background that filled the display. After the stimulus offset, only the gray background was present. Based on the responses obtained in the Search test, we selected a body video that elicited the highest response (“best”) and a body video to which the neuron did not respond (“worst”). These stimuli were employed in subsequent tests. In the Snapshot test, we presented the best body video, its time-reversed version, and 10 snapshots of this video. The duration of the snapshot presentation was 300 ms. The pre- and post-stimulus intervals were 500 ms, yielding an interstimulus interval of at least 1000 ms. During the pre-stimulus period, a static white noise background was presented. The 12 stimuli were shown randomly interleaved in blocks of 12 trials during fixation, using the same presentation schedule (except for the timings) as in the Search task. The test consisted of at least 10 unaborted presentations of each stimulus. In another test, we presented the silhouette versions of the best and worst body video (selected in the preceding Search test for each neuron), together with other 16 silhouette videos that are not the subject of the present paper and will not be described here. The 18 videos were presented in random order using the pre- and post-stimulus time intervals as for the Snapshot test in blocks of 18 trials each. The test consisted of at least 10 unaborted presentations of each video. We tested also 30 neurons using silhouette versions of the best video, its time-reversed video, and the 10 corresponding snapshots with the Snapshot test. Data analysis Responsiveness and selectivity We conducted for each neuron a split-plot ANOVA to select neurons that responded significantly to at least one of the body videos in the Search test. For each unaborted trial, the baseline mean firing rate was computed from −200 to 0 ms and the mean firing rate for the stimulus was computed from 60 to 1160 ms, with 0 representing stimulus onset. The baseline versus stimulus response was considered as a repeated-measure within-trial factor, and the 20 body videos as a between-trial factor. Cells with a significant main effect for the baseline versus stimulus activity factor (p < 0.05) or a significant interaction between the two factors (p < 0.05) were selected for further analysis. All neurons in the reported sample (n = 149 and 175 in VP and DP, respectively) had significant responses according to the ANOVA and an excitatory net response to at least one body stimulus. 
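The responsiveness screen described above, with baseline versus stimulus as a within-trial factor and video identity as a between-trial factor, can be prototyped as a mixed-design ANOVA. The sketch below uses pingouin's mixed_anova as a stand-in for the split-plot ANOVA; the simulated spike counts and the data layout are assumptions for illustration.

```python
# Illustrative responsiveness screen: mixed-design (split-plot) ANOVA with
# baseline-vs-stimulus as the within-trial factor and video identity as the
# between-trial factor. Simulated spike counts; pingouin's mixed_anova is used
# here as a stand-in for the split-plot ANOVA described in the text.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows, trial_id = [], 0
for video in range(20):                  # 20 body videos (between-trial factor)
    for _ in range(5):                   # 5 unaborted trials per video
        base_spikes = rng.poisson(1)     # spikes in the pre-stimulus window
        stim_spikes = rng.poisson(6 + (video % 4) * 3)  # spikes in the stimulus window
        rows.append({"trial": trial_id, "video": video,
                     "period": "baseline", "rate": base_spikes / 0.2})
        rows.append({"trial": trial_id, "video": video,
                     "period": "stimulus", "rate": stim_spikes / 1.1})
        trial_id += 1

df = pd.DataFrame(rows)
aov = pg.mixed_anova(data=df, dv="rate", within="period",
                     subject="trial", between="video")
print(aov[["Source", "F", "p-unc"]])
# Keep the neuron if the 'period' main effect or the interaction is significant
# (p < 0.05) and at least one body video yields an excitatory net response.
```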
We evaluated the significance of the responses of each neuron tested in the Snapshot test using a split-plot ANOVA. We computed the mean baseline and stimulus-induced firing rate for each unaborted trial of the 12 conditions. The baseline time window ranged from -200 to 0 ms, whereas for the stimulus-induced response, we employed a window of 60 to 1160 ms for the 2 videos and 60 to 460 ms for the 10 snapshot presentations. We employed the same ANOVA design and selection criteria as described above for the Search test. The responses in the Snapshot test were analyzed further only for the neurons that showed a significant and excitatory response in that test (n = 133 and 146 for VP and DP, respectively; for 8 and 22 neurons in VP and DP, respectively, the Snapshot test employed silhouettes). Body-category selectivity index For each responsive neuron, we computed the Body-category Selectivity Index (BSI) as follows: $BSI = (R_b - R_{nb}) / (|R_b| + |R_{nb}|)$, with $R_{nb} = (R_f + R_o)/2$, where $R_b$, $R_f$, and $R_o$ are the mean net firing rates to the body, face, and object videos, respectively, obtained in the Search test. The mean net firing rate was computed by subtracting the baseline firing rate from the firing rate in the response window (same windows as for the ANOVA; see above), averaged across trials per stimulus. In some analyses, we selected only neurons with a BSI > 0.33, i.e., a twofold greater mean response to bodies compared to the mean response to objects and faces. In another analysis ( Figure S4 D 1-2 ), we equated the frequency distribution of DP and VP neurons as follows. First, a histogram of the distribution of BSI (bin width = 0.07; 20 bins from minimum to maximum of the BSI) was created for both VP and DP. Then, the minimum cell count within each bin was determined by comparing the distributions of VP and DP. To equate the distributions, the surplus cells present in either VP or DP were then eliminated randomly, ensuring that the cell count matched the minimum count for that specific bin. This process resulted in populations of cells having BSI-equated distributions for VP and DP ( Figure S4 D 1 ). Sparseness The Sparseness of the response to the 20 body videos of each neuron was calculated as: $\mathrm{Sparseness} = \left(1 - \langle r_i \rangle^2 / \langle r_i^2 \rangle\right) / \left(1 - 1/20\right)$, where $\langle \cdot \rangle$ denotes the average across the 20 body videos and $r_i$ is the net response to the $i$th body stimulus. The net firing rate was computed by subtracting the baseline firing rate from the firing rate in the response window (same windows as for the ANOVA; see above), averaged across trials per stimulus. Negative net responses were clipped to zero, as described before [52,53]. The Sparseness can range from 0 (equal response to the 20 body videos) to 1 (response to a single body video). In some analyses, we split the neurons of each region into two groups: neurons with a Sparseness below (low-sparseness neurons) and above (high-sparseness neurons) the median Sparseness of the combined population of neurons of both regions (VP + DP). Selectivity for body videos and snapshots To assess the (within-category) body-video selectivity of the responsive neurons in the Search test, we ranked, for each neuron tested with 5 unaborted trials per video, the body videos based on the net responses averaged across 4 trials per video (employing the same analysis windows as for the ANOVA). Then, the net responses of the left-out trial were stored as a function of the stimulus rank based on the four trials.
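A minimal numerical sketch of the two indices defined above. The inputs are assumed to be baseline-subtracted mean net firing rates per stimulus; all names and example values are illustrative:

```python
import numpy as np

def body_selectivity_index(net_body, net_face, net_object):
    """BSI = (Rb - Rnb) / (|Rb| + |Rnb|), with Rnb the mean of the face and object rates."""
    r_b = np.mean(net_body)                       # mean net rate across the 20 body videos
    r_nb = (np.mean(net_face) + np.mean(net_object)) / 2.0
    return (r_b - r_nb) / (abs(r_b) + abs(r_nb))

def sparseness(net_body):
    """Sparseness of the responses to the 20 body videos; negative net responses
    are clipped to zero before computing the index."""
    r = np.clip(np.asarray(net_body, dtype=float), 0.0, None)
    if np.all(r == 0):
        return 0.0                                # convention for a silent neuron (assumption)
    s = 1.0 - (r.mean() ** 2) / np.mean(r ** 2)
    return s / (1.0 - 1.0 / len(r))

# Example with made-up net firing rates (spikes/s) for one neuron
rng = np.random.default_rng(1)
bsi = body_selectivity_index(rng.normal(12, 4, 20), rng.normal(3, 2, 20), rng.normal(2, 2, 20))
sp = sparseness(rng.normal(8, 6, 20))
print(round(bsi, 2), round(sp, 2))
```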
This was done for each of the 5 possible groups of 4 trials, and the net responses for the left-out trials were averaged as a function of stimulus rank. We assessed the significance of the difference among the cross-validated ranked responses using a Friedman ANOVA for the DP and VP samples of neurons separately. The mean responses of the left-out trials were averaged across neurons, and a 95% confidence interval of the averaged response was computed by bootstrapping neurons (1000 resamplings; percentile method). The same leave-one-trial-out cross-validation procedure was employed to assess the selectivity of the neurons for the 10 snapshots presented in the Snapshot test (using 10 times the responses averaged across 9 trials per snapshot for ranking). This was done only for neurons that responded significantly to a snapshot (split-plot ANOVA; 10 stimulus conditions; same windows and criteria as above). Snapshot Selectivity Index We compared the peak firing rate to the individual snapshots with the peak firing rate during the presentation of the body video that included these snapshots. Because the responses of the neurons could vary strongly during the video (e.g., Figure 3 ), averaging a response across the full video duration can underestimate the response to specific video segments. To avoid any such underestimation of the neural response to the video, we used the peak firing rate instead of the average firing rate as the response measure when comparing the responses to the video and snapshot presentations (same procedure as [15,16]). To compute the peak firing rates, we first convolved the spiking activity, averaged across trials for the same stimulus, with a Gaussian filter with a standard deviation of 25 ms. Then, we computed the net firing rate of the thus smoothed response to the stimulus by subtracting the smoothed baseline response. The Snapshot Selectivity Index (SSI) was computed as: $SSI = (PR_{vs} - PR_{ss}) / (|PR_{vs}| + |PR_{ss}|)$, where $PR_{vs}$ and $PR_{ss}$ are the peak firing rate of the smoothed response to the body video and the maximum peak firing rate across the ten snapshots, respectively. The response windows used to find the peak firing rate for the video and snapshot stimuli were the same as those employed for the ANOVA-based significance testing. Video Reversal Index To quantify a cell’s sensitivity to the difference between the original video and its time-reversed version, we computed $VS = (R_{ov} - R_{rv}) / (|R_{ov}| + |R_{rv}|)$, where $R_{ov}$ and $R_{rv}$ are the mean net responses to the original and time-reversed video, respectively, using the same analysis windows to compute the mean firing rate as for the ANOVA significance testing. Since the neurons differed in their preference for one of the two videos, we defined the Video Reversal Index as $VRI = |VS|$. A VRI value of 0 corresponds to no preference for the original over the time-reversed video, while a value of 1 indicates a response to only one of the two videos. The difference between VP and DP was also significant for VS (VP: median = -0.03 (1st quartile = -0.21; 3rd quartile = 0.20); DP: 0.17 (-0.13; 0.60); Wilcoxon rank sum test, p = 0.002). Correlation between responses to the snapshots and the corresponding frames during the video We computed the correlation between the response to each of the 10 snapshots and the corresponding frames when these were presented in the video. We computed the net average firing rate of each neuron to each snapshot and corresponding frame in the video in a 140 ms window.
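A minimal sketch of the leave-one-trial-out ranking procedure described above, assuming a (20 videos x 5 trials) matrix of net responses for one neuron. Names and the synthetic data are illustrative:

```python
import numpy as np

def cross_validated_rank_curve(net_responses):
    """net_responses: array (n_stimuli, n_trials) of net firing rates.
    For every left-out trial, stimuli are ranked by the mean of the remaining
    trials; the left-out responses are then averaged per rank across folds."""
    net_responses = np.asarray(net_responses, dtype=float)
    n_stim, n_trials = net_responses.shape
    per_fold = np.zeros((n_trials, n_stim))
    for left_out in range(n_trials):
        train = np.delete(net_responses, left_out, axis=1).mean(axis=1)
        order = np.argsort(-train)                 # rank 1 = best stimulus on the training trials
        per_fold[left_out] = net_responses[order, left_out]
    return per_fold.mean(axis=0)                   # cross-validated response per stimulus rank

# Example with synthetic data: a selective neuron plus trial-to-trial noise
rng = np.random.default_rng(2)
tuning = np.sort(rng.uniform(0, 20, 20))[::-1]
responses = tuning[:, None] + rng.normal(0, 3, (20, 5))
print(np.round(cross_validated_rank_curve(responses), 1))
```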
We determined when each neuron had its highest firing rate in response to the snapshots and used that to set the timing of the 140 ms window. To obtain this estimate of the response latency for each neuron, we averaged the net firing rate in bins of 20 ms across all 10 snapshots and identified the bin with the highest firing rate. We then used a 140 ms window centered on that bin to capture the neuron's response. If the window started earlier than 60 ms after the snapshot onset, the beginning of the 140 ms long window was set to 60 ms to avoid including spikes that could not have been evoked by the snapshot or frame. Using this response window, defined per neuron, we extracted the net firing rates for the individual snapshots and the corresponding frames in the video. The Pearson correlation coefficient between the vectors (of size 10) of the responses for the snapshots and the corresponding video frames was then computed for each neuron. In some cases, a portion of the response window for a video frame fell outside of the 1160 ms long interval of responses available for the video. The responses for the snapshots/frames of those cases were removed from the vector before computing the correlation. The correlation between the responses to the snapshots and the corresponding frames during the video was computed only for those neurons that showed a statistically significant response to at least one of the 10 snapshots, which was assessed with a split-plot ANOVA. CNN modeling: Networks trained with static images We employed four pretrained CNNs based on three distinct architectures (AlexNet, VGG16, and ResNet50), the fourth being an additional ResNet50 instance trained on Stylized ImageNet (SIN), called ResNet50_SIN. As a control, we also included an untrained version of these networks. The weights for AlexNet (AlexNet_Weights.IMAGENET1K_V1), VGG-16 (VGG16_Weights.IMAGENET1K_V1), ResNet50 (ResNet50_Weights.IMAGENET1K_V1), and the untrained version were imported from TorchVision in PyTorch. The random weights in the untrained networks of PyTorch were drawn from a Gaussian distribution, except for the weights of AlexNet, which were drawn from a uniform distribution. To examine the responses of CNN units, we presented frames from the body videos (20 x 60 frames) with a grey background (RGB value = 128) to the networks. We did not include the white noise background, since we wanted to compute the activations to the body images per se. Additionally, we obtained the responses to a version of the stimuli in which the monkey's body was reduced to a silhouette. The frames were pre-processed by rescaling, subtracting the mean, and dividing by the standard deviation of the ImageNet data. For the AlexNet models, we present the data for all seven ReLU layers, whereas for VGG16, we present the data for ReLU layers 1.2, 2.2, 3.3, 4.3, 5.3, 6, and 7, with layers 6 and 7 being the fully connected layers. For the ResNet50 architectures, we obtained responses from the ReLU layers (relu, layer1.2.relu_2, layer2.3.relu_2, layer3.5.relu_2, layer4.2.relu_2) available at the end of each of the five "stages", which we label in the Results as layers 1 to 5. We removed the units of a layer that did not respond to any of the 1200 frames. Plots of the number of responding units (features; in log units) in each layer of the networks are presented in Figure S6 .
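A minimal sketch of extracting unit activations at the ResNet50 layers named above using torchvision's feature-extraction utilities. The node names are taken from the text; the placeholder input tensor and the exact extraction API are assumptions, not the authors' code:

```python
import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

# ImageNet-pretrained ResNet50, one of the static-image models named above
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# Graph-node names of the stage-final ReLUs listed in the text
layer_names = ["relu", "layer1.2.relu_2", "layer2.3.relu_2",
               "layer3.5.relu_2", "layer4.2.relu_2"]
extractor = create_feature_extractor(net, return_nodes=layer_names)

# frames: (n_frames, 3, H, W), already resized and ImageNet-normalized;
# a random tensor stands in here for the 1200 preprocessed video frames
frames = torch.randn(8, 3, 224, 224)
with torch.no_grad():
    feats = {k: v.flatten(start_dim=1) for k, v in extractor(frames).items()}

# Drop units that do not respond to any frame, as described above
responding = {k: v[:, v.abs().sum(dim=0) > 0] for k, v in feats.items()}
for k, v in responding.items():
    print(k, tuple(v.shape))
```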
CNN modeling: Networks trained with human action videos We also included a pre-trained spatiotemporal network, X3D (X3D-M; available at https://github.com/facebookresearch/SlowFast/blob/main/MODEL_ZOO.md ) [38]. The X3D model was pretrained on the Kinetics-400 dataset [54], which encompasses videos featuring 400 distinct human action classes, designed for action classification tasks. This spatiotemporal 3D network (1 temporal and 2 spatial dimensions) expands in the temporal dimension (frame sequences), while also encompassing optimization of spatial, 2D hyper-parameters. Notably, X3D evolves from the foundational 2D structure of ResNet, roughly retaining its stages. The layers 1 through 5 ( Figures S5 and S6 ) correspond to the 'stages' 1 through 5, mirroring the organization of the ResNet architecture. The activations of responding units were extracted as described above for the other CNNs. Plots of the number of responding units (features; in log units) in each layer of the network are shown in Figure S6 . Velocity estimation We estimated the pixel-wise velocities of each body video (60 frames of size 210 x 210 pixels) using the Lucas-Kanade derivative of Gaussian filter optic flow algorithm implemented by the opticalFlowLKDOG Matlab function, with the same parameter settings as in [6]. Because we aimed to compute the velocities of the bodies, we removed the white noise background and replaced it with a gray background. We obtained a pixel-wise map of the x and y components of the velocity vector for 58 frames of each video, resulting in a tensor (58 x 210 x 210 x 2) per video. For the first two frames, the algorithm does not produce a valid optic-flow measure, explaining why we had measures for 58 frames. Pairwise between-video distances Distance measure: Lock-step Euclidean distance To obtain pairwise between-video trajectory distances in an N-dimensional neural, velocity, and CNN feature space, we calculated the lock-step Euclidean distance [55] between every pair of body videos $V_i$ and $V_j$ as: $L(V_i, V_j) = \sum_{m=1}^{M} d_2^2(v_i^m, v_j^m)$, $i \neq j$, $v \in \mathbb{R}^N$, where $d_2(v_i^m, v_j^m) = \sqrt{\sum_{n=1}^{N} (v_i^m(n) - v_j^m(n))^2}$ is the Euclidean distance between body videos $V_i$ and $V_j$ in an $N$-dimensional space at point $m \in \{1, 2, \ldots, M\}$ of the temporal trajectory. $N$ corresponds to the number of cells, pixels (multiplied by 2), and units in a layer for the neural, optic-flow, and CNN feature distances, respectively (see Table S4 ). $M$ corresponds to the number of bins and frames in the case of the neural and velocity/static feature spaces, respectively (see Table S4 ). In the case of the neural trajectories, we considered the binned responses in an interval ranging from 60 to 1160 ms post-stimulus onset. In simpler terms, we created an M x N matrix of responses/values for each video, which we then flattened into a vector. We computed the Euclidean distance between the vectors for each video pair, resulting in 190 distance values (20 x 19 / 2) for the 20 body videos. To calculate the neural distances, the net responses of each neuron to the videos were normalized by its maximum peak net response (bin width = 20 ms) across the body videos. Distance measure: Chi-square distance between velocity distributions For each of the 58 frames for which we had velocity vectors, we created a 2-dimensional frequency distribution of speed and direction, with a bin width of 0.5 (arbitrary units) for the speed axis and π⁄8 for the direction axis.
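A minimal sketch of the lock-step trajectory distance described above, assuming each video has already been summarized as an M x N matrix (time points x features). Array shapes and names are illustrative:

```python
import numpy as np
from itertools import combinations

def lockstep_distance(traj_a, traj_b):
    """Lock-step distance between two trajectories of shape (M, N): the sum over
    time points of the squared Euclidean distance between corresponding points."""
    diff = np.asarray(traj_a, dtype=float) - np.asarray(traj_b, dtype=float)
    return np.sum(diff ** 2)

def pairwise_video_distances(trajectories):
    """trajectories: array (n_videos, M, N). Returns the condensed vector of
    n_videos * (n_videos - 1) / 2 pairwise distances (190 for 20 videos)."""
    return np.array([lockstep_distance(trajectories[i], trajectories[j])
                     for i, j in combinations(range(len(trajectories)), 2)])

# Example: 20 synthetic "videos", 55 time bins, 100 features each
rng = np.random.default_rng(3)
trajs = rng.normal(size=(20, 55, 100))
d = pairwise_video_distances(trajs)
print(d.shape)   # (190,)
```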
The speed axis ranged from a low-speed threshold of 0.2 for the data reported in the Results (see Figure S3 B for results with other thresholds) to 7.7, which is the maximum speed observed among all frames of all videos, while the direction axis ranged from -π to π. Once we had created the two-dimensional frequency distribution for each frame, we flattened it into a vector and concatenated the vectors of all frames (in order) of a video to create a grand vector of size K = 13050 (15 speed bins x 15 direction bins x 58 frames). This grand vector represents a grand frequency distribution for the entire video while capturing the temporal pattern. We then computed the chi-square distances for all 190 pairs of videos $V_i$ and $V_j$ as follows: $\chi^2(V_i, V_j) = \frac{1}{2} \sum_{k=1}^{K} (V_{ik} - V_{jk})^2 / (V_{ik} + V_{jk})$. Best-worst preference index For each neuron that was tested with the silhouette versions of the videos, we computed a best-worst preference index (BWPI): $BWPI = (R_{best} - R_{worst}) / (|R_{best}| + |R_{worst}|)$, where $R_{best}$ and $R_{worst}$ are the net average firing rates for the silhouette versions of the original stimuli that elicited the best and the worst response in the Search test, respectively. The same analysis windows as for the ANOVA were used. Statistical analysis Significance tests for the correlation analyses of pairwise neural and model distances We permuted stimulus labels to determine the significance of the correlation coefficient between pairwise neural and velocity / CNN feature distances. To do so, we created a distance matrix of the neural pairwise distances. Next, we permuted the labels of the matrix by randomly reordering the rows and columns, i.e., the stimulus labels. We then computed the correlation between the permuted neural distance matrix and the pairwise distances corresponding to the velocity-based or CNN features, using the upper off-diagonal values of the matrix. We repeated this process of permutation and correlation computation 1000 times to generate a null distribution of correlation coefficients. From this null distribution, we obtained the percentile (Pc) of the observed correlation coefficient. We used the Pc to compute the two-tailed p-value as [31]: $p = 2 \times (100 - Pc)/100$ if $Pc \geq 50$, and $p = 2 \times Pc/100$ if $Pc < 50$. If the p-value was less than 0.05, we rejected the null hypothesis. To assess the significance of the difference between the correlations of two distance matrices obtained for DP and VP, we utilized bootstrapping by resampling with replacement the cells of VP and DP. We computed the correlation coefficient between the neural distances for the resampled cells and the velocity-based or static CNN feature distances, and subtracted the correlation coefficient obtained for the resampled VP from that of the resampled DP. We repeated this process 1000 times to obtain a distribution of differences in correlation values between VP and DP. We then computed the percentile of a value of 0 in this distribution and computed the p-value (two-tailed) based on that percentile with the same method as described above. Commonality analysis We utilized a multiple regression-based commonality analysis [56] to determine whether motion and static features account for a shared portion of the response variance of VP (and DP) neurons, or instead, whether they contribute uniquely to the neural response variance. We obtained the explained variance $R^2_{M+F}$ through multiple regression of the neural distances on the velocity and static feature distances as predictors.
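A minimal sketch of the stimulus-label permutation test for the correlation between two pairwise distance matrices, as described above. The matrices, the 1000-permutation count, and the percentile-based two-tailed p-value follow the text; all names and the synthetic data are illustrative:

```python
import numpy as np
from scipy.stats import pearsonr

def permutation_p_value(neural_dist, model_dist, n_perm=1000, seed=0):
    """neural_dist, model_dist: symmetric (n_videos, n_videos) distance matrices.
    Correlates their upper triangles, then builds a null distribution by jointly
    permuting the rows and columns (stimulus labels) of the neural matrix."""
    rng = np.random.default_rng(seed)
    n = neural_dist.shape[0]
    iu = np.triu_indices(n, k=1)
    observed = pearsonr(neural_dist[iu], model_dist[iu])[0]

    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(n)
        shuffled = neural_dist[np.ix_(perm, perm)]
        null[i] = pearsonr(shuffled[iu], model_dist[iu])[0]

    pc = 100.0 * np.mean(null < observed)          # percentile of the observed correlation
    p = 2 * (100 - pc) / 100 if pc >= 50 else 2 * pc / 100
    return observed, p

# Example with two weakly related synthetic distance matrices
rng = np.random.default_rng(4)
a = rng.random((20, 20)); a = (a + a.T) / 2; np.fill_diagonal(a, 0)
b = a + rng.normal(0, 0.3, a.shape); b = (b + b.T) / 2; np.fill_diagonal(b, 0)
print(permutation_p_value(a, b))
```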
To isolate the unique contributions of motion and static features, we subtracted from $R^2_{M+F}$ the explained variance $R^2_F$, corresponding to static features, obtained by regression using the feature distances as a single predictor, and the explained variance $R^2_M$, corresponding to motion, respectively. The common explained variance $R^2_C$ was then calculated as: $R^2_C = (R^2_M + R^2_F) - R^2_{M+F}$. To assess the significance of $R^2_{M+F}$, we computed the p-value (one-tailed) as $p = (100 - Pc)/100$, where Pc is the percentile of the observed $R^2_{M+F}$ relative to the null distribution obtained using stimulus-label permutation of the neural distances as described above. Significant $R^2_{M+F}$ values are indicated by stars in Figure 7 . Acknowledgments The authors thank C. Fransen, I. Puttemans, A. Hermans, W. Depuydt, C. Ulens, S. Verstraeten, K. Lodewyckx, J. Helin, and M. De Paep for technical and administrative support. This research was supported by Fonds Wetenschappelijk Onderzoek (FWO) Vlaanderen ( G0E0220N ), KU Leuven grant C14/21/111 , and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 856495 ). Author contributions Conceptualization, R.V., M.G., R.R., and A.B.; methodology, R.V., A.B., R.R., and N.T.; investigation, R.V., A.B., R.R., and G.G.N.; writing – original draft, R.V. and R.R.; writing – review & editing, R.V., R.R., A.B., and M.G.; funding acquisition, R.V. and M.G.; resources, R.V.; supervision, R.V. Declaration of interests The authors declare no competing interests. Supplemental information Supplemental information can be found online at https://doi.org/10.1016/j.celrep.2023.113438 . Supplemental information Document S1. Figures S1–S6 and Tables S1–S4 Document S2. Article plus supplemental information
REFERENCES:
1. DEGELDER B (2009)
2. GIESE M (2003)
3. VOGELS R (2022)
4. JELLEMA T (2006)
5. BAO P (2020)
6. BOGNAR A (2023)
7. PITCHER D (2021)
8. ORAM M (1996)
9. ORAM M (1994)
10. WACHSMUTH E (1994)
11. BARRACLOUGH N (2006)
12. JELLEMA T (2000)
13. JELLEMA T (2004)
14. JELLEMA T (2003)
15. VANGENEUGDEN J (2011)
16. VANGENEUGDEN J (2009)
17. BRUCE C (1981)
18. ANDERSON K (1999)
19. BAYLIS G (1987)
20. SINGER J (2010)
21. YANG Z (2021)
22. BARRACLOUGH N (2009)
23. CADIEU C (2014)
24. KAR K (2019)
25. PONCE C (2019)
26. KALFAS I (2018)
27. KALFAS I (2017)
28. RAMAN R (2020)
29. POPIVANOV I (2014)
30. KUMAR S (2019)
31. NILI H (2014)
32. KRIZHEVSKY A (2017)
33. SIMONYAN K (2014)
34. HE K (2015)
35. GEIRHOS R (2018)
36. RUSSAKOVSKY O (2015)
37. POPIVANOV I (2015)
38. FEICHTENHOFER C (2020)
39. YANG Z (2023)
40. RUSS B (2023)
41. MYSORE S (2008)
42. SARY G (1993)
43. BIGELOW A (2023)
44. VOGELS R (2016)
45. MEYER T (2011)
46. KAPOSVARI P (2018)
47. KAR K (2021)
48. VANDUFFEL W (2001)
49. EKSTROM L (2008)
50. KOLSTER H (2009)
51. LEITE F (2002)
52. VOGELS R (1999)
53. ROLLS E (1995)
54. KAY W (2017)
55. TAO Y (2021)
56. WARNE R (2011)
|
10.1016_S2095-3119(19)62610-9.txt
|
TITLE: Evaluation of the biocontrol potential of Aspergillus welwitschiae against the root-knot nematode Meloidogyne graminicola in rice (Oryza sativa L.)
AUTHORS:
- LIU, Ying
- DING, Zhong
- PENG, De-liang
- LIU, Shi-ming
- KONG, Ling-an
- PENG, Huan
- XIANG, Chao
- LI, Zhong-cai
- HUANG, Wen-kun
ABSTRACT:
The root-knot nematode Meloidogyne graminicola is considered one of the most devastating pests in rice-producing areas, and nematicides are neither ecofriendly nor cost effective. More acceptable biological agents are required for controlling this destructive pathogen. In this study, the biocontrol potential of Aspergillus welwitschiae AW2017 was investigated in laboratory and greenhouse experiments. The in vitro ovicidal and larvicidal activities of A. welwitschiae metabolites were tested on M. graminicola in laboratory experiments. The effect of A. welwitschiae on the attraction of M. graminicola to rice and the infection of rice by M. graminicola was evaluated in a greenhouse. The bioagent AW2017 displayed good nematicidal potential via its ovicidal and larvicidal action. The best larvicidal activity was observed at a concentration of 5×AW2017, which caused an 86.2% mortality rate at 48 h post inoculation. The highest ovicidal activity was recorded at a concentration of 5×AW2017, which resulted in an approximately 47.3% reduction in egg hatching after 8 d compared to the control. Under greenhouse conditions, the application of A. welwitschiae significantly reduced the root galls and nematodes in rice roots compared to the control. At a concentration of 5×AW2017, juveniles and root galls in rice roots at 14 d post inoculation (dpi) were reduced by 24.5 and 40.5%, respectively. In addition, the attraction of M. graminicola to rice roots was significantly decreased in the AW2017 treatment, and the development of nematodes in the AW2017-treated plants was slightly delayed compared with that in the PDB-treated control plants. The results indicate that A. welwitschiae is a potential biological control agent against M. graminicola in rice.
BODY: No body content available
|
10.1016_j.toxrep.2022.07.011.txt
|
TITLE: Ensiling process and pomegranate peel extract as a natural additive in potential prevention of fungal and mycotoxin contamination in silage
AUTHORS:
- Sadhasivam, Sudharsan
- Marshi, Rula
- Barda, Omer
- Zakin, Varda
- Britzi, Malka
- Gamliel, Abraham
- Sionov, Edward
ABSTRACT:
A study was conducted on six animal feed centers in Israel where fungal and mycotoxin presence was examined in maize and wheat silages. Fumonisin mycotoxins FB1 and FB2 were present in every maize silage sample analyzed. Interestingly, no correlation was found between the occurrence of specific mycotoxins and the presence of the fungal species that might produce them in maize and wheat silages. We further investigated the effect of pomegranate peel extract (PPE) on Fusarium infection and fumonisin biosynthesis in laboratory-prepared maize silage. PPE had an inhibitory effect on FB1 and FB2 biosynthesis by Fusarium proliferatum, which resulted in up to 90 % reduction of fumonisin production in silage samples compared to untreated controls. This finding was supported by qRT-PCR analysis, showing downregulation of key genes involved in the fumonisin-biosynthesis pathway under PPE treatment. Our results present promising new options for the use of natural compounds that may help reduce fungal and mycotoxin contamination in agricultural foodstuff, and potentially replace traditionally used synthetic chemicals.
BODY:
1 Introduction Silage, which is one of the main animal feed sources for dairy cattle, can be contaminated with mycotoxins that are produced as secondary metabolites by filamentous fungi belonging mainly to the genera Aspergillus , Penicillium and Fusarium . When ingested, mycotoxins can have severe acute and chronic toxic effects, presenting a serious risk to animal and human health. Silage is stored under anaerobic conditions and acidification of the ensiled forages by lactic acid-producing bacteria. Most mycotoxigenic fungi are unable to grow in the acidic silage environment under low oxygen levels [21,37] . However, mycotoxins produced by mycotoxigenic fungi in the field may remain unchanged during the silage process, due to their high stability. For example, the concentration of zearalenone (ZEN) remained almost unchanged in maize silage over a 12-week period, while its main producer in the field, Fusarium culmorum , could no longer be detected in the silage after 11 days, suggesting that the ZEN had been produced before ensiling [19] . The use of various antifungal compounds is expected to prevent the growth of mycotoxigenic fungi during ensiling and limit mycotoxin contamination formed during this time. A number of studies have shown that the use of biocontrol agents can decrease aerobic spoilage and reduce or prevent fungal contamination in silage [16,17] . Maize silage inoculated with Lactobacillus buchneri and Pediococcus pentosaceus had lower yeast and mold counts than untreated silage [38] . That study also demonstrated that treatment with potassium sorbate reduces fungal contamination and aerobic spoilage in maize silage. However, application of such additives had no effect on the concentrations of Fusarium producing toxins in maize silage [16,18,38] . Since the current strategies are not effective enough to eliminate or reduce mycotoxin contamination to safe threshold levels, there is a need for alternative, environmentally friendly methods to control these toxic substances during ensiling. Plants produce a large variety of compounds that are responsible for a wide range of biological and pharmacological properties, including antimicrobial activities [41] . Some plant-derived compounds have been reported to exhibit direct antifungal activity in treated plant hosts [24] . Different parts of the pomegranate ( Punica granatum L.) fruit, especially the peel, are considered a rich source of polyphenols, such as ellagitannins, mainly including α and β isomers of punicalagin, gallic acid, ellagic acid, and its glycosylated derivatives, anthocyanins, proteins, bioactive peptides and polysaccharides [2,28,36] . Several in-vitro and in-vivo studies have reported that pomegranate peel extracts (PPEs) had a higher content of total polyphenols, as well as strong antioxidant, antitumor, antibacterial and antifungal activities [1,8,13,26,31,35] . Moreover, our recent work demonstrated that beyond its antifungal activity, PPE has the ability to inhibit aflatoxin production by Aspergillus flavus [33] . In that study, PPE inhibited aflatoxin B 1 production without affecting the fungal growth, suggesting that the extract may affect specific genes encoding enzymes involved in the aflatoxin-biosynthesis pathway. These findings led us to explore PPE as a potential inhibitor of mycotoxin production in agricultural commodities. 
Here, the mycotoxins fumonisin B 1 and B 2 (FB 1 and FB 2 , respectively), which are produced by several Fusarium species, were detected in all randomly selected maize silage samples collected from six animal feed centers across Israel. This indicated that mycotoxins could persist, even in well-preserved silage. Furthermore, PPE inhibited FB 1 and FB 2 produced by Fusarium proliferatum during maize ensiling on a laboratory-scale, suggesting this extract’s potential to prevent mycotoxin contamination of animal feed. 2 Materials and methods 2.1 Sample collection A total of 320 samples of silage for dairy cattle, 160 of wheat and 160 of maize, were collected from six animal feed centers located in northern, central and southern districts of Israel over a 2-year period (2018–2019). The samples were collected when the silages were approximately 6 months old. The sample aliquots (~ 500 g) were taken from silage stacks, in an area at least 1 m distant from the sides, top and bottom, using a silage drill approximately 1 m behind the cutting face of the silage stack. Upon collection into sterile plastic bags, the samples were kept cool during transport to the laboratory. Each sample was divided into two subsamples: a first subsample of 100 g was analyzed for fungal colony counts immediately after arrival at the laboratory, and the remaining sample was stored at − 20 ℃ until further mycotoxin analysis. An aqueous extract of the silage sample was prepared for pH measurement with a pH electrode. 2.2 Culturing and identification of fungal species Silage samples (100 g wet weight) were transferred to flasks containing 400 ml peptone water (Difco; Becton Dickinson, Sparks, MD, USA). The suspension was transferred to a plastic bag and homogenized in a stomacher blender (Interscience, Saint-Nom-la-Bretèche, France) for 2 min. Ten-fold dilutions were prepared in peptone water and samples (100 µl) were plated on potato dextrose agar (PDA) plates supplemented with chloramphenicol (20 µg/ml) and dichloran (2 µg/ml) to prevent bacterial contamination and growth of Mucorales fungi, respectively. Molds were enumerated using the standard plate count method, following incubation at 28 °C for 5 days, and the results were expressed in number of colony-forming units per gram of silage sample (CFU/g). Individual colonies were transferred singly to PDA plates to obtain a pure culture for further identification of fungal species by morphological analysis and sequencing of ribosomal DNA internal transcribed spacer (ITS). Fungal DNA was extracted from lyophilized mycelium using a CTAB-based method as previously described [32] . The yield and quality of DNA were assessed using a NanoDrop One spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA). The ITS rRNA gene regions in fungi were amplified by PCR and sequenced using universal primers ITS1/ITS4 ( Table S1 ). The sequence data were analyzed and compared using BLAST against the NCBI database ( https://blast.ncbi.nlm.nih.gov/Blast.cgi ). 2.3 Mycotoxin analysis 2.3.1 Preparation of mycotoxin standard solutions Individual stock standard solutions (1 mg/ml) of aflatoxins B 1 , B 2 , G 1 , and G 2 (AFB 1 , AFB 2 , AFG 1 , AFG 2 , respectively), ochratoxin A (OTA), patulin (PAT), gliotoxin (GLIO), zearalenone (ZEN), deoxynivalenol (DON), FB 1 and FB 2 , T-2 toxin (T-2) and HT-2 toxin (HT-2) (Fermentek, Israel) were prepared in methanol. 
Mixed multi-mycotoxin standard solutions at three concentration levels were prepared by dilution of the single analyte stock standard solutions in methanol. All solutions were stored at − 20 ℃ until use. 2.3.2 Mycotoxin analysis by high-performance liquid chromatography coupled with tandem mass spectrometry (LC–MS/MS) Mycotoxin concentrations were analyzed in 82 randomly selected samples, consisting of 38 maize and 44 wheat silage samples. The samples were freeze-dried and ground in a laboratory grinder. Each ground sample (2.5 g) was mixed with 7.5 ml distilled water and extracted with 15 ml of extraction solvent mixture (acetonitrile/ethyl acetate/acetic acid, 10:5:0.15, v/v). After agitation on an orbital shaker for 30 min, the samples were centrifuged at 2150 g for 10 min. A 1-ml aliquot of supernatant was transferred to a 15-ml glass tube and evaporated under a stream of nitrogen gas at 50 °C. The dried residue was reconstituted with 0.3 ml of a 1:1 (v/v) methanol/water mixture, and an aliquot was filtered through a 0.22-µm PTFE filter into a glass injection vial and stored at − 20 °C prior to analysis. The samples were analyzed by LC–MS/MS as described previously [32] . LC separation of 2 µl injected sample was performed on a Nexera X2 UHPLC system (Shimadzu, Tokyo, Japan) with a 100 × 2.1 mm, 2.6 µm Kinetex C 18 column, (Phenomenex, Torrance, CA, USA). The column temperature was 40 ℃. The mobile phase were (A) ammonium acetate 2.5 mM acidified with 0.1 % acetic acid, and (B) methanol. The concentration of solution B was raised gradually from 5 % to 95 % within 8 min, then brought back to the initial conditions at 9 min, and allowed to stabilize for 3 min. A flow rate of 0.4 ml/min was used. The LC system was coupled with an API 6500 hybrid triple quadrupole/linear ion trap mass spectrometer (Sciex, Concord, ON, Canada) equipped with a turbo-ion electrospray ion (ESI) source. The mass spectrometer was operated in the multiple reaction monitoring (MRM) in both positive and negative mode within a single run. Positive polarity was applied for all analytes except for DON, ZEN, PAT and GLIO. Analyte specific detection parameters are specified in Table S2 (A, B) . Source temperature was set at 350 ℃, ion-spray voltages at − 4500 V (negative mode) and 5000 V (positive mode), curtain gas at 35 arbitrary units (au), nebulizer gas at 60 au, and turbo gas at 40 au. 2.3.3 Validation of analytical parameters Method performance and validation parameters were determined according to European Commission (EC) regulation no. 401/2006 [10] . Three wheat and three maize silage samples that were not contaminated, or only slightly contaminated with the major mycotoxins were spiked with multi-mycotoxin standard solutions at three concentrations (calibration levels are specified in Table S2 ). Extraction and analysis were performed as described in Section 2.3.2 . The spiking experiments were performed in triplicate at three different time points. Validation parameters, such as precision, accuracy, limit of detection (LOD), limit of quantification (LOQ) and specificity, were determined. 2.4 Laboratory silage studies 2.4.1 Preparation of PPE Pomegranate ( Punica granatum L.) variety Wonderful fruit were purchased from local markets. The fruit peels were freeze-dried and milled into a fine powder using a laboratory grinder. The dried powder (100 g) was extracted with 500 ml of 80 % methanol for 72 h at room temperature in the dark. The suspension was filtered through Whatman no. 
1 filter paper and concentrated using a rotary evaporator (Buchi R-100, Flawil, Switzerland) at 45 °C. Then, the extract was freeze-dried and a concentrated stock solution of PPE (100 mg/ml) was prepared in sterile water, which was stored at 4 °C until use. 2.4.2 Fungal strain Fumonisin-producing Fusarium proliferatum strain YO3, isolated from red onion cv. Mata Hari, was used during the field study. The strain was refreshed from − 80 °C by subculturing on PDA plates and maintained at 28 °C before each experiment. Spores (macroconidia) were collected in sterile saline from cultures grown for 4 days, and the macroconidial suspension was adjusted to the required concentration by counting in a hemocytometer. The inoculum concentration was verified by plating on PDA plates for determination of CFU counts. 2.4.3 Field experiments The field trial was held in 2019, in Kibbutz Yotvata, Israel, with the maize hybrid "Overland". The ears were inoculated with F. proliferatum at initial silk formation (4–7 days post–silk emergence) by injecting 5 ml of macroconidial suspension (10^5 conidia/ml) through the silk canals (inside the husk cavity and above the cob). The experiment consisted of three replicates, each with two rows. Maize ears inoculated with sterile saline water served as a non-treated control group. The field was maintained for an additional 40 days until ear maturity. At this point, the entire plants, including the ears, from the treated plots were harvested, bagged, and brought to the laboratory for ensiling. 2.4.4 Ensilage experiments The harvested ears were stripped of their husks and shelled manually; then, husks and cobs were cut using a hand cutter into 10- to 20-mm pieces. Silages were prepared as follows: 350 g of the chopped fresh matter was packed into a sterilized glass jar (0.5-l volume) with the addition of a lactic acid bacteria (LAB) inoculant ( Lactobacillus plantarum , 3 × 10^6 CFU/g, ECOSIL™, Port Talbot, West Glamorgan, UK) and compressed by hand. Silages were prepared with the addition of either 5 ml PPE at different concentrations (42 or 85 µg/g) or sterile saline (control), in three replications each. Glass containers of all plant materials and treatments were sealed with a rubber-lined lid and stored in a temperature-controlled room at 25 ℃. Jars were opened on days 2, 5, 30, and 90 for fungal CFU determination and mycotoxin analysis. Silage dry matter was determined by drying to a constant weight (105 ℃, 18 h). The pH value was measured on each day a silage jar was opened, in an aqueous extract of the silage sample, using a pH electrode. F. proliferatum CFUs in the samples were determined before and after ensiling by the method described in Section 2.2 . Briefly, 50 g chopped plant material was transferred to a plastic bag containing 450 ml peptone water and homogenized in a stomacher blender for 2 min. Ten-fold dilutions were prepared in peptone water and samples (100 µl) were plated on PDA plates supplemented with chloramphenicol (20 µg/ml) to prevent bacterial contamination. 2.4.5 Mycotoxin analysis of laboratory-ensiled samples After opening the jars, 50 g of each sample was freeze-dried and ground. FB 1 and FB 2 were extracted using FumoniTest™ wide-bore immunoaffinity columns (VICAM, Milford, MA, USA) according to the manufacturer’s protocol. Briefly, 2 g of ground sample was extracted with 10 ml of extraction solvent mixture (acetonitrile/methanol/water, 25:25:50, v/v).
After agitation on an orbital shaker for 30 min, the samples were centrifuged at 8580 g for 15 min. A 2-ml aliquot of the supernatant was diluted with 8 ml phosphate buffered saline (PBS). Then 10 ml of the diluted extract was passed through an immunoaffinity column, and the column was rinsed with 10 ml PBS. FB 1 and FB 2 were eluted by passing 1 ml methanol followed by 1 ml water through the column, and the eluate was evaporated to dryness under a nitrogen stream at 60 °C. The analytes were derivatized by combining methanol (250 µl), 0.05 M sodium borate buffer (250 µl), sodium cyanide (125 µl), and 2,3-naphthalenedicarboxaldehyde (NDA, 125 µl). The mixture was allowed to react for 20 min at 60 °C (water bath) and then cooled down to room temperature. Then 250 µl of 0.05 M phosphate buffer (pH 7.4)/acetonitrile (40:60, v/v) was added to the mixture. The derivatized samples were filtered through a 0.22-µm PTFE filter and quantitatively analyzed by injection of 20 µl into a reversed phase HPLC/UHPLC system (Waters ACQUITY Arc, Milford, MA, USA) with an isocratic mobile phase consisting of 0.1 M sodium phosphate monobasic salt adjusted to pH 3.3 with orthophosphoric acid/methanol (230:770, v/v) at a flow rate of 1 ml/min through a Kinetex 3.5 µm XB-C 18 (150 × 4.6 mm) column (Phenomenex). The column temperature was 30 °C. The non-contaminated silage samples were spiked with different concentrations of fumonisins to construct the calibration curves. The FB 1 and FB 2 peaks were detected with a fluorescence detector (excitation at 420 nm and emission at 500 nm) and quantified by comparison with calibration curves of the mycotoxin standard. 2.5 Mycotoxin and gene-expression assays in artificially contaminated wheat grain samples Sterilized wheat grain samples (10 g) were placed in sterile Petri dishes and inoculated with 1 ml macroconidial suspension (10^6 conidia/ml) of F. proliferatum (YO3), with the addition of either 1 ml PPE at different concentrations (100, 500 or 1000 µg/g) or sterile saline (control), in three replications each; the samples were incubated at 28 ℃ for 8 days. Then, the grain samples were freeze-dried and milled to a fine powder using a laboratory grinder. The fumonisin toxins (FB 1 and FB 2 ) were extracted and analyzed by HPLC using the protocol in Section 2.4.5 . For expression analysis of genes associated with fumonisin biosynthesis, total RNA was extracted from 100 mg of lyophilized wheat grain powder using the Hybrid-R RNA Isolation Kit (Gene All, Seoul, South Korea) according to the manufacturer’s instructions. The DNase and reverse-transcription reactions were performed on 1 µg of total RNA with the Maxima First-Strand cDNA Synthesis Kit (Thermo Fisher Scientific) according to the manufacturer’s protocol. Quantitative real-time PCR (qRT-PCR) was performed using Fast SYBR Green Master Mix (Applied Biosystems, Waltham, MA, USA) in a StepOnePlus Real-Time PCR System (Applied Biosystems) with the following program: 95 °C for 20 s, followed by 40 cycles of 95 °C for 3 s and 60 °C for 20 s. The samples were normalized using the housekeeping gene β-tubulin as an endogenous control, and relative expression levels were measured using the 2^(−ΔΔCt) analysis method. Results were analyzed with StepOne software v2.3. Primer sequences used for qRT-PCR analysis are listed in Table S1 . 2.6 Statistical analysis All experiments described here are representative of at least three independent experiments with the same patterns of results.
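A minimal sketch of the relative-expression calculation described above, using the 2^(−ΔΔCt) method with β-tubulin as the endogenous control. The Ct values and sample roles are made-up placeholders, not measured data:

```python
def relative_expression(ct_target, ct_reference, ct_target_control, ct_reference_control):
    """Fold change of a target gene by the 2^(-ddCt) method.
    ct_target / ct_reference: Ct of the target and housekeeping gene in the treated
    sample; *_control: the same values in the untreated (calibrator) sample."""
    d_ct_treated = ct_target - ct_reference              # normalize to beta-tubulin
    d_ct_control = ct_target_control - ct_reference_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a fum1-like target in PPE-treated vs. untreated grain (illustrative Ct values)
fold_change = relative_expression(ct_target=27.8, ct_reference=21.0,
                                  ct_target_control=24.5, ct_reference_control=21.2)
print(round(fold_change, 2))   # < 1 indicates downregulation relative to the control
```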
The statistical analysis of the data was performed using one‐way analysis of variance (ANOVA). If one‐way ANOVA reported a p value of < 0.05, further analyses were performed using Tukey's single‐step honestly significant difference test to determine significant differences between the treatments. 3 Results 3.1 Occurrence of filamentous fungi and mycotoxins in silages A total of 59 fungal isolates cultured from wheat and maize silage samples were identified by ITS region sequencing, and most of them were assigned to the genera Monascus (34 %), Aspergillus (32 %), Penicillium (10 %), Byssochlamys (7 %), Fusarium , Geotrichum and Scedosporium (3.4 % for each genus) ( Fig. 1 ). Monascus ruber , Monascus purpureus , Aspergillus fumigatus , Penicillium chrysogenum and Byssochlamys nivea predominated among the isolated fungal species and were equally present in both wheat silage and maize silage samples. Other fungal species, such as Talaromyces columbinus , Sordaria fimicola , Nigrospora sphaerica and Purpureocillium lilacinum , were isolated from the silage samples at lower incidence ( Table S3 ). Total fungal counts of all silage samples showed a moderate degree of contamination, ranging from 10^2 to 10^4 CFU/g, and did not exceed the limits (1 × 10^4 CFU/g) recommended by the Good Manufacturing Practice to ensure hygienic quality of animal feed [14] . The LC–MS/MS multi-toxin method was optimized for the simultaneous detection and quantification of 13 mycotoxins in silage samples. As shown in Table 1 , aflatoxins (AFB 1 , AFB 2 , AFG 1 , AFG 2 ) were among the most prevalent mycotoxins, being detected in 34.2 % of the maize silage samples, followed by OTA, PAT and ZEN found in 23.7 %, 7.9 % and 2.6 % of the samples, respectively. However, only two samples exceeded the EU maximum acceptable limit of 5 µg/kg for AFB 1 in maize silage ( Table 1 ) [11] . Surprisingly, every tested maize silage sample was contaminated with FB 1 and FB 2 ( Table 1 ). Although the maximum concentrations were as high as 3274 µg/kg for FB 1 and 647 µg/kg for FB 2 , these levels were below the EU regulations of 50,000 µg/kg for FB 1 and FB 2 in feedstuffs for adult ruminants [11] . In contrast, a much lower incidence of mycotoxins was observed in wheat silages: out of the 44 analyzed wheat silage samples, only 12 contained one or more of the detectable analytes at concentrations above their LODs ( Table 2 ). FB 1 was detected in 18 % of the wheat silage samples, followed by AFG 1 (11.3 %), FB 2 , PAT and ZEN (each mycotoxin in 2.3 % of the samples). Concentrations of the regulated mycotoxins detected in wheat silages (FB 1 , FB 2 , ZEN) were far below the guidance values recommended by the EC for products intended for animal feeding [11] . T-2, HT-2, GLIO and DON were not detected in either type of silage. In general, the findings clearly indicated that there was no correlation between the occurrence of Fusarium , Aspergillus or Penicillium toxins and the presence of the fungal species that could produce them. 3.2 Effectiveness of ensilage in preventing fungal growth The effects of ensiling process and PPE treatment on fungal growth were assessed in the laboratory-scale silages prepared from maize ears inoculated with F. proliferatum . Fig. 2 shows the dynamics of F. proliferatum colonization of maize over the 90-day ensilage period. Quantitative analysis of F. proliferatum colonization showed more than 10^8 CFU/g in the infected maize samples before ensiling (at 0 h). There was evidence of natural background F.
proliferatum infection at the field site; the fungus was recovered at a relatively lesser extent (~ 2.5 × 10 3 CFU/g) from ears inoculated with saline only ( Fig. 2 ). Two days of ensilage had a strong effect on F. proliferatum counts, resulting in an up to 3.68 log 10 CFU/g reduction in the fungal population compared to the samples at harvest (before ensiling) ( Fig. 2 ). A similar reduction in fungal load (up to 3.88 log 10 CFU/g) was observed in the silage samples treated with PPE, indicating no additional antifungal activity of the compound during ensiling. No fungal presence was found in silages after 5 and 30 days of ensiling, suggesting that F. proliferatum could not survive typical silage conditions. Maize silages revealed generally low pH values (3.75–4.05), indicating adequate preservation throughout the ensiling process. 3.3 Anti-mycotoxigenic activity of PPE FB 1 and FB 2 were detected in maize inoculated with F. proliferatum in the field experiment at mean concentrations of 88 and 18 µg/g, respectively ( Fig. 3 ). Interestingly, despite a lack of antifungal activity at the tested concentrations, PPE displayed strong inhibition of fumonisin production by F. proliferatum in maize silages during ensiling. In particular, after just 2 days of ensiling, treatment with PPE at 42 and 85 µg/g significantly inhibited FB 1 production by 50 % and 82 %, respectively, compared to silages without PPE supplement ( Fig. 3 A). The silage process appeared to reduce mycotoxin concentrations as well. Reduction of FB 1 content in maize silage with no PPE was recorded after 5 days of ensiling. However, more pronounced inhibition of mycotoxin production was detected at this time point due to the PPE treatment, resulting in an up to 91 % and 78 % decrease of FB 1 and FB 2 contents, respectively, in the silage samples compared to untreated controls ( Fig. 3 A, B). The toxins' contents continued to decline in untreated samples and were found at 15 and 4.2 µg/g for FB 1 and FB 2 , respectively, by day 30 of the ensiling. The mycotoxins were detected at their lowest levels in maize silages opened on that same day (2.6 µg/g for FB 1 and 0.46 µg/g for FB 2 ) which had been treated with PPE at the highest concentration (85 µg/g). To assess PPE’s inhibitory activity on fumonisin production independent of the ensiling process, wheat grains inoculated with F. proliferatum were treated with PPE compound at different concentrations, ranging from 100 to 1000 µg/g. PPE inhibited FB 1 and FB 2 synthesis up to 99.4 % and 97.6 %, respectively, and this effect was dose-dependent ( Fig. 4 A). Furthermore, the effect of PPE on the expression level of key genes in the fumonisin-biosynthesis cluster – fum1 (polyketide synthase), fum6 (fumonisin C-14 and C-15 hydroxylation), fum8 (α-oxoamine synthase), fum13 (C-3 carbonyl reductase) and fum21 (transcription factor) – was analyzed by qRT-PCR. The expression levels of these genes were significantly downregulated under PPE treatment at all tested doses, directly depressing fumonisin production ( Fig. 4 B). 4 Discussion Fungal spoilage and mycotoxin contamination of animal feed, such as silage for dairy cattle, remain a significant threat to farmers worldwide. The presence of mycotoxigenic fungi and mycotoxins in silage adversely affects the safety and quality of the feed that can lead to poor animal performance [23] . 
The results of the current survey, which evaluated the fungal incidence in wheat and maize silages collected from animal feed centers in Israel, are consistent with previous studies, where the most colonizing fungal species belonging to the genera Aspergillus , Penicillium , Fusarium , Byssochlamys and Monascus have been regularly isolated from silages [3,7,29,30] . Monascus spp., Aspergillus fumigatus , Penicillium spp. and Byssochlamys nivea were among the predominant species found in the current study. Our findings are in agreement with those of other research groups that have reported a high prevalence of these toxigenic fungi in different types of silage due to their ability to survive ensiling conditions [9,21, 27,29] . Only two species of Fusarium , F. solani and F. falciforme , were found in the maize silage samples from one feed center in the current study. Nevertheless, fumonisin toxins were detected in all tested maize silage samples and in 18 % of the wheat samples. There were no correlations between the occurrence of fumonisins and the presence of Fusarium spp. in our samples, such as F. proliferatum and F. verticillioides , which may produce FB 1 and FB 2 . This means that toxins could also have been produced before ensiling. These findings are in alignment with those reported by Schenck et al. [34] , who found no correlation between the occurrence of specific Fusarium toxins (such as DON, T-2, HT-2, ZEN) and presence of the toxin-producing Fusarium spp. in wrapped forage bales. Among Aspergillus species isolated from silage samples in the present study, A. fumigatus was the most prevalent, detected in four animal feed centers. However, GLIO, which is produced by A. fumigatus and has a variety of adverse biological effects, was not found in any of the analyzed sample ( Tables 1, 2 ). In contrast, other studies have reported a significant correlation between the occurrence of GLIO and the presence of A. fumigatus in silages [15,30] . Aflatoxins and OTA were detected mainly in maize silage samples in this survey. Yet, none of the potential aflatoxin and OTA producers among Aspergillus and Penicillium species were found in the analyzed silage samples, suggesting that the presence of these mycotoxins may be a result of the initial fungal contamination in the field before ensiling. Maize and wheat crops are usually affected by aflatoxin contamination in tropical and/or subtropical areas, although temperate regions could increase in importance due to climate change [4] . Similar to the current survey, aflatoxin contamination has been found in silage samples collected in Argentina [27] , France [29] and Egypt [9] , indicating that specific environmental conditions may influence mycotoxin synthesis. One of the prevalent species found in the current study was B. nivea . This species has the ability to produce several mycotoxins, among them PAT, which was found in silage along with growth of B. nivea ( Fig. 1 , Tables 1, 2 ). PAT, which is considered an indicator of Penicillium toxins, is produced primarily by Penicillium expansum , but this fungus was not detected in any sample examined in the present study. Laboratory-scale silage experiments clearly demonstrated that F. proliferatum , which is one of the most common field-infecting and fumonisin-producing fungal strains, cannot survive proper ensiling conditions over time. An almost 50 % reduction in fungal load was observed after 2 days of ensiling in the glass jar maize silages; F. 
proliferatum could no longer be detected after 5 days of ensiling ( Fig. 2 ). Several studies have already reported that Fusarium species are not commonly isolated from ensiled samples, because they are sensitive and cannot survive in an anaerobic and low pH environment of the silage [12,21,34,39] . Nevertheless, FB 1 and FB 2 toxins were found in the laboratory-scale silage samples throughout the experiment, suggesting that these mycotoxins were produced in the field and were already prevalent in the maize ears before ensiling. A decrease in mycotoxin concentrations in the non-treated samples could be explained by microbial degradation or adsorption (for instance by LAB) during fermentation [6,22,40] . Treatment with PPE at the concentrations used in the study (42 and 85 µg/g) did not contribute to fungal inhibition beyond the ensiling process. Given that PPE has been shown to be effective in inhibiting major postharvest fungal pathogens at concentrations ranging from 1.2 to 12 g/l [20,25,26] , the use of low concentrations of the extract in the present study explains the absence of its antifungal activity. This was in agreement with our previous study, where PPE was found to be active against several fungal isolates, including F. proliferatum , at relatively high concentrations, with MIC values between 1250 and 5000 µg/ml [33] . However, beyond the silage process, PPE treatment resulted in further inhibition of fumonisin production by F. proliferatum at concentrations considerably lower than those required for fungal growth inhibition. These results indicate that PPE’s inhibitory activities of fungal growth and mycotoxin production are independent of each other, and that the inhibition of fumonisin synthesis by the extract could involve downregulation of specific enzymes in the fumonisin biosynthetic pathway. Indeed, PPE treatment of infected wheat grains resulted in significant inhibition of FB 1 and FB 2 synthesis by F. proliferatum , accompanied by downregulated transcript levels of the key genes in fumonisin biosynthesis ( Fig. 4 B). Similar findings have been reported in our recent study [33] , where combined treatment with suboptimal doses of PPE and the commercial antifungal drug prochloraz led to complete inhibition of AFB 1 production by A. flavus , which correlated with the downregulation of key genes in the aflatoxin-biosynthesis cluster. 5 Conclusions The results of this study showed that filamentous fungi and mycotoxins are commonly present in wheat and maize silages in Israel; however, there was no correlation between the occurrence of specific mycotoxins and the presence of the fungal species that might produce them in our samples. FB 1 and FB 2 were the most prevalent mycotoxins, both being present in every maize silage sample, but their concentrations were below the guidance values recommended by the EC for products intended for animal feeding. No viable fumonisin-producing Fusarium spp. were isolated from the silage samples in the present study. This finding was confirmed by laboratory silage experiments, where F. proliferatum could barely survive the silage environment and disappeared within a few days of ensiling. Furthermore, treatment with PPE at relatively low concentrations demonstrated strong anti-mycotoxigenic activity against mycotoxigenic fungi. This natural antimicrobial compound, which has been proven effective against a variety of plant and foodborne pathogens [5] , significantly inhibited FB 1 and FB 2 production by F. 
proliferatum in a dose-dependent manner by downregulating specific enzymes in the fumonisin-biosynthesis pathway. Nevertheless, further investigation is needed to elucidate the mechanisms by which PPE influences gene clusters involved in mycotoxin biosynthesis. Although no signs of toxicity of the extract have been reported to date, further research is needed to confirm the safety of PPE for animal and human health. The potential use of PPE as a silage additive may lead to significant reductions in fungal and mycotoxin contaminations, and improved quality and safety of animal feed. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements This research was supported by the Chief Scientist of the Israeli Ministry of Agriculture and Rural Development , Grant no. 20-06-0045 . Appendix A Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.toxrep.2022.07.011 . Appendix A Supplementary material Supplementary material
REFERENCES:
1. AKHTAR S (2015)
2. ALEXANDRE E (2019)
3. ALONSO V (2013)
4. BATTILANI P (2016)
5. BELGACEM I (2021)
6. BOUDRA H (2008)
7. CHELI F (2013)
8. DIKMEN M (2011)
9. EL-SHANAWANY A (2005)
10.
11.
12.
13. FOSS S (2014)
14.
15. KELLER L (2012)
16. KRISTENSEN N (2010)
17.
18. LATORRE A (2015)
19. LEPOM P (1988)
20. LI M (2016)
21. MANSFIELD M (2007)
22. NIDERKORN V (2006)
23. OGUNADE I (2018)
24. PALOU L (2016)
25. PANGALLO S (2022)
26. PANGALLO S (2017)
27. PEREYRA M (2008)
28. REDDY M (2007)
29. RICHARD E (2009)
30. RICHARD E (2007)
31. ROSAS-BURGOS E (2017)
32. SADHASIVAM S (2017)
33. SADHASIVAM S (2019)
34. SCHENCK J (2019)
35. SINGH B (2019)
36. SORRENTI V (2019)
37. SPADARO D (2015)
38. TELLER R (2012)
39. VANDICKE J (2021)
40. WAMBACQ E (2016)
41. WINK M (2015)
|
10.1016_j.elecom.2025.107987.txt
|
TITLE: Markov decision process for current density optimization to improve hydrogen production by water electrolysis
AUTHORS:
- Purnami, Purnami
- Nugroho, Willy Satrio
- Anggayasti, Wresti L.
- Sofi'i, Yepy Komaril
- Wardana, I.N.G.
ABSTRACT:
Maximizing the hydrogen evolution reaction (HER) remains challenging due to its nonlinear kinetics and complex charge interactions within the electric double layer (EDL). This study introduces an adaptive current density control approach using a Markov Decision Process (MDP) to enhance HER performance in alkaline water electrolysis. The MDP algorithm dynamically adjusts current release timings from three capacitors connected to the cathode based on feedback from hydrogen concentration levels. Results show that this fluctuating control strategy is more effective than static or linearly increasing methods, as it helps minimize overpotential, reduce heat buildup, and prevent hydrogen bubble accumulation. The MDP-optimized system achieved 7460 ppm in 60 min, outperforming the control (5802 ppm) produced under uncontrolled conditions. This work highlights a novel application of reinforcement learning to actively regulate electrochemical parameters, offering a promising mechanism for improving electrolyzer efficiency.
BODY:
1 Introduction The emerging concerns over global warming have led to net zero emission (NZE) targets, which should be achievable by 2060. According to the Paris Agreement, the target involves finding a carbon-neutral fuel [ 1 ]. Green hydrogen is a potential candidate for a carbon-neutral fuel since its usage does not produce a carbon footprint. Green electrochemical hydrogen from the water electrolysis process ensures that the electricity source is fully renewable [ 2 ]. Electrolysis is a technique to split water molecules into hydrogen and oxygen by utilizing electric current [ 3 ]. The efficiency of hydrogen generation through the water electrolysis method is influenced by several factors such as voltage, electrolysis duration, and the nature of the electrocatalyst. The main issue of conventional water electrolysis is its low efficiency, which hardly surpasses 60 % [ 4 ]. This study introduces a technique to enhance the hydrogen evolution reaction (HER) of alkaline water electrolysis. Recent advancements in the structural and operational optimization of water electrolysis systems have further demonstrated the critical role of current density and system configuration in improving hydrogen production efficiency. High-temperature alkaline water electrolysis (AWEC) has been shown to significantly enhance hydrogen output by extending the operational current density range and improving Faraday efficiency. At 120 °C and 10 bar, the system reached an efficiency of 78.52 %, although care must be taken to manage gas crossover and vapor saturation [ 5 ]. In parallel, structural innovations in proton exchange membrane electrolysis cells (PEMEC), such as replacing traditional parallel flow fields with three-dimensional titanium mesh, have led to considerable performance gains. These include higher current density, increased hydrogen concentration, improved pressure distribution, and reduced energy consumption, while also enhancing current and temperature uniformity by over 13 % and 20 %, respectively [ 6 ]. Experimental results further support that titanium mesh structures promote more stable and uniform temperature distribution, better polarization characteristics, and improved durability with lower voltage degradation over extended operation periods [ 7 ]. These findings underscore that both thermal and structural optimization strategies are essential complements to electrical control methods for enhancing the hydrogen evolution reaction and overall water electrolysis efficiency. Many studies continue to optimize operating parameters to improve the efficiency of water electrolysis. Usually, the main focus of improving HER is to improve the current density. Current density is the amount of electric current per unit area, which describes how well an electrode interface delivers electric current to the electrolyte [ 8 ]. However, there are still many operational parameters beyond current density that affect HER in water electrolysis. A sensitivity analysis on a multi-objective optimization model of electrochemistry and multiphase flow coupling found that temperature, electrolyte concentration, and inlet velocity greatly affect the tradeoff between specific energy consumption and the uniformity of current density [ 9 ]. Hence, high current density does not guarantee the prevention of specific energy consumption deterioration.
Orthogonal collocation on finite elements (OCFE), a numerical optimization approach, was utilized to find the most cost-efficient operational setup of an alkaline water electrolysis cell (AWEC) [ 10 ]. The optimal transient operating parameters can reduce the electricity cost of hydrogen production by up to 17 %. Numerical simulation can also be used to determine the operational characteristics of AWEC [ 11 ]. The highest efficiency of 78.52 % was achieved at 120 °C and 10 bar at 1 A/cm 2 , which indicates the critical role of pressurization. However, the developed models use a deterministic approach, which is not ideal for modeling real-world problems where randomness plays a part [ 12 ]. This study utilizes a stochastic model to take the uncertainty in the electrolysis system into account. Further improvement in the performance and reliability of water electrolysis systems has been achieved through advanced electrode characterization and catalyst layer optimization. A novel hierarchical configuration with integrated voltage sensing wires has enabled in-situ measurements of voltage loss at both electrodes in a PEMWE system. This setup facilitates direct characterization of electrode kinetics, revealing that kinetic losses contribute over 96.6 % of voltage loss at low current densities, while also offering a valuable method for real-time diagnostics and performance analysis [ 13 ]. In parallel, innovative structural designs of catalyst layers, such as micro-grooved geometries, have been shown to alleviate flooding and enhance water-gas transport. These improvements yielded up to 4.76 % better PEMFC output performance, with multi-objective optimization confirming robust performance gains across oxygen concentration, liquid saturation, and power density [ 14 ]. Additionally, the coupled interaction between the porous transport layer (PTL) and catalyst layer has been identified as a critical determinant of PEMWE efficiency. The interfacial structure, surface properties, and catalyst layer thickness significantly influence charge transfer and local current distribution. Optimizing these parameters enhances electrical conductivity and active site utilization, thereby improving current density uniformity and overall electrolyzer performance [ 15 ]. The integration of machine learning into water electrolysis research has opened new pathways for predictive modeling, optimization, and performance enhancement of electrolyzer systems. In alkaline water electrolysis, advanced two-phase modeling combined with machine learning techniques such as artificial neural networks (ANN) and ensembled tree models have been shown to accurately predict hydrogen production rates with high precision (R 2 ≈ 0.98), effectively accounting for gas evolution, bubble dynamics, and electrolyte conductivity loss [ 16 ]. These models offer robust tools for real-time process control and operational cost reduction. Similarly, in the context of PEMEC, machine learning algorithms, including support vector regression (SVR) and ANN, have been applied to evaluate the influence of key operating parameters such as temperature, current density, catalyst type, and transport layer morphology. By employing hyperparameter tuning via genetic algorithms, SVR achieved predictive performance comparable to that of complex ANN architectures [ 17 ]. These insights enable more informed operational strategies, allowing PEMEC systems to achieve higher performance and extended lifetimes under optimized conditions.
The demonstrated capability of machine learning models to predict degradation and performance trends paves the way for the intelligent design and autonomous control of next-generation electrolysis technologies. Advanced optimization techniques have been applied to improve HER in water electrolysis. A multi-electrode assembly guided by machine learning using SHAP (SHapley Additive exPlanations) and a genetic algorithm achieved 1.828 V at 3 A cm −2 , reducing total time cost by 67.9 % [ 18 ]. Another solution to improve HER is to predict the future HER prior to implementing the setup. An ANN with the LMBP (Levenberg-Marquardt backpropagation) algorithm was implemented to set water electrolysis parameters with 99 % accuracy [ 19 ]. However, supervised machine learning and evolutionary-computing data-driven optimization approaches require large datasets [ 20 ]. Deep reinforcement learning approaches have been applied to overcome this issue. A double deep Q neural network (DQNN) was developed to provide adjustable magnetic rotational speed control in a dynamic magnetic field (DMF)-assisted electrolysis system [ 21 ]. DMF is a system in which a magnet is rotated around the cathode to enhance the HER, applicable in AWEC and in AWEC with a CD-R-based organic electrocatalyst [ 22 ]. Even though the DQNN reduces the amount of required data, the computational power needed to perform training on large datasets with deep learning is enormous compared to traditional machine learning [ 23 ]. Therefore, this study utilizes a component of the deep Q-learning algorithm, the Markov decision process (MDP) [ 24 ]. By modeling a system as an MDP, probabilistic events can be captured through interactions of actions, states, and the state space [ 25 ]. In this study, the MDP was applied to adjust the timing of current density modification on the cathode. The aim of this study is to find the best strategy to alter HER through current density modification. Recent studies have shown that current density affects the efficiency and performance of water electrolysis. Increasing current density can enhance the HER but often leads to higher overpotentials; as a result, energy consumption increases. For example, anion exchange membrane water electrolyzers (AEMWE) have reached a high current density of 7.68 A/cm 2 , surpassing traditional PEMWE, although higher current densities reduce the stability and efficiency of the system over extended operation times [ 26 ]. Another study explored the effect of electrode spacing and electrolyte concentration on current density and found that optimizing these factors minimizes the negative impact of high current density, especially under magnetic fields [ 27 ]. This helps mitigate bubble coverage at the electrode, which improves gas release and system efficiency. Recent advancements in PEM water electrolysis cells with elevated current density reach above 10 A cm −2 by addressing the challenges of heat management and fluid transport [ 28 ]. The key design parameters were highlighted through experimental analysis of current-voltage curves from PFSA membranes, which indicate internal resistance and charge transfer. The findings show that current densities above 5 A cm −2 significantly impact efficiency. A newly developed OER catalyst achieves commercial current densities of 500 and 1000 mA cm −2 with remarkably low overpotentials of 259 mV and 289 mV in alkaline conditions [ 29 ].
This, combined with an efficient HER catalyst operating at 1.586 and 1.657 V, shows excellent stability, making it a breakthrough for large-scale green hydrogen production systems using excess renewable energy. A nickel–iron-based electrocatalyst (CAPist-L1), developed via a one-step seed-assisted method, demonstrates exceptional durability of over 15,200 h for OER at high current densities of 1000 mA cm −2 in alkaline conditions [ 30 ]. When applied to an AEM, it achieved 7350 mA cm −2 at 2.0 V, indicating its potential for large-scale green hydrogen production. Another study shows that a FeWO₄-FeNi₃ heterostructure catalyst developed via hydrothermal synthesis and calcination is highly efficient for OER and HER, achieving about 10 mA cm −2 at 1.43 V [ 31 ]. The catalyst was stable for a 100 h operational duration at 1000 mA cm −2 in industrial conditions. The catalyst continued to enhance performance, attributed to efficient electron transfer and improved ion and charge transport due to its hierarchical structure. These findings suggest that higher current densities can increase hydrogen productivity in water electrolysis with careful design and operational adjustments. In summary, several approaches have been proposed to optimize water electrolysis: supervised machine learning methods (such as ANN and SVR) offer accurate predictions but require large datasets and are not ideal for real-time control. Evolutionary algorithms like genetic algorithms can explore complex design spaces but tend to be computationally expensive. Deep reinforcement learning, including DQNN, enables adaptive control but demands significant training data and computational power, making it less suitable for microcontroller-based systems. In contrast, the MDP is lightweight and easy to interpret, which makes it well-suited for embedded control applications with limited computational resources. The MDP also allows decision-making under uncertainty and focuses on sequential control, which aligns directly with the aim of this study to optimize the timing of current regulation. Therefore, the MDP was selected as the most appropriate approach for improving hydrogen evolution without the need for extensive data or hardware modification. This study introduces a new concept of utilizing the current density to maximize hydrogen production. The approach is to apply a variable current density load on the cathode, different from previous studies, which focused only on the exploitation of high current densities during electrolysis through the synthesis of nanomaterials [ 31 ] and operational variable adjustments (heat and fluid transport management) [ 28 ]. This study does not require a specially designed electrode structure or nanomaterial interface; the operational adjustment is performed through the MDP. Instead of holding the electric current at its peak, additional currents were introduced to the cathode at certain periods. The timing of current release to the cathode is controlled and optimized via the MDP in this study. 2 Materials and method 2.1 The water electrolysis experiment In this study, the HER indicator was the hydrogen concentration, measured in parts per million (ppm). The instrument was an MQ-8 sensor (FishEye, China) with an Arduino UNO R3 (Arduino, Italy) microcontroller. The sensor was attached to the microcontroller through an analog-to-digital conversion (ADC) scheme with the pinout arranged as in Fig. 1 .
The electric current on the cathode was measured using an INA-219 sensor (Shenzhen TCT Electronics, China) connected to the microcontroller through an inter-integrated circuit (I2C) scheme. Three 1 μF 50 V aluminum electrolytic capacitors, Nichicon UFG1H010MDM (Nichicon, Japan), were attached to the cathode and microcontroller to modify the current density. The current density was calculated from the measured current using Eq. 1. The negative terminal was connected to the cathode while the positive terminal was attached to the current collector pin (Vcc). (1) $j = I_{\text{cathode}} / A_{\text{cathode}}$ (2) $Q(s,a) = R(s,a) + \gamma \sum_{s'} P(s' \mid s,a) \max_{a'} Q(s',a')$ (3) $R(s,a) = w_1 \cdot f(T_1, T_2, T_3) + w_2 \cdot g(\sigma) + w_3 \cdot h(P) + w_4 \cdot k(T)$ The electrolyte for the electrolysis test was an alkaline electrolyte consisting of 12 ml of water with 4 g of NaOH (25 %). The cell was made of acrylic with two nickel (Ni) needle electrodes. Ni was used as the electrode material due to its decent HER activity in alkaline environments as a cathode and because the nickel oxide (NiO) that forms on its surface during OER mitigates the sluggish anode kinetics [ 32 ]. The cathode and anode were separated by 8 cm with the arrangement shown in Fig. 1 . The output gases were collected in separate channels on top of the cathode and anode. 2.2 MDP optimization and simulation The Q-learning model was built using a structured data organization based on Python's NumPy library. The primary data structure was the Q-table, represented by a two-dimensional NumPy array initialized with zeros. The array stores the estimated rewards for each state-action pair. The states were generated as a Cartesian product of capacitance timing, conductivity, pressure, and temperature states. The Cartesian product is the set of all possible ordered combinations of elements taken from two or more sets [ 33 ]. This creates a compact multi-dimensional array which makes it possible to capture all combinations of the variables. The array was indexed through a mapping of states to their respective indices, allowing efficient retrieval and updating of Q-values. The actions available to the model were organized as a list of strings to provide a straightforward representation for action selection during the learning process. The model employed simple data structures, i.e., a hash map (Python dictionary), to facilitate the mapping between actions and indices, which enhances training efficiency. The steps to build the model are written in Algorithm 1. The Q-learning model was constructed by defining a multi-dimensional state space consisting of combinations of capacitance timings (t1, t2, and t3), conductivity (σ), pressure (P), and temperature (T). The action space consists of six possible actions to tune the timing of each capacitor: (1) increase_t1, (2) decrease_t1, (3) increase_t2, (4) decrease_t2, (5) increase_t3, and (6) decrease_t3. A Q-table initialized with zeros using numpy.zeros stores the estimated rewards for each state-action pair. The learning rate (α = 0.1) controls the speed at which the model updates its knowledge from new experiences. The discount factor (γ = 0.9) balances the importance of future rewards against immediate rewards, prioritizing long-term gains. The exploration-exploitation tradeoff (ε = 0.1) determines whether the model chooses actions randomly (exploration) or based on the highest Q-values (exploitation).
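To make this construction concrete, the following is a minimal, self-contained sketch in Python/NumPy of the Q-table, the epsilon-greedy policy, and the training loop. The discretization bins, the reward weights and surrogate terms standing in for f, g, h, and k in Eq. 3, and the random transition placeholder are illustrative assumptions rather than the study's actual values, and Eq. 2 is approximated here by the standard sampled tabular Q-learning update with learning rate α.

# A minimal sketch of the Q-learning setup described above (Eqs. 2-3).
# Assumptions: the discretization levels, the reward weights/terms, and the
# random transition placeholder are illustrative, not reported values.
import itertools
import numpy as np

TIMINGS = [0.5, 1.0, 2.0]      # capacitance timing levels for t1, t2, t3 (assumed bins)
SIGMA = [0, 1]                 # low/high conductivity states (assumed)
PRESSURE = [0, 1]              # low/high pressure states (assumed)
TEMPERATURE = [0, 1]           # low/high temperature states (assumed)

# State space: Cartesian product of (t1, t2, t3, sigma, P, T)
states = list(itertools.product(TIMINGS, TIMINGS, TIMINGS, SIGMA, PRESSURE, TEMPERATURE))
state_index = {s: i for i, s in enumerate(states)}    # state -> Q-table row

actions = ["increase_t1", "decrease_t1", "increase_t2",
           "decrease_t2", "increase_t3", "decrease_t3"]
action_index = {a: i for i, a in enumerate(actions)}  # action -> Q-table column

Q = np.zeros((len(states), len(actions)))             # Q-table initialized with zeros
alpha, gamma, epsilon = 0.1, 0.9, 0.1                 # hyperparameters from the text
rng = np.random.default_rng(0)

def reward(state, w=(1.0, 0.5, 0.3, -0.2)):
    # Weighted sum in the spirit of Eq. 3; the weights and terms below are
    # placeholders for the unreported f(T1,T2,T3), g(sigma), h(P), and k(T).
    t1, t2, t3, sigma, p, temp = state
    return w[0] * (t1 + t2 + t3) / 3.0 + w[1] * sigma + w[2] * p + w[3] * temp

def choose_action(s_idx):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if rng.random() < epsilon:
        return int(rng.integers(len(actions)))
    return int(np.argmax(Q[s_idx]))

def transition(s_idx, a_idx):
    # Placeholder: a real run would apply the action to the electrolyzer and
    # read the next state from the sensors; here we sample a random next state.
    return int(rng.integers(len(states)))

for episode in range(1000):                 # 1000 training episodes, as in the text
    s_idx = int(rng.integers(len(states)))  # start each episode in a random state
    for _ in range(60):                     # one decision step per minute (assumed)
        a_idx = choose_action(s_idx)
        s_next = transition(s_idx, a_idx)
        r = reward(states[s_next])
        # Tabular Q-learning update: a sampled approximation of Eq. 2.
        Q[s_idx, a_idx] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s_idx, a_idx])
        s_idx = s_next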
The MDP model was run over 1000 episodes to learn the optimal policy by updating the Q-values using the reward function in Eq. 3 . This equation assigns different reward values based on the conditions of the state and the action taken. In Eq. 2 , Q(s,a) represents the Q-value for state s and action a, R(s,a) is the immediate reward received after taking action a in state s (see Eq. 3 ), γ is the discount factor, which determines the importance of future rewards, and P(s'∣s,a) is the transition probability from state s to state s' given action a. max a' Q(s',a') is the maximum expected future Q-value for the next state s' over all possible actions a'. 3 Results and discussion The electrolysis experiment shows that the MDP-optimized capacitance timing is superior. Over the 60-min electrolysis duration, the MDP-optimized strategy produced a cumulative hydrogen concentration of 7800 ppm, which is 13.78 % higher than the best-performing fixed-timing configuration (2 s interval, 6726 ppm). The fixed timings of 0.5 s and 1 s yielded 6492 ppm and 6505 ppm, respectively ( Fig. 2 ). This confirms that adaptive tuning significantly enhances HER performance. A linear decrement strategy, which reduced capacitance timing by 1 ms every minute, resulted in only 6871 ppm of hydrogen after 60 min. As shown in Fig. 1 , decreasing individual capacitor timing (T1 only: 6871 ppm) outperformed combined reductions (T1 + T2: 6747 ppm; all decreased: 6246 ppm). However, all strategies that add current density on the cathode improve the HER compared to the control condition (no additional current density). The controlled current density ensures steady and sufficient electron flow on the cathode. At the cathode-electrolyte interface, the reduction reaction occurs, transforming protons (H + ) into hydrogen gas (H₂) [ 35 ]. Maintaining optimal current density matches the electron transfer rate with the availability of protons, which maximizes the overall reaction efficiency. If the current on the cathode is too low, there will not be enough electrons to generate non-Faradaic current at the cathode-electrolyte interface [ 36 ]. In other words, the interface slows the HER. Meanwhile, overcurrent can cause heat generation and bubble formation, which block active sites [ 37 ]. While this study does not simulate these mechanisms directly, the MDP framework may help indirectly mitigate such effects by adjusting current before these adverse conditions escalate. The change in average hydrogen concentration during electrolysis as an effect of the timing regulation on each capacitor is shown in Fig. 3 . The plotted times T1, T2, and T3 are the capacitance timings in milliseconds (ms), which depict the current release from each capacitor to the cathode. During the first 25 min, all capacitance timings tended to decrease. This shows that the option chosen by the agent was mostly to reduce the timing to maximize the reward (hydrogen concentration). This strategy was successful, as indicated by the increased HER. After 25 min, the timings tended to increase, indicating that the agent chose to release the current from the capacitors more slowly, which further maximized the HER after 30 min. As soon as the timing peaked, the agent explored the strategy of reducing the timing again. This fluctuating tuning strategy successfully enhanced HER in a linearly incremental fashion. As seen in Fig. 3 , the area under the curve grows larger, indicating an increasing gradient over time.
A significant benefit of adaptive current control is its potential to operate the system within an optimal overpotential range. Overpotential is the extra energy required beyond the thermodynamic voltage to drive the reaction. According to previous research, excessive overpotential results in energy losses and promotes competing reactions [ 38 ]. Although overpotential was not explicitly modeled here, it is closely related to current density, and thus the regulation strategy applied via the MDP may help maintain a moderate range. Another consideration is the minimization of side effects such as heat accumulation or parasitic reactions at the cathode. Excessive current density is often associated with unwanted side reactions, phenomena which reduce the efficiency of hydrogen production. The competing reactions at electrode surfaces consume more energy and reactants than the normal reaction [ 39 ]. Such side reactions are unfavorable for long-run electrolysis, while the standard operation of an industrial-scale electrolyzer is more than 8 h a day [ 40 ]. While our model does not simulate these parasitic reactions, the adaptive current control strategy may offer practical advantages for long-term operation by avoiding persistently high current conditions. The HER aligns with the current density evolution on the cathode. As seen in the action-reward plot in Fig. 4 , the current density evolution pattern is similar to the time-series HER pattern. This linear relationship indicates that the current density dictates the overall behavior of the electrochemical system, including diffusion. It is also an indication of distinct Helmholtz layer formation behavior. The Helmholtz layer is the electric double layer (EDL) formed at the electrode-electrolyte interface, consisting of the inner Helmholtz plane (IHP), populated with electrons responsible for non-Faradaic current, and the outer Helmholtz plane (OHP), populated with attracted cations responsible for Faradaic current [ 34 ]. At each point in time, the HER increased as the current density increased. The convergence of average Q-values over a 60-min period, as shown in Fig. 4 , indicates a significant improvement in the performance of capacitor timing and current density states. Initially, the average Q-value remains low (under −0.5), reflecting the agent's exploratory phase as it gathers data on various state-action pairs. However, as training progresses, a notable upward trend is observed, leading to a stabilization of average Q-values at a higher level. This suggests that the agent successfully identifies and reinforces optimal timing configurations for capacitors, thereby enhancing the efficiency of the HER.
The results demonstrate the effectiveness of the MDP approach in refining decision-making processes and optimizing electrochemical operations over time. Regarding mass transport, the literature suggests that optimized current can improve the delivery of protons to the cathode and facilitate hydrogen bubble detachment. The efficiency of proton diffusion in the diffusion layer depends on the active sites on the OHP [ 41 ]. Optimized current density control may maintain the balance between the reduction rate and the detachment of hydrogen bubbles from the electrode surface. However, mass transport dynamics were not included in our MDP model. Thus, while the current regulation may indirectly influence such behaviors, further modeling is required to verify these effects. This is critical since factors like OH − binding energy, local proton depletion, and gas accumulation are known to increase impedance and reduce Faradaic efficiency [ 42 ]. The heat-map in Fig. 5 reveals the relationship of capacitor timings (T1, T2, and T3) and current density with the Q-values derived from the Markov decision process (see Supplementary S1 for a larger figure). The Q-values indicate the expected utility of various actions (increasing or decreasing the timing of capacitors) for different state configurations. Notably, certain combinations, such as high timings for T1 and T3 coupled with medium timing for T2 under high current density, yield significantly higher Q-values, suggesting optimal conditions for enhancing HER. Conversely, configurations that involve lower capacitor timings or unfavorable current density conditions tend to produce lower Q-values (under 0.6), highlighting potential areas for adjustment to maximize HER efficiency. Overall, this analysis helps identify optimal strategies for tuning capacitor timings to improve performance in hydrogen production. The strength of the MDP model lies in its ability to make decisions based on system feedback. The model dynamically selects actions, either increasing or decreasing the current density timing, based on the observed hydrogen concentration. This allows the system to adapt to fluctuations that may correspond to changes in reaction kinetics or transport limitations. During periods of suboptimal performance, the MDP can reduce current to allow recovery, while under optimal conditions it can increase current to maximize production. This decision-making process is state-based and contributes to maintaining more consistent performance across dynamic conditions. The MDP is particularly well suited to modeling the nonlinear behavior of the HER. HER kinetics do not always increase linearly with input parameters like current density or proton availability [ 43 ]. The MDP can model different states of the system based on hydrogen concentration, and the framework evaluates the outcome of each action, in this case additional current supplied to the cathode, considering both the current state and future consequences. This allows the system to respond dynamically to changes in reaction kinetics. As a result, this adaptive approach makes the MDP well suited to handling the fluctuating conditions and nonlinear behavior of the electrochemical system. EDL behavior, especially at the OHP, influences HER efficiency by regulating the ion distribution near the electrode [ 44 ]. This layer governs the distribution of ions near the electrode surface, which affects the rate of proton reduction and electron transfer at the electrodes, the electrode-electrolyte interface, and the electrolyte.
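For illustration, the aggregation behind such a heat-map can be sketched as follows, reusing the Q-table, state list, and timing bins from the sketch in Section 2.2; grouping cells by (T1, T3) pairs and averaging the best-action Q-value is an assumed plotting choice, not the paper's exact procedure.

# A sketch of assembling a Fig. 5-style heat-map from the Q-table; it reuses
# Q, states, and TIMINGS from the Section 2.2 sketch and averages the
# greedy (best-action) Q-value over all states sharing a (t1, t3) pair.
import numpy as np

heat = np.zeros((len(TIMINGS), len(TIMINGS)))
count = np.zeros_like(heat)
for i, s in enumerate(states):
    row = TIMINGS.index(s[0])        # t1 bin
    col = TIMINGS.index(s[2])        # t3 bin
    heat[row, col] += Q[i].max()     # best-action Q-value of state i
    count[row, col] += 1
heat /= np.maximum(count, 1)         # mean best-action Q-value per (t1, t3) cell
print(heat)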
According to the literature, excessive charge accumulation can destabilize the OHP and reduce the available Faradaic current, known as the vanishing-site problem [ 45 ]. The fluctuation of the current density by the MDP may help to maintain a balanced Faradaic current supply in the EDL. Future modeling efforts that incorporate EDL characteristics could further validate the interaction between adaptive current control and charge dynamics. This study has some limitations, which stem primarily from the complexity of accurately modeling and controlling the HER. The main limitation is the assumption that the hydrogen concentration data represent the other corresponding factors; therefore, the MDP model cannot fully capture all relevant system dynamics. These factors include bubble detachment behavior, mass transport, and heat generation [ 46 ]. Future studies should explore more advanced algorithms, such as deep reinforcement learning, to compare against the effectiveness of the MDP [ 47 ]. Furthermore, multi-objective and multi-variable optimization approaches should be tested to enable a more detailed model. The sufficiency of active sites and mass transfer limitations, as critical electrocatalytic factors, should be included in the model [ 48 ], so that the model can capture the overall dynamics of the water electrolysis system. 4 Conclusions This study investigated the use of an MDP to optimize capacitor timing in a water electrolysis system aimed at enhancing hydrogen production. The approach focused on real-time adjustments to current density through timing regulation, enabling the system to respond dynamically to changing electrochemical conditions. The following key findings summarize the outcomes of this study based on experimental data and model analysis. 1. The MDP-optimized control strategy achieved a hydrogen concentration of 7460 ppm within 60 min, outperforming all fixed-timing strategies of 0.5, 1, and 2 s, which yielded hydrogen concentrations below 7000 ppm. 2. The MDP model dynamically regulated the capacitor discharge timings (T1, T2, and T3) to align current delivery with optimal hydrogen evolution rates, leading to superior system responsiveness and sustained performance over time. 3. Heat-map analysis revealed high-Q-value regions associated with specific combinations of capacitor timings and current density levels, indicating favorable configurations for enhancing HER efficiency. 4. Although this study did not explicitly simulate mass transport and thermal effects, the adaptive current regulation strategy showed potential for indirectly reducing adverse impacts such as bubble blocking and excessive heating. CRediT authorship contribution statement Purnami Purnami: Writing – original draft, Project administration, Investigation, Formal analysis, Conceptualization. Willy Satrio Nugroho: Writing – review & editing, Software, Methodology, Formal analysis, Data curation, Conceptualization. Wresti L. Anggayasti: Visualization, Methodology, Investigation, Formal analysis. Yepy Komaril Sofi'i: Validation, Resources, Formal analysis. I.N.G. Wardana: Writing – review & editing, Validation, Supervision, Funding acquisition, Formal analysis, Conceptualization. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: We declare the relationship with Kementerian Pendidikan dan Kebudayaan Indonesia as a funder of this study.
Acknowledgements The authors give special thanks to Kementerian Pendidikan dan Kebudayaan Indonesia (KEMENDIKBUD) and Direktorat Pendidikan Tinggi (DIKTI) for the funding support through the fundamental research grant No. [ 00309.127/UN l 0.A0501/B/PT.01.03.2/2024 ]. Appendix A Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.elecom.2025.107987 .
REFERENCES:
1. LI T (2024)
2. DASH S (2024)
3. DINCER I (2012)
4. XU Y (2019)
5. JANG D (2023)
6. WEI Q (2023)
7. WEI Q (2024)
8. ZOU Z (2024)
9. DUAN X (2023)
10. ABOUKALAMDACRUZ M (2023)
11. JANG D (2021)
12. FLAMM B (2021)
13. HE Y (2025)
14. ZHOU Y (2024)
15. ZHU Y (2025)
16. BABAY M (2023)
17. HAYATZADEH A (2024)
18. ZHANG Y (2024)
19. TAWALBEH M (2024)
20. YAO Z (2023)
21. PURNAMI P (2024)
22. PURNAMI P (2023)
23. ZHANG Q (2019)
24. MAO Y (2022)
25. GOYAL V (2023)
26. CHEN N (2021)
27. LI W (2022)
28. VILLAGRA A (2019)
29. XIAO C (2024)
30. LI Z (2024)
31. YU T (2023)
32. ROSYIDAN C (2024)
33. LIU Y (2024)
34. JIANG X (2014)
35. LI Y (2021)
36. EVENESS J (2022)
37. SAUVE E (2024)
38. MARTIN D (2020)
39. CHEN J (2021)
40. SAKAS G (2022)
41. LALVANI S (2021)
42. LIN X (2024)
43. LASIA A (2019)
44. DOURADO A (2022)
45. RUIZ-LOPEZ E (2020)
46. SHIH A (2022)
47. ZAHAVY T (2021)
48. XIANG R (2022)
|
10.1016_j.apjtb.2017.09.014.txt
|
TITLE: Yeast-generated CO2: A convenient source of carbon dioxide for mosquito trapping using the BG-Sentinel® traps
AUTHORS:
- Jerry, Dhanique C.T.
- Mohammed, Terry
- Mohammed, Azad
ABSTRACT:
Objectives
To evaluate carbon dioxide (CO2) production from yeast/sugar mixtures and its efficiency as an attractant in BG-Sentinel traps.
Methods
The rate of CO2 production was optimized for different yeast/sugar mixtures. The optimized mixture was then used as bait in BG-Sentinel traps. The efficiency of this bait was then compared to octenol baited traps.
Results
The yeast/sugar mixture (5 g : 280 g in 300 mL water) generated the highest volume of CO2. The CO2 baited traps caught significantly more mosquitoes than octenol baited traps.
Conclusions
Yeast-produced CO2 can effectively replace octenol baits in BG traps. This will significantly reduce costs and allow sustainable mass-application of the CO2 baited traps in large scale surveillance programs.
BODY:
1 Introduction Chemical cues such as carbon dioxide (CO 2 ) are important for the host-finding behaviour of mosquitoes [1–4] . Surveillance programs which utilize mosquito trapping often include chemical baits such as CO 2 , octenol, nonanol or lactic acid to increase catch rates [5–11] . In Trinidad and Tobago, surveillance and/or sampling exercises utilise the BG-Sentinel ® trap [12] baited with either octenol or dry ice, which generates CO 2 . However, one of the limiting factors is the cost and availability of octenol and dry ice [13,14] . The BG-Sentinel ® trap is a well-established monitoring tool for capturing mosquitoes [15] ; however, the effectiveness of yeast/sugar-generated CO 2 bait has not been evaluated for use in Trinidad and Tobago. Various studies have previously reported on the efficacy of octenol and carbon dioxide (CO 2 ) as attractants in mosquito traps such as the Fay-Prince trap, CDC-type and the Encephalitis Virus Surveillance traps [15–24] . These studies suggest that octenol may have species-specific effects. Kline et al . [16] also reported that octenol differs in its effectiveness for attracting different mosquito species. However, octenol does not appear to be a strong attractant for Stegomyia mosquitoes, which include Aedes aegypti ( Ae. aegypti ), an important vector for the spread of tropical diseases such as dengue, zika and chikungunya, which have a high prevalence rate in the Caribbean region. Canyon and Hii [6] reported that octenol significantly decreased collection of Ae. aegypti when compared to carbon dioxide using Fay-Prince traps. Shone et al . [25] reported that species such as Aedes albopictus were attracted more to CO 2 and CO 2 + octenol baited CDC and Fay-Prince traps than to unbaited or octenol-baited traps. These studies all suggest that the choice of bait can effectively increase the catch efficiency of a mosquito trap. In Trinidad and Tobago, the preferred bait in mosquito traps is octenol, though dry ice is sometimes used. However, this is neither cost effective nor sustainable when traps are deployed in remote areas or when large-scale sampling is needed, as is the case when there is an upsurge in the number of cases of dengue, zika and chikungunya. In tropical environments, dry ice sublimes faster than in temperate areas and has to be replaced frequently. Moreover, dry ice has the disadvantage that the release rate of CO 2 is highly variable and diminishes over time [26,27] . When large-scale trapping is required, such as during a national surveillance program, the use of both dry ice and octenol can become prohibitively expensive. To overcome these limitations, CO 2 produced by fermentation of sugar can be a reliable alternative that is cheap, easy to manage and durable. This paper seeks to optimize CO 2 production from fermentation of sugar and compare its efficiency to octenol for capturing mosquitoes in baited BG-Sentinel traps. 2 Materials and methods 2.1 Carbon dioxide production from yeast/sugar mixture Three hundred (300) mL of 130, 190, 250 and 280 g/L sugar solutions, each containing 3 g of baker's yeast, were prepared in 500 mL Buchner flasks and maintained at 35 °C in a water bath. The rate of gas production was determined using a displacement method. The Buchner flask was connected to a sealed conical flask filled with water and fitted with a displacement tube which emptied into a graduated cylinder. The volume of fluid displaced per unit time was used to calculate CO 2 production rates over 24 h.
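As a worked illustration of this calculation, the short sketch below converts cumulative displaced-water readings into CO2 production rates, taking the displaced water volume as an approximation of the gas volume at ambient pressure; the readings used here are hypothetical values chosen for illustration, not measurements from this study.

# A minimal sketch of the displacement-rate calculation described above.
# The cumulative readings are hypothetical, not data from this experiment.
times_min = [0, 60, 180, 420, 1440]      # reading times: 0, 1 h, 3 h, 7 h, 24 h
volumes_ml = [0, 372, 1560, 5000, 6800]  # cumulative displaced water (mL), assumed

for (t0, v0), (t1, v1) in zip(zip(times_min, volumes_ml),
                              zip(times_min[1:], volumes_ml[1:])):
    rate = (v1 - v0) / (t1 - t0)         # mean CO2 production rate over the interval
    print(f"{t0}-{t1} min: {rate:.2f} mL/min CO2")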
2.2 Optimization of yeast for carbon dioxide production Four reaction flasks containing 280 g/L sugar solution were prepared to assess the effect of yeast on CO 2 production. Aliquots of 1.5 g and 5 g of yeast were added to duplicate flasks, and the rate of gas production was monitored over 24 h. Carbon dioxide production was estimated as described previously. 2.3 Field evaluation of yeast-generated carbon dioxide in mosquito collections The optimized reaction mixture was then tested in field trials using the BG trap. Carbon dioxide baited traps were run simultaneously with octenol baited traps to compare the catch efficiency of the two baits. A total of 45 sampling efforts were conducted from January to May 2017 at two sample locations: (1) an open green house and (2) in dwellings. The total number of mosquitoes was counted and the numbers of each species compared. 3 Results The volume of carbon dioxide generated from the sugar solutions with 3 g of yeast generally increased over the first 4–6 h and then gradually decreased ( Figure 1 ). Production rates after 1 h ranged between 3.1 mL/min (130 g/L solution) and 6.2 mL/min (280 g/L solution) ( Figure 1 ). The 280 g/L solution generated significantly ( P < 0.05) higher levels of CO 2 when compared to the 130 g/L solution. However, CO 2 production levels in the 280 g/L solution were not significantly higher than production from the 190 g/L and 250 g/L solutions. The total volume of CO 2 generated varied between 3 L (130 g/L solution) and 5 L (280 g/L solution) over the first 7 h. After 24 h, CO 2 production rates had significantly decreased, ranging between 0.73 mL/min (130 g/L solution) and 4.4 mL/min (280 g/L solution). However, over the 24 h period all the mixtures were still generating CO 2 at levels higher than the estimated CO 2 release rate of 1–1.8 mL/h from human skin [28] . This suggests that the system would continue to attract mosquitoes over an extended period of time. 3.1 Optimization of yeast for carbon dioxide production The amount of carbon dioxide generated can be influenced by the rate of fermentation and the amount of yeast added. From the first experiment, the solution containing 280 g of sugar produced the largest volume of CO 2 for the longest time period. Increasing the amount of yeast (1.5 g, 3.0 g and 5.0 g) significantly increased CO 2 production from the sugar mixture ( Figure 2 ). Production rates after 1 h ranged between 3 mL/min (1.5 g yeast in 280 g/L sugar solution) and 10 mL/min (5 g yeast in 280 g/L sugar solution) ( Figure 2 ). The solution containing 5 g of yeast produced significantly ( P < 0.05) more CO 2 than the one containing 1.5 g of yeast ( Figure 2 ). After 7 h, CO 2 production rates ranged between 6.8 mL/min (1.5 g yeast in 280 g/L sugar solution) and 14.1 mL/min (5 g yeast in 280 g/L sugar solution) ( Figure 2 ). After 24 h, CO 2 production rates were still high, ranging between 5.9 mL/min (1.5 g yeast in 280 g/L sugar solution) and 5.5 mL/min (5 g yeast in 280 g/L sugar solution) ( Figure 2 ). The highest level of CO 2 was produced after 3 h, averaging 8.8 mL/min for the mixture with 1.5 g of yeast and 21.7 mL/min for the mixture with 5 g of yeast. The total volume of CO 2 generated varied between 3 L (1.5 g yeast in 280 g/L sugar solution) and 7 L (5 g yeast in 280 g/L sugar solution) within the first 7 h. After 24 h, production rates were significantly reduced, ranging between 6 mL/min (1.5 g yeast in 280 g/L sugar solution) and 4.4 mL/min (3 g yeast in 280 g/L sugar solution).
However, over the 24 h period all the reaction mixtures were still generating sufficiently high levels of CO 2 . 3.2 Field evaluation of yeast-generated carbon dioxide in mosquito collections A total of 45 field trials (30 in the open green house and 15 in dwellings) were conducted using both CO 2 baited and octenol baited BG traps. Culex quinquefasciatus ( Cx. quinquefasciatus ) was the dominant species caught at both sites. A total of 842 mosquitoes were collected, consisting primarily of Cx. quinquefasciatus (84.1%) and Ae. aegypti (15.9%). The total number of mosquitoes collected with the CO 2 baited BG traps (620 mosquitoes) was about three times higher than the number collected with the octenol baited traps (222 mosquitoes). This suggests that the catch efficiency of the CO 2 bait was greater than that of the octenol bait. The CO 2 baited traps attracted about 4 times more Cx. quinquefasciatus (555) than the octenol baited traps (153). However, both baits attracted similar numbers of Ae. aegypti . In the open green house, the CO 2 baited traps collected twice as many Cx. quinquefasciatus (216) as the octenol baited traps (112). The CO 2 baited traps collected fewer Ae. aegypti (40) than the octenol baited traps (60). At the site within the dwellings, the CO 2 baited traps collected about eight times as many Cx. quinquefasciatus (339) as the octenol baited traps (41). The CO 2 baited traps also collected about three times more Ae. aegypti (25) than the octenol baited traps (9). The octenol baited traps caught 42% males and 58% females, while the CO 2 baited traps caught 52% males and 48% females. However, all of the Cx. quinquefasciatus collected were females. 4 Discussion Mosquitoes respond to a complex set of cues, such as carbon dioxide, lactic acid or temperature, to locate a host. In Trinidad and Tobago, surveillance programs utilize the BG-Sentinel traps, which have been shown to be more effective in capturing Aedes sp. than other traps such as the CDC-LT [8,29] . The BG traps are normally baited with octenol; however, the cost can be prohibitive for large-scale monitoring. This study showed that carbon dioxide generated from yeast/sugar mixtures can be a more efficient attractant than octenol in BG traps. The optimum mixture used in this study, which produced the highest amount of CO 2 , was 5 g of yeast in 280 g/L sugar solution. Various studies have previously reported on the catch efficiency of different attractants. Several studies have reported that few species are attracted to octenol alone, and that catch rates increase when it is used in combination with other attractants [22,23,30] . Kline et al . [31–34] also reported that different mosquito species sometimes respond differently to different attractants. The basic response pattern was that very few species were attracted to octenol alone, but in combination with CO 2 a synergistic effect apparently occurred and catch efficiency increased 2-fold or more. The present study showed that CO 2 was about 3 times more efficient at capturing mosquitoes than octenol alone using the BG traps. However, previous studies on African and Brazilian malaria vectors, such as Anopheles arabiensis , Anopheles funestus , Anopheles darlingi and Anopheles aquasalis , have shown that CO 2 was insufficiently attractive as a standalone bait. Better catch rates were obtained using CO 2 in mixed odour baits or together with body odours [35–37] . The BG traps, though especially developed for capturing Ae. aegypti , have also been shown to capture Culex mosquitoes [12,38] .
The present study showed that the BG traps baited with CO 2 had a catch rate about 8 times higher for Cx. quinquefasciatus than for Ae. aegypti . Other studies have also reported that BG traps baited with CO 2 have high catch efficiencies [39–41] . Ferreira de Ázara et al . [42] also reported that BG traps operated with CO 2 trapped 6 times more female Culex spp. than Ae. aegypti . The high catch rates of up to 272 Culex females with CO 2 and up to 57 Culex females without CO 2 (mainly Cx. quinquefasciatus ) per 24 h show that the BG trap might be a useful tool for the monitoring of diseases that are transmitted by these species in urban areas in Brazil, like Oropouche fever or Bancroftian filariosis. This study further emphasises that CO 2 attracted only female Culex, while it attracted both male and female Aedes. This suggests that CO 2 is an effective attractant for Culex species, which may be due to the BG trap's imitation of human odour plumes. Russell [43] also reported improved collection of Cx. quinquefasciatus in CO 2 baited traps in French Polynesia, while Muturi et al . [44] reported high levels of Cx. quinquefasciatus and Culex annulioris in Kenya. Zhang et al . [45] also showed that the CDC-LT with dry ice was most effective for trapping Cx. quinquefasciatus when compared with UV light traps and gravid traps in China. Smallegange et al . [46] also showed that traps baited with yeast-produced CO 2 caught significantly more mosquitoes than unbaited traps and traps baited with industrial CO 2 . They suggested that yeast-produced CO 2 can effectively replace industrial CO 2 for sampling of mosquitoes such as Anopheles gambiae . The use of yeast/sugar-generated CO 2 would significantly reduce costs and allow sustainable mass-application of traps for mosquito sampling in remote areas. Given the recent upsurge of vector-borne diseases such as dengue, chikungunya, and zika, greater efforts are being made to control mosquito populations through increased surveillance. The success of the yeast/sugar mixture as bait in the BG traps can greatly reduce the cost of surveillance and increase the efficiency of mosquito capture. Conflict of interest statement The authors declare that there is no conflict of interest.
REFERENCES:
1. GILLIES M (1980)
2. COSTANTINI C (1996)
3. TAKKEN W (1999)
4. EIRAS A (1991)
5. SERVICE MW (1992)
6. CANYON D (1997)
7. OWINO E (2014)
8. OWINO E (2015)
9. POMBI M (2014)
10. SUKUMARAN D (2016)
11. SAZALI M (2014)
12. KROCKEL U (2006)
13. GIBSON G (1999)
14. MBOERA L (2000)
15. IYALOO D (2017)
16. KLINE D (2007)
17. KLINE D (1994)
18. KLINE D (2002)
19. VAIDYANATHAN R (1997)
20. VAIDYANATHAN R (1997)
21. KLINE D (1998)
22. KLINE D (1998)
23. RUEDA L (2001)
24. VAN DEN HURK A (1997)
25. SHONE S (2003)
26. MBOERA L (1997)
27. SAITOH Y (2004)
28. CARLSON D (1992)
29. SINKA M (2011)
30. KEMME J (1993)
31. KLINE D (1991)
32. KLINE D (1991)
33. KLINE D (1990)
34. KLINE D (1990)
35. HIWAT H (2011)
36. SERVICE MW (1993)
37. RUBIOPALIS Y (1992)
38. WILLIAMS C (2006)
39. OBENAUER P (2013)
40. HOEL D (2014)
41. ROIZ D (2015)
42. DE AZARA T (2013)
43. RUSSELL R (2004)
44. MUTURI E (2007)
45. ZHANG H (2013)
46. SMALLEGANGE R (2010)
|
10.1016_j.xocr.2021.100301.txt
|
TITLE: Recurrent myxoid liposarcoma of the retropharyngeal space causing marked dysphagia: Case report
AUTHORS:
- Massoud, Mohamed
- Salem, Osama Maher
- Mahmoud, Mohammad Salah
- Dwiddar, Nada
- Nabil, Khaled
ABSTRACT:
Myxoid retropharyngeal liposarcomas are exceedingly rare tumours with a tendency to local recurrence. We report the first case of recurrent myxoid liposarcoma of the retropharyngeal space in a patient previously treated with resection and postoperative radiotherapy. The patient underwent extended surgical resection, and PET-CT 1 year postoperatively revealed no recurrence or metastasis. A literature review completes this case report by providing prognostic factors and different lines of treatment.
BODY:
Introduction The most prevalent soft tissue sarcoma is liposarcoma. Nevertheless, liposarcomas of the head and neck are rare, making up just 1.8–6.3% of cases [ 1 ]. Our literature review found no reported case of recurrent myxoid retropharyngeal liposarcoma. We present the first case of recurrent myxoid retropharyngeal liposarcoma after surgical excision followed by radiotherapy sessions. Case presentation A 60-year-old male patient presented to our clinic with significant progressive dysphagia of two months' duration. He had a history of surgical excision of a left parapharyngeal liposarcoma four years before, which was followed by radiotherapy. Neck examination revealed marked forward displacement of the larynx, with a large mass on the left side of the neck extending from the level of the hyoid bone down to one inch above the clavicle. Another small mass on the upper right side of the neck appeared to be continuous with the left one through the retropharyngeal space. No lymph nodes could be palpated on either side. A scar from the previous surgery was noted at the anterior border of the left sternomastoid muscle. Telescopic examination of the larynx and hypopharynx showed that the posterior pharyngeal wall was pushed forward, with retained saliva in the valleculae and pyriform sinuses. Both vocal folds were mobile with a patent airway. CT of the neck and chest with contrast showed a left parapharyngeal mass extending through the retropharyngeal space to the right parapharyngeal space, measuring 6 × 3 × 11 cm. The mass displaced the larynx, pharynx, trachea and oesophagus anteriorly, and the left carotid arteries laterally. The CT appearance showed the mass to be fatty with soft tissue elements at its lower border and evidence of a few subcentimetric lymph nodes on both sides ( Fig. 1 ) . Gastrografin swallow showed an arrest of the dye at the level of the cricopharyngeus muscle. Therefore, upper GI endoscopy was done and showed no invasion of the upper oesophagus, which was markedly compressed by the mass. A provisional diagnosis of a recurrent liposarcoma was made based on the history, clinical progression and CT findings. The possibility of distant metastasis was ruled out by PET-CT scan. Through a U-shaped incision, the mass was explored and removed. It was found inseparable from the left lobe of the thyroid gland; therefore, the tumour and left thyroid lobe were excised en bloc, and bilateral modified neck dissection was performed. Histopathological examination of the tumour reported a malignant, predominantly myxoid neoplasm consisting of malignant spindle cells with large hyperchromatic pleomorphic nuclei and scattered mitotic figures, with occasional lipoblastic cells. The tumour invaded the left thyroid lobe, and the lymph nodes showed only reactive hyperplasia ( Fig. 2 ) . The final diagnosis was an intermediate-grade myxoid liposarcoma in the parapharyngeal and retropharyngeal space invading the left thyroid lobe, with reactive lymph nodes. Postoperatively, the patient reported resolution of his obstructive symptoms, but he complained of aspiration. Fiberoptic nasolaryngoscopic examination showed limited adduction of the left vocal fold, which was followed for two months. The aspiration resolved, although the left vocal fold remained limited in mobility. An opinion was sought from the oncologists attending our multidisciplinary Head and Neck meeting regarding postoperative radiotherapy or chemotherapy, and the decision was to follow up with the patient.
A postoperative PET scan done after 1 year ( Fig. 3 ) showed complete resection of the lesion with no signs of residual or recurrent masses. Discussion The usual sites of liposarcoma are the retroperitoneum and extremities, while it is considered rare in the head and neck region [ 1 ]. According to the most recent WHO classification, liposarcoma is classified into well-differentiated, dedifferentiated, myxoid, round cell, and pleomorphic types [ 2 ]. The prognosis of liposarcoma is directly linked to its histological classification. Well-differentiated and myxoid liposarcomas have the best prognosis; however, they have a tendency for local recurrence. Other factors, such as age greater than 45 years, round cell differentiation of more than 25%, and the presence of tumour necrosis, have been correlated with lower survival rates [ 3 ]. The main symptoms reported in the literature are related to compression of adjacent anatomical structures (progressive dysphagia, dyspnoea, sleep apnoea syndrome or globus), as these painless, slow-growing tumours are usually diagnosed at a locally advanced stage [ 4 ]. In this case, the patient had complained of marked dysphagia for two months, but the neck swelling had been noted about ten months before, indicating the slow growth rate of the tumour. In our literature review, we found only 9 reported cases of retropharyngeal liposarcoma, with only one case of recurrent well-differentiated retropharyngeal liposarcoma [ 3 ]. Yueh et al. reported a recurrent well-differentiated retropharyngeal liposarcoma within one year after surgical excision, which was managed later with resection followed by postoperative radiotherapy [ 5 ]. The reference treatment is complete surgical excision with meticulous dissection, as the tumour is surrounded by a pseudocapsule which does not prevent infiltration and is responsible for the presence of satellite lesions [ 3 ]. The recurrence rate reaches 80% after incomplete surgical excision, compared with 17% when complete surgical resection is achieved [ 1 ]. In our case, the recurrence occurred about three years after surgical excision and adjuvant radiotherapy, which may have been caused by incomplete resection of the primary tumour. Radiotherapy is considered useful in delaying and preventing local recurrence; however, it is not an alternative to radical resection, as postoperative radiotherapy reduced recurrence but did not affect overall survival or metastasis in head and neck sarcoma [ 6 ]. It is arguable whether or not chemotherapy for liposarcoma is effective. Patel et al. reported that doxorubicin- and dacarbazine-based chemotherapy is effective in the treatment of liposarcoma, with a response rate of 44% in their 21 cases [ 7 ]. It is believed that multidisciplinary treatment incorporating adjuvant or neoadjuvant therapy will be a subject of future investigation. Conclusion We have presented a case of recurrent myxoid liposarcoma in the retropharynx after surgical excision with postoperative radiotherapy, which was managed by surgical excision with bilateral neck dissection. Radiotherapy is not an alternative to extended surgical resection, and depending on it alone does not reduce the rate of recurrence. The role of chemotherapy as an adjuvant therapy requires more investigation to assess its efficacy in the management of myxoid liposarcomas. Consent The patient signed informed written consent, which is held in the patient's record.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
REFERENCES:
1. MCCULLOCH T (1992)
2. FLETCHER C (2013)
3. VELLA O (2016)
4. LI H (2013)
5. YUEH B (1995)
6. EELES R (1993)
7. PATEL S (1994)
|
10.1016_j.enmf.2022.07.002.txt
|
TITLE: Synthesis and characterization of oxygen-containing imidazo[1,2-d][1,2,4]oxadiazole fused-ring energetic compounds
AUTHORS:
- Dou, Hui
- Chen, Peng
- He, Chun-lin
- Pang, Si-ping
ABSTRACT:
Oxygen plays an important role in improving the oxygen balances and densities of energetic compounds. In this study, a series of oxygen-containing imidazo[1,2-d][1,2,4]oxadiazole building block-based energetic compounds (3–8) were synthesized and fully characterized. In addition, the structures of compounds 3–7 were determined using single-crystal X-ray diffraction. The properties of the compounds were evaluated based on their density, differential scanning calorimetry, impact and friction sensitivities, heat of formation, and detonation performances. The insensitive properties of compound 6 (IS > 40 J, FS > 360 N) and its detonation properties (8682 m·s−1, 32.8 GPa), which are comparable to those of RDX (8795 m·s−1, 34.9 GPa), demonstrate that constructing an oxygen-containing fused-ring backbone is an effective way to improve the properties of energetic compounds.
BODY:
1 Introduction In the past few decades, benefiting from in-depth studies of the structure-activity relationships of energetic materials, various new energetic materials with excellent performance properties have been reported. Energetic molecules generally consist of two main parts: backbones and explosophoric groups. Designing and synthesizing new backbones is the most common strategy for developing new energetic compounds [1–3]. Most studies in the last few decades have focused on constructing energetic compounds with versatile single heterocyclic rings, such as tetrazoles, triazoles, pyrazoles, imidazoles, and oxadiazoles [4]. However, fused-ring-based energetic compounds tend to have good thermostability and low sensitivity because of their conjugated structures [5–10]. Their design and synthesis have recently gained attention from researchers [11,12]. Among the fused-ring compounds, only a few examples of [5,5]-bicyclic energetic backbones have been reported because of the lack of effective construction methods. The [5,5]-fused ring has the potential to store more chemical energy because of its relatively high ring strain, which is higher than that of single-ring backbones [12–14]. Energetic compounds based on 3,6-dinitropyrazolo[4,3-c]pyrazole (DNPP) [15], 3,6,7-triamino-7H-[1,2,4]triazolo[4,3-b][1,2,4]triazole (TATOT) [16], 6-nitro-6H-pyrazolo[3,4-c]furazan-5-oxide (a) [17], and 3-hydroxy-4-methyl-6-nitro-pyrazolo[3,4-d][1,2,3]triazole (b) [18] have been reported to have high enthalpies of formation and excellent detonation performance (Scheme 1) [5]. The oxygen atoms in CHON-containing energetic compounds act as oxidizers that react with C and H and simultaneously release large amounts of energy. However, the effective oxygen atoms are always connected to nitrogen atoms, forming –NO2 or an N→O moiety. The introduction of a nitro group or N→O moiety is a well-established strategy for improving the density, oxygen balance, and detonation properties of energetic compounds [19]. Among single heterocyclic rings, oxygen-containing five-membered rings, such as oxadiazoles, can serve as building blocks for high-energy-density compounds [20]. Regarding oxygen-containing [5,5]-fused rings, pyrazolo[3,4-c]furazan is the only one reported to be able to effectively construct energetic compounds [18,21–23]. Lin et al. [24] reported that imidazo[1,2-d][1,2,4]oxadiazole can serve as a building block for pharmaceutical materials. In this study, a similar process was used to design and synthesize energetic compound 3 through the cycloaddition of 4-amino-1,2,5-oxadiazole carbohydroximoyl chloride and 2-chloro-4-nitroimidazole. Its nitrated product, compound 4, as well as four salt derivatives based on 4, were prepared. The new compounds were well characterized using single-crystal X-ray diffraction, IR, NMR, thermal analysis, and elemental analysis. The properties of the prepared compounds demonstrate that this methodology is effective in constructing oxygen-containing fused-ring backbones. 2 Experimental section Caution! Although we encountered no difficulties in preparing these imidazo[1,2-d][1,2,4]oxadiazole-based compounds, appropriate safety precautions must be taken. Eye protection and leather gloves must be worn. Scratching or scraping must be avoided when handling these energetic materials!
2.1 Methods 1H and 13C NMR spectra were recorded on a Bruker AVANCE NEO 400 nuclear magnetic resonance spectrometer at frequencies of 400.13 MHz and 100.62 MHz, respectively. The chemical shifts in the 13C NMR spectra are reported relative to Me4Si. The melting and decomposition points were obtained at a heating rate of 5 °C·min−1 and a flow rate of dry nitrogen gas of 50 mL·min−1 using a Discovery 25 differential scanning calorimeter from TA Instruments Co. IR spectra of the solid samples were recorded in KBr pellets using a Thermo Scientific Summit PRO FT-IR spectrometer. Densities were determined at 25 °C using a Micromeritics AccuPyc II 1345 gas pycnometer. Elemental analyses were carried out using a Thermo Scientific FLASH 2000 elemental analyzer. The impact and friction sensitivities were measured using a standard BAM fall hammer and a BAM friction tester, respectively. 2.2 Synthesis Compound 3: 4-Amino-1,2,5-oxadiazole carbohydroximoyl chloride 1 (1.63 g, 10.0 mmol) and 2-chloro-4-nitro-1H-imidazole 2 (1.48 g, 10.0 mmol) were added to 200 mL of anhydrous methanol in a 50 mL three-necked flask. Triethylamine (202 mg, 2.0 mmol) was slowly added while keeping the reaction at room temperature. The reaction was monitored using thin layer chromatography (eluent: ethyl acetate–petroleum ether 2:3; Rf = 0.5). After about 12 h, the reaction solvent was removed under reduced pressure. The residue was then extracted with ethyl acetate (30 mL × 3), and the combined extracts were dried over anhydrous sodium sulfate. The crude product of compound 3, obtained by removing the ethyl acetate, was purified by flash column chromatography (silica gel, 200–300 mesh; ethyl acetate:petroleum ether = 1:5) to give compound 3 as a white solid (1.59 g, 67.1% yield). 1H NMR (400 MHz, DMSO-d6) δ: 6.82 (s, 2H), 8.79 (s, 1H); 13C NMR (100 MHz, DMSO-d6) δ: 109.58, 133.94, 143.97, 150.89, 154.92, 157.45. IR (ṽ/cm−1): 3468, 3310, 3178, 1636, 1593, 1564, 1537, 1488, 1361, 1340, 1279, 1187, 1124, 1027, 1005, 971, 917, 875, 854, 792, 756, 736, 712, 640, 575, 524, 444, 418. Elemental analysis (%) for C6H3N7O4 (237.02): calculated: C 30.39, H 1.28, N 41.35; found: C 30.54, H 1.29, N 41.61. Compound 4: Compound 3 (237 mg, 1.0 mmol) was slowly added to 1 mL of fuming nitric acid at −5 °C. After being kept for 0.5 h at −5 °C, the reaction mixture was poured onto 20 g of ice-water to form a white precipitate. The solid was collected by filtration, washed with water, and air-dried to give compound 4 (201 mg, 71.2% yield). 1H NMR (400 MHz, DMSO-d6) δ: 8.80 (s, 1H); 13C NMR (100 MHz, DMSO-d6) δ: 110.83, 138.09, 144.15, 150.44, 156.77, 157.72. IR (ṽ/cm−1): 3177, 1621, 1607, 1588, 1547, 1522, 1501, 1372, 1340, 1301, 1270, 1189, 1124, 1007, 968, 852, 796, 774, 757, 741, 713, 652, 614, 470, 446. Elemental analysis (%) for C6H2N8O6 (282.01): calculated: C 25.54, H 0.71, N 39.72; found: C 25.37, H 0.69, N 39.38. General procedure for the synthesis of salts 5 and 6: A solution of 7 mol·L−1 ammonia in methanol (214 μL, 1.5 mmol) or 50% aqueous hydroxylamine (1.0 mg, 1.5 mmol) in acetonitrile (1 mL) was slowly added to a solution of 4 (141 mg, 0.50 mmol) in acetonitrile (10 mL) at 25 °C while stirring. After stirring for 2 h, the precipitate of 5 or 6 was collected, washed with acetonitrile, and air-dried.
Compound 5: weight and yield: 0.135 g, 90.3%. 1H NMR (400 MHz, DMSO-d6) δ: 7.08 (s, 4H), 8.80 (s, 1H); 13C NMR (100 MHz, DMSO-d6) δ: 110.84, 137.78, 144.23, 150.26, 157.63, 157.67. IR (ṽ/cm−1): 3179, 1618, 1593, 1540, 1526, 1506, 1486, 1448, 1415, 1384, 1362, 1293, 1270, 1186, 1128, 1059, 1010, 969, 939, 887, 864, 854, 807, 790, 777, 759, 741, 715, 582, 456. Elemental analysis (%) for C6H5N9O6 (299.04): calculated: C 24.09, H 1.68, N 42.14; found: C 23.49, H 1.67, N 39.72. Compound 6: weight and yield: 0.268 g, 85.4%. 1H NMR (400 MHz, DMSO-d6) δ: 8.80 (s, 1H), 9.88 (s, 1H), 10.07 (s, 3H); 13C NMR (100 MHz, DMSO-d6) δ: 110.85, 137.79, 144.23, 150.26, 157.64, 157.67. IR (ṽ/cm−1): 3181, 1590, 1576, 1547, 1531, 1504, 1487, 1447, 1419, 1386, 1362, 1296, 1270, 1185, 1159, 1011, 939, 889, 865, 854, 810, 791, 778, 758, 742, 716, 637, 586, 481. Elemental analysis (%) for C6H5N9O7 (315.03): calculated: C 22.87, H 1.60, N 40.00; found: C 22.82, H 1.76, N 39.21. Compound 7·H2O: Silver nitrate (85 mg, 0.50 mmol) was dissolved in water (3 mL) and added to a suspension of 5 (150 mg, 0.5 mmol) in water (10 mL). The reaction mixture was stirred at room temperature for 1 h. The silver salt was then collected by filtration, washed with water, and dried in vacuum. The silver salt (185 mg, 0.47 mmol) was added to a methanol solution (10 mL) of guanidine chloride (44.8 mg, 0.47 mmol). The reaction flask was covered with aluminum foil. After stirring for 1 h at room temperature, silver chloride was removed by filtration and washed with a small amount of methanol (2 mL). The filtrate was concentrated under reduced pressure and dried in vacuum to give off-white compound 7·H2O (0.122 g, 71.6% yield). 1H NMR (400 MHz, DMSO-d6) δ: 6.89 (s, 6H), 8.80 (s, 1H); 13C NMR (100 MHz, DMSO-d6) δ: 110.84, 137.81, 144.23, 150.28, 157.64, 157.68, 157.89. IR (ṽ/cm−1): 3477, 3430, 3347, 3276, 3198, 3136, 1652, 1617, 1589, 1536, 1521, 1501, 1484, 1441, 1416, 1387, 1361, 1291, 1272, 1183, 1124, 1061, 1025, 1003, 967, 934, 860, 846, 803, 789, 775, 756, 721, 503, 453. Elemental analysis (%) for C7H9N11O7 (359.06): calculated: C 23.39, H 2.51, N 42.90; found: C 23.22, H 2.54, N 42.79. Compound 8: Silver nitrate (85 mg, 0.50 mmol) was dissolved in water (3 mL) and added to a suspension of 5 (150 mg, 0.5 mmol) in water (10 mL). The reaction mixture was stirred at room temperature for 1 h. The silver salt was then collected by filtration, washed with water, and dried in vacuum. The silver salt (185 mg, 0.47 mmol) was added to a methanol solution (10 mL) of TATOT·HCl (89.5 mg, 0.47 mmol). The reaction flask was covered with aluminum foil. After stirring for 1 h at room temperature, silver chloride was removed by filtration and washed with a small amount of methanol (2 mL). The filtrate was concentrated under reduced pressure and dried in vacuum to give off-canary-yellow compound 8 (0.148 g, 67.9% yield). 1H NMR (400 MHz, DMSO-d6) δ: 5.57 (s, 2H), 7.23 (s, 2H), 8.21 (s, 2H), 8.80 (s, 1H); 13C NMR (100 MHz, DMSO-d6) δ: 110.87, 137.80, 141.15, 144.24, 147.48, 150.27, 157.65, 157.69, 160.19. IR (ṽ/cm−1): 3389, 3306, 3234, 3159, 3108, 1698, 1665, 1612, 1588, 1537, 1518, 1502, 1441, 1361, 1277, 1009, 923, 863, 855, 810, 788, 770, 758, 742, 729, 716, 684, 619, 602, 592, 531. Elemental analysis (%) for C9H8N16O6 (436.08): calculated: C 24.78, H 1.85, N 51.37; found: C 25.14, H 2.11, N 50.06. 3 Results and discussion 3.1 Synthesis The synthetic routes for preparing compounds 3–8 are shown in Scheme 2. The reaction between commercially available compounds 1 and 2 in the presence of triethylamine in methanol produced the imidazo[1,2-d][1,2,4]oxadiazole fused compound 3 in 68% yield. The procedure used was similar to that described in the literature [24]. Attempts to introduce more nitro groups onto the fused ring to produce compound 4′ with mixed acid at room temperature or at a higher temperature led to the decomposition of the fused ring. When compound 3 was treated with 100% nitric acid at −5 °C, nitramine compound 4 was obtained. A solution of ammonia in methanol (3 equiv) or of 50% aqueous hydroxylamine (3 equiv) in ethanol was slowly added to a solution of 4 (1 equiv) in acetonitrile at room temperature while stirring. After stirring for 2 h, the precipitate was collected and dried under vacuum, yielding 5 and 6. The silver salt of 4 was obtained in high yield as a light grey precipitate through the reaction of 5 with AgNO3 in water. Subsequently, 7 and 8 were readily synthesized at room temperature through the metathesis reaction of the silver salt of compound 4 in situ with one equivalent of the corresponding hydrochloride salt in methanol. 3.2 Crystal structures The structures of compounds 3–7 were further confirmed using X-ray diffraction analysis. The crystal data and refinement details are listed in Tables S1–S5. Crystals of compounds 3 and 4 were grown from acetonitrile, those of compounds 5 and 6 from methanol, and those of compound 7 from a mixture of methanol and H2O. Neutral compound 3 crystallizes at 163 K in the monoclinic space group P21/n with a high calculated density of 1.803 g·cm−3. Its crystal structure is shown in Fig. 1a. All atoms of 3 are almost coplanar, as confirmed by the dihedral angles of N1–C1–N2–C3 (178.39°), N4–C4–C5–C6 (176.82°), and N7–C6–C5–N5 (177.20°). Intra- and intermolecular hydrogen bonds, which can be observed between the layers, help form a 2D planar layered structure. The distance between the layers is 3.128 Å (Fig. 1c), which indicates moderate π−π interactions and is beneficial for decreasing the mechanical sensitivities of the compound [25,26]. Compound 4 crystallizes in the monoclinic space group P21/n with four molecules per unit cell (Z = 4), and the crystal density is 1.808 g·cm−3 at 296 K. The furazan ring and the fused ring are almost in the same plane, as confirmed by the dihedral angles of N1–C1–N2–C3 (178.32°) and N4–C4–C5–C6 (173.05°). The dihedral angle between the furazan ring and the nitramine group is 57.67°. Compound 4 shows a wave-like structure in the packing diagram (Fig. 2c). Compound 5 crystallizes in the triclinic space group P-1 (No. 2) with four molecules per unit cell (Z = 4), and the crystal density is 1.761 g·cm−3 at 296 K. The anions of 5 form an almost coplanar structure with dihedral angles of N1-1−C1-1−N2-1−C3-1 (178.29°) and N4-1−C4-1−C5-1−C6-1 (11.81°). The dihedral angle between the furazan ring and the ammonium nitrate group is 3.97° (Fig. 3a). Extensive hydrogen bonding (dotted lines) can be observed between the anions and ammonium cations in the crystal structure of 5 (Fig. 3b).
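The calculated elemental-analysis percentages and isolated yields quoted in the synthesis section above follow directly from the molecular formulas and reaction scales. As a quick cross-check (our own Python illustration under stated assumptions, not part of the authors' workflow), the sketch below reproduces the calculated C/H/N values and the approximate yield reported for compound 3; all function names are ours.

# Illustrative cross-check of reported values; not software from the paper.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Molar mass (g/mol) from a composition dict, e.g. {'C': 6, 'H': 3, ...}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

def elemental_analysis(formula):
    """Calculated mass percentage of each element in the formula."""
    mw = molar_mass(formula)
    return {el: 100.0 * ATOMIC_MASS[el] * n / mw for el, n in formula.items()}

# Compound 3, C6H3N7O4: paper reports calcd C 30.39, H 1.28, N 41.35
c3 = {"C": 6, "H": 3, "N": 7, "O": 4}
print({el: round(pct, 2) for el, pct in elemental_analysis(c3).items()})

# Yield of 3: 1.59 g isolated from a 10.0 mmol run
theoretical_g = 0.0100 * molar_mass(c3)  # ~2.37 g theoretical mass
print(f"yield = {100 * 1.59 / theoretical_g:.1f} %")  # ~67 % (paper: 67.1 %, from its MW of 237.02)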
The unit cell of 6 (Fig. 4), which crystallizes in the triclinic space group P-1, contains two formula moieties and has a crystal density of 1.861 g·cm−3 at 170 K. Like the anions of compound 5, the anions of compound 6 form an almost coplanar structure, with torsion angles of O5–N10–C5–N9 (−4.95°), N4–C2–C3–N6 (4.73°), N3–C1–N2–N1 (−1.75°), and C1–N2–N1–O2 (−1.97°). Extensive hydrogen bonding (dotted lines) can be observed between the cations and anions, as shown in Fig. 4b. The anions in the crystal feature face-to-face π−π stacking, which is beneficial for improving both the density and the sensitivity of 6 (Fig. 4c). Compound 7·H2O crystallizes in the monoclinic space group P21/c with four molecules per unit cell (Z = 4), and the crystal density is 1.718 g·cm−3 at 296 K. As shown in Fig. 5, the cations and anions are almost in the same plane, and all hydrogen atoms in the cations participate in hydrogen bonding (dotted lines) with four adjacent anions in the same plane (Fig. 5b). The distance between the two layers of compound 7·H2O is 3.339 Å (Fig. 5c), demonstrating the existence of π−π interactions. 3.3 Physicochemical and energetic properties The densities of compounds 3–8, which fall in the range of 1.76 g·cm−3 (3) to 1.85 g·cm−3 (6), were measured using an AccuPyc II 1345 gas pycnometer (25 °C, helium). The measurements are listed in Table 1. The high densities of these compounds are attributed to the existence of extensive hydrogen bonds and π–π interactions. The decomposition temperatures (onset) of compounds 3–8 vary between 128 °C (4) and 189 °C (3). Neutral compound 3 and salt 7 have the highest decomposition temperatures, which can be attributed to the extensive hydrogen bonding in the structures of these specific materials. The heats of formation (ΔHf) were calculated by employing the Gaussian 09 (Revision E.01) suite of programs [27] and the isodesmic reaction method, as shown in the Supporting Information. The results are listed in Table 1. With oxygen in the fused ring, compounds 3–8 all have high positive enthalpies of formation, ranging from 465.3 kJ·mol−1 to 952.1 kJ·mol−1. The detonation properties of 3–8 were evaluated using EXPLO5 V6.05 software [28]. All compounds have favorable detonation velocities and detonation pressures. The detonation velocity and pressure of 6 were calculated to be 8682 m·s−1 and 32.80 GPa, both of which are similar to those of RDX (8795 m·s−1, 34.9 GPa). Both impact and friction sensitivities were measured using a BAM fall hammer and a BAM friction sensitivity tester. Compounds 3 and 6 can be classified as insensitive compounds, since their impact and friction sensitivities are greater than 40 J and 360 N, respectively. The efficiency of preparing compound 6 from commercially available starting materials in three steps, together with its detonation performance (8682 m·s−1, 32.8 GPa) and insensitivity (IS > 40 J, FS > 360 N) compared with RDX (IS: 7.5 J, FS: 120 N), improves its application potential.
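For readers without access to thermochemical codes, the two headline quantities discussed above can be approximated from first principles. The following Python sketch (a minimal illustration under stated assumptions; the paper itself used Gaussian 09 and EXPLO5, not this script) evaluates the standard CO2-based oxygen balance from a molecular formula and the classical Kamlet–Jacobs detonation estimates. The Kamlet–Jacobs inputs used in the RDX demo (N, M, Q) are textbook reference values, not numbers taken from this paper.

# Rough estimates only; the paper's reported values come from EXPLO5/Gaussian.

def oxygen_balance(c, h, n, o):
    """CO2-based oxygen balance (%) of a CcHhNnOo molecule:
    OB% = -1600 * (2*C + H/2 - O) / MW, MW from average atomic masses."""
    mw = 12.011 * c + 1.008 * h + 14.007 * n + 15.999 * o
    return -1600.0 * (2 * c + h / 2 - o) / mw

def kamlet_jacobs(n_gas, m_gas, q_cal, rho):
    """Kamlet-Jacobs estimates for a CHNO explosive:
    D (km/s) = 1.01 * phi**0.5 * (1 + 1.30*rho),  P (GPa) = 1.558 * rho**2 * phi,
    where phi = N * M**0.5 * Q**0.5; N = mol gaseous products per g,
    M = mean molar mass of the gases, Q = heat of detonation (cal/g),
    rho = loading density (g/cm3)."""
    phi = n_gas * m_gas**0.5 * q_cal**0.5
    return 1.01 * phi**0.5 * (1 + 1.30 * rho), 1.558 * rho**2 * phi

print(f"OB(6, C6H5N9O7) = {oxygen_balance(6, 5, 9, 7):6.1f} %")   # ~ -38.1 %
print(f"OB(RDX, C3H6N6O6) = {oxygen_balance(3, 6, 6, 6):6.1f} %")  # ~ -21.6 %
# RDX with textbook inputs (N = 0.0338 mol/g, M = 27.2 g/mol, Q = 1501 cal/g, rho = 1.80):
d, p = kamlet_jacobs(0.0338, 27.2, 1501.0, 1.80)
print(f"RDX: D = {d:4.2f} km/s, P = {p:4.1f} GPa")  # ~8.8 km/s, ~35 GPa, near the cited values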
3.4 Interaction analysis To further understand the relationships between structures and physical properties, this study systematically analyzed the 2D-fingerprint spectra and Hirshfeld electrostatic surface plots using Crystal Explorer 3.1 [29] (Fig. 6). The red and blue dots on the Hirshfeld surface represent high and low close-contact populations, respectively. The red areas on the Hirshfeld surfaces (Fig. 6a and d) represent high close-contact populations, which indicate strong hydrogen bonds such as O⋯H and N⋯H interactions. The two remarkable spikes at the bottom left corner of the plots (Fig. 6b and f) also indicate the existence of O⋯H and H⋯O interactions. There are also N⋯O, N⋯N, and C⋯O interactions, which belong to interlayer atomic contacts arising from π–π interactions. Hydrogen-bond contacts account for 38.6% of the interactions in 3 (Fig. 6c). This percentage is higher than that of 4 (27.0%), indicating that 3 has lower sensitivity and better thermal stability. Given the significant correlation between the sensitivity and the intermolecular interactions of energetic materials, the difference in sensitivity can be further explained through non-covalent interaction (NCI) analysis [30]. As shown in Fig. 7, 3 contains many π-π interactions between the molecules. By contrast, 4 contains p-π interactions between the molecules. Numerous studies have shown that face-to-face π-π interactions can effectively buffer mechanical actions and enhance stability. This explains the lower mechanical sensitivity of 3 (IS > 40 J, FS > 360 N). 4 Conclusions In summary, this study systematically investigated the synthesis, single-crystal structures, thermal stability, and energetic properties of the neutral compounds (3, 4) and four nitrogen-rich salts (5–8) with an oxygen-containing imidazo[1,2-d][1,2,4]oxadiazole backbone. Compound 3 was prepared from commercially available reagents in one step with a high yield, and its derivatives can be prepared using simple and efficient methods. Compound 6 shows favorable detonation properties (8682 m·s−1, 32.8 GPa) comparable to those of RDX (8795 m·s−1, 34.9 GPa), as well as insensitive characteristics (IS > 40 J, FS > 360 N). It has the potential for commercial application as an insensitive energetic material. The methodology for constructing oxygen-containing fused-ring backbones could provide a reference for developing more novel oxygen-containing energetic backbones and for further expanding the family of energetic compounds. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper. Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 21875020 & 22075024). Appendix A Supplementary data The following are the Supplementary data to this article: Multimedia component 1; Multimedia component 2. Supplementary data to this article can be found online at https://doi.org/10.1016/j.enmf.2022.07.002.
REFERENCES:
1. FISCHER N (2012)
2. DENG M (2019)
3. HE C (2016)
4. CHEN P (2022)
5. HE C (2013)
6. THOTTEMPUDI V (2014)
7. ZHOU Y (2020)
8. FENG S (2021)
9. FEI T (2020)
10. AGRAWAL J (2007)
11. CHEN S (2021)
12. GAO H (2020)
13. LEI C (2021)
14. TANG Y (2017)
15. BIAN C (2019)
16. ZHANG J (2014)
17. YIN P (2015)
18. TANG Y (2017)
19. HE C (2016)
20. YU Q (2021)
21. FERSHTAT L (2020)
22. DU Y (2021)
23. CAO W (2022)
24. JIANG K (2017)
25. CHEN S (2020)
26. YIN P (2016)
27. FRISCH M (2013)
28. SUCESKA M (2020)
29. SPACKMAN M (2009)
30. JOHNSON E (2010)
|
10.1016_S0022-2275(20)33431-3.txt
|
TITLE: Kidney lipids in galactosylceramide synthase-deficient mice: absence of galactosylsulfatide and compensatory increase in more polar sulfoglycolipids
AUTHORS:
- Tadano-Aritomi, Keiko
- Hikita, Toshiyuki
- Fujimoto, Hirokazu
- Suzuki, Kunihiko
- Motegi, Kohji
- Ishizuka, Ineo
ABSTRACT:
UDP-galactose:ceramide galactosyltransferase (CGT) catalyzes the final step in the synthesis of galactosylceramide (GalCer). It has previously been shown that CGT-deficient mice do not synthesize GalCer and its sulfated derivative GalCer I3-sulfate (galactosylsulfatide, SM4s) but form myelin containing glucosylceramide (GlcCer) and sphingomyelin with 2-hydroxy fatty acids. Because relatively high concentrations of GalCer and SM4s are present also in mammalian kidney, we analyzed the composition of lipids in the kidney of Cgt −/− and, as a control, Cgt +/− and wild-type mice. The homozygous mutant mice lacked GalCer, galabiosylceramide (Ga2Cer), and SM4s. Yet, they did not show any major morphological or functional defects in the kidney. A slight increase in GlcCer containing 4-hydroxysphinganine was evident among neutral glycolipids. Intriguingly, more polar sulfoglycolipids, that is, lactosylceramide II3-sulfate (SM3) and gangliotetraosylceramide II3,IV3-bis-sulfate (SB1a), were expressed at 2 to 3 times the normal levels in Cgt −/− mice, indicating upregulation of biosynthesis of SB1a from GlcCer via SM3. Given that SM4s is a major polar glycolipid constituting the renal tubular membrane, the increase in SM3 and SB1a in the mice deficient in CGT, and thus in SM4s, appears to be a compensatory process, which could partly restore kidney function in the knockout mice. —Tadano-Aritomi, K., T. Hikita, H. Fujimoto, K. Suzuki, K. Motegi, and I. Ishizuka. Kidney lipids in galactosylceramide synthase-deficient mice: absence of galactosylsulfatide and compensatory increase in more polar sulfoglycolipids. J. Lipid Res. 2000. 41: 1237–1243.
BODY:
Galactosylceramide (GalCer) and its sulfated derivative, GalCer sulfate (galactosylsulfatide, SM4s), are present most abundantly in the brain of vertebrates, comprising almost one-third of the lipid mass of myelin. [Note: Abbreviations for sulfoglycolipids follow the modifications of the Svennerholm system ( 1 ), and the designation of the other glycolipids follows the IUPAC-IUB recommendations ( 2 ).] Knockout mice with a disrupted UDP-galactose:ceramide galactosyltransferase (CGT) gene have been generated by gene targeting ( 3–5 ). The homozygous mutant mice, which are incapable of synthesizing either GalCer or SM4s, form myelin containing elevated amounts of glucosylceramide (GlcCer) and sphingomyelin with 2-hydroxy fatty acids. Nevertheless, these mice display a variety of deficits in myelin structure, function, and stability, indicating that both GalCer and SM4s are indispensable components of myelin ( 4 , 6 ). GalCer and/or SM4s are present in glandular epithelial tissues, including the kidney and the islets of Langerhans ( 7 ), where sulfoglycolipids are believed to be essential components of the surface membrane as amphiphilic donors of negative charges ( 8 ). Although the function of GalCer and SM4s in these tissues is not yet clear, CGT-deficient mice should provide useful keys to delineate the roles of sulfolipids. Here, we analyze the lipids in the kidney of Cgt −/− mice in comparison with those of Cgt +/− and wild-type mice and investigate the effect of the elimination of GalCer and SM4s on renal function. MATERIALS AND METHODS Mice Mice heterozygous ( Cgt +/− ) and homozygous ( Cgt −/− ) for the disrupted Cgt gene were generated as described previously ( 4 ). In the mutant allele, the Cgt gene was inactivated by insertion of the neomycin resistance gene into exon 2, which encodes the N-terminal half of the CGT enzyme. The mice heterozygous for the disrupted Cgt gene were originally supplied by B. Popko (University of North Carolina School of Medicine, Chapel Hill, NC) and maintained in the Mitsubishi Kasei Institute of Life Sciences (Tokyo, Japan) by backcrossing to C57BL/6N. The mice used for experiments were generated by interbreeding of heterozygotes. For genotyping of mice, DNA isolated from tail biopsy was digested with Bam HI and hybridized with pCR550 (kindly supplied by B. Popko), containing a 595-nucleotide (nt) fragment of Cgt exon 2, as a probe for Southern blot analyses. A 15-kb band was detected for the wild type, and DNA from the Cgt −/− mice contained 8- and 9-kb bands. Heterozygotes had all three bands.
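The band pattern just described amounts to a simple three-way decision rule. The short Python sketch below (our own illustrative formalization; no such software is described in the paper) makes the rule explicit.

# Illustrative formalization of the Southern blot genotyping rule above.

def cgt_genotype(band_sizes_kb):
    """Assign a Cgt genotype from the set of BamHI fragment sizes (kb)
    detected with the pCR550 exon-2 probe."""
    bands = set(band_sizes_kb)
    wild_type_band = {15}
    mutant_bands = {8, 9}
    if bands == wild_type_band:
        return "Cgt+/+ (wild type)"
    if bands == mutant_bands:
        return "Cgt-/- (homozygous knockout)"
    if bands == wild_type_band | mutant_bands:
        return "Cgt+/- (heterozygote)"
    return "uninterpretable blot"

print(cgt_genotype([15]))        # Cgt+/+ (wild type)
print(cgt_genotype([8, 9]))      # Cgt-/- (homozygous knockout)
print(cgt_genotype([8, 9, 15]))  # Cgt+/- (heterozygote)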
Kidney function tests were done in the laboratory of one of the authors (K.M.) by adapting routine clinical analysis procedures to the mouse, using serum or urine samples from Cgt −/− (n = 4), Cgt +/− (n = 7), and wild-type (n = 5) mice. Detailed procedures will be provided on request. Histology Kidneys were dissected and fixed in Bouin's solution overnight. After dehydration, tissues were embedded in paraffin wax, and 7-μm sections were stained by the periodic acid–Schiff (PAS) reaction followed by hematoxylin staining. Lipid extraction and analysis Pooled kidneys (2–8 g) from 7- to 12-week-old mice of each genotype and sex were extracted with 19 volumes of chloroform–methanol 2:1 (v/v) and with 10 volumes of chloroform–methanol–water 60:120:9 (v/v/v) successively ( 9 ). The pooled extracts (the total lipid extract) were chromatographed on a DEAE-Sephadex A-25 column. After washing out neutral lipids, acidic lipids were eluted sequentially with a concave gradient of ammonium acetate ( 10 ). A part of the pooled fractions was treated with 0.4 M methanolic NaOH to destroy glycerophospholipids. Acidic lipids were further separated on silica gel 60 high-performance thin-layer chromatography (TLC) plates (E. Merck, Darmstadt, Germany) in chloroform–methanol–water 60:40:9 (v/v/v) containing 0.2% CaCl2 or 3.5 M NH4OH, or chloroform–methanol–acetone–acetic acid–water 7:2:4:2:1 (v/v/v/v/v). The total lipid extract and the neutral glycolipid fraction were analyzed by two-dimensional TLC, using the solvent systems chloroform–methanol–water 60:40:9 or 60:35:8 (v/v/v) containing 0.2% CaCl2 (first direction) and chloroform–methanol–acetone–acetic acid–water 7:2:4:2:1 (v/v/v/v/v) (second direction). The solvent for the second direction was replaced by chloroform–methanol–(CH3O)3B 50:20:1 (v/v/v) for the separation of GlcCer from GalCer ( 11 ), and by 2-propanol–15 M NH4OH–methyl acetate–water 5:2:1:3 (v/v/v/v) for the separation of lactosylceramide (LacCer) from galabiosylceramide (Ga2Cer) ( 12 ). Ga2Cer (16:0 + 24:0/d18:1) from porcine pancreas ( 13 ) was kindly provided by K. Nakamura (Kitasato University School of Medicine, Kitasato, Japan). The bands were visualized with orcinol (hexose-containing lipids) ( 14 ), Azure A (sulfolipids) ( 11 ), resorcinol (gangliosides) ( 15 ), or cupric acetate–phosphoric acid (phospholipids and other lipids) ( 16 ). Lipids on the plates were quantified by densitometry (CS-9000; Shimadzu, Kyoto, Japan) by comparison with known amounts of authentic standards. Identification of lipids was performed by negative-ion liquid secondary ion mass spectrometry (LSIMS) on a Concept IH mass spectrometer (Shimadzu/Kratos, Kyoto, Japan) ( 17 ). Each lipid developed by one- or two-dimensional TLC was transferred to a polyvinylidene difluoride membrane (Clear Blot Membrane-P; ATTO, Tokyo, Japan) by TLC blotting, and the band on the membrane was excised and placed on a mass spectrometer probe tip with triethanolamine as the matrix ( 18 ). The concentrations of lipid-bound sulfate, sialic acids, and phosphorus were determined with an aliquot of the neutral lipid fraction or the pooled fraction from DEAE-Sephadex. For sulfate, the sample was hydrolyzed in 1 M HCl, and the released SO42− was determined by ion chromatography (Tadano-Aritomi et al., unpublished observations) ( 19 ). Ganglioside sialic acids were determined as their 1,2-diamino-4,5-(methylenedioxy)benzene (DMB) derivatives by high-performance liquid chromatography (HPLC) ( 20 ). Phospholipid phosphorus was measured by the malachite green method ( 21 ). RESULTS Characteristics of CGT-deficient mice The homozygous mutant mice (7 to 12 weeks of age) exhibited a characteristic phenotype, with a body weight two-thirds that of wild-type and Cgt +/− littermates ( 4 ). The kidney weight of Cgt −/− mice was approximately two-thirds that of wild-type and heterozygous littermates, simply reflecting the lower body weight of Cgt −/− mice. Other than the smaller size, no evidence of abnormality was recognized in the kidney of Cgt −/− mice (data not shown). To examine the renal function of Cgt −/− mice, several parameters, including blood urea nitrogen (BUN), creatinine, Na+, K+, Cl−, urinary osmolality, and β-N-acetyl-d-glucosaminidase (NAG) excretion, were measured in serum or urine samples.
The data (mean ± SD) are as follows: BUN (mg/dL), 33 ± 5 ( Cgt −/− ), 28 ± 5 ( Cgt +/− ), 30 ± 2 (wild type); creatinine (mg/dL), 1.1 ± 0.2 ( Cgt −/− ), 1.0 ± 0.1 ( Cgt +/− ), 1.0 ± 0.2 (wild type); Na+ (mEq/L), 152 ± 3 ( Cgt −/− ), 149 ± 3 ( Cgt +/− ), 149 ± 4 (wild type); K+ (mEq/L), 5.3 ± 1.7 ( Cgt −/− ), 5.1 ± 1.3 ( Cgt +/− ), 4.9 ± 0.3 (wild type); Cl− (mEq/L), 118 ± 1 ( Cgt −/− ), 117 ± 2 ( Cgt +/− ), 116 ± 2 (wild type); urinary osmolality (mOsm/kg), 2,137 ± 28 ( Cgt −/− ), 1,600 ± 438 ( Cgt +/− ), 1,922 ± 237 (wild type); NAG (U/L), 64 ± 12 ( Cgt −/− ), 47 ± 10 ( Cgt +/− ), 61 ± 12 (wild type). All of these values were within the normal range, and no significant differences were observed among Cgt −/− , Cgt +/− , and wild-type mice. GalCer and Ga2Cer are absent from neutral lipids As shown in Fig. 1 , the neutral glycolipids consisted of monohexosylceramide (HexCer), dihexosylceramide (Hex2Cer), globotriaosylceramide (Gb3Cer), and globotetraosylceramide (Gb4Cer), as well as more polar glycolipids. Consistent with previous studies ( 22 , 23 ), the profile of the male kidney differed conspicuously from that of the female. The bands corresponding to Hex2Cer and Gb3Cer were significantly reduced not only in the kidney of female mice of all three genotypes but also in the Cgt −/− male mice. To differentiate LacCer and Ga2Cer, the neutral lipid fractions were analyzed by two-dimensional TLC with 2-propanol–15 M NH4OH–methyl acetate–water ( 12 ) as the second developing solvent (data not shown). The three bands, which migrated faster than LacCer but similarly to Ga2Cer ( 24 ), were detected only in the kidney of the male wild-type and Cgt +/− mice. Together with their LSIMS spectra, these bands were identified as Ga2Cer with major fatty acid/sphingoid combinations of 24:0/d18:1, 16:0/d18:1, and 16h:0/d18:1, respectively ( Table 1 ). As expected, Ga2Cer was also absent in the kidney of Cgt −/− mice. LacCer could be detected neither in the kidney of Cgt −/− mice nor in those of Cgt +/− and wild-type littermates ( 22 ) with the sensitivity of the analytical procedure used in this study. HexCer represented a minor component among the glycolipids in the kidneys of the six mouse groups (three genotypes in both sexes) ( Fig. 1 ). By two-dimensional TLC with the (CH3O)3B-containing solvent followed by negative-ion LSIMS, four faint bands were identified as GlcCer(24h:0/d18:1), GlcCer(24:0/t18:0), GlcCer(24h:0/t18:0), and GalCer(24h:0/d18:1) ( Table 1 ). In the kidney of Cgt −/− mice, the band corresponding to GalCer was absent. Instead, increases in the GlcCer bands containing 4-hydroxysphinganine (t18:0) were characteristic, although these represented only minor components among the neutral glycolipids in the kidney of Cgt −/− mice. Lack of SM4s produces a conspicuous increase in more polar sulfoglycolipids The bands of the individual sulfolipids were identified by negative-ion LSIMS ( Table 1 ). The major sulfolipid, SM4s, comprised almost 75% ( Table 2 ) of the sulfolipids in the kidney of wild-type mice, along with three minor sulfolipids, that is, LacCer II3-sulfate (SM3), Gg4Cer II3,IV3-bis-sulfate (SB1a) ( 25 ), and cholesterol 3-sulfate (HSO3-Chol) ( 8 ). As expected, TLC analysis demonstrated a lack of SM4s in the kidney of Cgt −/− mice ( Fig. 1 ). Instead, a substantial increase in SB1a, together with a smaller increase in SM3, was the remarkable feature of the kidney of Cgt −/− mice as compared with those of wild-type and Cgt +/− littermates ( Fig. 2 ).
In the kidney of Cgt −/− mice, the amounts of SM3 and SB1a, determined by ion chromatography, were approximately 2- and 3-fold, respectively, the levels in wild-type or Cgt +/− mice, while the level of HSO3-Chol remained unchanged ( Table 2 ). In contrast to the homozygote sciatic nerve ( 26 ), no compensatory appearance of glucosylsulfatide (GlcCer I3-sulfate) ( 11 ) was noted in the kidney of Cgt −/− mice. Gangliosides and phospholipids are unchanged TLC profiles of the ganglioside fractions were essentially similar among the six mouse groups (data not shown). The major gangliosides identified by LSIMS were GM3(NeuAc) and GM3(NeuGc) ( Table 1 ), which comprised 40% (10.2 nmol/g) and 25% (6.2 nmol/g), respectively, of the total ganglioside sialic acids (26 nmol/g) in the kidney of the male wild-type mice. HPLC analyses of NeuAc and NeuGc in the mono-, di-, and trisialosyl fractions from DEAE-Sephadex showed no significant differences in their concentrations among the six groups (data not shown). These findings on kidney gangliosides were consistent with the normal brain ganglioside composition in Cgt −/− mice ( 27 ). The composition of the major acidic phospholipids, including phosphatidylserine (PS), phosphatidylinositol (PI), and cardiolipin (CL), as well as sphingomyelin (SM), in the Cgt −/− kidney was compared with those in wild-type and Cgt +/− littermates. No changes in the composition and concentration of these phospholipids could be observed among the six groups (data not shown). In the kidney of male wild-type mice, the concentrations (micromoles of PO4 per g wet tissue) of PS, PI, CL, and SM were 3.8, 2.5, 6.3, and 3.0, respectively. The major molecular species were identified as PS(18:0/20:4), PI(18:0/20:4), CL(18:2)4, and SM(24:0/d18:1) by negative-ion LSIMS ( Table 1 ). Unlike the brain of Cgt −/− mice ( 4 ), SM containing 2-hydroxy fatty acids (HFA-SM) could not be detected in the kidney. DISCUSSION Transcripts of the Cgt gene were clearly detected in the normal kidney ( 28 ) and testis ( 29 , 30 ), in addition to the cerebrum and cerebellum (data not shown), suggesting that the loss of this enzyme activity may affect the function of these tissues and consequently contribute to the phenotype of the mutants. With this hypothesis in mind, we examined changes in the lipid composition of the kidney of CGT-deficient mice. HexCer from the kidney of wild-type mice consisted of four species, of which one was identified as GalCer and the others as GlcCer. It has been reported that both GlcCer containing 2-hydroxy fatty acids (HFA-GlcCer) and HFA-SM were expressed in compensation for GalCer in the brain of CGT-deficient mice ( 4 ). In the kidney of Cgt −/− mice, GalCer was absent, as expected. In contrast, the two bands of GlcCer containing the t18:0 sphingoid, which are the major molecular species of HexCer in rat kidney ( 11 ), were substantially increased. No differences were seen in the level and species of SM; HFA-SM could be detected neither in Cgt −/− mice nor in wild-type and Cgt +/− littermates. It was expected that, in the absence of CGT activity, the lack of GalCer and Ga2Cer could stimulate the synthesis of GlcCer as well as LacCer in compensation. Unexpectedly, LacCer could hardly be detected in the kidneys of any of the six groups, suggesting its prompt conversion to SM3 and GM3. Staining with monoclonal antibodies has shown that sulfoglycolipids are distributed on the lumenal (apical) cell surface of renal tubules ( 7 , 31 ).
The enrichment of sulfolipids in osmoregulatory organs, including the kidney and intestine, has suggested that sulfolipids play important roles as an ion barrier at the cell membrane ( 8 ). The present study confirms our hypothesis that total sulfoglycolipids are more concentrated in the kidneys of smaller animals ( 8 , 32 ), suggesting that glycolipid-bound sulfate may participate more actively in the kidney of smaller animals such as mice. However, CGT-deficient mice lacking SM4s showed neither morphological defects in the kidney nor abnormalities in the parameters reflecting renal function. Although these findings do not exclude the possibility that GalCer and/or SM4s are dispensable for normal kidney function, there are other possibilities that are consistent with their functional importance. First, in place of SM4s, Cgt −/− mice still express the more polar sulfoglycolipids, that is, SM3 and SB1a, in the kidney. Moreover, both sulfoglycolipids are expressed at higher levels in Cgt −/− mice than in wild-type and Cgt +/− littermates, indicating that the biosynthetic pathway to SB1a from GlcCer via LacCer and SM3 is stimulated in the kidney of Cgt −/− mice ( Fig. 3 ), similar to the changes in SM4s and SM3 in MDCK cells under osmotic stress ( 33 ). In contrast, no increase in the levels of HSO3-Chol or gangliosides was observed. Because of the formidable compensatory capacity of kidney cells toward osmotic environments ranging between 0 and 1200 mOsm, it is possible that the increment of SB1a and SM3 can, at least partially, compensate for the lack of SM4s and maintain the normal function of the kidney in Cgt −/− mice. Second, it cannot be ruled out that acidic phospholipids may partly compensate for the SM4s deficiency. Third, subtle abnormalities, undetectable by routine examinations under a normal environment, might be present in the kidney of CGT-deficient mice. Experimental manipulations, such as osmotic stress by dehydration, may uncover such a borderline functional state of the SM4s-deficient renal tubules. Brigande, Platt, and Seyfried ( 34 ) reported enhanced synthesis of the GalCer-related lipids SM4s and GM4 in the mouse embryo treated with an inhibitor of ceramide glucosyltransferase. They suggested that the increases in GalCer-related lipids may be an adaptive response to prevent the accumulation of potentially harmful upstream metabolites, for example, ceramide. We find these authors' argument too teleological. Our view is that the increased synthesis of the GalCer series of lipids under glucosyltransferase-inhibited conditions, as well as that of polar sulfoglycolipids in our Cgt knockout mice, occurs simply because more acceptor molecules become available due to the metabolic block and can thus go to other synthetic pathways that have a higher K for the acceptor. Under both conditions, ceramidase is present in excess, and therefore accumulation of ceramide to a harmful concentration can easily be prevented by degrading it, even without increasing the synthetic side reactions. In fact, quantitative estimates indicated clearly that the increase in GlcCer in the kidney of Cgt −/− mice is minor and that most of the excess ceramide is degraded by ceramidase. The CGT-deficient mouse should allow further analysis of the specific role of GalCer and its derivatives in the kidney.
Because the CGT-deficient mouse generates neither SM4s nor its precursor, GalCer, precise dissection of the effects of the precursor and its sulfated end product is not yet possible. GalCer sulfotransferase has been cloned ( 35 , 36 ), and a mutant mouse lacking the capacity to generate sulfated glycolipids can soon be expected. Comparison of the CGT knockout mouse with the expected sulfotransferase knockout mouse could answer the vital question of whether both GalCer and SM4s, or only SM4s, are important for function. The data presented in this article should provide the basis for such a comparison. Acknowledgments We thank Dr. B. Popko for providing the Cgt mutant mouse, Mr. T. Akiyama and the staff of the EA Center (Mitsubishi Kasei Institute of Life Sciences) for maintaining the mutant mice, and Ms. A. Tokumasu for technical assistance in histology. We also thank Dr. Y. Nagai for constant encouragement during the course of this study. This work was supported in part by a grant from the Promotion and Mutual Aid Corporation for Private Schools of Japan to I.I.; by RO1-NS24289 and a Mental Retardation Research Center Core Grant, P30-HD03110, from the USPHS; and by research grant 83A from the Mizutani Foundation to K.S.
REFERENCES:
1. SVENNERHOLM L (1963)
2. IUPACIUBJOINTCOMMISSIONONBIOCHEMICALNOMENCLATURE (1998)
3. COETZEE T (1996)
4. COETZEE T (1996)
5. BOSIO A (1996)
6. DUPREE J (1998)
7. BUSCHARD K (1994)
8. ISHIZUKA I (1997)
9. SUZUKI K (1965)
10. TADANOARITOMI K (1998)
11. IIDA N (1989)
12. OGAWA K (1988)
13. NAKAMURA K (1984)
14. SVENNERHOLM L (1956)
15. SVENNERHOLM L (1957)
16. MACALA L (1983)
17. TADANOARITOMI K (1995)
18. TAKI T (1995)
19. IJUIN T (1996)
20. HIKITA T (2000)
21. ZHOU X (1992)
22. HEY J (1970)
23. GROSS S (1994)
24. NIIMURA Y (1986)
25. TADANO K (1982)
26. BOSIO A (1998)
27. SUZUKI K (1999)
28. STAHL N (1994)
29. TADANOARITOMI K (1999)
30. FUJIMOTO H (2000)
31. TRICK D (1999)
32. NAGAI K (1985)
33. NIIMURA Y (1990)
34. BRIGANDE J (1998)
35. HONKE K (1997)
36. HIRAHARA Y (2000)
|